
2nd Biannual SB CTF

Last weekend Invoca hosted the 2nd biannual Santa Barbara Capture The Flag (CTF) competition, and we are proud to announce that it was a huge success. We had five different teams and over 40 participants ranging from hacker elite to infosec novices. We’re in the process of collecting feedback and holding a retrospective of the event. In the meantime, here is a recap of the weekend.

What is a CTF?

Capture The Flag is an information security competition aimed at building knowledge and skill in security testing. The objective is to exploit vulnerabilities that return a “flag”, which can be entered into a scoreboard application for points. Most events offer prizes ranging from honorable mentions to thousands of dollars. There are two common types of CTF competitions: Jeopardy and Attack-Defense.

Jeopardy style consists of a set of challenges, typically hosted in isolation from the other teams, covering topics such as web apps, mobile apps, reverse engineering, cryptography, and steganography. Point values vary with the difficulty of each challenge, and teams compete on overall point count.

Attack-Defense is where each team has its own system and/or network running vulnerable services. Teams compete by exploiting vulnerabilities in other teams’ systems while defending their own by patching vulnerabilities once they’ve identified them.

The Night Begins…

Jesse Adametz, Sr. Cloud Ops Engineer, kicked off the CTF with a talk on the automation that let us spin up the hundreds of applications needed to support all the teams and challenges. The talk outlined the difficulties of orchestrating that many applications and keeping them available. This becomes particularly important when your user base is there to compromise what you just spent hours automating and launching!

Here is a link to the slides:
https://docs.google.com/presentation/d/11M_7L8aTDJgAafHkYQHDkDNjcHT3Sd4FZrd7l9nEf10/edit?usp=sharing

Following the kick-off, the teams hunkered down in conference rooms and common areas and feverishly began attempting to solve the challenges. The “First Blood” came only four minutes into the event, and the second was so close that we awarded a prize to both teams! As the evening progressed and teams settled into their rhythm, we kicked off the Mr. Robot Marathon and posted up in the kitchen area with our energy drinks and coffee. Throughout the night we’d pause the binge-watching to field questions about challenges or joke about the devilishness of our challenge writers. Invoca’s Armin Ahkbari and Bugcrowd’s Jason Haddix were particularly clever with the challenges they wrote this time around.

The night continued and the energy drinks started to lose their effectiveness until there was a single person left standing, who won the award for outlasting all others. When morning rolled around we stacked up breakfast burritos almost as fast as people arrived. Additionally that morning, in typical CTF fashion, we busted out the picks, locks, and handcuffs because… why not? By noon, most teams were back at their battle stations tearing through challenges again. Mr. Robot was now showing the epic Raspberry Pi hack scene while we demolished a table full of sandwiches. Later, we had the pleasure of watching Jason Haddix give his “Bug Bounty Hunting Methodology” talk, which he has presented at Defcon in previous years.

The latest version of Jason’s talk:
https://www.youtube.com/watch?v=C4ZHAdI8o1w

We learned a valuable lesson at dinner about giving advance notice on taco orders of more than 10! Thankfully, Lilly’s was not only happy to whip up a bunch of tacos for us, but they did so in record time. A short while later we were devouring tacos and fueling up for the second and final night of hacking, which had many teams up incredibly late solving challenges. The final episodes of Mr. Robot had aired, so we moved on to the hacker classics such as Sneakers, Hackers, and WarGames.

Sunday morning had a stunning number of people still awake or arriving after only a few hours of sleep. The race was a tight one, with only some of the hardest challenges left unsolved. The point values for those challenges made it a race to the final minute, and in a Kentucky Derby-style photo finish one team snuck in and upended the final tally.

As we wrapped up the morning with prize announcements, we began talking about the fun we all had over the weekend. Feedback is still flowing in, but it was fantastic to know that we had enough content to entertain both newcomers and veterans of the information security field. We are continuing to collect feedback, which we plan to review and then share with everyone. The source code for the challenges and the automation used to launch the infrastructure will also be made available in the coming days and weeks.

 

Thanks to all who attended, the volunteers who organized, and the companies that sponsored.

We will be releasing the challenges in the following repo:

https://github.com/jamesabrown/sbctf_release

 


How We Upgraded A Very Large App from Rails 3 to Rails 4

For small projects, upgrading Rails versions is simple and can be done in a matter of hours or days. On the other hand, upgrading large projects can be quite a headache and take weeks or months. There are plenty of blog posts and upgrade guides out there explaining the mechanics of upgrading from one Rails version to another, but few of them provide the planning, organization, and implementation practices needed to upgrade a large project.

Invoca upgraded our multi-million-line Rails project from version 2 to 3 several years ago. At that time we created a branch from our mainline, upgraded it to version 3, and started fixing bugs while the rest of the team continued to develop. The bug fixing took months; meanwhile, the mainline diverged, creating a continual stream of merge conflicts to resolve.

For our recent upgrade from 3 to 4, we wanted a strategy that kept the upgraded code regularly merged back into the mainline and kept running our test suite for both Rails versions. Throughout this post we’ll share some of the techniques we used to streamline our Rails version upgrade.

Keeping it all together

Our main focus when upgrading from Rails 3 to Rails 4 was keeping functionality equivalent between versions while minimizing the differences. To do this, we ran both versions side by side in the same repo. Here are a few of the practices we used to combine Rails versions and keep our conflicts at bay.

Working with different Rails versions simultaneously

Bundler Gemfile Option

  • In order to deal with Rails 4 gem dependency issues, we created two separate Gemfiles, “Gemfile” and “Gemfile_rails4”. We left “Gemfile” as-is and upgraded the Rails version and the many dependencies in “Gemfile_rails4”. (A sketch of how the two Gemfiles can diverge follows the script below.)
  • We then passed an option to Bundler telling it which environment to use. This allowed us to have both versions of Rails within the same repo.
  • To specify which Gemfile to use, we would prepend the BUNDLE_GEMFILE option to `bundle exec`:
BUNDLE_GEMFILE=Gemfile_rails4 bundle exec <COMMAND>
  • The same option is used to install the upgraded gems from “Gemfile_rails4”:
BUNDLE_GEMFILE=Gemfile_rails4 bundle install
  • If the BUNDLE_GEMFILE option is not set, Bundler defaults to “Gemfile”, allowing the rest of the developers to continue working with Rails 3 without needing to change their workflow.
  • To make working with the Bundler option easier, we implemented a Bash script to automatically prepend the option for us. (Those who were consistently working on resolving bugs also set up a simpler “r4” Bash alias on their local machines.)
    • Instead of needing to use:
BUNDLE_GEMFILE=Gemfile_rails4 bundle exec rails server
    • We instead used:
script/r4 rails server

script/r4

#!/bin/bash
# To make your life easier, add this alias to your .bashrc or .bash_profile
# alias r4="BUNDLE_GEMFILE=Gemfile_rails4 bundle exec"

COMMAND="BUNDLE_GEMFILE=`pwd`/Gemfile_rails4 bundle exec $*"
echo $COMMAND
eval $COMMAND
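
For reference, here is a minimal, hypothetical sketch of how the two Gemfiles can diverge; the gems and version constraints below are illustrative, not our actual dependency list:

# Gemfile (left as-is: Rails 3)
source "https://rubygems.org"

gem "rails", "~> 3.2"
gem "strong_parameters"  # example: a Rails 3 backport gem that Rails 4 makes unnecessary

# Gemfile_rails4 (a copy of Gemfile with Rails and its dependencies bumped)
source "https://rubygems.org"

gem "rails", "~> 4.0"
# Strong parameters ship with Rails 4, so the backport gem is dropped here,
# and other gems are bumped to Rails 4-compatible releases.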

Static Helper Method

  • When making code changes for Rails 4, we first went through and made all the changes that were backwards compatible. Then, to handle code that had to differ between versions, we implemented our own helper method. This method takes two lambdas and executes the first under Rails 4 and the second under Rails 3.
    class StaticHelpers
      def self.rails_4_or_3(rails_4_lambda, rails_3_lambda = -> {})
        if Rails::VERSION::MAJOR == 4
          rails_4_lambda.call
        elsif Rails::VERSION::MAJOR == 3
          rails_3_lambda.call
        else
          raise "Rails Version #{Rails::VERSION::MAJOR} not supported."
        end
      end
    end
  • This became our de facto method for checking the Rails version and was used to execute different sets of code, define modules for specific Rails versions, return version-specific values, and so on. (A return-value example follows the snippets below.)
    StaticHelpers.rails_4_or_3(
      -> {
        # Snippet to execute for Rails 4 only
      },
      -> {
        # Snippet for Rails 3 only
      }
    )

    StaticHelpers.rails_4_or_3(-> { include Rails4OnlyModule })
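
As a hypothetical example of the return-value case (the User model here is illustrative, not from our codebase), the helper can choose between APIs that changed across versions, such as `scoped` being deprecated in Rails 4 in favor of `all`:

# Rails 3's `Model.all` returns an Array while `Model.scoped` returns a lazy
# relation; Rails 4's `Model.all` returns a relation and deprecates `scoped`.
users = StaticHelpers.rails_4_or_3(
  -> { User.all },    # Rails 4: ActiveRecord::Relation
  -> { User.scoped }  # Rails 3: the equivalent lazy relation
)
users.where(active: true)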

 

Automated Testing

  • For every branch, we configured our automated unit testing suite to produce two test builds, one for each Rails version. This allowed us to quickly troubleshoot both versions in parallel and know whether a fix made for Rails 4 had any adverse effects on Rails 3. (A sketch of what each build boils down to follows this list.)
  • The results of the first test run were disheartening: over 15,000 test failures! But we took a methodical approach and started knocking them down. Many, of course, were common failures; in the beginning it was not uncommon for a single change to a test helper to fix hundreds if not thousands of tests. Toward the end, however, the fixes came slower and resolved fewer failures each. Make sure you dedicate adequate resources and time: we had a team of four working on the project for five months.
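
Conceptually, each of those two builds just runs the same suite under a different Gemfile. Here is a minimal sketch of the idea (your CI tool’s configuration will differ, and `rake test` stands in for whatever task actually runs your suite):

#!/bin/bash
# Run the suite once per Gemfile so every change is verified against both Rails versions.
set -e
for gemfile in Gemfile Gemfile_rails4; do
  BUNDLE_GEMFILE=$gemfile bundle install
  BUNDLE_GEMFILE=$gemfile bundle exec rake test
done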

Deploy It: One Piece at a Time

  • When our failure count had shrunk to a reasonable size, we began smoke testing by running our automated QA test suite against servers running in Rails 4 mode.
    • Along the way we were able to identify and resolve important feature errors that weren’t caught by our automated unit test suite.
  • Once we resolved our errors and failures, we began to switch server groups one at a time. We started with low-priority servers, such as background job processing servers, and gradually increased priority until we finally switched over our front-end, customer-facing servers.
  • By switching one group at a time, we could focus our attention on the expected behavior of that group and react quickly if unexpected errors occurred. It also gave us the option to quickly revert that group back to its original Rails 3 configuration, buying us time to debug. (A sketch of one way to wire up such a per-group switch follows this list.)
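
One simple way to implement that kind of per-group switch is to choose the Gemfile in the server start-up script based on a per-group setting. The following is only a hypothetical sketch (SERVER_GROUP and the group names are illustrative, not our actual deploy configuration):

#!/bin/bash
# Hypothetical launcher: groups listed in RAILS4_GROUPS boot under Rails 4;
# everything else stays on Rails 3 until it is explicitly switched over.
RAILS4_GROUPS="background_jobs reporting"

GEMFILE=Gemfile
for group in $RAILS4_GROUPS; do
  if [ "$SERVER_GROUP" = "$group" ]; then
    GEMFILE=Gemfile_rails4
  fi
done

BUNDLE_GEMFILE=$GEMFILE bundle exec rails server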

Good luck!

Hopefully these tips are as beneficial for you as they were for us throughout our upgrade. There are many other important facets of a Rails upgrade that depend on how your app works and will need their own specific solutions, which is why every Rails upgrade is its own special snowflake. You’ll most likely end up chasing a plethora of test cases and wondering to yourself how it was possible for your app to become a perpetual Rube Goldberg machine. Despite the trouble that comes with it, the rewarding feeling of upgrading your system will give you plenty of motivation to work through whatever problems arise. Good luck!

– Omeed Rabani