Answers for AWShttps://answersforaws.com/2014-07-29T17:50:00-07:00CloudNative, Bakery, and the future2014-07-29T17:50:00-07:00Peter Sankauskastag:answersforaws.com,2014-07-29:blog/2014/07/cloudnative-bakery-and-the-future/<p><img alt="Cloud" src="/statics/blog/cloud.jpg"></p>
<p>Well, it finally happened. I started a startup. It is called <a href="http://cloudnative.io/">CloudNative</a>.</p>
<p>It turns out, humans don't scale. Well, at least I don't. I have thoroughly enjoyed consulting and helping clients for the last year and a bit. And writing open source code, recording video episodes, running the Advanced AWS meetup here in SF and winning a Cloud Prize were awesome.</p>
<p>It's time to take it up a notch. In every job I have had, my goal has been to write enough code to replace myself. CloudNative is the continuation of that.</p>
<p>CloudNative aims to make creating:</p>
<ul>
<li>highly available</li>
<li>elastically scalable</li>
<li>fault tolerant</li>
<li>self-healing</li>
<li>secure</li>
<li>reliable</li>
</ul>
<p>systems the <strong>default</strong>, rather than something you hope to get to when you have enough engineers - while at the same time codifying all the best practices of running in the cloud.</p>
<p>An MVP of the <a href="http://cloudnative.io/">Bakery</a> is now live. If you are using Ansible and want to build AMIs easily, please take it for a spin. It will evolve over the coming months to become a hosted continuous deployment platform, naturally starting with AWS.</p>
<p>With all of that, I will be winding down the consulting side to focus full time on CloudNative. This site will remain up as a reference, but will probably not be updated going forward.</p>
<p>CloudNative has its own blog, and I have just posted everything you ever wanted (and probably didn't want) to know about <a href="http://cloudnative.io/blog/2014/07/paravirtual-vs-hvm-images/">Paravirtual and HVM AMIs</a>.</p>
<p>Thank you for reading. I look forward to helping even more people figure out the cloud.</p>
<p>Kind regards,<br />
Peter Sankauskas</p>Help Wanted2014-04-08T14:50:00-07:00Peter Sankauskastag:answersforaws.com,2014-04-08:blog/2014/04/help-wanted/<p><img alt="Help Wanted" src="/statics/blog/help-wanted.jpg"></p>
<p>There comes a time when a good idea needs resources. That time has come. The time to build a team.</p>
<p><strong>I am looking for engineers who want to create the first AWS continuous deployment SaaS.</strong></p>
<p>I have been working on the <a href="/blog/2014/03/bakery4aws/">Bakery</a> for a little while now and have many people signed up for the beta. I wasn't convinced the market was ready for this, but demand is telling me otherwise. It's time to ramp up.</p>
<p>The product so far is:</p>
<ul>
<li>Ruby on Rails on the back end, using the AWS Ruby SDKs (both versions), STS, CloudFormation, etc</li>
<li>On the front will be AngularJS (hooks in place, not started, greenfield)</li>
<li>The baking process needs to support Ansible (done), and Chef by popular demand (expert needed)</li>
<li>The CI integrations needed are <a href="https://circleci.com/">CircleCI</a> (done - those guys rock) and Jenkins (big, fat <code>TODO</code>). Later will come TravisCI, Bamboo and even a GitHub hook, for those who like the wild west.</li>
</ul>
<p>If you have the skills, interest and capacity to help in any of these areas, <a href="/careers/"><strong>contact me</strong></a> and let's work something out. Part-time is fine.</p>
<p>Answers for AWS will not stay a consulting company. The goal is to create a software company with the mission of hiding the complexities of AWS. The Bakery is just the beginning. There are many cloud tools just itching to be built and productized. This is your chance to be one of the first on the team. <a href="/careers/"><strong>Apply now</strong></a>.</p>Bakery4AWS2014-03-12T13:15:00-07:00Peter Sankauskastag:answersforaws.com,2014-03-12:blog/2014/03/bakery4aws/<p><img alt="Bakery4AWS" src="/statics/blog/bakery.jpg"></p>
<p>I'd like to tell you about a project I have been working on. I believe deploying fully baked AMIs has so many advantages over half-baked or raw deployments, where things happen at boot time, that I want to make it easy for everyone to do.</p>
<p>Netflix have been doing this for a long time, and have multiple teams working on it. They have also documented a lot of their process, and released <a href="https://github.com/netflix/aminator">Aminator</a> and <a href="https://github.com/netflix/asgard">Asgard</a> to help with the process. Internally, they have a bakery that does a lot more than just Aminator, but that tool is very specific to Netflix - and this is where my project comes into play.</p>
<p>My clients are mostly startups, where if they have more than 5 engineers working on the main product, they are doing well. They don't have the resources to build and maintain a CI/CD pipeline, and are craving a SaaS solution.</p>
<p>I haven't thought of a great name for it yet, so for now I am calling it <a href="http://bakery.answersforaws.com/">Bakery4AWS</a>. If you have a better suggestion, please <a href="/contact/">let me know</a>.</p>
<p>The bake process is the following:</p>
<ol>
<li>Bakery4AWS gets notified by your CI tool of a successful build</li>
<li>Create a new Bake that is the combination of:<ul>
<li>Your application configuration</li>
<li>Your Ansible playbook to build the AMI</li>
<li>A <a href="http://www.packer.io/">Packer</a> configuration specific for this build</li>
</ul>
</li>
<li>Launch the Bakery in your AWS account</li>
<li>Bake the AMI using Packer and the Ansible Provisioner</li>
<li>Return the results of the bake to Bakery4AWS</li>
</ol>
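<p>To make the Packer step concrete, here is a rough sketch of the kind of template that could drive such a bake. This is illustrative only - the builder settings, source AMI ID and playbook path are placeholders, not the Bakery's actual generated configuration:</p>
<div class="codehilite"><pre>{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-west-1",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "m1.small",
    "ssh_username": "ubuntu",
    "ami_name": "myapp-bake-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "ansible-local",
    "playbook_file": "playbooks/myapp.yml"
  }]
}
</pre></div>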
<p>Here is a quick video of how it works:</p>
<iframe width="640" height="360" src="//www.youtube.com/embed/0sOPvQ5iCGs?rel=0" frameborder="0" allowfullscreen></iframe>
<p>The goal is to get you closer to continuous deployment by making the AMI creation part seamless - a background process you set and forget. From commit, to a baked AMI with 0, that's right, ZERO, clicks.</p>
<p>If this sounds interesting to you, please sign up and join the beta user list here:</p>
<p><a href="http://bakery.answersforaws.com/">http://bakery.answersforaws.com/</a></p>
<p>And, if you wish it did something different or something more, please <a href="/contact/">let me know</a>.</p>Monitoring in the Cloud2014-03-04T16:50:00-08:00Peter Sankauskastag:answersforaws.com,2014-03-04:blog/2014/03/monitoring-in-the-cloud/<p><img alt="Monitoring" src="/statics/blog/buckets.jpg"></p>
<p>At yesterday's <a href="http://www.meetup.com/AdvancedAWS/events/165578742/">Advanced AWS meetup on monitoring</a>, someone asked if they need something like New Relic if they already have Stackdriver. My answer was "yes", but I wanted to dive deeper into why I think that.</p>
<p>I consider there to be 3 different buckets/types of monitoring:</p>
<ul>
<li>Infrastructure</li>
<li>Application</li>
<li>External</li>
</ul>
<p>Infrastructure monitoring is focused on the instances, load balancers, etc. This includes SaaS offerings like <a href="http://www.stackdriver.com/">Stackdriver</a> and <a href="https://www.datadoghq.com/">Datadog</a>. The metrics they are gathering mainly concern CPU, memory, network and disk. To get to the memory (and some disk metrics), OS level integration is needed, which usually means installing an agent of some sort on each instance. If the monitoring software is really good, it will aggregate the metrics that make sense up to the service level. An example of this might be: number of requests per second from all production web instances. Since Stackdriver integrates directly with AWS APIs, it looks at the Auto Scaling Groups and does this automatically.</p>
<p><a href="http://boundary.com/">Boundary</a>, which I also put into the infrastructure bucket, takes a network-centric approach, monitoring every packet between every instance. This allows them to do a lot of things automatically as well, like mapping dependencies between services and highlighting performance bottlenecks.</p>
<p>These infrastructure monitoring solutions couldn't care less if you are running a Java web app, NodeJS or Ruby on Rails. This is where Application monitoring shines.</p>
<p>Providers like <a href="http://newrelic.com/">New Relic</a> and <a href="http://www.appdynamics.com/">AppDynamics</a> will be able to tell you the response time of each method called in your MVC stack of choice during a request, what database queries are causing issues, stack trace analysis and more. These tools are commonly referred to as Application Performance Monitoring (APM). If you are trying to do root cause analysis, this is usually the tool you would turn to.</p>
<p><img alt="Ping times" src="/statics/blog/world-numbers.jpg"></p>
<p>Finally, you have external monitoring - that is, using your web site/service from various points around the globe. Here you have no shortage of providers: old guard options like <a href="http://www.keynote.com/">Keynote</a> and <a href="http://alertsite.com/">AlertSite (now Smartbear)</a>, as well as newer, fancier ones like <a href="https://www.pingdom.com/">Pingdom</a> and <a href="http://www.monitis.com/">Monitis</a>. A basic service might only give you ping times from a few locations around the world. A more advanced one might be able to log in to your website, execute JavaScript and make sure your AJAXy, Web 2.0 website is in full working order, much like feature specs would when developing Rails locally.</p>
<p>These are the three "what am I monitoring" buckets. The ingestion/input side of it. </p>
<p>For output, most of the providers have graphs, and customizable dashboard, and timelines, and annotations, and other eye candy to put on the big monitor in the office. </p>
<p>Notice that I haven't mentioned alerting yet. Getting the information in is one thing, but knowing what to do with it is another. At the very basic level, you set up a min and max bound for the metrics of interest, and integrate your monitoring SaaS of choice into <a href="http://www.pagerduty.com/">PagerDuty</a>, with <s>some sucker</s> an admin on call.</p>
<p>Oh, while I am at it, let me get all glossary on you. An alert is not a notification. People use these words interchangeably, when they mean two very different things. As that sucker who spent 6 years on call, I want to get the definitions right now:</p>
<p><tt><rant></tt></p>
<p>A <strong>notification</strong> is an event you want to be notified about. An <strong>alert</strong> means get your butt out of bed at 3am and go fight the fire. If the website can't be reached from Perth, Australia, but is working fine from New York, please notify me, but don't you dare send me an alert. Treating these two words as the same causes a lot of lost sleep, and leads to <a href="http://www.healthcareitnews.com/directory/alert-fatigue">alert fatigue</a>. This difference is something not all SaaS providers handle well, so when evaluating them, keep this in mind.</p>
<p><tt></rant></tt></p>
<p>What we are starting to see now are easy ways to add automation when alerts are triggered. For example, an instance is detected to be using all available memory. An alert is fired, and rather than a human dealing with it, the instance is rebooted or terminated. <a href="/blog/2013/07/a-new-paradigm/"><em>#TreatServersLikeCattle</em></a> <em>#LetThereBeSleep</em></p>
<p>The more advanced SaaS offerings are also getting into anomaly detection, which is really surprising and refreshing (in the sense that I didn't have to manually set a good band for the metric; the system learned what normal was). It's just the beginning for this, and certainly not perfect, but it is progress.</p>
<p>OK, so that is my long winded way of saying "you need to monitor, and monitor in different ways to address the different issues that come up".</p>
<p><img alt="Dodo" src="/statics/blog/dodo.jpg"></p>
<p>There is one more thing. I never said "manage your own monitoring solution". When it comes to the dynamic nature of the cloud, AWS in particular, <a href="http://www.slideshare.net/superdupersheep/stop-using-nagios-so-it-can-die-peacefully">Nagios, Cacti, Ganglia, Zabbix all need to go the way of the Dodo</a>. They are all terrible, and end up costing you far more in engineering effort than you would ever spend on any decent SaaS. Focus on your product and your users. <em>#LeanStartup</em></p>
<p>When evaluating your monitoring solutions, keep in mind that you will probably need more than one. Some started out in one bucket and stayed there, others have branched out into multiple buckets. Choose wisely.</p>
<p>Disclaimer: Part of this is controversial. I have probably missed things. If so, let me know. Also, don't take the above companies as recommendations. Do your research. I'll say it again: Choose wisely. Of course being a consultant, if I can help, please <a href="/contact/">contact me</a> :)</p>March 2014 events2014-02-18T16:10:00-08:00Peter Sankauskastag:answersforaws.com,2014-02-18:blog/2014/02/march-2014-events/<p><img alt="Big Event" src="/statics/blog/big-event.jpg"></p>
<p>There are a few events worth checking out in March, so I thought I should give you the heads up to get them on your calendar.</p>
<p>The March 3rd event for the <a href="http://www.meetup.com/AdvancedAWS/events/165578742/">Advanced AWS Meetup</a> is all about monitoring, and is hosted right in downtown San Mateo by the folks at Coupa (about 2 blocks from the Caltrain station). The event is sponsored by <a href="http://www.stackdriver.com/">Stackdriver</a> and will have 3 speakers talking about how they use Stackdriver, PagerDuty and Datadog.</p>
<p>On March 12, <a href="http://www.meetup.com/Netflix-Open-Source-Platform/events/161967242/">Netflix hosts their next NetflixOSS meetup</a> where there will be lightning talks followed by dinner and a mini-expo hall (similar to previous events there).</p>
<p>The big event though is the AWS Summit in San Francisco. More details including free registration can be found here:</p>
<p><a href="https://aws.amazon.com/aws-summit-2014/san-francisco/">https://aws.amazon.com/aws-summit-2014/san-francisco/</a></p>
<p>This is just a 1 day event on March 26 at the Moscone South building, and it looks like Andy Jassy will be giving the keynote (I would have preferred Werner, but hey, it's free).</p>
<p>I'll be at all 3 events, so if you want to chat, come and find me. Hope to see you there.</p>Advanced AWS Meetup - Anki2014-01-07T09:10:00-08:00Peter Sankauskastag:answersforaws.com,2014-01-07:blog/2014/01/advanced-aws-meetup-anki/<p><img alt="Advanced AWS Meetup" src="/statics/blog/adv-aws-meetup.png"></p>
<p>The January 2014 <a href="http://www.meetup.com/AdvancedAWS/">Advanced AWS Meetup</a> in San Francisco is only 2 weeks away, and there are only 23 spaces left before we need to wait-list people.</p>
<p>This meetup is a little bit special, because <a href="https://twitter.com/iAmTheWhaley">Ben Whaley</a>, the AWS Infrastructure Lead at <a href="http://anki.com/">Anki</a>, will be taking you through how they have architected their applications to handle the rapid growth they are experiencing. It's not every day you get to learn how AWS is used in combination with robotics and artificial intelligence.</p>
<p>Anki is also hosting the event at their office, so if you haven't experienced <a href="http://secure.anki.com/model/starter-kit">Anki Drive</a> yet, come and check it out.</p>
<p><a href="http://www.meetup.com/AdvancedAWS/">http://www.meetup.com/AdvancedAWS/</a></p>
<p>I hope to see you there.</p>Episodes on YouTube2013-12-27T17:40:00-08:00Peter Sankauskastag:answersforaws.com,2013-12-27:blog/2013/12/episodes-on-youtube/<p><img alt="YouTube - Broadcast yourself" src="/statics/blog/youtube.jpg"></p>
<p>Originally, our screencast episodes were hosted on S3 and served up using a generic HTML5 video tag. As an experiment, all videos are now available on YouTube, with each page using an embed. You can subscribe here:</p>
<div class="g-ytsubscribe" data-channelid="UC8G5-RmD3jDapzqbcGiqUXQ" data-layout="default" data-count="default"></div>
<p>... and the playlist for all episodes is here:</p>
<p><a href="https://www.youtube.com/playlist?list=PL6M727mLU02OZDvn2e1uoBmLKEHqLaCfv">https://www.youtube.com/playlist?list=PL6M727mLU02OZDvn2e1uoBmLKEHqLaCfv</a></p>
<p>YouTube does a lot of extra marketing through its suggested videos that may promote viewing, and subsequently, learning. This is the main motivation for this change, and has nothing to do with cost or performance, both of which S3 handled beautifully.</p>
<p>My concern though is that YouTube is quite aggressive with downgrading the quality of video. All episodes are recorded in 720p (1280x720 pixels). When viewing an episode on a tablet or phone, the HTML5 player does not give you the nice quality control button found on their website.</p>
<p>This is an experiment, so if you are seeing video quality issues, please let us know. Switching back is easy.</p>
<p>Happy New Year and all the best for 2014.</p>Half Baked2013-11-27T09:10:00-08:00Peter Sankauskastag:answersforaws.com,2013-11-27:blog/2013/11/half-baked/<p><img alt="Half Baked AMI" src="/statics/blog/turkey-oven-m.jpg"></p>
<p>With Thanksgiving and the holiday season fast approaching, things are heating up in the kitchen. The best way to cook a turkey is to soak it in brine for a day before stuffing and putting it in the oven. No no, the best way to cook a turkey is deep frying it.</p>
<p>The fact is, there is no "best" way. It is a debate with no right answer. </p>
<p>The same goes with baking AMIs:</p>
<ul>
<li>Do you bake the software, configuration and your code into the AMI (à la Netflix)?</li>
<li>Do you bake only the software and configuration, and download the code on boot?</li>
<li>Do you use a clean OS AMI, and do everything on boot (à la Chef Server/Puppet)?</li>
</ul>
<p>It's a scale - from fully baked through half baked to unbaked/raw.</p>
<div class="text-center">
<img src="/statics/blog/baking-scale.png" alt="AMI baking scale" style="border: none; float: none; display: initial;">
</div>
<p>I have watched engineers debate the pros and cons of how much to bake into an AMI and it ends up being a religious (and sometimes heated) discussion. It is certainly a discussion that needs to be had, and a call needs to be made, but there is no one right answer.</p>
<p>To aid in the debate, here are some talking points:</p>
<p>Fully-baked Pros</p>
<ul>
<li>Instance boot up time is as small as it can be - there is no further work to do during the boot sequence</li>
<li>All instances using the same AMI are exactly the same</li>
<li>The AMI built for staging can be reused in production - no need to rebuild</li>
<li>Nothing can go wrong during boot that didn't go wrong before</li>
</ul>
<p>Fully-baked Cons</p>
<ul>
<li>When baking AMIs as part of the build process (CI), there are a lot of AMIs created. You will need to clean them up (perhaps using Janitor Monkey)</li>
<li>No further customizations are done during boot - you are stuck with the same version</li>
</ul>
<p>Unbaked Pros:</p>
<ul>
<li>Can reuse existing Chef, Puppet, etc code (particularly good when migrating to AWS)</li>
<li>No need to manage the lifecycle of AMIs</li>
</ul>
<p>Unbaked Cons:</p>
<ul>
<li>Recipes may need to be downloaded from some central and highly available configuration master</li>
<li>The recipes/playbooks can fail during execution, causing some/all instances to be unusable</li>
<li>Boot up time can be significant. If your application sees large and sudden spikes of traffic, your service will be degraded or unresponsive until the new instances have been configured and can handle traffic</li>
<li>Can cause a thundering herd to bring down something like a Puppet Master depending on how many instances are booting at the same time, all asking for the same information</li>
</ul>
<p>The majority of people using AWS seem to go for some kind of half-baked situation, picking and choosing which tradeoffs make the most sense for how they like to work. <strong>Episode 5</strong> shows you how to use <a href="/episodes/5-baking-amis-with-aminator/">Aminator with Ansible to bake AMIs</a>, but makes no assumption as to how much you want to bake in. <strong>Episode 4</strong> shows you how to <a href="/episodes/4-user-data-cloud-init-cloudformation/">perform customizations on boot using user-data and cloud-init</a>. Both are perfectly valid, and can even be used together.</p>
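<p>For reference, the boot-time end of that scale typically looks something like the following user-data passed to cloud-init. This is a hypothetical sketch with made-up paths and service names - here the software (nginx) is already baked in, and only environment-specific configuration happens at boot:</p>
<div class="codehilite"><pre>#cloud-config
# Half-baked example: nginx is baked into the AMI already;
# only per-environment details are applied at boot time
runcmd:
 - [ sh, -c, 'echo "production" &gt; /etc/myapp/environment' ]
 - service nginx restart
</pre></div>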
<p>If I have missed any points, please let me know in the comments, and I'll be happy to edit this post. </p>
<p>Have a Happy Thanksgiving, and enjoy your turkey, no matter how it is prepared.</p>Baking AMIs with Aminator2013-11-25T23:31:00-08:00Peter Sankauskastag:answersforaws.com,2013-11-25:episodes/5-baking-amis-with-aminator/<h2 id="show-notes">Show Notes</h2>
<p>Aminator is a command line tool written in Python by Netflix to make building AMIs easy. Out of the box, Aminator supports Debian- and Red Hat-based OSes, building AMIs using APT or YUM.</p>
<p>To install Aminator on a clean Ubuntu instance:</p>
<div class="codehilite"><pre>sudo apt-get install git python-pip
git clone https://github.com/Netflix/aminator.git
<span class="nb">cd </span>aminator/
sudo python setup.py install
</pre></div>
<p>This will get you the basics, but by using the CloudFormation template and pre-baked AMI, this is not necessary. The easiest way to get Aminator up and running is by creating a new CloudFormation stack using this template:</p>
<p><a href="https://github.com/Answers4AWS/netflixoss-ansible/blob/master/cloudformation/aminator.json">https://github.com/Answers4AWS/netflixoss-ansible/blob/master/cloudformation/aminator.json</a></p>
<p>The template brings up Aminator inside an AutoScaling Group. This means I could not include a simple entry on the Outputs tab of the CloudFormation stack. So you will need to go to the EC2 page, find the Aminator instance (tag <code>Name=Aminator</code>) and SSH into that:</p>
<div class="codehilite"><pre><span class="nv">$ </span>ansible-ec2 ssh --name Aminator -u ubuntu
</pre></div>
<p>To build an AMI, first you will need access to the snapshot of a base AMI. Not many AMI creators (Ubuntu, Amazon, etc) give the public access to the snapshots their official AMIs point to, so to get you up and running quickly, here is a list of Ubuntu 12.04 LTS foundation AMIs you can use:</p>
<p><a href="https://github.com/Answers4AWS/netflixoss-ansible/wiki/Foundation-AMIs-for-Aminator">https://github.com/Answers4AWS/netflixoss-ansible/wiki/Foundation-AMIs-for-Aminator</a></p>
<p>As a first example, let's build an AMI with Apache installed. We will do this in <code>us-west-1</code>:</p>
<div class="codehilite"><pre><span class="nv">$ </span>sudo aminate -e ec2_apt_linux -B ami-86c0f6c3 apache2
</pre></div>
<p>This will build an AMI with Apache installed, with all of the default configuration. You can then launch an instance from the new AMI, go to the public DNS name of that instance and see the lovable default Apache page saying "It works!"</p>
<p>Using APT and YUM is great, but this means whatever you want to install and configure on the OS needs to be wrapped in a package and potentially in a package repository somewhere. By using Ansible, you do not need to build and upload packages into a repository, and can use an Ansible playbook instead. The CloudFormation template already has the Ansible Provisioner for Aminator installed, but if you are not using that, you can install it by running:</p>
<div class="codehilite"><pre><span class="nv">$ </span>sudo aminator-plugin install ansible
</pre></div>
<p>Aminator also has plugins for <a href="https://github.com/aminator-plugins/chef-solo-provisioner">Chef Solo</a> and <a href="https://github.com/aminator-plugins/eucalyptus-cloud">Eucalyptus</a>.</p>
<p>You tell Aminator what combination of plugins you want to use by writing an environment in <code>/etc/aminator/environments.yml</code>. For Ansible, you might use something like this:</p>
<div class="codehilite"><pre><span class="l-Scalar-Plain">ec2_ansible_linux</span><span class="p-Indicator">:</span>
<span class="l-Scalar-Plain">cloud</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ec2</span>
<span class="l-Scalar-Plain">distro</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">debian</span>
<span class="l-Scalar-Plain">provisioner</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ansible</span>
<span class="l-Scalar-Plain">volume</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">linux</span>
<span class="l-Scalar-Plain">blockdevice</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">linux</span>
<span class="l-Scalar-Plain">finalizer</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">tagging_ebs</span>
</pre></div>
<p>The Ansible Provisioner also has a configuration located at <code>/etc/aminator/plugins/aminator.plugins.provisioner.ansible.yml</code> with this content:</p>
<div class="codehilite"><pre><span class="l-Scalar-Plain">enabled</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">true</span>
<span class="c1"># Location and content of local inventory file</span>
<span class="l-Scalar-Plain">inventory_file_path</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">/etc/ansible</span>
<span class="l-Scalar-Plain">inventory_file</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">local</span>
<span class="l-Scalar-Plain">inventory_file_content</span><span class="p-Indicator">:</span> <span class="p-Indicator">|</span>
<span class="no">127.0.0.1</span>
<span class="c1"># This is the path to all Ansible playbooks on the Aminator server</span>
<span class="c1"># (outside the chroot environment). These will be copied to 'playbooks_path_dest'</span>
<span class="l-Scalar-Plain">playbooks_path_source</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">/usr/local/netflixoss-ansible/playbooks</span>
<span class="c1"># The location to store playbooks on the AMI</span>
<span class="l-Scalar-Plain">playbooks_path_dest</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">/var/lib/ansible/playbooks</span>
<span class="c1"># Set to False to delete all files in 'playbooks_path_dest' before snapshotting</span>
<span class="c1"># the volume</span>
<span class="l-Scalar-Plain">keep_playbooks</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">True</span>
</pre></div>
<p>To use your own Ansible playbooks, copy them to the Aminator instance, and change the value of <code>playbooks_path_source</code>. For the NetflixOSS Ansible Playbooks, creating an Asgard AMI would be done by running:</p>
<div class="codehilite"><pre><span class="nv">$ </span>sudo aminate -e ec2_ansible_linux -B ami-86c0f6c3 asgard-ubuntu.yml --debug
</pre></div>
<p>By adding <code>--debug</code>, you will see the output of the <code>ansible-playbook</code> command.</p>
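<p>If you are writing a playbook from scratch for this, a minimal one might look like the following - an illustrative sketch (the package is a placeholder) that targets the local inventory the provisioner configuration sets up:</p>
<div class="codehilite"><pre>---
# Minimal example playbook for the Aminator Ansible Provisioner
- hosts: 127.0.0.1
  connection: local
  sudo: yes
  tasks:
    - name: Install Apache
      apt: pkg=apache2 state=present
</pre></div>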
<p>The Ansible Provisioner also sets one extra variable: <code>ami=True</code>. You can use this variable for conditional execution in your playbooks. For example, if your playbook installed any services or daemons, you will need to stop them so that the EBS volume can be unmounted successfully. You can add the following to an Ansible role in your <code>vars/main.yml</code> file:</p>
<div class="codehilite"><pre><span class="nn">---</span>
<span class="l-Scalar-Plain">ami_build</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ami is defined and ami</span>
<span class="l-Scalar-Plain">not_ami_build</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ami is not defined or not ami</span>
</pre></div>
<p>And then use the variables to stop services when building an AMI, or start them when running the same playbook on a running EC2 instance:</p>
<div class="codehilite"><pre><span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Starting SSH service</span>
<span class="l-Scalar-Plain">service</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">name={{ ssh_service_name }} state=started</span>
<span class="l-Scalar-Plain">when</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">not_ami_build</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Stopping SSH service</span>
<span class="l-Scalar-Plain">service</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">name={{ ssh_service_name }} state=stopped</span>
<span class="l-Scalar-Plain">when</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ami_build</span>
</pre></div>
<p>When using Aminator, it is also useful to know about <a href="https://github.com/Netflix/aminator/wiki/Foundation-AMI">Foundation AMIs</a> and Base AMIs. A Foundation AMI is a copy of an OS AMI where your account has access to the EBS snapshots the AMI points to. For example, the <a href="http://cloud-images.ubuntu.com/locator/ec2/">Ubuntu Cloud AMIs</a> are public, but the snapshots they point to are not (<a href="https://groups.google.com/forum/#!topic/ec2ubuntu/E5HkfHmAmDE">Ubuntu discussion</a>), which prevents them from being used as Base AMIs by Aminator.</p>
<p>A Base AMI can be either a Foundation AMI, or a customized AMI you have built. The Base AMI Netflix uses already has Oracle Java, Tomcat, and a few other services installed and configured. This is an optimization that reduces the amount of time needed to build AMIs.</p>
<h2 id="resources">Resources</h2>
<ul>
<li><a href="https://github.com/Netflix/aminator/">Aminator</a></li>
<li><a href="https://github.com/ansible/ansible">Ansible</a></li>
<li><a href="https://github.com/aminator-plugins/ansible-provisioner">Ansible Provisioner for Aminator</a></li>
<li><a href="/code/netflixoss/">NetflixOSS Ansible Playbooks</a></li>
<li><a href="https://github.com/pas256/ansible-ec2">ansible-ec2</a></li>
</ul>We won a NetflixOSS Cloud Prize2013-11-14T23:10:00-08:00Peter Sankauskastag:answersforaws.com,2013-11-14:blog/2013/11/we-won-a-netflixoss-cloud-prize/<p><a href="/statics/blog/netflixoss-cloud-prize-winners-big.jpg"><img alt="NetflixOSS Cloud Prize winners at AWS re:invent 2013" src="/statics/blog/netflixoss-cloud-prize-winners-m.jpg"></a></p>
<p>This year's AWS re:invent conference was an amazing experience. My first <a href="/blog/2013/11/reinvent-2013-gameday/">Game Day</a>, some excellent and long anticipated <a href="http://aws.amazon.com/about-aws/whats-new/">announcements by AWS</a>, and the Answers for AWS logo (along with my pretty face) on the keynote big screen in front of 9000 attendees, and who knows how many on the live stream. Yes, that is me standing next to Werner Vogels, CTO of Amazon.</p>
<p>In case you missed it, here is the keynote on Youtube:</p>
<iframe width="640" height="360" src="//www.youtube.com/embed/Waq8Y6s1Cjs?rel=0" frameborder="0" allowfullscreen class="text-center"></iframe>
<p>Scroll to about 10:30 to see the NetflixOSS Cloud Prize announcements.</p>
<p>Our submission won <strong>Best Usability Enhancement</strong>. The motivation for the submission was easy - there is a lot of value in the NetflixOSS projects, but getting started with them is non-trivial. There are OS packages to install and configure, Java and Tomcat configurations to mess with, and then the project's <code>.properties</code> files to configure. Not to mention some of them require access to the AWS API.</p>
<p>The first step was building <a href="/code/netflixoss/">Ansible Playbooks</a> to get each project up and running on a single EC2 instance easily. Then we used those playbooks along with Aminator and the <a href="https://github.com/aminator-plugins/ansible-provisioner">Ansible Provisioner for Aminator</a> to bake AMIs and distribute them to all regions using <a href="/code/distami/">DistAMI</a>. Finally, we wrote <a href="/resources/netflixoss/cloudformation/">CloudFormation templates</a> to bring up the AMI inside an AutoScaling Group, with the correct launch config, IAM role and security group.</p>
<p>The goal was not to build highly-available and scalable deployments, but to make it easy to try out some of the NetflixOSS components and see if you like them. More robust deployments will be coming in the future, but require dynamic CloudFormation templates tailored to your AWS account.</p>
<p>The other two submissions I made were for Best New Monkey - <a href="/code/backup-monkey/">Backup Monkey</a> which keeps EBS volume snapshots on rotation, and <a href="/code/graffiti-monkey/">Graffiti Monkey</a> which goes around tagging AWS resources (particularly useful with Cost Allocation Tagging). But then came <a href="https://github.com/justinsb">Justin Santa Barbara</a> with his Barrel of 12 Chaos Monkeys, which takes evil to a whole new level, and completely deserved the award for <strong>Best New Monkey</strong>.</p>
<p><a href="https://twitter.com/aspyker">Andrew Spyker</a> from IBM built the <strong>Best Example Application</strong> with ACME Air, using a lot of the NetflixOSS components including Eureka, Hystrix and Karyon. The <strong>Best Portability Enhancement</strong> was won by <a href="https://twitter.com/grze">Chris Grzegorczyk</a> and <a href="https://twitter.com/vicnastea">Vic Iglesias</a> for enabling NetflixOSS based applications to run on <a href="http://www.eucalyptus.com/">Eucalyptus</a> private cloud, and proving AWS API compatibility unmatched by other private cloud competitors.</p>
<p>Congratulations to all of the <a href="http://techblog.netflix.com/2013/11/netflix-open-source-software-cloud.html">NetflixOSS Cloud Prize winners</a>, and thank you to <a href="https://twitter.com/adrianco">Adrian Cockcroft</a> and team for organizing it. </p>re:invent 2013 Gameday2013-11-12T23:10:00-08:00Peter Sankauskastag:answersforaws.com,2013-11-12:blog/2013/11/reinvent-2013-gameday/<p><img alt="AWS re:invent 2013 - Game Day" src="/statics/blog/game-day.png"></p>
<p>Day 1 of AWS re:invent 2013 included a fun idea called Game Day. Teams of three are given a small application to build using various AWS services; they then give their opponents access to their account, where the opponents are free to do whatever unthinkable evil is possible. Then it is a race to see who can get their application back up and running.</p>
<p>There were prizes for First to Recover, Best Montage, and Most Evil. Our team, "Team AWeSome Waffles", comprised Michael Conlon and Matt Wilson from SocialWare, and myself.</p>
<p>The process was not meant to be challenging, and it wasn't. In a nutshell:</p>
<ol>
<li>Build an AMI based on Amazon Linux, with a few extra bits in there</li>
<li>Create an IAM Role with full access to S3 and SQS</li>
<li>Create an input and output SQS queue, and an S3 bucket to store the montage images in</li>
<li>Use an AutoScaling group with the new AMI, and pass along the above S3 and SQS details via user-data.</li>
<li>Send messages to the input queue, and verify that montage images come out of the other end</li>
</ol>
<p>I asked beforehand if we had to follow the instructions exactly (as they required using the AWS console and CLI tools), or if we could use CloudFormation. CloudFormation was acceptable, and since I had no idea what kind of evil would befall our account, our plan was delete, delete, recreate. That should be pretty fast, and using CloudFormation was a great way to automate that.</p>
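<p>That delete-and-recreate plan is easy to script. Here is a minimal sketch in Python; the connection object is passed in rather than created (with boto it would come from something like <code>boto.cloudformation.connect_to_region()</code>), and the stack name and template body are placeholders, not our actual Game Day stack:</p>

```python
def recreate_stack(cfn, stack_name, template_body):
    """Tear down a CloudFormation stack and bring it back from a known-good template.

    `cfn` is anything that looks like a CloudFormation connection, i.e. it
    has delete_stack() and create_stack() methods.
    """
    # Delete the stack in whatever state our opponents left it
    cfn.delete_stack(stack_name)
    # A real run would poll the stack status until the delete finishes
    # before recreating; that wait is omitted from this sketch
    return cfn.create_stack(stack_name, template_body=template_body)
```

<p>Because the connection is a parameter, the whole recovery path can be exercised without touching AWS at all.</p>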
<p>The templates and other code are available on GitHub here:</p>
<p><a href="http://github.com/Answers4AWS/aws-gameday-2013">http://github.com/Answers4AWS/aws-gameday-2013</a></p>
<p>When we got access to our opponents' account, the evil began. Here is most (but not all, because we couldn't remember everything) of what we did:</p>
<ul>
<li>S3 policy to deny putting objects, and deleting buckets</li>
<li>SQS policy to deny all access</li>
<li>Recreate AMI with changes to script - version 1</li>
<li>Recreate AMI with Python and Ruby deleted - version 2</li>
<li>Regenerate keypairs but with same name</li>
<li>Remove security group rules, but not the groups</li>
<li>Use Asgard to change min/max of ASG to 0</li>
<li>Use Asgard to prevent instances from launching in ASG</li>
<li>Delete all instances</li>
<li>S3 lifecycle to expire everything from yesterday</li>
</ul>
<p>When it came time to repair our account, it was eerie. Looking around, it didn't look like much had changed at all. The AMI ID was the same, there were no policies on SQS or S3, and even the instance was still running. We tried sending messages to the input queue, and they were processed, but nothing came out the other end. OK, back to the plan: delete, delete, recreate.</p>
<p>That process took just minutes with CloudFormation, and when it was up, messages were processed as expected. Note that our recovery plan would not have coped so easily with the evil we inflicted on our opponents, so it is by no means foolproof. That makes the competition quite subjective, but no less fun.</p>
<p>Even with that, we did not win First to Recover or Most Evil. Team HuddleUp took out Most Evil. One of their evil hacks was to change the kernel the AMI used, causing issues when booting. Nice!</p>
<p>Regardless, this was a very fun exercise, and highly recommended for anyone attending next year. Thanks to Miles Ward and team for organizing it.</p>Updates to open source code2013-11-11T12:10:00-08:00Peter Sankauskastag:answersforaws.com,2013-11-11:blog/2013/11/updates-to-open-source-code/<p><img alt="New and Improved" src="/statics/blog/new-and-improved.jpg"></p>
<p>In the lead up to AWS re:invent 2013, our open source projects have received a few updates, so I thought it best to summarize them.</p>
<h3 id="netflixoss-anisble-playbooks"><a href="/code/netflixoss/">NetflixOSS Ansible Playbooks</a></h3>
<ul>
<li>There are now CloudFormation templates for more projects to make it as easy as possible to get started.</li>
<li>Aminator has been updated to version 2.0.174, which means it supports Aminator Plugins. The Ansible provisioner now actually works.</li>
<li>Asgard is now at version 1.3.1, so the playbook has been updated, and there are new AMIs and an updated CloudFormation template.</li>
<li>The Genie playbook has been released. This is designed to run on the master node of an EMR cluster.</li>
<li>Eureka has been updated to 1.1.121, so again there are new AMIs and updated CloudFormation templates.</li>
<li>All playbooks have also been updated to work with Ansible 1.3.4. This means using only the Jinja2 style of variables; dollar-sign variables are no longer supported.</li>
<li>New Foundation AMIs for use with Aminator. These Foundation AMIs use ext4 instead of XFS, because XFS filesystems end up with duplicate UUIDs when building multiple AMIs at the same time.</li>
</ul>
<h3 id="distami"><a href="/code/distami/">DistAMI</a></h3>
<ul>
<li>Now has the ability to copy AMIs to other regions without making them public.</li>
<li>Can share an AMI with a set of AWS Account IDs.</li>
<li>Reduced set of dependencies (now just needs boto).</li>
</ul>
<h3 id="backup-monkey"><a href="/code/backup-monkey/">Backup Monkey</a></h3>
<ul>
<li>Removed dependencies on Nose for installation, since it is only used for development/testing.</li>
<li>Removed distribute_setup as it has been deprecated.</li>
</ul>
<h3 id="graffiti-monkey"><a href="/code/graffiti-monkey/">Graffiti Monkey</a></h3>
<ul>
<li>Same as Backup Monkey: Removed dependencies on nose and distribute.</li>
</ul>
<p>While Graffiti Monkey is already useful, particularly when used with Cost Allocation Tagging, there are still many more features to be added to it.</p>
<p>As always, please let us know any feedback you have. Thanks.</p>user-data, cloud-init and CloudFormation2013-10-25T15:01:00-07:00Peter Sankauskastag:answersforaws.com,2013-10-25:episodes/4-user-data-cloud-init-cloudformation/<h2 id="show-notes">Show Notes</h2>
<p>Instance meta-data can be accessed by hitting this URL from within an EC2 instance:</p>
<div class="codehilite"><pre>http://169.254.169.254/
</pre></div>
<p>On Ubuntu, you can use the <code>ec2metadata</code> script to list out one or all of the fields. On Amazon Linux, use the <code>ec2-metadata</code> script.</p>
<div class="codehilite"><pre><span class="nv">$ </span>ec2metadata
ami-id: ami-a73264ce
ami-launch-index: 0
ami-manifest-path: <span class="o">(</span>unknown<span class="o">)</span>
ancestor-ami-ids: unavailable
availability-zone: us-east-1a
block-device-mapping: ami
root
instance-action: none
instance-id: i-91d66fe8
instance-type: t1.micro
<span class="nb">local</span>-hostname: ip-10-73-174-101.ec2.internal
<span class="nb">local</span>-ipv4: 10.73.174.101
kernel-id: aki-88aa75e1
mac: unavailable
profile: default-paravirtual
product-codes: unavailable
public-hostname: ec2-54-205-181-234.compute-1.amazonaws.com
public-ipv4: 54.205.181.234
public-keys: <span class="o">[</span><span class="s1">'ssh-rsa AAAAB...'</span><span class="o">]</span>
ramdisk-id: unavailable
reservation-id: unavailable
security-groups: mysg
user-data: unavailable
</pre></div>
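<p>Each of the fields above is just an HTTP GET against the metadata service, so a helper is only a few lines of Python. This is a sketch, not the actual <code>ec2metadata</code> script; the fetch function is injectable so the parsing can be tried off-instance:</p>

```python
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2

METADATA_BASE = 'http://169.254.169.254/latest/meta-data/'

def get_metadata(field, fetch=urlopen):
    """Fetch a single instance metadata field, e.g. 'instance-id'.

    On an EC2 instance the default fetch hits the link-local metadata
    service; tests (or off-instance runs) can pass a fake fetch instead.
    """
    response = fetch(METADATA_BASE + field)
    data = response.read()
    # urllib returns bytes on Python 3, str on Python 2
    return data.decode('utf-8') if isinstance(data, bytes) else data
```

<p>On an instance, <code>get_metadata('instance-id')</code> returns the same value the <code>ec2metadata</code> script prints.</p>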
<p>Both Amazon Linux and Ubuntu come with a package called <code>cloud-init</code> that checks the meta-data for <code>user-data</code>, and performs various actions including running scripts and chef recipes on boot.</p>
<p>An example <code>user-data</code> script to update Ubuntu packages and install Apache is:</p>
<div class="codehilite"><pre><span class="c">#!/bin/bash</span>
apt-get update
apt-get upgrade -y
apt-get install apache2 -y
<span class="nb">echo</span> <span class="s2">"<html><body><h1>Welcome</h1>"</span> > /var/www/index.html
<span class="nb">echo</span> <span class="s2">"I was generated from user-data and cloud-init"</span> >> /var/www/index.html
<span class="nb">echo</span> <span class="s2">"</body></html>"</span> >> /var/www/index.html
</pre></div>
<p>You can pass CloudFormation parameters to instances via <code>user-data</code> by using either the <code>AWS::EC2::Instance</code> or <code>AWS::AutoScaling::LaunchConfiguration</code> resources in your template. Strings inside the template can use the <code>Fn::Join</code> intrinsic function. An example template is here:</p>
<div class="codehilite"><pre><span class="p">{</span>
<span class="nt">"Description"</span><span class="p">:</span> <span class="s2">"Episode 4 example of user-data and cloud-init"</span><span class="p">,</span>
<span class="nt">"Parameters"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"KeyPair"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Description"</span><span class="p">:</span> <span class="s2">"Name of the keypair to use for SSH access"</span><span class="p">,</span>
<span class="nt">"Type"</span><span class="p">:</span> <span class="s2">"String"</span>
<span class="p">},</span>
<span class="nt">"Environment"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Description"</span><span class="p">:</span> <span class="s2">"Name of this environment"</span><span class="p">,</span>
<span class="nt">"Type"</span><span class="p">:</span> <span class="s2">"String"</span><span class="p">,</span>
<span class="nt">"Default"</span><span class="p">:</span> <span class="s2">"Production"</span>
<span class="p">},</span>
<span class="nt">"Role"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Description"</span><span class="p">:</span> <span class="s2">"The role this server should be"</span><span class="p">,</span>
<span class="nt">"Type"</span><span class="p">:</span> <span class="s2">"String"</span><span class="p">,</span>
<span class="nt">"Default"</span><span class="p">:</span> <span class="s2">"Web"</span>
<span class="p">}</span>
<span class="p">},</span>
<span class="nt">"Resources"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"MyElasticLoadBalancer"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Properties"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"AvailabilityZones"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Fn::GetAZs"</span><span class="p">:</span> <span class="s2">""</span>
<span class="p">},</span>
<span class="nt">"HealthCheck"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"HealthyThreshold"</span><span class="p">:</span> <span class="mi">3</span><span class="p">,</span>
<span class="nt">"Interval"</span><span class="p">:</span> <span class="mi">30</span><span class="p">,</span>
<span class="nt">"Target"</span><span class="p">:</span> <span class="s2">"HTTP:80/"</span><span class="p">,</span>
<span class="nt">"Timeout"</span><span class="p">:</span> <span class="mi">5</span><span class="p">,</span>
<span class="nt">"UnhealthyThreshold"</span><span class="p">:</span> <span class="mi">5</span>
<span class="p">},</span>
<span class="nt">"Listeners"</span><span class="p">:</span> <span class="p">[</span>
<span class="p">{</span>
<span class="nt">"InstancePort"</span><span class="p">:</span> <span class="mi">80</span><span class="p">,</span>
<span class="nt">"LoadBalancerPort"</span><span class="p">:</span> <span class="mi">80</span><span class="p">,</span>
<span class="nt">"Protocol"</span><span class="p">:</span> <span class="s2">"HTTP"</span>
<span class="p">}</span>
<span class="p">]</span>
<span class="p">},</span>
<span class="nt">"Type"</span><span class="p">:</span> <span class="s2">"AWS::ElasticLoadBalancing::LoadBalancer"</span>
<span class="p">},</span>
<span class="nt">"MySecurityGroup"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Properties"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"GroupDescription"</span><span class="p">:</span> <span class="s2">"Allow access to MyInstance"</span><span class="p">,</span>
<span class="nt">"SecurityGroupIngress"</span><span class="p">:</span> <span class="p">[</span>
<span class="p">{</span>
<span class="nt">"CidrIp"</span><span class="p">:</span> <span class="s2">"0.0.0.0/0"</span><span class="p">,</span>
<span class="nt">"FromPort"</span><span class="p">:</span> <span class="mi">22</span><span class="p">,</span>
<span class="nt">"IpProtocol"</span><span class="p">:</span> <span class="s2">"tcp"</span><span class="p">,</span>
<span class="nt">"ToPort"</span><span class="p">:</span> <span class="mi">22</span>
<span class="p">},</span>
<span class="p">{</span>
<span class="nt">"FromPort"</span><span class="p">:</span> <span class="mi">80</span><span class="p">,</span>
<span class="nt">"IpProtocol"</span><span class="p">:</span> <span class="s2">"tcp"</span><span class="p">,</span>
<span class="nt">"SourceSecurityGroupName"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Fn::GetAtt"</span><span class="p">:</span> <span class="p">[</span>
<span class="s2">"MyElasticLoadBalancer"</span><span class="p">,</span>
<span class="s2">"SourceSecurityGroup.GroupName"</span>
<span class="p">]</span>
<span class="p">},</span>
<span class="nt">"SourceSecurityGroupOwnerId"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Fn::GetAtt"</span><span class="p">:</span> <span class="p">[</span>
<span class="s2">"MyElasticLoadBalancer"</span><span class="p">,</span>
<span class="s2">"SourceSecurityGroup.OwnerAlias"</span>
<span class="p">]</span>
<span class="p">},</span>
<span class="nt">"ToPort"</span><span class="p">:</span> <span class="mi">80</span>
<span class="p">}</span>
<span class="p">]</span>
<span class="p">},</span>
<span class="nt">"Type"</span><span class="p">:</span> <span class="s2">"AWS::EC2::SecurityGroup"</span>
<span class="p">},</span>
<span class="nt">"MyASG"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Properties"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"AvailabilityZones"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Fn::GetAZs"</span><span class="p">:</span> <span class="s2">""</span>
<span class="p">},</span>
<span class="nt">"Cooldown"</span><span class="p">:</span> <span class="mi">120</span><span class="p">,</span>
<span class="nt">"LaunchConfigurationName"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Ref"</span><span class="p">:</span> <span class="s2">"MyLaunchConfig"</span>
<span class="p">},</span>
<span class="nt">"LoadBalancerNames"</span><span class="p">:</span> <span class="p">[</span>
<span class="p">{</span>
<span class="nt">"Ref"</span><span class="p">:</span> <span class="s2">"MyElasticLoadBalancer"</span>
<span class="p">}</span>
<span class="p">],</span>
<span class="nt">"MaxSize"</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
<span class="nt">"MinSize"</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
<span class="nt">"Tags"</span><span class="p">:</span> <span class="p">[</span>
<span class="p">{</span>
<span class="nt">"Key"</span><span class="p">:</span> <span class="s2">"Name"</span><span class="p">,</span>
<span class="nt">"PropagateAtLaunch"</span><span class="p">:</span> <span class="s2">"true"</span><span class="p">,</span>
<span class="nt">"Value"</span><span class="p">:</span> <span class="s2">"Episode 4"</span>
<span class="p">}</span>
<span class="p">]</span>
<span class="p">},</span>
<span class="nt">"Type"</span><span class="p">:</span> <span class="s2">"AWS::AutoScaling::AutoScalingGroup"</span>
<span class="p">},</span>
<span class="nt">"MyLaunchConfig"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Properties"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"ImageId"</span><span class="p">:</span> <span class="s2">"ami-ef277b86"</span><span class="p">,</span>
<span class="nt">"InstanceType"</span><span class="p">:</span> <span class="s2">"t1.micro"</span><span class="p">,</span>
<span class="nt">"KeyName"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Ref"</span><span class="p">:</span> <span class="s2">"KeyPair"</span>
<span class="p">},</span>
<span class="nt">"SecurityGroups"</span><span class="p">:</span> <span class="p">[</span>
<span class="p">{</span>
<span class="nt">"Ref"</span><span class="p">:</span> <span class="s2">"MySecurityGroup"</span>
<span class="p">}</span>
<span class="p">],</span>
<span class="nt">"UserData"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Fn::Base64"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Fn::Join"</span><span class="p">:</span> <span class="p">[</span>
<span class="s2">"\n"</span><span class="p">,</span>
<span class="p">[</span>
<span class="s2">"#!/bin/bash"</span><span class="p">,</span>
<span class="s2">"apt-get update"</span><span class="p">,</span>
<span class="s2">"apt-get upgrade -y"</span><span class="p">,</span>
<span class="s2">"apt-get install apache2 -y"</span><span class="p">,</span>
<span class="s2">"echo \"<html><body><h1>Welcome</h1>\" > /var/www/index.html"</span><span class="p">,</span>
<span class="p">{</span>
<span class="nt">"Fn::Join"</span><span class="p">:</span> <span class="p">[</span>
<span class="s2">""</span><span class="p">,</span>
<span class="p">[</span>
<span class="s2">"echo \"<h2>Environment: "</span><span class="p">,</span>
<span class="p">{</span>
<span class="nt">"Ref"</span><span class="p">:</span> <span class="s2">"Environment"</span>
<span class="p">},</span>
<span class="s2">"</h2>\" >> /var/www/index.html"</span>
<span class="p">]</span>
<span class="p">]</span>
<span class="p">},</span>
<span class="p">{</span>
<span class="nt">"Fn::Join"</span><span class="p">:</span> <span class="p">[</span>
<span class="s2">""</span><span class="p">,</span>
<span class="p">[</span>
<span class="s2">"echo \"<h2>Role: "</span><span class="p">,</span>
<span class="p">{</span>
<span class="nt">"Ref"</span><span class="p">:</span> <span class="s2">"Role"</span>
<span class="p">},</span>
<span class="s2">"</h2>\" >> /var/www/index.html"</span>
<span class="p">]</span>
<span class="p">]</span>
<span class="p">},</span>
<span class="s2">"echo \"</body></html>\" >> /var/www/index.html"</span>
<span class="p">]</span>
<span class="p">]</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">},</span>
<span class="nt">"Type"</span><span class="p">:</span> <span class="s2">"AWS::AutoScaling::LaunchConfiguration"</span>
<span class="p">}</span>
<span class="p">},</span>
<span class="nt">"Outputs"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"URL"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Description"</span><span class="p">:</span> <span class="s2">"URL of the sample website"</span><span class="p">,</span>
<span class="nt">"Value"</span><span class="p">:</span> <span class="p">{</span>
<span class="nt">"Fn::Join"</span><span class="p">:</span> <span class="p">[</span>
<span class="s2">""</span><span class="p">,</span>
<span class="p">[</span>
<span class="s2">"http://"</span><span class="p">,</span>
<span class="p">{</span>
<span class="nt">"Fn::GetAtt"</span><span class="p">:</span> <span class="p">[</span>
<span class="s2">"MyElasticLoadBalancer"</span><span class="p">,</span>
<span class="s2">"DNSName"</span>
<span class="p">]</span>
<span class="p">}</span>
<span class="p">]</span>
<span class="p">]</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
</pre></div>
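<p>To see what CloudFormation does with that <code>UserData</code> block, here is a toy resolver for just <code>Ref</code>, <code>Fn::Join</code> and <code>Fn::Base64</code>. It is a simplification of the real intrinsic-function evaluation, run against a fragment shaped like the template above with made-up parameter values:</p>

```python
import base64

def resolve(node, params):
    """Recursively evaluate Ref, Fn::Join and Fn::Base64 in a template fragment."""
    if isinstance(node, dict):
        if 'Ref' in node:
            # Parameters are looked up from a plain dict in this sketch
            return params[node['Ref']]
        if 'Fn::Join' in node:
            separator, parts = node['Fn::Join']
            return separator.join(resolve(part, params) for part in parts)
        if 'Fn::Base64' in node:
            encoded = base64.b64encode(resolve(node['Fn::Base64'], params).encode('utf-8'))
            return encoded.decode('ascii')
    return node

# Same shape as the UserData above, trimmed down
user_data = {'Fn::Join': ['\n', [
    '#!/bin/bash',
    {'Fn::Join': ['', ['echo "Environment: ', {'Ref': 'Environment'},
                       '" >> /var/www/index.html']]},
]]}
script = resolve(user_data, {'Environment': 'Production'})
```

<p>The nested <code>Fn::Join</code> with an empty separator splices the parameter into the middle of a line, while the outer join with <code>"\n"</code> assembles the lines into the final script.</p>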
<p>The same template and <code>user-data</code> script is easier to write using <a href="/blog/2013/10/cloudformation-templates-with-troposphere/">troposphere</a>.</p>
<h2 id="resources">Resources</h2>
<ul>
<li><a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html">Instance meta-data and user-data documentation</a></li>
<li><a href="http://aws.amazon.com/code/1825">ec2-metadata script</a></li>
<li><a href="http://cloudinit.readthedocs.org/en/latest/topics/examples.html">Cloud-Init</a></li>
<li><a href="https://help.ubuntu.com/community/CloudInit">Cloud-Init Ubuntu documentation</a></li>
<li><a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonLinuxAMIBasics.html#CloudInit">Cloud-Init on Amazon Linux</a></li>
<li><a href="http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html#cfn-ec2-instance-userdata">CloudFormation EC2 meta-data documentation</a></li>
<li><a href="http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html">CloudFormation parameters</a></li>
<li><a href="http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-join.html">CloudFormation join function</a></li>
</ul>CloudFormation templates with troposphere2013-10-22T10:51:00-07:00Peter Sankauskastag:answersforaws.com,2013-10-22:blog/2013/10/cloudformation-templates-with-troposphere/<p><img alt="troposphere" src="/statics/blog/troposphere.jpg"></p>
<p>CloudFormation templates are great for automating the creation and destruction of AWS resources, but hand-coding JSON is tedious and error-prone. A project called troposphere has been gaining traction by approaching CF templates a little differently.</p>
<p>Instead of writing JSON, you create objects with the troposphere library in Python. Each object represents one AWS resource, such as an instance, an EIP or a security group. The library can even catch errors early thanks to its built-in property and type checking.</p>
<p>Since troposphere is a Python library, you install it by doing:</p>
<div class="codehilite"><pre>sudo pip install troposphere --upgrade
</pre></div>
<p>Here is a slightly more than trivial example:</p>
<div class="codehilite"><pre><span class="c">#!/usr/bin/python</span>
<span class="c"># Import troposphere</span>
<span class="kn">from</span> <span class="nn">troposphere</span> <span class="kn">import</span> <span class="n">Template</span><span class="p">,</span> <span class="n">Ref</span><span class="p">,</span> <span class="n">Output</span><span class="p">,</span> <span class="n">Join</span><span class="p">,</span> <span class="n">GetAtt</span><span class="p">,</span> <span class="n">Parameter</span>
<span class="kn">import</span> <span class="nn">troposphere.ec2</span> <span class="kn">as</span> <span class="nn">ec2</span>
<span class="c"># Create a template for resources to live in</span>
<span class="n">template</span> <span class="o">=</span> <span class="n">Template</span><span class="p">()</span>
<span class="n">keypair</span> <span class="o">=</span> <span class="n">template</span><span class="o">.</span><span class="n">add_parameter</span><span class="p">(</span><span class="n">Parameter</span><span class="p">(</span>
<span class="s">"KeyPair"</span><span class="p">,</span>
<span class="n">Type</span><span class="o">=</span><span class="s">"String"</span><span class="p">,</span>
<span class="n">Description</span><span class="o">=</span><span class="s">"The name of the keypair to use for SSH access"</span><span class="p">,</span>
<span class="p">))</span>
<span class="c"># Create a security group</span>
<span class="n">sg</span> <span class="o">=</span> <span class="n">ec2</span><span class="o">.</span><span class="n">SecurityGroup</span><span class="p">(</span><span class="s">'MySecurityGroup'</span><span class="p">)</span>
<span class="n">sg</span><span class="o">.</span><span class="n">GroupDescription</span> <span class="o">=</span> <span class="s">"Allow access to MyInstance"</span>
<span class="n">sg</span><span class="o">.</span><span class="n">SecurityGroupIngress</span> <span class="o">=</span> <span class="p">[</span>
<span class="n">ec2</span><span class="o">.</span><span class="n">SecurityGroupRule</span><span class="p">(</span>
<span class="n">IpProtocol</span><span class="o">=</span><span class="s">"tcp"</span><span class="p">,</span>
<span class="n">FromPort</span><span class="o">=</span><span class="s">"22"</span><span class="p">,</span>
<span class="n">ToPort</span><span class="o">=</span><span class="s">"22"</span><span class="p">,</span>
<span class="n">CidrIp</span><span class="o">=</span><span class="s">"0.0.0.0/0"</span><span class="p">,</span>
<span class="p">)]</span>
<span class="c"># Add security group to template</span>
<span class="n">template</span><span class="o">.</span><span class="n">add_resource</span><span class="p">(</span><span class="n">sg</span><span class="p">)</span>
<span class="c"># Create an instance</span>
<span class="n">instance</span> <span class="o">=</span> <span class="n">ec2</span><span class="o">.</span><span class="n">Instance</span><span class="p">(</span><span class="s">"MyInstance"</span><span class="p">)</span>
<span class="n">instance</span><span class="o">.</span><span class="n">ImageId</span> <span class="o">=</span> <span class="s">"ami-ef277b86"</span>
<span class="n">instance</span><span class="o">.</span><span class="n">InstanceType</span> <span class="o">=</span> <span class="s">"t1.micro"</span>
<span class="n">instance</span><span class="o">.</span><span class="n">SecurityGroups</span> <span class="o">=</span> <span class="p">[</span><span class="n">Ref</span><span class="p">(</span><span class="n">sg</span><span class="p">)]</span>
<span class="n">instance</span><span class="o">.</span><span class="n">KeyName</span> <span class="o">=</span> <span class="n">Ref</span><span class="p">(</span><span class="n">keypair</span><span class="p">)</span>
<span class="c"># Add instance to template</span>
<span class="n">template</span><span class="o">.</span><span class="n">add_resource</span><span class="p">(</span><span class="n">instance</span><span class="p">)</span>
<span class="c"># Add output to template</span>
<span class="n">template</span><span class="o">.</span><span class="n">add_output</span><span class="p">(</span><span class="n">Output</span><span class="p">(</span>
<span class="s">"InstanceAccess"</span><span class="p">,</span>
<span class="n">Description</span><span class="o">=</span><span class="s">"Command to use to SSH to instance"</span><span class="p">,</span>
<span class="n">Value</span><span class="o">=</span><span class="n">Join</span><span class="p">(</span><span class="s">""</span><span class="p">,</span> <span class="p">[</span><span class="s">"ssh -i "</span><span class="p">,</span> <span class="n">Ref</span><span class="p">(</span><span class="n">keypair</span><span class="p">),</span> <span class="s">" ubuntu@"</span><span class="p">,</span> <span class="n">GetAtt</span><span class="p">(</span><span class="n">instance</span><span class="p">,</span> <span class="s">"PublicDnsName"</span><span class="p">)])</span>
<span class="p">))</span>
<span class="c"># Print out CloudFormation template in JSON</span>
<span class="k">print</span> <span class="n">template</span><span class="o">.</span><span class="n">to_json</span><span class="p">()</span>
</pre></div>
<p>This code creates a security group that allows SSH access, and then creates an instance that uses that security group. It takes the name of the keypair as a parameter, and outputs the SSH command to use to access the machine.</p>
<p>You can see the actual <a href="https://gist.github.com/pas256/7104312">CloudFormation template in this gist</a>. That is not something that should be coded by hand… just look at all those quotes and nesting.</p>
<p>The library is still young, and will mature over time, but is already super useful.</p>Reserved Instances2013-10-17T10:01:00-07:00Peter Sankauskastag:answersforaws.com,2013-10-17:episodes/3-reserved-instances/<h2 id="show-notes">Show Notes</h2>
<p>Reserved Instances</p>
<ul>
<li>are a billing optimization</li>
<li>guarantee you capacity when launching instances</li>
<li>can be modified after purchase</li>
<li>can be sold on the marketplace</li>
<li>are available for EC2, RDS, Redshift and ElastiCache</li>
</ul>
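<p>Since Reserved Instances are a billing optimization, the interesting number is the break-even point. The arithmetic is simple enough to sketch; the prices below are made up for illustration and are not actual AWS rates:</p>

```python
def breakeven_hours(od_hourly, ri_upfront, ri_hourly):
    """Hours of use after which a Reserved Instance beats On-Demand.

    The upfront fee is recovered at (od_hourly - ri_hourly) per hour
    the instance actually runs.
    """
    savings_per_hour = od_hourly - ri_hourly
    return ri_upfront / savings_per_hour

# Hypothetical prices, for illustration only
hours = breakeven_hours(od_hourly=0.06, ri_upfront=169.0, ri_hourly=0.024)
```

<p>With these made-up numbers the break-even lands around 4,700 hours, i.e. roughly six and a half months of continuous use; anything running longer than that is cheaper reserved.</p>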
<h2 id="resources">Resources</h2>
<ul>
<li><a href="http://aws.amazon.com/ec2/reserved-instances/">Reserved Instances</a></li>
<li><a href="http://promptcloud.com/ec2-ondemand-vs-reserved-instance-pricing.php">Amazon EC2 Ondemand VS Reserved Instance Pricing</a></li>
<li><a href="http://mikekhristo.com/ec2-ondemand-vs-reserved-instance-savings-calculator/">EC2 On-Demand vs Reserved Instance Cost Savings Calculator</a></li>
<li><a href="http://aws.amazon.com/ec2/reserved-instances/marketplace/">Reserved Instance Marketplace</a></li>
</ul>Ansible and AWS2013-10-15T12:51:00-07:00Peter Sankauskastag:answersforaws.com,2013-10-15:episodes/2-ansible-and-aws/<h2 id="show-notes">Show Notes</h2>
<p>Install Ansible and dependencies</p>
<div class="codehilite"><pre>git clone git@github.com:ansible/ansible.git
<span class="nb">cd </span>ansible
<span class="nb">source</span> ./hacking/env-setup
sudo pip install paramiko PyYAML jinja2 --upgrade
</pre></div>
<p>Set up EC2 inventory plugin as the default inventory for Ansible</p>
<div class="codehilite"><pre>sudo mkdir /etc/ansible
sudo chown <span class="nv">$USER</span> /etc/ansible
<span class="nb">cd</span> /etc/ansible
cp ~/ansible/plugins/inventory/ec2.* .
mv ec2.py hosts
./hosts
</pre></div>
<p>Create <code>.boto</code> config file</p>
<div class="codehilite"><pre>cat > ~/.boto
<span class="o">[</span>Credentials<span class="o">]</span>
<span class="nv">aws_access_key_id</span> <span class="o">=</span> <your_access_key_here>
<span class="nv">aws_secret_access_key</span> <span class="o">=</span> <your_secret_key_here>
</pre></div>
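<p>boto reads this INI-style file with the standard library config parser, so it is easy to sanity-check the file before running Ansible. A small sketch (the function name and error handling are mine, not boto's):</p>

```python
try:
    from configparser import ConfigParser  # Python 3
except ImportError:
    from ConfigParser import ConfigParser  # Python 2

def check_boto_config(path):
    """Return (access_key, secret_key) from a .boto file, or raise."""
    config = ConfigParser()
    # read() returns the list of files successfully parsed
    if not config.read(path):
        raise IOError('could not read %s' % path)
    # boto looks for a [Credentials] section with these two keys
    access = config.get('Credentials', 'aws_access_key_id')
    secret = config.get('Credentials', 'aws_secret_access_key')
    return access, secret
```

<p>Running this against <code>~/.boto</code> fails fast with a clear error if the section header or either key is missing, which is easier to debug than a connection failure later.</p>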
<p>To run inventory at any time:</p>
<div class="codehilite"><pre>/etc/ansible/hosts
</pre></div>
<p>Add SSH keypair to <a href="http://sshkeychain.sourceforge.net/mirrors/SSH-with-Keys-HOWTO/SSH-with-Keys-HOWTO-6.html">SSH agent</a></p>
<div class="codehilite"><pre>ssh-add ~/.ssh/id_rsa
</pre></div>
<p>Test SSH connection to instance without specifying the keypair on the command line:</p>
<div class="codehilite"><pre>ssh ubuntu@ec2-1-2-3-4.compute.amazonaws.com
</pre></div>
<p>Ansible ping to all instances, SSHing as the <code>ubuntu</code> user:</p>
<div class="codehilite"><pre>ansible -m ping -u ubuntu all
</pre></div>
<p>Ansible ping to all instances, SSHing as the <code>ec2-user</code> user:</p>
<div class="codehilite"><pre>ansible -m ping -u ec2-user all
</pre></div>
<p>Targeting groups of instances:</p>
<div class="codehilite"><pre>ansible -m ping -u ubuntu us-east-1
ansible -m ping -u ubuntu <span class="s1">'us-west-2:&security_group_web'</span>
ansible -m ping -u ubuntu tag_Name_Episode2
</pre></div>
<p>Refresh EC2 inventory cache</p>
<div class="codehilite"><pre>/etc/ansible/hosts --refresh-cache
</pre></div>
<h3 id="install-aws-cli-playbook">Install AWS CLI Playbook</h3>
<div class="codehilite"><pre>mkdir playbooks
<span class="nb">cd </span>playbooks
</pre></div>
<p><code>install-awscli.yml</code></p>
<div class="codehilite"><pre><span class="nn">---</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Install AWS CLI</span>
<span class="l-Scalar-Plain">user</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ubuntu</span>
<span class="l-Scalar-Plain">sudo</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">True</span>
<span class="l-Scalar-Plain">hosts</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">all</span>
<span class="l-Scalar-Plain">tasks</span><span class="p-Indicator">:</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Install Python PIP</span>
<span class="l-Scalar-Plain">apt</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">pkg=python-pip state=latest</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Install boto via PIP</span>
<span class="l-Scalar-Plain">pip</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">name=boto state=latest</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Install AWS CLI</span>
<span class="l-Scalar-Plain">pip</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">name=awscli state=latest</span>
</pre></div>
<p>Execute playbook:</p>
<div class="codehilite"><pre>ansible-playbook -l us-west-2 install-awscli.yml
</pre></div>
<p>Create local inventory file</p>
<div class="codehilite"><pre>cat > /etc/ansible/local
<span class="o">[</span>localhost<span class="o">]</span>
127.0.0.1
</pre></div>
<p><code>provision.yml</code></p>
<div class="codehilite"><pre><span class="nn">---</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Example of provisioning servers</span>
<span class="l-Scalar-Plain">hosts</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">127.0.0.1</span>
<span class="l-Scalar-Plain">connection</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">local</span>
<span class="l-Scalar-Plain">tasks</span><span class="p-Indicator">:</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Create security group</span>
<span class="l-Scalar-Plain">local_action</span><span class="p-Indicator">:</span>
<span class="l-Scalar-Plain">module</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ec2_group</span>
<span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ep2</span>
<span class="l-Scalar-Plain">description</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Access to the Episode2 servers</span>
<span class="l-Scalar-Plain">region</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">us-west-2</span>
<span class="l-Scalar-Plain">rules</span><span class="p-Indicator">:</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">proto</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">tcp</span>
<span class="l-Scalar-Plain">from_port</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">22</span>
<span class="l-Scalar-Plain">to_port</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">22</span>
<span class="l-Scalar-Plain">cidr_ip</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">0.0.0.0/0</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Launch instances</span>
<span class="l-Scalar-Plain">local_action</span><span class="p-Indicator">:</span>
<span class="l-Scalar-Plain">module</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ec2</span>
<span class="l-Scalar-Plain">region</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">us-west-2</span>
<span class="l-Scalar-Plain">keypair</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">answersforaws</span>
<span class="l-Scalar-Plain">group</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ep2</span>
<span class="l-Scalar-Plain">instance_type</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">m1.small</span>
<span class="l-Scalar-Plain">image</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ami-8635a9b6</span>
<span class="l-Scalar-Plain">count</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">2</span>
<span class="l-Scalar-Plain">wait</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">yes</span>
<span class="l-Scalar-Plain">register</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ec2</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Add EP2 instances to host group</span>
<span class="l-Scalar-Plain">local_action</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">add_host hostname={{ item.public_ip }} groupname=ep2</span>
<span class="l-Scalar-Plain">with_items</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ec2.instances</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Add tag to instances</span>
<span class="l-Scalar-Plain">local_action</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ec2_tag resource={{ item.id }} region=us-west-2 state=present</span>
<span class="l-Scalar-Plain">with_items</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ec2.instances</span>
<span class="l-Scalar-Plain">args</span><span class="p-Indicator">:</span>
<span class="l-Scalar-Plain">tags</span><span class="p-Indicator">:</span>
<span class="l-Scalar-Plain">Name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">EP2</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Wait for SSH to be available</span>
<span class="l-Scalar-Plain">pause</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">minutes=1</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">Configure provisioned servers</span>
<span class="l-Scalar-Plain">hosts</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ep2</span>
<span class="l-Scalar-Plain">user</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">ubuntu</span>
<span class="l-Scalar-Plain">sudo</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">True</span>
<span class="l-Scalar-Plain">tasks</span><span class="p-Indicator">:</span>
<span class="p-Indicator">-</span> <span class="l-Scalar-Plain">include</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">tasks/install-awscli.yml</span>
</pre></div>
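<p>The fixed one-minute <code>pause</code> above is a blunt wait. Ansible also ships a <code>wait_for</code> module that can poll the SSH port on each new instance instead, so the play continues as soon as the instances are actually reachable. A sketch of a drop-in replacement for that task (option names as in the Ansible 1.x module docs; adjust to taste):</p>

```yaml
# Hypothetical replacement for the "Wait for SSH to be available" task:
# poll port 22 on each launched instance instead of sleeping a fixed minute
- name: Wait for SSH to be available
  local_action: wait_for host={{ item.public_ip }} port=22 delay=10 timeout=320 state=started
  with_items: ec2.instances
```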
<p>Run playbook</p>
<div class="codehilite"><pre>ansible-playbook -i /etc/ansible/local provision.yml
</pre></div>
<h2 id="resources">Resources</h2>
<ul>
<li><a href="https://github.com/ansible/ansible">Ansible</a></li>
<li><a href="http://docs.pythonboto.org/en/latest/boto_config_tut.html">Boto Configuration</a></li>
<li><a href="http://www.ansibleworks.com/docs/intro_patterns.html">Ansible - Selecting Targets</a></li>
<li><a href="http://www.ansibleworks.com/docs/modules.html">Ansible - Modules</a></li>
<li><a href="http://www.ansibleworks.com/docs/playbooks_roles.html#roles">Ansible - Roles</a></li>
<li><a href="http://cloud-images.ubuntu.com/locator/ec2/">Ubuntu AMIs</a></li>
<li><a href="https://github.com/Answers4AWS/netflixoss-ansible">NetflixOSS Ansible Playbooks</a></li>
</ul>FoxyProxy2013-09-25T12:21:00-07:00Peter Sankauskastag:answersforaws.com,2013-09-25:episodes/1-foxyproxy/<h2 id="show-notes">Show Notes</h2>
<p>Create a new SOCKS v5 proxy</p>
<div class="codehilite"><pre>Host: localhost
Port: 8157
</pre></div>
<p>URL Patterns</p>
<div class="codehilite"><pre><span class="p">|</span> Pattern Name <span class="p">|</span> URL pattern <span class="p">|</span>
<span class="p">|</span> --------------- <span class="p">|</span> --------------------- <span class="p">|</span>
<span class="p">|</span> subnet <span class="p">|</span> *://10* <span class="p">|</span>
<span class="p">|</span> localhost <span class="p">|</span> *://localhost* <span class="p">|</span>
<span class="p">|</span> EC2 external <span class="p">|</span> *ec2*.amazonaws.com* <span class="p">|</span>
<span class="p">|</span> EC2 internal <span class="p">|</span> *.ec2.internal* <span class="p">|</span>
</pre></div>
<p>If you are using HBase and want to see the region servers, add one more URL pattern:</p>
<div class="codehilite"><pre><span class="p">|</span> Pattern Name <span class="p">|</span> URL pattern <span class="p">|</span>
<span class="p">|</span> ----------------- <span class="p">|</span> --------------------- <span class="p">|</span>
<span class="p">|</span> Compute internal <span class="p">|</span> *compute.internal* <span class="p">|</span>
</pre></div>
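<p>Those URL patterns are plain wildcard globs, so you can sanity-check a pattern locally with shell <code>case</code> globbing before loading it into FoxyProxy. A small sketch (the URLs here are made up):</p>

```shell
#!/bin/sh
# Check whether a URL matches a FoxyProxy-style wildcard pattern,
# using shell case-globbing, which has the same * semantics.
matches() {
  url=$1; pattern=$2
  case "$url" in
    $pattern) echo yes ;;
    *)        echo no ;;
  esac
}
matches "http://10.0.1.5/ganglia/"             "*://10*"             # yes
matches "http://localhost:8157/"               "*://localhost*"      # yes
matches "http://ip-10-0-0-1.compute.internal/" "*compute.internal*"  # yes
matches "https://example.com/"                 "*://10*"             # no
```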
<p>SSH options</p>
<div class="codehilite"><pre>ssh -o <span class="nv">DynamicForward</span><span class="o">=</span><span class="m">8157</span> hadoop@ec2...
</pre></div>
<h2 id="resources">Resources</h2>
<ul>
<li><a href="http://getfoxyproxy.org/">FoxyProxy</a></li>
<li><a href="http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/UsingEMR_Ganglia.html">Ganglia on EMR</a></li>
<li><a href="http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-hbase-launch.html">HBase on EMR</a></li>
</ul>Unified AWS Command Line Tool2013-09-04T10:47:00-07:00Peter Sankauskastag:answersforaws.com,2013-09-04:blog/2013/09/unified-aws-command-line-tool/<p>Long, long ago, in a time almost forgotten, this seemed like a good idea:</p>
<p><a href="/images/old-cli-tools.png"><img alt="AWS CLIs" src="/images/old-cli-tools.png"></a></p>
<p>There were over 15 different command line tools, each maintained separately for the various services AWS offered. This was a real pain because they used different languages including Java (Sun, Oracle or OpenJDK - your guess), Python, and Ruby (for EMR, it only worked with Ruby 1.8.7, which has been deprecated by the rest of the world), just to name a few. To add to the pain, they rarely shared configuration files, and each required their own environment variables and a spot on your <code>PATH</code>. <a href="http://www.urbandictionary.com/define.php?term=PITA">PITA</a>.</p>
<p>Now all of that is over. AWS have been working on a unified CLI for some time, and yesterday they released <a href="http://aws.typepad.com/aws/2013/09/new-aws-command-line-interface-cli.html">AWS CLI version 1.0.0</a>. Since the new AWS CLI is written in Python, installing it is as easy as:</p>
<div class="codehilite"><pre>sudo pip install awscli
</pre></div>
<p>Set your default region in your <code>.bashrc</code> or <code>.profile</code> file:</p>
<div class="codehilite"><pre># AWS
export AWS_DEFAULT_REGION="us-west-2"
</pre></div>
<p>(TIP: you can run <code>source ~/.bashrc</code> to add it to the current shell)</p>
<p>Then, create a <code>.boto</code> file in your home directory with your credentials like so:</p>
<div class="codehilite"><pre># Boto (Python AWS Library) config file
[Credentials]
aws_access_key_id = AKAABCDEFGHIJKLMNOP
aws_secret_access_key = bbbbbsecretkeycannotbeguessedbbbbbb
</pre></div>
<p>This config file is used by many other AWS tools including our own <a href="/blog/2013/08/sharing-amis-with-distami/">DistAMI</a>, <a href="/blog/2013/08/backup-monkey-first-release/">Backup Monkey</a> and <a href="https://github.com/Answers4AWS/graffiti-monkey">Graffiti Monkey</a>, so there is nothing else to do once you have this file.</p>
<p>Now you can use over 20 AWS services from the command line:</p>
<div class="codehilite"><pre>aws ec2 describe-instances
</pre></div>
<p>You can always get help incrementally too:</p>
<div class="codehilite"><pre>aws help
aws ec2 help
aws ec2 describe-instances help
</pre></div>
<p>So ditch all those old CLI tools with a big <code>rm -rf</code>, and enjoy using one powerful tool to do it all.</p>Backup Monkey - First Release2013-08-29T22:27:00-07:00Peter Sankauskastag:answersforaws.com,2013-08-29:blog/2013/08/backup-monkey-first-release/<p><img alt="Backup" src="/images/backup-comic.gif"></p>
<p>The <a href="https://aws.amazon.com/premiumsupport/trustedadvisor/">AWS Trusted Advisor</a> suggests that you always have recent snapshots of your EBS volumes. Here at Answers for AWS, we agree, and have released Backup Monkey to automate that process for you.</p>
<p><a href="https://github.com/Answers4AWS/backup-monkey">Backup Monkey</a> loops through all EBS volumes in a given region, and creates snapshots of them. It then keeps track of those snapshots, and keeps them on rotation, deleting the oldest snapshot when the time comes.</p>
<p>By default, Backup Monkey keeps only the last 3 snapshots, but you can keep more or fewer with the <code>--max-snapshots-per-volume</code> command line parameter.</p>
<p>The most convenient way to run Backup Monkey is on a schedule using Cron or something similar. For example, if you were to run Backup Monkey once per day with the following options:</p>
<div class="codehilite"><pre>backup-monkey --max-snapshots-per-volume 7 --region us-east-1
</pre></div>
<p>...then you would have a rolling 7 day backup of all your EBS volumes in <code>us-east-1</code>.</p>
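<p>Wired into cron, that daily run might look like this (the path and the <code>ubuntu</code> user are assumptions; point it at wherever pip installed the script, under a user with Boto credentials):</p>

```shell
# /etc/cron.d/backup-monkey -- run every day at 03:00
0 3 * * * ubuntu /usr/local/bin/backup-monkey --max-snapshots-per-volume 7 --region us-east-1
```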
<p>You can install Backup Monkey using the usual PyPI goodness:</p>
<div class="codehilite"><pre>sudo pip install backup_monkey
</pre></div>
<p>...and since it is using <a href="http://boto.cloudhackers.com/en/latest/">Boto</a>, you probably already have it configured if you are using the <a href="http://aws.amazon.com/cli/">AWS CLI tools</a>.</p>
<p>As it is now, Backup Monkey is fairly simple. If you would like some other features added to it, just create an <a href="https://github.com/Answers4AWS/backup-monkey/issues">Issue</a> and let's get the conversation rolling. Feedback is the best way to make a product great.</p>
<p>Making AMIs is easy with tools such as <a href="https://github.com/Netflix/aminator/">Aminator</a> and <a href="http://www.packer.io/">Packer</a>. And copying an AMI to another region is a <a href="http://aws.amazon.com/about-aws/whats-new/2013/03/12/announcing-ami-copy-for-amazon-ec2/">single API call</a> now. However, if you produce AMIs for the public, distributing them to all regions and making the AMIs and the underlying EBS Snapshots public is still fairly manual. To automate this, we are releasing <a href="https://github.com/Answers4AWS/distami">DistAMI</a>.</p>
<p>With DistAMI, once you have made a single AMI in any region you like, you can distribute it by running a command like</p>
<div class="codehilite"><pre>distami ami-1234abcd
</pre></div>
<p>This will copy the AMI and snapshot to all regions, and make them all publicly accessible. By default, DistAMI does this serially; to speed things up, you can add the <code>-p</code> option to copy to all regions in parallel. You can also run DistAMI from your laptop and specify the region the AMI is in:</p>
<div class="codehilite"><pre>distami --region ap-southeast-2 -p ami-1234abcd
</pre></div>
<p>DistAMI also copies all of the tags associated with the AMI and Snapshots to all regions. Just be sure to tag the original AMI and Snapshot before running DistAMI to make finding them easy.</p>
<p>DistAMI is open source with an Apache 2 license, written in Python, and uses the Boto library to make API calls to AWS. If you are using the <a href="http://aws.amazon.com/cli/">AWS CLI tools</a>, you have no new configuration to do. <a href="http://boto.cloudhackers.com/en/latest/boto_config_tut.html">Boto automatically searches</a> for environment variables, <code>.boto</code> files, and if DistAMI is running on an EC2 instance with an IAM Role, Boto will find the credentials for that as well.</p>
<p>To install DistAMI, you run the (hopefully) familiar <code>pip</code> program</p>
<div class="codehilite"><pre>sudo pip install distami
</pre></div>
<p>(I include the <code>sudo</code> because that is what I need on my Macbook, and also what is needed on Ubuntu, but it is not strictly necessary)</p>
<p>There are many additional features that could be added to DistAMI, but being a big fan of <a href="http://en.wikipedia.org/wiki/Lean_Startup">Lean</a>, I am not adding any of them unless other users tell me to. Feel free to create an <a href="https://github.com/Answers4AWS/distami/issues">Issue</a> if you have an idea, or if you are comfortable with Python, fork the repository and submit a pull request. We are always interested in feedback.</p>
<p>I hope you find it useful.</p>Ansible Provisioner for Aminator2013-08-13T22:04:00-07:00Peter Sankauskastag:answersforaws.com,2013-08-13:blog/2013/08/ansible-provisioner-for-aminator/<p><img alt="Hard Drive Platters" src="/images/hard-drive-platters.jpg"></p>
<p><a href="https://github.com/Netflix/aminator">Aminator</a> lets you bake an Amazon Machine Image (AMI) using a variety of provisioners including <code>apt</code> and <code>yum</code>. Now there is also an <a href="https://github.com/Netflix/aminator/wiki/Ansible-Provisioner-for-Aminator">Ansible Provisioner</a>. The <a href="https://github.com/Netflix/aminator/pull/121">PR</a> for this should be merged in shortly. This means you can use your favorite Ansible playbook to configure a running instance, or to build an AMI.</p>
<p>Aminator works by taking an existing AMI (called a <a href="https://github.com/Netflix/aminator/wiki/Foundation-AMI">Foundation AMI</a>), getting the snapshot it is backed by, and creating an EBS volume from it. With that volume attached, it can then run scripts and install programs in a <code>chroot</code> environment. This is basically the process <a href="https://twitter.com/esh">Eric Hammond</a> has been using for years to <a href="http://alestic.com/2011/06/ec2-ami-security">make public AMIs securely</a>.</p>
<p>The only requirement for using the Ansible Provisioner is for the Foundation AMI to have <a href="http://www.ansibleworks.com/docs/gettingstarted.html#getting-ansible">Ansible installed</a> already. To save you time, I have created Foundation AMIs for Ubuntu 12.04 LTS, available in all AWS regions. You can find the list here:</p>
<p><a href="https://github.com/Answers4AWS/netflixoss-ansible/wiki/Foundation-AMIs-for-Aminator">https://github.com/Answers4AWS/netflixoss-ansible/wiki/Foundation-AMIs-for-Aminator</a></p>
<h2 id="get-started-with-aminator-the-easy-way">Get started with Aminator - the easy way</h2>
<p>Getting up and running with Aminator is now easier thanks to a CloudFormation script that does all the tedious stuff for you (creating a security group, an IAM role, an ASG, and the instance). If you have the <a href="http://aws.amazon.com/cli/">AWS CLI tools</a> installed, you can launch Aminator by doing this:</p>
<div class="codehilite"><pre><span class="nv">$ </span>aws cloudformation create-stack <span class="se">\</span>
--stack-name Aminator <span class="se">\</span>
--template-url https://answers4aws.s3.amazonaws.com/aminator.json <span class="se">\</span>
--parameters <span class="nv">ParameterKey</span><span class="o">=</span>InstanceType,ParameterValue<span class="o">=</span>t1.micro,ParameterKey<span class="o">=</span>KeyName,ParameterValue<span class="o">=</span>mykey
</pre></div>
<p><strong>NOTE:</strong> <code>mykey</code> is the name of the KeyPair you want to use to SSH to the instance.</p>
<p>Once the CloudFormation script has completed, you can find the EC2 instance by running:</p>
<div class="codehilite"><pre><span class="nv">$ </span>aws cloudformation describe-stacks
...
<span class="s2">"StackName"</span>: <span class="s2">"Aminator"</span>,
<span class="s2">"StackStatus"</span>: <span class="s2">"CREATE_COMPLETE"</span>,
<span class="nv">$ </span>aws ec2 describe-instances --filters <span class="nv">Name</span><span class="o">=</span>tag:Name,Values<span class="o">=</span>Aminator <span class="p">|</span> grep <span class="s2">"PublicDnsName"</span>
<span class="s2">"PublicDnsName"</span>: <span class="s2">"ec2-12-12-12-12.compute.amazonaws.com"</span>,
</pre></div>
<p>and then SSH to it:</p>
<div class="codehilite"><pre><span class="nv">$ </span>ssh -i /path/to/mykey.pem ubuntu@ec2-12-12-12-12.compute.amazonaws.com
</pre></div>
<p>This particular instance comes with the <a href="https://github.com/Answers4AWS/netflixoss-ansible">NetflixOSS-Ansible playbooks</a> already installed, which means you can make your own Asgard, Eureka, Edda or Aminator AMIs. To create an Asgard AMI:</p>
<div class="codehilite"><pre><span class="nv">$ </span>sudo aminate -e ec2_ansible_linux -B ami-6637760f asgard-ubuntu.yml
</pre></div>
<p>At the end of this, you will have your very own Asgard AMI.</p>
<p>As always, please send any feedback you have, and feel free to fork and modify any of this to suit your own needs.</p>
<p>Happy Aminating!</p>New AMIs for NetflixOSS: Asgard and Eureka2013-07-16T17:05:00-07:00Peter Sankauskastag:answersforaws.com,2013-07-16:blog/2013/07/new-amis-for-netflixoss-asgard-and-eureka/<p><img alt="AMI Icon" src="/images/ami-icon-120.png" title="AMI Icon"></p>
<p>I have released a few AMIs for some of the <a href="http://netflix.github.io/">NetflixOSS</a> projects. Right now, there are AMIs for <a href="https://github.com/Netflix/asgard">Asgard</a> and <a href="https://github.com/Netflix/eureka">Eureka</a> in the three US regions. If you need the AMI in another region, please let us know.</p>
<h3 id="asgard">Asgard</h3>
<p>The Asgard AMI is called <code>asgard-1.2-awsanswers-ubuntu-12.04-amd64-ebs-20130716-2158</code>. Underneath, it is the x86_64 (amd64) version of Ubuntu 12.04 LTS 'precise'. Here are the AMIs:</p>
<ul>
<li>us-east-1</li>
<li><code>ami-b687f9df</code> - <a href="https://console.aws.amazon.com/ec2/home?region=us-east-1#launchAmi=ami-b687f9df" target="_blank">Launch</a></li>
<li>us-west-1</li>
<li><code>ami-c8052d8d</code> - <a href="https://console.aws.amazon.com/ec2/home?region=us-west-1#launchAmi=ami-c8052d8d" target="_blank">Launch</a></li>
<li>us-west-2</li>
<li><code>ami-eb31a2db</code> - <a href="https://console.aws.amazon.com/ec2/home?region=us-west-2#launchAmi=ami-eb31a2db" target="_blank">Launch</a></li>
</ul>
<p>There are <a href="https://github.com/Answers4AWS/netflixoss-ansible/wiki/AMIs-for-NetflixOSS#instructions">a few instructions to follow</a> during the launch wizard, mostly for the security group.</p>
<h3 id="eureka">Eureka</h3>
<p>The Eureka AMI is called <code>eureka-1.1.98-awsanswers-ubuntu-12.04-amd64-ebs-20130716-2243</code>. The OS is also Ubuntu 12.04 LTS 'precise'. AMI list:</p>
<ul>
<li>us-east-1</li>
<li><code>ami-f685fb9f</code> - <a href="https://console.aws.amazon.com/ec2/home?region=us-east-1#launchAmi=ami-f685fb9f" target="_blank">Launch</a></li>
<li>us-west-1</li>
<li><code>ami-a4052de1</code> - <a href="https://console.aws.amazon.com/ec2/home?region=us-west-1#launchAmi=ami-a4052de1" target="_blank">Launch</a></li>
<li>us-west-2</li>
<li><code>ami-7b30a34b</code> - <a href="https://console.aws.amazon.com/ec2/home?region=us-west-2#launchAmi=ami-7b30a34b" target="_blank">Launch</a></li>
</ul>
<p>After <a href="https://github.com/Answers4AWS/netflixoss-ansible/wiki/AMIs-for-NetflixOSS#instructions-1">following the launch instructions</a>, you will be up and running within 5 minutes.</p>
<p>All of these AMIs are built using the <a href="https://github.com/Answers4AWS/netflixoss-ansible">NetflixOSS Ansible Playbooks</a>. <a href="https://github.com/ansible/ansible">Ansible</a> makes it very easy to build and configure servers, whether they are running instances, or building AMIs.</p>
<p>This is just the beginning. There will be more NetflixOSS AMIs coming out soon. If there is one in particular you would like to see, or you have feedback, please <a href="/contact/">Contact Us</a>.</p>A New Paradigm2013-07-02T17:31:00-07:00Peter Sankauskastag:answersforaws.com,2013-07-02:blog/2013/07/a-new-paradigm/<p>While we understand pretty quickly how much easier, cheaper and faster it is to get started with Amazon Web Services than with traditional data center providers, it is taking much longer for people to comprehend the power of AWS.</p>
<p>Infrastructure is dynamic now, not static. Instead of talking to a sales rep for an hour to maybe get some servers a month from now, we can make a single API call and have 1 or 1000 servers in under 10 minutes. When we don't need them anymore, we don't need to search for an early termination fee hidden in the contract someone else signed last year, we simply make another API call and the servers are gone. We <a href="blog/2013/06/turn-off-the-lights/">turn them off</a> as easily as flicking a switch.</p>
<p>It's not just servers that are changing. It's not even just infrastructure that's changing. It is the way business is now conducted. <a href="https://twitter.com/izzyazeri">Izzy</a> over at <a href="http://www.stackdriver.com/">Stackdriver</a> was kind enough to share this slide with me, which sums it up nicely:</p>
<p><img alt="Major Shift in Application Landscape" src="/images/stackdriver_major_shift.png"></p>
<p>This shift is making it easier than ever to start a new business, by lowering barriers to entry and enabling it to move quickly to find product-market fit. AWS has released over 30 services, none of which require up-front contracts, and all are pay-as-you-go. Zero CapEx.</p>
<p><img alt="Dystopia Scales" src="/images/netflix_dystopia_scale.png"></p>
<p><a href="https://twitter.com/adrianco">Adrian Cockcroft</a> from <a href="http://netflix.com/">Netflix</a>, the company responsible for a third of the internet's traffic every night, describes the future as</p>
<blockquote>
<p>a <a href="http://www.slideshare.net/adrianco/dystopia-as-a-service">dystopian world</a> of buggy apps changing several times a day, running on ... something I can't see, that only exists for a few hours</p>
</blockquote>
<p>If your business is not moving at this speed, you are not learning fast enough. This style of business applies to the Fortune 1000 as much as it does to <a href="http://www.amazon.com/gp/product/0307887898/ref=as_li_qf_sp_asin_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0307887898&linkCode=as2&tag=awan09-20">The Lean Startup</a>. Sure, you can get better performance at a lower price by not using AWS, but will you be able to innovate as quickly and adjust your course, or will your optimal, efficient company die because the market shifted and you couldn't adapt?</p>
<p>In his <a href="http://www.youtube.com/watch?v=PW1lhU8n5So">2012 re:invent keynote</a>, <a href="https://twitter.com/Werner">Werner Vogels</a>, CTO of Amazon, spent the better part of an hour extolling the new paradigm and new way of architecting applications. He repeated this at the <a href="http://www.youtube.com/watch?v=oo1W92Teqx4">NYC AWS Summit</a> to reinforce the point he is so passionate about. It's not just a bunch of new services, it's a new mental model.</p>
<p>Answers for AWS have invested heavily in learning the best techniques and methods to help our clients adopt this new way of doing business: whether it is how to architect your application to run smoothly on dynamic hardware, which tools and services are available so you don't have to make another hire, or how to dip your toe in the water by migrating a small piece of your application to AWS.</p>
<p>The time has come for engineers, architects, and business executives to embrace the new paradigm and forget the old, slow moving world... or risk getting left behind.</p>Turn off the lights2013-06-25T15:34:00-07:00Peter Sankauskastag:answersforaws.com,2013-06-25:blog/2013/06/turn-off-the-lights/<p><img alt="Turn off the lights" src="/images/turn_off_the_lights.png" title="Turn off the lights">
When you go to bed at night, you turn off the lights. When you're the last to leave the office, you turn off the lights and lock the door. This is a good habit. Yet when the weekend comes, many companies keep development and testing servers running, sitting idle and costing the business money. Why is that?</p>
<p>One reason might be forgetfulness - Friday afternoon comes along, someone opens the beer fridge, and thinking about what was running gets pushed further down the stack.</p>
<p>Another might be that the team treats their <a href="http://www.gregarnette.com/blog/2012/05/cloud-servers-are-not-our-pets/">servers like pets instead of like cattle</a>. In the traditional data center space, it takes weeks to negotiate a contract, and perhaps months to get everything racked, stacked and cabled up. This creates a personal attachment to the servers: they are <em>your</em> servers, and so you name each one. Your server is a pet. Contrast this with Farmer Joe, who has a paddock full of cattle. He certainly didn't name each one of them, and sure, it might be disappointing when one dies, but that is 1 out of a few hundred. He has no attachment to any one of them.</p>
<p>One final reason might be the lack of automation. If it is difficult to bring up a set of servers, then the thought of tearing it down just for the weekend is terrifying. This screams of bad practices. Hardware fails all the time. You need to be prepared for this, or suffer the consequences of a long mean time to recovery. Have you heard of <a href="http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html">Chaos Monkey</a>? What about the rest of the <a href="https://github.com/Netflix/SimianArmy">Simian Army</a>?</p>
<p>So get your team thinking about farming, <a href="http://www.opscode.com/blog/2012/02/14/automate-all-the-things/">automate all the things</a>, and next time you leave the room - turn off the lights.</p>