OpenLDAP pass-through auth to Active Directory

Yes, this is hysterically historical.  I’m keeping it here for safe keeping.

Control Tier Authentication and Authorization: Files, OpenLDAP, and Pass Through to Active Directory

In brief
You want to enable flexible authentication and authorization schemes for your Control Tier server.
1) Control Tier first checks the “fileRealm” files for usernames, passwords and roles.
2) On failure, Control Tier then checks against an OpenLDAP server which is set up to act as a proxy for the corporate Active Directory, while also providing its own branches for Control Tier roles (and any other apps).

 

Control Tier --> files ---------> OpenLDAP --> Active Directory
                 (users and       (roles)      (users)
                  roles)

ou=roles,dc=corp,dc=example,dc=net    <-- OpenLDAP
ou=people,dc=corp,dc=example,dc=net   <-- Active Directory
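A minimal slapd.conf sketch of that split (the hostname, paths and hdb backend choice are my assumptions, not from the original post): the local database serves the roles branch, while the ldap proxy backend forwards operations against the people branch, binds included, to Active Directory.

```
# slapd.conf sketch -- hostnames, paths and backend choice are assumptions

# Local database: Control Tier roles (and branches for any other apps)
database    hdb
suffix      "ou=roles,dc=corp,dc=example,dc=net"
rootdn      "cn=admin,dc=corp,dc=example,dc=net"
directory   /var/lib/ldap

# Proxy database: lookups and binds against ou=people are forwarded
# to the corporate Active Directory by the ldap backend
database    ldap
suffix      "ou=people,dc=corp,dc=example,dc=net"
uri         "ldap://ad.corp.example.net/"
```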


Mayflies and Dinosaurs – Metamorphosis and Epigenetics in DevOps

Well, I guess it had to come to this.  Rob Hirschfeld brought up the wonderfully preposterous notion of puppies growing up to be dinosaurs.  And as a good scientist, and a profound thinker in DevOps, Rob’s statement is based upon his direct observation.  He states that our most beloved pets can become tyrants (Tyrannosaurus Rex, aptly named) in our lives in operations.


Development Foo – using vim and sshfs to propel development

Ahoy, mateys!

I hack on Crowbar a lot. And here’s how I run my show:

1) dev/build box running ubuntu-12.04
2) Crowbar Admin box running whatever latest stuff I just built – and it mostly works
3) a test node box, where I use Crowbar to deploy whatever I’m working on
4) another test node box, where I’ve used Crowbar to deploy what I’ve been working on


The Image/Config Event Horizon

Here are a few paragraphs of my thoughts about Functional Operations. By Functional Operations I mean that which emerges from the meeting of “golden images” (delicately crafted base images that, once deployed, need config management) and “config management” (the config data and rules for applying it to a live golden image).

As the “golden” image itself becomes integrated into the config management, it changes just like any other variable in the config management database. What’s changed in our toolkit?  What benefits do we get?


FuncOps: Orchestration and The Mothership Connection

It’s going to take some time to understand and articulate the full awesomeness of Crowbar 2’s approach to DevOps. One way is to envision “Functional Operations,” similar to “Functional Programming”… this is early, but it’s a peek at our thinking. Paraphrasing Parliament-Funkadelic: “Make my Func the FuncOps, I want to get FuncOps.”

What Erlang teaches us about orchestration.

Erlang’s design allows for intergalactic scalability and concurrency. Let’s riff off of a great design!


If you’re not using an Object Store, you’re not writing cloud software

As I listen to my esteemed colleagues in the OpenStack world, and the DevOps and Cloud world in general, say that there is no demand for Object Storage, I get all sad. Why? Because that means that they’re mounting volumes. It means they’re mounting volumes and storing their precious there. That means that the cloud platform is most likely doing something complicated and expensive to replicate this data. That means that Amazon’s S3 didn’t really change anything and we’re just developing enterprise software again.

The great limitation of Object Storage is doing seek operations on files. Seeks break down into reads and writes.

On the read side, CDNs aplenty have solved this problem, but it’s a read-only solution for jumping around in video. I don’t have a problem with that, as it’s an API-based service that is easily implemented and accessed.

However, the other big use case is writes, and the culprit is SQL (and some NoSQL) Database Files. Seeks through those files are critical to their operation.

I’m interested in what other requirements are driving the mounting of block devices, and what’s distracting app developers from the great awesomeness of Object Stores like Swift.

Building Crowbar “Pebbles”

Getting Going with Pebbles and Grizzly

On Ubuntu 12.04

Prep your environment

apt-get install rpm2cpio rpm build-essential python-dev python-pip libxslt1-dev

Echo the /etc/sudoers partial into place (you’ll need root for this one) so you can run all the commands as your user:

sudo sh -c 'echo "<your username> ALL = NOPASSWD: /bin/mount, /bin/umount, /usr/sbin/debootstrap, /bin/cp, /usr/sbin/chroot" >> /etc/sudoers'

Clone the repo, doing a git pull if you already have the repo, and would like all the great changes and fixes we’ve made recently. When running ./dev setup you’ll need your github username and password, so we can make a scratch space for you. PLUS, if you git init your .crowbar_build_cache, you can go a long way to saving yourself worry. There’s plenty of information on that in the README.* files once you clone the Crowbar repo in the first step.

git clone https://github.com/crowbar/crowbar.git
./dev setup
./dev fetch
./dev sync

Switch to the Pebbles release, and the OpenStack branch.

./dev switch pebbles/openstack-os-build
./dev clone-barclamps
./dev fetch
./dev sync
./dev switch pebbles/openstack-os-build

Run the build. Nota Bene: indicating --pfs is a bit dangerous. It indicates that the latest stable OpenStack code will be pulled from the Internet and cached in your Crowbar ISO. With that, you’ll be installing the latest stable bits. The more conservative approach is to install from packages, omitting the --pfs flag.

./dev build --update-cache --pfs --os ubuntu-12.04

It will take FOREVER, unless you’re on a massively powerful machine and an OC48. :-)

In my experience, Python’s pip system failed quite regularly to get the bits it was looking for. It doesn’t have a retry mechanism as far as I can tell.

In my next blog entry I’ll discuss install of the ISO, setup of Crowbar and the deployment of OpenStack Grizzly!

Orchestration, Consistency and Community Cookbooks

Patterns for Creating Shareable Cookbooks and Two Consistency Models

The Story Till Now

by Judd Maltin

Chef gives you all the flexibility to do things in one giant disorganized pile. It also has enough features to give you many, many possible gradations of organization: specialization, separation, abstraction and re-use. The execution of those features in a coordinated manner, that is, “orchestration,” is also available to you. But orchestration is not thoroughly discussed or demonstrated in the Chef documentation. How to go about organizing these ideas and actions is the problem I’m looking at here in depth.

Two years ago I met Jesse Robbins. He asked me what was the most important thing OpsCode could do to help Chefs. I told him they had to develop patterns to make sharing cookbooks easier. He followed up with me, but I chickened out. I didn’t reply. Frankly, I’m a sysadmin and Perl hacker. I did have tremendous operations experience as a developer and third-level support of the New York Stock Exchange’s Identity Management Service. Their operations culture was amazingly successful, but as far from the cloud as could be imagined. I hadn’t lived in Ruby land, and there was much for me to still grok about the problems Chef was trying to solve. I hadn’t been exposed to the wonderful concepts that programmers and computer scientists had been applying to similar problems for decades. I was at a loss to even begin thinking about the subject.

Now

Fast forward to now, two years later, and the Chef community is abuzz, developing patterns based on mature concepts, and Chef’s tools ease the re-use and collaborative aspects of Chef community cookbooks. The community is also just starting to pick up on the important and related subject, “orchestration.”

Shareable cookbooks

There is lively debate and collaboration in the community on the creation of patterns to produce shareable cookbooks. Bryan Berry’s blog, DevOpsAnywhere, digs right into the subject and cites many important influences. The Food Fight Show is doing a great job in bringing in guests to talk about the issues. Bloggers are a-bloggin’ about it, but OpsCode themselves remain quiet about it in their official docs. My understanding is that such quiet is a strategy to keep newbies productive, and avoid the problem of newbies being overwhelmed by strict and potentially difficult-to-implement patterns.

orchestration

Orchestration too has been a quiet subject in official OpsCode channels. OpsCode employees themselves have not been so quiet. Sean OMeara has contributed some great posts on the subject in his blog A Fistful of Servers. Matt Ray has supplied “Spiceweasel,” a great tool to get your Chef environment off the ground. But clear articulation of orchestration strategies is only starting to come to the surface as the application of Chef expands to the very biggest public and private clouds, and to bare-metal provisioning systems, like our beloved Crowbar. An “orchestrator” is really, I’ll show, just like us.  It’s another user of the recipes, like you and I are when we add cookbooks and roles to our run_lists and wait for, or kick, chef-client. What makes orchestrators special is that they’re experts, or “expert systems.”  Like us. We’re experts.

Without Clarity, Nothing Happens Quickly: The Interface

Let’s get into the concepts that make cookbooks shareable. Primarily it’s about interfaces. Clear, well-supported and documented interfaces make for great cookbooks.  One could hack one’s cookbooks in a semi-patternless formation and still share them. It’s just that it would take forever for users to grok them and put them to use. Programmers have been doing this for decades. Sysadmins, not so long. The first pattern that Chefs have been picking up is “attribute driven recipes,” where they put their default attributes into the cookbook’s attributes/default.rb file.
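To make “attribute driven” concrete, here is a plain-Ruby sketch (deliberately not the Chef DSL, and all names and ports are invented): cookbook defaults are deep-merged under operator overrides, roughly the way Chef computes a node’s attributes, and the recipe reads the merged result.

```ruby
# Cookbook defaults (think attributes/default.rb) and operator overrides
# (think a role or environment). Names and ports are made up.
defaults  = { 'keystone' => { 'auth_port' => 35357, 'bind_host' => '0.0.0.0' } }
overrides = { 'keystone' => { 'auth_port' => 5001 } }

# Deep-merge with overrides winning -- a rough stand-in for Chef's
# attribute precedence rules.
deep_merge = lambda do |a, b|
  a.merge(b) { |_k, x, y| x.is_a?(Hash) && y.is_a?(Hash) ? deep_merge.call(x, y) : y }
end

node = deep_merge.call(defaults, overrides)
puts node['keystone']['auth_port']   # -> 5001 (override wins)
puts node['keystone']['bind_host']   # -> 0.0.0.0 (default survives)
```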

A good example

Check out this awesome cookbook by the community – mostly by the folks around AT&T and Rackspace.  It defines a really clear and robust interface for one product in a complex offering – the Keystone product in the OpenStack project:

https://github.com/att-cloud/cookbook-keystone/blob/master/README.md

Driven by data – the nouns

To drive anything, you need to give it commands. A command is a sentence, e.g. “Stop apache.” In Chef, almost all data are attributes of one kind or another, and they are our nouns. To be driven by data, recipes need to be treated as verbs that do things. The data is often specific configuration items for the target service. For example, an item of such data might be the auth_port that the Keystone server is listening on.

auth_port: Port Keystone server is listening on

Attributes might also control the flow of recipes.  Based on logic within the recipe, the attributes might influence the order of execution. An attribute might tell a recipe to install the packages before the configuration files are laid down, or the attribute might assert that it’s better to lay down the config files before installing the packages.

The point here is that you could just set some attributes and run the default recipe, and the system will do pretty much what you want.  But that’s not a very robust interface. You probably want more fine-grained control. Thus, resources and verbs.

Verbal recipes

That AT&T README.md above details some of the actions available from its cookbooks.

:create_tenant: Create a tenant

In fact, recipe names themselves might be best considered verbs or functions, because that’s what they’re doing on a run_list: “recipe[keystone::server]” is very much calling a function. I’d demur from calling it an object, because recipes themselves do not have attributes, unless you want to start monkeypatching. (Chef Server API experts, correct me here, please.)

https://github.com/att-cloud/cookbook-keystone/tree/master/recipes

https://github.com/att-cloud/cookbook-keystone/blob/master/attributes/default.rb


https://github.com/att-cloud/cookbook-keystone/blob/master/recipes/server.rb
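Pinning that function call in a role is natural. A sketch in Chef’s role DSL (the values are hypothetical, not taken from the AT&T cookbook):

```ruby
# roles/keystone.rb sketch -- the run_list "calls" keystone::server,
# and default_attributes supply the nouns for it to operate on
name "keystone"
description "Deploy the Keystone identity service"
run_list "recipe[keystone::server]"
default_attributes "keystone" => { "auth_port" => 35357 }
```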

Cookbook patterns – “library” and “application” cookbooks

All the foregoing is really helpful in creating what’s become known as “library” type cookbooks. Library cookbooks have very clean, generic LWRPs and recipes to act on your nodes, and include sane defaults in their attributes/*.rb files to provide attribute data.  But library cookbooks are eager to be wrapped in application cookbooks.

Application cookbooks do the attribute overriding and the recipe calling to express the wonderful snowflake that is your deployment of that application. Some chefs use them in place of roles, as we’ll see below. They’ll include_recipe all the cookbooks and recipes they require in the order they desire, and know that they will also comply with namespaces and versioning. Some chefs even go so far as to use monkeypatching in the application cookbook to change the underlying library cookbook. Tools such as chef-rewind enable such evil brilliance. If you’ve found a cookbook that helps you, but doesn’t have the features you need, you might consider patching it and creating a pull request – or wrapping the cookbook in an application cookbook and chef-rewinding it to your needs.
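A sketch of such an application cookbook’s recipe (the cookbook and attribute names are hypothetical): override the library’s defaults, then call its recipe.

```ruby
# mysite-keystone/recipes/default.rb sketch -- express the site-specific
# snowflake, then hand off to the library cookbook
node.override['keystone']['auth_port'] = 5001
include_recipe "keystone::server"
```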

Orchestration

knowing what you’re deploying

This is where we bring in the concept of a “model.”  The model is the abstract representation of all the parts that make up your environment. Your cookbooks are an important part of this model.  Your nodes are too.  And now that you have sane interfaces to your cookbooks, we can talk about “The Rest of Chef.”  And the rest of Chef is just as loose and TMTOWTDI as cookbooks.

As noted in the introduction, the “orchestrator” (conductor? we’ll use “operator”) is you, or a process that you wrote, making change happen on deployment. The operator uses tools like roles, environments, and even attributes to control the deployment of an infrastructure, implementing the model.  Spiceweasel helps out a lot too, at least for day 1 or disaster recovery. But hacking Spiceweasel’s yaml file and running it every time you need to take an infrastructure step would simply be onerous.

In the following example, we have a nice repo of the tools that an operator would use to deploy an OpenStack infrastructure. The op would edit the environments/production.rb file, launch some node instances and begin applying roles to nodes in the appropriate order, making sure that the appropriate services came up properly before deploying the next set of roles.

https://github.com/opscode/openstack-chef-repo

https://github.com/opscode/openstack-chef-repo/blob/master/roles/keystone.rb

In this example, you the operator are the “expert system,” watching the deployment of your infrastructure, and adding roles as you move through stages.

Just recently on the Chef mailing list, Adam Jacob expanded on putting Chef application data in the app repo, albeit in relationship with a continuous integration server: the data lives in a separate repo and is used at deploy time. http://lists.opscode.com/sympa/arc/chef/2013-01/msg00410.html

Eventual Consistency of independent actors

The predominant pattern in Chef orchestration is “eventual consistency.” In this pattern roles are applied to nodes by an operator by adding them to the node’s run list.  The roles bring in run lists full of cookbooks, which are themselves batteries of ordered resources to be applied to the nodes. The nodes act as independent actors, running chef-client to configure themselves. As they run their reconfiguration task, chef-client, some aspects succeed, and others fail.  Nodes’ chef-client runs will continue unabated, with the goal of arriving at consistency by executing all of the resources successfully.  Operators may make changes to attributes and run lists to try to get the chef-client to run to completion.  Or conditions might change on the node or externally so that the cookbooks’ functions run to completion.  Two common methods of querying external dependencies are direct interrogation (such as a network service coming online), or most commonly, a series of searches against the chef-server’s datastore returning data patterns that are acceptable for accomplishing the task at hand. As chef-client is run on a schedule on the nodes, their configurations “eventually” become consistent across nodes and the service all works together. If the applications and dependent systems have been designed for this, there may not be any loss of service.

The drawback of “eventual consistency” is in its eventuality. There are time-sensitive use-cases that make eventual consistency something of a pain, testing the patience of administrators. The wrong combination of “not yet positives” might lead a cookbook that was not very defensively written to erase its database.  There are other drawbacks, such as reduced predictability and increased risk of service loss. Most of these risks are well accounted for in typical public-cloud applications, and it’s just fine for their purposes. However, the power of configuration management isn’t only attractive to cloud users or application deployers – it’s also attractive to hardware systems managers. These systems managers often need finer-grained control and the assurances of strictly stepped, cross-node orchestration.

constant consistency of centralized orchestration

The question I like to ask of Chef, which it has a difficult time answering, is: “How far along are we in deploying my system?” Controlled orchestration ensures that initiations of services are coordinated to reduce failure.  Controlled orchestration is an expert system that monitors each of the nodes and works through the necessary steps to achieve the end state.

How is the overall view of the deployment expressed?

  • node run_lists executing to completion
  • attributes expressing the desired state

Systems like Crowbar are the containers for the expert systems. By expressing the proper order and relationships between cookbooks or groups of cookbooks, complex systems can be deployed with little operator interaction, but with great operator awareness and control. I’ve also used tools like RunDeck to control my deployments in a similar fashion. In both cases, I’ve effectively orchestrated my chef-client runs across my nodes by building up run lists and setting attributes, then executing the chef-client runs, and monitoring them to ensure that the ends I desired were reached. Only once the ends are reached, to my expert-system satisfaction, do I go on to the next phase of the deployment.
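The loop such an expert system runs can be sketched in a few lines of plain Ruby. Everything here is a stand-in: converged? abstracts whatever checks the operator really performs (service ports answering, chef-client exit status), and the phase data is invented.

```ruby
# Toy central orchestrator: apply each phase's role, then poll the
# phase's nodes until all converge before starting the next phase.
def converged?(node)
  node[:runs] += 1                 # pretend a chef-client run happened
  node[:runs] >= node[:needed]     # pretend convergence after N runs
end

def orchestrate(phases)
  phases.each do |phase|
    phase[:nodes].each { |n| puts "applying role #{phase[:role]} to #{n[:name]}" }
    tries = 0
    # Block here until every node in this phase converges; only then
    # does the next phase's role get applied anywhere.
    until phase[:nodes].all? { |n| converged?(n) }
      tries += 1
      raise "phase #{phase[:role]} never converged" if tries > 10
    end
  end
  :done
end

phases = [
  { role: "keystone", nodes: [{ name: "node1", runs: 0, needed: 2 }] },
  { role: "nova",     nodes: [{ name: "node2", runs: 0, needed: 1 },
                              { name: "node3", runs: 0, needed: 1 }] },
]
orchestrate(phases)   # keystone completes before nova begins
```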

There is no reason all of this inter-role and inter-node dependency information couldn’t be encoded in the cookbooks, roles and environments. It’s just that the task is onerous. By diverting the control of the tasks to a higher-level tool, nuts and bolts need not clutter the architectural drawing’s notions of space and interaction.

The Patterns in Developing For Central Controller Orchestration

All this is not to say that you would have different recipes for the different orchestration paradigms.  With some careful attention, both can be quite happily accommodated.

Along with the foregoing “application and library” cookbooks, we’ll add a suggestion of “great caution” when using the “search” feature from within recipes.  Search is often singled out as the tool that most enables all the goodness of independent actor orchestration. But with a centralized automated operator as the source of incipient truth, asserting “this is how I want things to be,” search results by the nodes themselves become a second source of truth. The wrong truth. Search results reflect the way things are according to the chef server and chef-client’s last checkin, not the intended result.

To take advantage of both orchestration paradigms in your recipes, a simple conditional would be enough.  Here I hack a bit on the only part of these excellent cookbooks that uses search: (https://github.com/att-cloud/cookbook-openstack-common/blob/master/libraries/roles.rb)

    def config_by_role(role, section = nil)
      if node['roles'].include?(role)
        # If we're on a node that contains the searched-for role, just
        # return the node hash or subsection
        section.nil? ? node : node[section]
      elsif node['crowbar'] && node['crowbar']['roles'].attribute?(role)
        # Crowbar asserts that the role is applied to another node
        node_name = node['crowbar']['roles'][role]
        other_node = Chef::Node.load(node_name)
        section.nil? ? other_node : other_node[section]
      else
        # Otherwise, let's look up the role based on the Chef environment
        # of the current node and the searched-for role
        query = "roles:#{role} AND chef_environment:#{node.chef_environment}"
        result, _, _ = ::Chef::Search::Query.new.search(:node, query)

        if result.empty?
          Chef::Log.debug("Searched for role #{role} but found no nodes with that role in the run list.")
          nil
        else
          section.nil? ? result[0] : result[0][section]
        end
      end
    end

Adam Jacob provides these pro-tips, which overlap quite well with controlled consistency:

  • Do not model deployment of change via roles. Use cookbook versions and conditional statements in cookbooks and attributes instead. Demoting your database server to slave? Do it through setting an attribute. It still remains a database.
  • Make smaller, more composable roles, and servers with very small run-lists (ideally a single role).
  • Similarly, it’s not a bad thing to consider having roles that are basically aggregates of other roles. Again, the benefit of the abstraction is that you can make those changes in one place – the role – and have it reflected in many places, with a much wider area of effect. For example, you might want a full-stack role composed entirely of the piece-part roles that comprise the bulk of your production infrastructure. If you move your vision forward from the initial development loop to managing many systems with multiple participants, the value of the roles comes through.

Epilogue

Well, that’s about it for now.  Chef is maturing quickly, and it’s safe to say that it’s flexible enough to handle a variety of orchestration paradigms.


I look forward to your feedback!  I’m eager to refine these ideas in a less talky voice and move to a more prescriptive presentation.