IT Grand Prix Day 1 – Application Virtualization taking KIPP DC by storm!

So here we are, kicking off the first official day of the IT Grand Prix in Washington DC! We were supplied with a USB key containing our puzzle, an Internet connection USB card, and a $500 cash card for our incidentals! Our host David Elfassy gave us a ‘puzzle’ to solve in order to find out which non-profit organization we’d be going out to help. Through deductive reasoning, over-thinking the problem, WAAAY OVER-THINKING the problem, and some more deductive reasoning… :) we were able to find the non-profit we were destined to help!


Welcome to KIPP DC!

KIPP DC is a network of high-performing, college-preparatory charter schools in Washington D.C., which serve the city’s under-resourced communities. At KIPP DC, there are no shortcuts: outstanding educators, more time in school, a rigorous college-preparatory curriculum, and a strong culture of achievement and support help our students make significant academic gains and continue to excel in high school and college.

But what does that mean to you and me? They’re a business just like yours and anyone else’s, one that suffers many of the same challenges we all do. And one of those challenges is how to manage the demanding needs of students, faculty and staff, while staying strategic, forward-thinking and proactive instead of merely reacting to the future of computing. The challenge Daniel Nerenberg and I on the Red Team were helping them address was Application Virtualization. Our esteemed coopetition fellows Gordon Ryan and Andrew Bettany of the Blue Team were helping with a Windows 7 deployment strategy and plan.

What do you get when you combine IT Pro’s, Non-Profits, Coffee and @cxi?
Application Architecture in a sugar cube!

Innovative solutions the likes of which can only be explained with stirrer sticks and sugar packets! If you know anything about me, you know that I love tactile representations of physical infrastructure, simply because we have the ability to MOVE objects around (unlike a whiteboard, where we end up getting messy… and it’s not nearly as cool or impactful). I felt a layman’s explanation of what I just said might be useful, so in simple terms… I like to play with stuff you can find around you :)

The above picture is an architectural breakdown of application virtualization presented as a user use-case environment. Basically, look at the sticks as ‘boundaries’: the first square physically shows a single ‘machine’ with applications living inside of it, along with all of the constraints and conflicts which also happen to live within that environment. In the second picture, however, you have a breakdown of applications virtualized into separate packages, yet there’s also a model of shared package layers and abstraction… Lots of abstraction! This video helps explain it a little further!

After our sit-down talk and interview with Director of Operations Edward Han and IT Manager Adam Roberts, we got to work on a plan for how Application Virtualization could help KIPP DC with their organizational challenges.


As you can see here, the Blue Team is hard at work. I’m not sure if they were working on their slides or checking out some new Chuck Norris statistic, but nonetheless… we all got down to business creating collateral, producing and providing information that would help those folks understand what Application Virtualization with App-V could do for them!

I would be remiss in my duty not to share with you what I shared with them. Use it in the context of your own organization, of course (as I will not be sharing the internal details of their organization here! :))

Microsoft’s Application Virtualization collateral!

And here are a few videos on sequencing and actually deploying with App-V (or SoftGrid, for you legacy folks like me :))

Though I’m sure some of you are saying, “Hey, what about VMware ThinApp – why didn’t you talk about that?!” It’s true, our mission was to discuss App-V; fortunately I had Daniel with me, who is an MVP in App-V too! However, so you don’t feel left out, here are a few videos on deploying VMware ThinApp from start to finish in 20 minutes! – Enjoy! :)

Following this, we wrapped up for the day, grabbed our (heavy) bags, and headed out to where the bus would take us to our next destination: the Marriott Brooklyn Bridge in New York City!


However, like the geeks we are… all the way at the back of the bus is… a power outlet! … Ooh! Two outlets, actually! So the guys had their ‘friendmaker’ power strip multipliers, and we plugged them in with enough arsenal to run a small business (or even a mid-sized enterprise, with the kind of gear we’re sporting! :)) And just so you get a good feel for what we’re looking at here… I’m sitting in the literal hot seat! See all the cables? Oh wait, what’s that you see? It’s 109.5 DEGREES DOWN THERE?!?! Yea, it’s pretty hot at my feet :)


I hope you enjoyed Day 1 – this was only the beginning of this whirlwind adventure tour of madness, insanity, education, and technology! We’re Technology Focused! :)

EMC 20% Unified Storage Guarantee: Final Reprise

Hi! You might remember me from such blog posts as EMC 20% Unified Storage Guarantee !EXPOSED! and the informational EMC Unified Storage Capacity Calculator – The Tutorial! Well, here I’d like to bring you the final word on this matter! (Well, my final word… I’m sure well after I’m no longer discussing this, you will be, which is cool – I love you guys and your collaboration!)

Disclaimer: I am in no way saying I am the voice of EMC, nor am I assuming that Mike Richardson is in fact the voice of NetApp, but I know we’re both loud, so our voices are heard regardless :)

So, on to the meat of the ‘argument’, so to speak (that’d be some kind of vegan meat substitute, being that I’m vegan!)

EMC Unified Storage Guarantee

Unified Storage Guarantee - EMC Unified Storage is 20% more efficient. Guaranteed.

I find it useful to quote the text of the EMC Guarantee, and then, as appropriate, drill down into each selected section in our comparable review of this subject.

It’s easy to be efficient with EMC.

EMC® unified storage brings efficiency to a whole new level. We’ve even created a capacity calculator so you can configure efficiency results for yourself. You’ll discover that EMC requires 20% less raw capacity to achieve your unified storage needs. This translates to superior storage efficiency when compared to other unified storage arrays—even those utilizing their own documented best practices.

If we’re not more efficient, we’ll match the shortfall

If for some unlikely reason the capacity calculator does not demonstrate that EMC is 20% more efficient, we’ll match the shortfall with additional storage. That’s how confident we are.

The guarantee to end all guarantees

Storage efficiency is one of EMC’s fundamental strengths. Even though our competitors try to match it by altering their systems, turning off options, changing defaults or tweaking configurations—no amount of adjustments can counter the EMC unified storage advantage.

Here’s the nitty-gritty, for you nitty-gritty types
  • The 20% guarantee is for EMC unified storage (file and block—at least 20% of each)
  • It’s based on out-of-the-box best practices
  • There’s no need to compromise availability to achieve efficiency
  • There are no caveats on types of data you must use
  • There’s no need to auto-delete snapshots to get results

This guarantee is based on standard out-of-the-box configurations. Let us show you how to configure your unified storage to get even more efficiency. Try our capacity calculator today.

Okay, now that we have THAT part out of the way… what does this mean? Why am I stating the obvious (so to speak)? Let’s drill down into the discussions at hand.

The 20% guarantee is for EMC unified storage (file and block—at least 20% of each)

This is relatively straightforward. It simply says “build a Unified configuration – which is Unified.” SAN is SAN, NAS is NAS, but when you combine them together you get a Unified configuration! Not much to read into that – just that you’re more likely to see the benefit of 20% or greater in a Unified scenario than in a comparable SAN-only or NAS-only scenario.

It’s based on out-of-the-box best practices

I cannot stress this enough: out-of-box best practices. What does that mean? Sure, I can build a configuration which will say to this “20% efficiency guarantee”: “Muhahah! Look what I did! I made this configuration which CLEARLY is less than 20%! Even going into the negative percentile! I AM CHAMPION, GIVE ME DISK NOW!” Absolutely. I’ve seen it, and heard it touted (hey, even humor me as I discuss a specific use-case which Mike Richardson and I have recently discussed). But building a one-off configuration which makes your numbers appear ‘more right’ vs. using your company’s subscribed best practices (and out-of-box configurations) is what is being proposed here. If it weren’t for best practices we’d have RAID 0 configurations spread across every workload, with every feature and function under the sun disabled, just to say ‘look what I can do!’

So, I feel it is important to put this matter to bed (because so many people have been losing time and sleep over this debate and consideration). I will take this liberty to quote from a recent blog post by Mike Richardson – Playing to Lose, Hoping to Win: EMC’s Latest Guarantee (Part 2). In this article Mike did some great analysis. We’re talking champion. He went through and used the calculator, built out use-cases and raid groups – really gave it a good and solid run-through (which I appreciate!). He was extremely honest, forthright, open and communicative about his experience, his configuration and building this out with the customer in mind. To tell you the truth, Mike truly inspired me to follow up with this final reprise.

Reading through Mike’s article I would like to quote (in context) the following from it:

NetApp Usable Capacity in 20+2 breakdown

The configuration I recommend is to the left.  With 450GB FC drives, the maximum drive count you can have in a 32bit aggr is 44.  This divides evenly into 2 raidgroups of 20+2.  I am usually comfortable recommending between 16 and 22 RG size, although NetApp supports FC raidgroup sizes up to 28 disks.  Starting with the same amount of total disks (168 – 3 un-needed spares), the remaining disks are split into 8 RAID DP raidgroups. After subtracting an additional 138GB for the root volumes, the total usable capacity for either NAS or SAN is just under 52TB.

I love that Mike was able to share this image from the Internal NetApp calculator tool (It’s really useful to build out RG configurations) and it gives a great breakdown of disk usage.

For the sake of argument, for those who cannot make it out from the picture: what Mike has presented here is a 22-disk RAID-DP RG (20+2 disks, made up of 168 FC450 disks with 7 spares). I’d also like to note that the snapshot reserve has been changed from the default of 20% to 0% in this example.

Being that I do not have access to the calculator tool which Mike used, I used my own spreadsheet-based calculator, which more or less confirms that what Mike’s tool is saying is absolutely true! But this got me thinking! (Oh no! Don’t start thinking on me now!) And I was curious. Hey, sure this deviates from best practices a bit, right? But BPs change at times, right?

So, being that I rarely like to have opinions of my own, and instead like to base them on historical evidence, founded factually and referenced in others… I sent the following txt message to various people I know (some former NetAppians, some close friends who manage large-scale enterprise NetApp accounts, etc. – ‘etc’ is for the protection of those I asked ;))

The TXT Message was: “Would you ever create a 20+2 FC RG with netapp?”

That seems pretty straightforward, right? Here is a verbatim summation of the responses I received.

  • Sorry, I forgot about this email.  To be brief, NO.
  • “It depends, I know (customer removed) did 28, 16 is the biggest I would do”
  • I would never think to do that… unless it came as a suggestion from NetApp for some perfemance reasons… (I blame txting for typo’s ;))
  • Nope we never use more then 16
  • Well rebuild times would be huge.

So, sure, this is a small sampling (of the responses I received), but I notice a recurring pattern there. The resounding response is a NO. But wait, what does that have to do with a hole in the wall? Like Mike said, NetApp can do RG sizes of up to 28 disks. Also absolutely 100% accurate, and in a small number of use-cases I have found situations in which people have exceeded 16-disk RGs. So, I decided to do a little research and see what the community has said on this matter of RG sizes. (This happened out of trying to find a RAID 6 RG rebuild guide – I failed.)

I found a few articles I’d like to reference here:

  • Raid Group size 8, 16, 28?

    • According to the resiliency guide Page 11:

      NetApp recommends using the default RAID group sizes when using RAID-DP.

    • Eugene makes some good points here –

      • All disks in an aggregate are supposed to participate in IO operations.  There is a performance penalty during reconstruction as well as risks; "smaller" RG sizes are meant to minimize both.

      • There is a maximum number of data disks that can contribute space to a 16TB aggregate composed entirely of a given disk size, so I’ve seen RG sizes deviate from the recommended based on that factor (you don’t want/need a RG of 2 data + 2 parity just to add 2 more data disks to an aggr…). Minimizing losses to parity is not a great solution to any capacity issue.

      • my $0.02.

    • An enterprise account I’m familiar with has been using NetApp storage since the F300 days, and they have tested all types of configurations and found that performance starts to flatline after 16 disks.  I think the most convincing proof that 16 is the sweet spot is the results on spec.org – NetApp tests using 16-disk RAID groups.

  • Raid group size recommendation

      • Okay, maybe not the best reference, considering I was fairly active in the responses on the subject in July and August of 2008 in this particular thread.  Read through it if you like; the best takeaways I can get from it (which I happened to have said…) are:
        • I was looking at this from two aspects: performance and long-term capacity.
        • My sources for this were a calculator and capacity documents.
        • Hopefully this helped bring some insight into the operation and my decisions around it.
          • (Just goes to show… I don’t have opinions… only citeable evidence. Well, and real-world customer experiences as well ;))
    • Raid group size with FAS3140 and DS4243
      • I found this in the DS4243 Disk Shelf Technical FAQ document
      • WHAT ARE THE BEST PRACTICES FOR CONFIGURING RAID GROUPS IN FULLY LOADED CONFIGURATIONS?
      • For one shelf: two RAID groups with maximum size 12. (It is possible in this case that customers will configure one big RAID group of size 23 – 21 data and 2 parity; however, NetApp recommends two RAID groups.)
    • Managing performance degradation over time
    • Aggregate size and "overhead" and % free rules of thumb.
    • Why should we not reserve Snap space for SAN volumes?
      • All around good information, conversation and discussion around filling up Aggr’s – No need to drill down to a specific point.

So, what does all of this mean other than the fact that I appear to have too much time on my hands? :)

Well, to sum up what I’m seeing – and considering we are in the section titled ‘out-of-box best practices’:

  1. Best practices and recommendations (as well as expert guidance and general use) seem to dictate a 14+2, 16-disk RG.
    1. Can that number be higher? Yes, but that would run counter to out-of-box best practices, not to mention your performance does not appear to benefit, as seen in the comments mentioned above (and the fact that the spec.org tests are run in that model).
  2. By default the system will have a reserve, not set to 0% – if I were to strip out all of the reserve (which is there for a reason), my usable capacity would go up in spades. But I’m not discussing a modified configuration; I’m comparing against a default, out-of-box best-practices configuration, which by default calls for a 5% aggr snap reserve, a 20% vol snap reserve for NAS, and a SAN fractional reserve of 100%.
    1. The default snapshot reserve and TR-3483 help provide backing information and discussion around this subject. (Friendly modifications from Aaron Delp’s NetApp Setup Cheat Sheet.)
  3. In order to maintain these ‘out-of-box best practices’ and enable a true model of thin provisioning (albeit not what I am challenging here, especially since Mike completely whacked the reserve space for snapshots)… on our guarantee side of the house we have the ‘caveat’ of “There’s no need to auto-delete snapshots to get results.” Which is simply saying: even with your system at out-of-box defaults, in order to take things to the next level you would need to enable “Volume Auto-Grow” on NetApp, or its sister function “Snap Auto Delete.” The first is nice, as it’s not disruptive to your backups, but you can’t grow once you’ve hit your peak – so your snapshots would then be at risk. Don’t put your snapshots at risk!
  4. Blog posts are not evidence for updating best practices, nor do they change your out-of-box defaults. What am I talking about here? (Hi Dimitris!) Dimitris wrote this great blog post, NetApp usable space – beyond the FUD, in which he goes into depth on what we’ve been discussing these past weeks. He makes a lot of good points, and even goes so far as to validate a lot of what I’ve said, which I greatly appreciate. But taking things a little too far, he ‘recommends’ snap reserve 0, fractional reserve 0, snap autodelete on, etc. As a former NetApp engineer I would often recommend ‘changes’ to the defaults and best practices as the use-case fit; however, I did not take a holistic “let’s win this capacity battle at the cost of compromising my customer’s data” approach. And by blindly doing exactly what he suggests here, you are indeed putting your data integrity and recovery at risk.
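To make the reserve math in point 2 concrete, here’s a minimal sketch of how those out-of-box defaults stack up. The 5%/20%/100% figures are the defaults quoted above; the starting capacities are illustrative numbers of my own, and this is back-of-the-napkin arithmetic, not any vendor’s sizing tool:

```python
# Out-of-box default reserves discussed above:
#   5% aggregate snap reserve, 20% volume snap reserve for NAS,
#   and a 100% fractional reserve for SAN LUNs with snapshots.
AGGR_SNAP_RESERVE = 0.05
VOL_SNAP_RESERVE = 0.20
FRACTIONAL_RESERVE = 1.00

def nas_usable_tb(aggr_tb):
    """Space left for NAS data after aggregate and volume snap reserves."""
    return aggr_tb * (1 - AGGR_SNAP_RESERVE) * (1 - VOL_SNAP_RESERVE)

def san_lun_footprint_tb(lun_tb):
    """Aggregate space a snapshotted LUN consumes with 100% fractional
    reserve: the LUN's size is set aside again to guarantee overwrites."""
    return lun_tb * (1 + FRACTIONAL_RESERVE) / (1 - AGGR_SNAP_RESERVE)

print(nas_usable_tb(52.0))          # only 76% of the aggregate is presentable as NAS
print(san_lun_footprint_tb(10.0))   # a 10 TB LUN reserves over 21 TB of aggregate
```

Which is exactly why zeroing those reserves makes the capacity numbers look so much better – and why doing so quietly trades away the protection they exist to provide.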

I’ve noticed that I actually covered all of the other bullet points in this article without needing to drill into them separately. :) So, allow me to do some summing up on this coverage.

If we compare an EMC RAID 6 configuration to a NetApp RAID-DP configuration, with file and block (at least 20% of each), using out-of-box default best practices, you will be able to achieve no-compromise availability and no-compromise efficiency regardless of data type, with no need to auto-delete your snapshots to gain results. Now that’s a guarantee you can write home about – 20% guaranteed, with ‘caveats’ you can fit into a single paragraph (and not a 96-page document ;))

Now, I’m sure… no, let me give a 100% guarantee… that someone is going to call ‘foul’ on this whole thing, and this will be the hot-bed post of the week. I completely get it. But what you, the reader, are really wondering is: “Yea, 20% guarantee… a guarantee of what? How am I supposed to learn about Unified?”

Welcome to the EMC Unified Storage – Next Generation Efficiency message!

Welcome to the EMC Unisphere – Next Generation Storage Management Simplicity

I mean, obviously once you’re over the whole debate of ‘storage, capacity, performance’ you want to actually be able to pay to play (or, $0 PO to play, right? ;))

But I say… why wait? We’re all intelligent and savvy individuals. What if I said you could, in the comfort of your own home (or lab), start playing with this technology today with little effort on your part? I say, don’t wait. Go download it now and start playing.

For those of you who are familiar with the infamous Celerra VSA, as published on Chad’s blog numerous times (New Celerra VSA (5.6.48.701) and Updated “SRM4 in a box” guide), things have recently gone to a whole new level with the introduction of Nicholas Weaver’s UBER VSA! Besser UBER: Celerra VSA UBER v2 – which takes the ‘work’ out of setup. In fact, all setup requires is an ESX server, VMware Workstation, or VMware Fusion (or, in my particular case, I do testing on VMware Player to prove you can do it) and BAM! You’re ready to go, and you have a Unified array at your disposal!

Celerra VSA UBER Version 2 – Workstation
Celerra VSA UBER Version 2 – OVA (ESX)

Though I wouldn’t stop there: if you’re already talking Unified and playing with file data at all, run – don’t walk – to download (and play with) the latest FMA Virtual Appliance! Get yer EMC FMA Virtual Appliance here!

Benefits of Automated File Tiering/Active Archiving

But don’t let silly little PowerPoint slides tell you anything about it – listen to talking heads on YouTube instead :)

I won’t include all of the videos here, but I adore the way the presenter in this video says ‘series’ :) – but here’s a deep dive and walkthrough of FMA in minutes!

Okay! Fine! I’ve downloaded the Unified VSA, I’ve checked out FMA and seen how it might help… but how does this help my storage efficiency message? What are you trying to tell me? If I leave you with anything at this point, let’s break it down into a few key points.

  • Following best practices will garner you 20% greater efficiency before you even start to get efficient with technologies like Thin Provisioning, FAST, FAST Cache, FMA, etc.
  • With the power of a little bandwidth, you’re able to download fully functional virtual appliances that let you play with and learn the Unified Storage line today.
  • The power of managing your file tiering architecture and archiving policy is at your fingertips with the FMA Virtual Appliance.
  • I apparently have too much time on my hands. (I actually don’t… but it can certainly look that way :))
  • Talk to your TC, rep, partner (whoever) about Unified. Feel free to reference this blog post if you want; if there is nothing else to learn from this, I want you – the end user – to be educated :)
  • I appreciate all of your comments, feedback, and positive and negative commentary on the subject. I encourage you to question everything: me, the competition, the FUD and even the facts. I research first, ask questions, and only THEN shoot. The proof is in the pudding. Or in my case, a unique form of vegan pudding.

Good luck out there – I await the maelstrom, the fun, the joy. Go download some VSAs, watch some videos, and calculate, calculate, calculate! Take care! – Christopher :)

#EMCWorld – It’s like Twitter, with EMC… and the World?!

Talk about coming down to the last minute! Yes! It’s almost here! EMC World will be in full swing next week!

I don’t know about you, but I’ll likely be busy in various sessions, meetings, and who knows how many other things – all the while trying to tweet and blog to the best of my ability. Oh, and doing everything in my power to meet every single one of you whom I know so well and dearly… and yet we’ve not had the opportunity to meet in person!

So, what you might be saying is… hashtags aside, how will we stay in touch? Sync up, get to know who is where, doing what, and so on?

Other than Len Devanna’s great “Bloggers Lounge” breakdown, which shows you who will be where and whatnot…

I’ve also created a list you can follow on Twitter! My simple list @cxi/emcworld2010 will enable you to follow all of us madness folks. Oh, and wait, there’s more! If you’re on Twitter and want to get yourself added to this list for mutual communication and collaboration… just let me know and I’ll be sure to add you! This is an opt-in sort of list, as I do prefer to respect your boundaries of sorts! So follow along, be sure to hunt me down and say hi, and let’s have an amazing time! Oh… and learning stuff, yea, learning stuff too! :)

Look for a potential slew of blog posts, tweets and other crazy times (did someone say non-stop twitpics?! :)) See you there next week! FYI: I’ll be arriving early on Sunday!

Act now to get 20% off or a Second Shot on your Microsoft Certification exams!

I get a lot of emails, tweets, IMs, comments and more about getting 20% off your certification exam! And I know I’ve discussed this in the past – Increasing your Microsoft Certification Discount from 10% to 20%, Certification and MeasureUp Discounts 20% off Certs!, and You deserve a Second Shot at Microsoft Exams, until June 30th, 2010! – and even more than that, but I won’t bore you with the other redundant links!

What this really means is… I’ve said this a damn lot! So why am I saying it again?! Because June 30th is coming up, and that is when all of the vouchers will expire – Second Shot AND the 20% off vouchers! So what this means to you is… let’s get rid of all of my vouchers!

I currently have vouchers for EVERY country which give you a “Second Shot” on your exams – so if you want insurance in case you fail, let me know and I’ll hook you up!

If you want 20% off your certification exams, though, I’m limited to the following countries:

AUSTRALIA
AUSTRIA
BELGIUM
BRUNEI DARUSSALAM
CANADA
ENGLAND
GERMANY
GREENLAND
GUAM
ICELAND
IRELAND
ITALY
JAPAN
LUXEMBOURG
NETHERLANDS
NEW CALEDONIA
NEW ZEALAND
NORWAY
PAPUA NEW GUINEA
PORTUGAL
SINGAPORE
SPAIN
SWEDEN
SWITZERLAND
UNITED ARAB EMIRATES
UNITED STATES
VANUATU

It bears repeating – I only have 20% off vouchers for the countries mentioned above. And it’s either 20% off or Second Shot – not both!

As far as how many of these vouchers I have available… for most of the countries I have 10 or 20 total, except for the US, where I have ~150 vouchers left. So don’t be afraid to ask for as many vouchers as you need!

Now get out there and get certifying before this opportunity passes us all by!

Cloud Camp Chicago 2010 – Mar 5th & 6th – Get your Cloud on!

That’s right! Cloud Camp is coming to Chicago! What?! When?! Where?! Who, whatomfg?!@? (And yes, this is a FREE event – thanks to our sponsors who ponied up the cash! :))

Well, let’s lay out the details… Yes, this is indeed the (un)conference Cloud Camp, which is ever so popular worldwide!


CloudCamp is an unconference where early adopters of Cloud Computing technologies exchange ideas. With the rapid change occurring in the industry, we need a place where we can meet to share our experiences, challenges and solutions. At CloudCamp, you are encouraged to share your thoughts in several open discussions, as we strive for the advancement of Cloud Computing. End users, IT professionals and vendors are all encouraged to participate.

Okay, now that you have a fairly decent idea of WHAT it is, let’s cover the good and raw details! The schedule below is likely to change as it gets finalized, but what I can guarantee is that Friday evening there will be an executive round table and panel, which will be moderated and also open to questions from the audience and Twitter (I’ll be monitoring it on a hashtag I’ve yet to define…). I’ll also do what I can to ensure we have one or more uStream live feeds, which have made Cloud Camp so popular and successful in the past!

CloudCamp Executive Panel Event

Friday, March 5th, starting at 4:30PM
Agenda
• 4:30PM – 5:30PM: Registration, Happy Hour & Networking
• 5:30PM – 6:00PM: Break
• 6:00PM – 7:00PM: Panel of experts consisting of local corporate executives, industry experts and professors, addressing how Cloud Computing is impacting their organizations and the business climate at large
• 7:00PM onwards: Social networking continues at a local establishment

CloudCamp Chicago
Saturday, March 6th, starting at 12PM
Agenda
• 12PM – 1PM: CloudCamp Networking and Registration
• 1:00PM – 5:30PM: CloudCamp Un-Conference
  • 1:00 – 1:30: Lightning Talks (5 minutes each)
  • 1:30 – 2:00: Un-Panel to Select Topics
  • 2:00 – 3:00: Topic Breakout Sessions
  • 5:00 – 5:15: Reconvene and Share Takeaways with all Attendees
  • 5:15 – 5:30: Wrap-Up and Calls to Action
• 5:30PM onwards: Social Networking Event

Being that this event will consist of two days (the Friday evening round table, and the Saturday all-day adventure), there are two separate registration links so you can choose which is more fitting for you. So if you’re more business-focused and want to strategize around Cloud, Friday may be a better fit for you; but if you’re deeply technical and don’t care about the business side, it’s all about Saturday!

However, many of you are like me and care about both sides of the coin, and will register accordingly. I’m not just saying that because I’m helping organize, coordinate (and more) the event… it’s also because I am focused on both sides of the house, as it were :) So, good times – if you want to meet me, I’ll be there in either case!

Register: CloudCamp Executive Panel, Mar 5, 2010

Register: CloudCamp Chicago, Mar 6, 2010

And for all other general-purpose information, feel free to visit the CloudCamp Chicago portal page.

This event will be hosted at the ITA – I’ve attended numerous events there in the past; it’s a nice facility and definitely worth a visit!

Illinois Technology Association (ITA)
200 S. Wacker Drive, 15th Floor
Chicago, IL 60606

So, I look forward to seeing you there! And if you’re attending entirely online (like I have a tendency to do for other CloudCamp events), be sure to follow me on Twitter @cxi – I’ll be live-tweeting from the event and sharing live video from multiple sources! Thanks, and get out there and register before all the slots fill up!