Get Your Dockerized ECS on in AWS

EMC launched ECS 2.0 at DockerCon 2015 and a great big bonus was included – you can download a community edition for FREE.  The ECS developers were early adopters of containerization, so this software-defined object storage platform runs as, you guessed it, a Docker container.  Let’s deploy it in AWS.

This post walks you through the steps to run the free, community-supported, non-production version of ECS in AWS as a Docker container on a CentOS instance.  You can start at section 2 to deploy the container to a bare-metal or vSphere CentOS machine.  If you do, be sure to add a disk of at least 100GB to be used for ECS persistent storage.

Updated 7/13/15: Note: the installation scripts at the EMCECS GitHub page are updated regularly.  Please review the documentation there to ensure you are using the latest commands.

Overview of Steps:

  1. Launch and Configure a CentOS Instance in AWS

  2. Execute ECS Deployment Script

  3. Configure ECS with Script

Launch and Configure a CentOS Instance in AWS

  1. Log in to the AWS Console and navigate to EC2
  2. Click Launch Instance.  Excited yet?
  3. Choose AWS Marketplace, enter CentOS, and click Select.
  4. Updated 6/29: Thanks @mcowger for pointing out a lower cost instance option. Select the R3.xlarge instance ($0.35/hr) rather than the originally suggested M4.2xlarge, as it meets the minimum requirements of 30GB of RAM and 4 CPUs.  Then, click Next. The cloud costs money.  This could add up if you leave it running!  NOTE: ECS can run with less than 30GB of RAM with some tweaks not covered here.
  5. Select or create a new VPC.  Click Next.
  6. Click Add New Volume, enter a size of at least 100GB, and select Magnetic for the volume type.  Again, this surely is not free, captain cloud builder.
  7. Click Next.
  8. Configure the security group to allow the ports necessary to access ECS; one way to script this is shown in the example just after this list.  I recommend restricting access to specific IP addresses because the initial deployment will have a well-known default password.
  9. No need to boot from SSD unless you want to pay for it.
  10. OMG Launch it already!  Ain’t the cloudz so simple?  SHAMELESS PLUG ALERT: you too can have a simple on-prem cloud > EMC Federation Hybrid Cloud gets you there quickly.  Tell them I sent you.  If only I got referral payments…
  11. Create a new key pair unless you already have existing keys to use.  Do not lose this or give it to your neighbor.  They will hack your ECS instance.
  12. Launch Status.  Houston, we have a new instance in the cloud.  Time to make it rain ECS Swift or S3 objects.
  13. Select your new instance and copy your public address.  You will need this to navigate to your fabulous new cloud instance.  Tell the CIO you are using the cloud.  RUN!
  14. Open Terminal on your Mac.  PC users, go get a Mac or an SSH client.  Execute:
    ssh -i <yourkey>.pem centos@<youraddressfromabove>
  15. Type yes when prompted to trust the new host key and continue connecting.
  16. Updates.  Yummy ->
    sudo yum update
  17. Install required bits ->
    sudo yum install git tar wget
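
If you would rather script the security group from step 8 than click through the console, here is a rough sketch using the AWS CLI.  The port list is my assumption for a stock ECS Community Edition install (portal, management API, and S3/Swift data endpoints); check the EMCECS GitHub documentation for the definitive list for your build, and substitute your own security group ID and source IP.

    # Illustrative only; verify the port list against the EMCECS docs.
    MY_IP=203.0.113.10/32    # your workstation's public IP
    SG_ID=sg-xxxxxxxx        # the security group created in step 8
    for PORT in 22 443 4443 9020 9021 9024 9025; do
      aws ec2 authorize-security-group-ingress \
        --group-id "$SG_ID" --protocol tcp --port "$PORT" --cidr "$MY_IP"
    done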

Execute ECS Deployment Script

You can perform these steps on any Docker-capable machine with enough CPU and RAM.

  1. Clone the ECS repository from github:
    git clone https://github.com/EMCECS/ECS-CommunityEdition


  2. cd ECS-CommunityEdition/ecs-single-node/


  3. Get the device name for the volume you added and the NIC name.
    sudo fdisk -l

    then

    ifconfig -a

    Also note the IP address of eth0 for later.

  4. Run the configuration script, substituting the values you gathered from the previous commands.  Here is what it looks like on my super CentOS machine in the cloud.
    sudo python step1_ecs_singlenode_install.py --disks xvdb --ethadapter eth0 --hostname <yourhostname>
  5. If you failed to mess up these steps, you will eventually see a prompt instructing you to navigate to the web interface. See what I did there?
    1. SUCCESS!!  You just deployed a cloud-scale object platform with geo-distribution capabilities.  Nice.
    2. Put it on your resume.
  6. You can verify the container is running with a
    sudo docker ps

    command.

  7. You can now navigate to the web interface but be sure to continue on to the next section to configure the object settings.
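
Before you click around the UI, you can sanity-check from the shell that the portal is answering.  The URL and the log-tailing trick are assumptions for a stock single-node install; go by whatever the step 1 script printed for your build.

    curl -kI https://<yourpublicaddress>             # expect an HTTP response once the portal services are up
    sudo docker logs $(sudo docker ps -q) | tail     # peek at the logs if the portal is slow to appear (assumes ECS is the only running container)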



Configure ECS with Script

  1. This section is quick.  One script.  You can do it. The main thing you need to be sure to change is the ECSNodes flag; use your NIC IP from above.  The other options can be customized if you want.
    sudo python step2_object_provisioning.py --ECSNodes=10.0.1.10 --Namespace=ns1 --ObjectVArray=ova1 --ObjectVPool=ovp1 --UserName=emccode --DataStoreName=ds1 --VDCName=vdc1 --MethodName
  2. Let the script complete.  Notice it is using the ECS REST API to configure the virtual data center, virtual array, and more.
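
If you are curious what the script is doing under the covers, you can poke at the same REST API yourself.  This is a sketch that assumes the default root/ChangeMe credentials and the management API on port 4443, which are typical for a fresh Community Edition install but may differ on your build:

    # Grab an auth token from the management API, then list namespaces; ns1 from the script should appear.
    TOKEN=$(curl -sk -u root:ChangeMe -D - https://<your-ecs-ip>:4443/login -o /dev/null \
      | awk '/X-SDS-AUTH-TOKEN/ {print $2}' | tr -d '\r')
    curl -sk -H "X-SDS-AUTH-TOKEN: $TOKEN" https://<your-ecs-ip>:4443/object/namespaces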

Now that you have a fully functional Elastic Cloud Storage platform deployed, go push all of your objects up there.  You can use it for HDFS too.  Check out the ECS 2.0 Product Documentation Index to really geek out.  Also check out the handiwork of the EMC {code} team.
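
To actually push something, point any S3 client at your node.  Here is a minimal s3cmd sketch; it assumes you have created an object user and secret key in the ECS portal and that the S3 data endpoint is listening on the default HTTP port 9020:

    s3cmd --access_key=<object_user> --secret_key=<secret_key> \
      --host=<your-ecs-ip>:9020 --host-bucket=<your-ecs-ip>:9020 --no-ssl \
      mb s3://my-first-bucket
    s3cmd --access_key=<object_user> --secret_key=<secret_key> \
      --host=<your-ecs-ip>:9020 --host-bucket=<your-ecs-ip>:9020 --no-ssl \
      put README.md s3://my-first-bucket/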

Deploying ViPR Services Docker Container on AWS

Update 7/2/15: EMC Software Defined Solutions team launched ECS 2.0 at DockerCon and announced the availability of a free community supported version of ECS available at emc.com/getECS.  This essentially replaced the project I refer to in this post.  Head over to my post that walks you through deploying ECS 2.0 in AWS or directly on CentOS.

EMC is all-in on Open Source, code sharing, and APIs to cater to developers. The @EMCCode team has many fantastic projects in the works.  Go check it out at http://emccode.github.io.  Some of them, like the project I am writing about, will be public very soon, but for now it is available internal to EMC only.  Consequently, some details are not available here until the final public release.  This post walks you through everything to get you set up with an instance running Docker in AWS and then deploying the <secret_codename> Docker container that functions as a standalone ViPR Services instance.  You can use this to test S3, Swift, and Atmos object functionality.  Note that some ViPR Services capabilities are not available in this standalone instance.

If you have not played with Docker, this is a good introduction and will get you using your first container.

Deploy Docker in AWS

  1. Log in to AWS and launch the EC2 instance wizard.
  2. Select the Amazon Linux AMI or your preferred distribution.
  3. Choose the instance type.  You need 12-16GB of RAM for the instance, so go with the r3.large with 2 CPUs and 15GB of RAM.
  4. Go with the defaults on the Configure Instance Details page unless you have specific networking requirements.
  5. Review and click Launch. We will configure the security group later.
  6. Create a new key pair or use an existing one.  This is what you will use to connect to the new instance.  NOTE: I recommend setting up billing alerts at this point to prevent you from getting a surprise bill the size of your teenage daughter’s texting usage charges.
  7. Launch the instance.
  8. Browse to your instance(s), select the one you created, and click connect for instructions for connecting.
  9. SSH into your instance
    ssh -i key.pem ec2-user@<ip_address>
  10. Install Docker
    sudo yum install -y docker ; sudo service docker start
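
Before pulling the image, it never hurts to confirm the daemon is actually up; these are generic Docker checks, not anything specific to this project:

    sudo docker info                  # daemon status and storage driver details
    sudo docker run --rm hello-world  # optional smoke test; pulls and runs a tiny public image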

Deploy <Secret_Codename>

Now that Docker is deployed in your AWS linux instance you are ready to deploy the <secret_codename> Docker container.

  1. Download the container
    sudo docker pull emccode/<secret_codename>
  2. Next, run the container and bind the ports from the container to the host using “-p”
    docker run -tid -p 10101:10101 -p 10301:10301 -p 10501:10501 emccode/<secret_codename>:latest
  3. The container ID is displayed.  Replace <id> with the value.  This connects to the persistent bash session.  Use <ctrl-p> and <ctrl-q> to leave this session but keep the container running.
     docker attach <id>
  4. You just unleashed the ViPR.
  5. Open the above ports.
    1. Browse to your AWS instance and select the ViPR instance.
    2. Click on the Security Group name below to view the settings.
    3. Click Edit.
    4. Click Add Rule and then specify the ports we used earlier: 10101, 10301, 10501.  Configure the Source to a specific IP, range, or Anywhere.
    5. Save
  6. Get your secret key
    cat /StandaloneDeploymentOutput
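
If you would rather not attach to the container's console, and your Docker version supports exec, you can read the same file from the host (substitute your container ID; the path is the one from step 6):

    sudo docker exec <id> cat /StandaloneDeploymentOutput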

Client Access

The below s3cmd examples are copied from https://github.com/emccode/<secret_codename>/blob/master/clients.md

  1. Install s3cmd
  2. Configure s3cmd
    • s3cmd --configure
  3. Configure s3cmd to use:
    • Access key: wuser1@sanity.local
    • Secret key: YourSecretKeyHere
    • Encryption password:
    • Path to GPG program:
    • Use HTTPS protocol: no (in simulator version)
  4. Add HTTP Proxy:
    • HTTP Proxy server name: <ip_of_vipr>
    • HTTP Proxy server port: 10101
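
For reference, the configure wizard writes those answers into ~/.s3cfg, so a hand-edited file with roughly the following entries (illustrative values) gets you to the same place:

    access_key = wuser1@sanity.local
    secret_key = YourSecretKeyHere
    proxy_host = <ip_of_vipr>
    proxy_port = 10101
    use_https = False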

List buckets

  • s3cmd ls

Create bucket

  • s3cmd mb s3://new-bucket

Copy file to bucket

  • s3cmd put README.md s3://new-bucket

Copy file from bucket

  • s3cmd get s3://new-bucket/README.md test.md

Command help

  • s3cmd --help

15 minutes to Storage as a Service

In most organizations, IT is quickly coming to realize that it must become an efficient service provider that is easy and fast to transact with, or its workloads will rapidly move to a public cloud provider.  Now before you start throwing stones, I am part of the crowd that strongly believes there are things that should not and will never move to a public cloud service.  Key intellectual property is a good example.  That is another post….

So you want to build out a service catalog?  Where do you start?  How about a full-blown, heterogeneous storage service catalog in 15 minutes that automates storage management?  That sounds like an excellent start!  Watch the video below as I install ViPR Controller as a vSphere vApp, configure ViPR, discover physical assets, create virtual assets, and finally order storage from the service catalog – in 15 minutes. Then visit a recent post where I will be regularly updating links to valuable resources for ViPR and ViPR SRM. I intentionally did not edit this video so you can see that even with the wait time involved with deploying the OVF, booting ViPR, waiting for services, and so on, you can accomplish this all easily and quickly.  An edited version with voice-over is coming soon.

I used virtual Isilon nodes, which you can download here or by going to support.emc.com, navigating to Support by Product, searching for Isilon OneFS, selecting Tools, and grabbing the virtual nodes.  I will post a walkthrough of how to deploy them and configure OneFS to use ViPR so you can play with both in a lab.  Note, this is for lab testing only.  Get ViPR for free with no time-bomb at emc.com/getvipr.

ViPR and ViPR SRM Resources and Getting Started

…Updated 9/30/2014 with new links…

The below list will continue to grow as I find specific resources that are valuable to those looking to learn about, deploy, and leverage EMC ViPR and ViPR SRM.

An overall excellent public community devoted to ViPR Software-Defined-Storage, ViPR SRM, and ECS:
https://community.emc.com/community/products/vipr

ViPR Product Documentation Index. This contains concepts, install and configure guides, download links, and important information like the support matrix.

ViPR quick install: I created this youtube video to show installing and configuring ViPR to running self service provisioning in under 15 minutes. It is still rough with no voice over but a great way to see the basic steps and speed of deployment.

ViPR SRM Product Documentation Index.  A well-organized index of concepts, install and configure guides, and a listing of all available SolutionPacks with their installation and usage guides.  SolutionPacks plug in to ViPR SRM to provide reporting for storage (EMC and 3rd party), SAN fabrics, virtualization, SQL, Oracle, and more.

ViPR SRM Quick Start steps:

  1. Deploy OVF template.  A small eval environment can use the single VM OVF.
  2. Login to http://<IP/hostname>:58080/APG
  3. Go to Administration -> Centralized Management
  4. Click Ok to save the default server configuration
  5. Use the SolutionPack Center to configure reporting for each component (e.g., VNX, VMAX, vCenter)
  6. Enjoy the visibility into your environment
  7. Check out the topology views by clicking explore in the left pane and selecting a host or vm.
  8. See the ViPR SRM Product Documentation Index for more details.

Download ViPR Free!

The links below require an EMC Support login:

ViPR support and Download full ViPR paid version (requires a license key)

Download ViPR SRM (includes a 30-day trial).  Follow the link and download “ViPR SRM 3.5SP1 vApp Deployment”  See above for quick start steps.

Software Defined Objects

I get a kick out of all of the ___-as-a-Service acronyms that the industry has invented.  We now have another common phrase – Software Defined <Data Center/Network/Storage>.  These are software-based solutions that abstract and pool heterogeneous hardware resources and then layer on intelligent software to provide an easy-to-consume service, typically with rich REST APIs for programmatic access.  Hence my creation, Software Defined Objects, to label just one of the capabilities of ViPR.  But first we must cover some basics about objects. We will see object storage in more and more enterprises as they transition to mobile, web, and big data applications.  Why is object gaining in popularity? Why would someone choose object over a filesystem?

First, a definition of what makes an object.  I am borrowing Chuck Hollis’ because I cannot state it any simpler.  You can find a very useful analogy on his post. An object looks like this:

  • an arbitrary amount of bits (data, code, whatever)
  • a unique identifier
  • an arbitrary amount of metadata to describe it to different parties
  • and some sort of access method to invoke or use it

Access methods are typically REST API based.  Common examples out there are OpenStack Swift, Amazon S3, EMC Atmos, and Centera.
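
To make that anatomy concrete, here is what an S3-style PUT looks like at the HTTP level: the request body is the arbitrary bits, the bucket and key form the unique identifier, and the x-amz-meta-* headers carry the arbitrary metadata.  The endpoint and values are illustrative, and a real request also needs a signed Authorization header, which SDKs and tools like s3cmd generate for you.

    curl -X PUT "https://s3.example.com/my-bucket/vacation/photo-0042.jpg" \
      -H "x-amz-meta-camera: nikon-d750" \
      -H "x-amz-meta-location: portland-or" \
      --data-binary @photo-0042.jpg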

File, meet Object

If IT is fortunate, the DevOps team is requesting an object store for a new web application.  I say fortunate because often they are taking this object store out to a public cloud service like Amazon or rolling their own in a dreaded “Shadow IT” project.  I say embrace object and all of the next generation capabilities it has to offer.  Better yet, offer an object store to the dev team before they ask.  You are becoming the IT service broker after all. Here is why the developers want an object store as opposed to a filesystem.

Geo-Distribution

Objects know nothing about the location restrictions that filesystems must deal with on a daily basis.  While a filesystem is constrained to a single location, object stores can span distances, allowing an object to be accessed anywhere and live in many locations globally.  Access anywhere, with locality awareness that serves up the closest copy based on the user's location, is a huge advantage of object storage.  Imagine an ebay.com listing whose images must be served to a potential buyer in Europe while the seller is in the US.  The buyer’s experience would be poor if every image had to be retrieved across the pond.  The user need not care about the location of the object; they simply request it from the namespace and let the object store do the heavy lifting of locating, retrieving, and serving the object from the nearest available location.  Replication is core to the architecture of most object platforms for this reason and is not just a method of providing resiliency, although that is a tremendous benefit.

Global Namespace

That last point requires the ability to have a global namespace across all locations.  http://www.emc.com/namespace/objectguid is an example.  No matter the geographic location of the object or application retrieving the object, it will still be found at the same namespace location.  Compare this to accessing a CIFS or NFS share that has a specific, single geographic location.

Meta-Data

Want to store key data about the objects/files you are storing?  Most file use cases today rely on some kind of relational database to store the metadata pointing back to the file.  This does not scale well when dealing with millions or billions of files.  Even worse are implementations I have witnessed where the files are actually embedded inside a database structure.  Unstructured data buried in a structured relational database.  Yuck!  Object stores keep the metadata with the object on the storage itself, allowing for seemingly endless scale.

Scalability

There are scale-limiting factors of filesystems – overall size restrictions of a file system and number of files or directories.  Because objects are stored in a flat structure rather than directories, there is no need to store directory structure data which is often why filesystems begin to struggle at scale.

ViPR Object

ViPR Data Services (DS) layers various data structures on top of existing enterprise storage infrastructure – today VNX, Isilon, and NetApp NFS filesystems – and soon commodity hardware.  With this software-only solution that runs as a scale-out cluster of VMs, IT organizations now have a simple to deploy and manage method to provide Object and Hadoop as a service to their development teams.  The supported Object APIs in ViPR Data Services are Atmos, OpenStack Swift, and Amazon S3.  Keep in mind, the ViPR DS is not managing a standalone instance of OpenStack Swift but instead IS the object store itself.  Data traverses through ViPR DS.  This is a common misunderstanding when talking with customers and EMCers.  Expect to see future data services added such as File and Block – think ScaleIO as a ViPR DS!  Chad Sakac recently made it very clear that much of our portfolio will run in the data plane as a software only solution by the end of 2014.  My guess is there will be tight integration with ViPR controller to manage these software defined services.

Access methods battle it out

A powerful option ViPR DS provides is the ability to simultaneously access stored data via Object or HDFS.  Don’t battle over where or how data is stored or, worse, move massive amounts of the same data around for different purposes.  Think of the duplication of data that occurs if an application writes objects or files but then you want to run analytics in your Hadoop cluster.  Traditionally, this requires you to move the data from the object store or filesystem to the Hadoop cluster.  Holy duplication of data!  Instead, create the objects with the application via REST API and then access that same data in-place with the Hadoop cluster.  Therefore, no duplication of data and much faster access to the data as it exists in realtime.
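
As a sketch of what that looks like in practice: write the data once with any S3 client, then read the very same objects in place from Hadoop.  The bucket name, namespace, and viprfs URI below are illustrative; the exact URI format and client configuration depend on how the ViPR HDFS client is installed in your environment.

    s3cmd put weblogs-2014-04.log s3://analytics-bucket/
    hadoop fs -ls viprfs://analytics-bucket.ns1.site1/
    hadoop fs -cat viprfs://analytics-bucket.ns1.site1/weblogs-2014-04.log | head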

Real World

Briefly, here are a couple observations from my interactions with customers.

  • Developers are more frequently demanding object rather than block or file storage.  DevOps shops love REST APIs rather than needing to deal with LUNs or file/folder structures in their code.
  • Many customers are pursuing OpenStack Swift as a means to leverage commodity hardware instead of proprietary storage systems even if object is not the data type they need.  This leads to interesting, maybe bizarre, solutions such as placing a file gateway in front of the object store.  Now you are back to the same limitations as before.  Pay attention to EMC World as you may hear “Commodity” mentioned.
  • Many pay-for products are sprouting up to assist with, and provide enterprise support for, the free Open Source options.  SwiftStack and Inktank are a couple of examples.  This goes along with my belief that while free Open Source projects may be a way to get to cheaper commodity platforms, the effort to roll them out and subsequently maintain them is often overlooked.  Value-added software options like ViPR can deliver these capabilities to the enterprise faster, easier, and in a supportable package.
  • Use cases include storing large amounts of unstructured data such as log files, images, and web content.

What are you seeing out there?

A New Year, a brand new SRM 3 at EMC

This post was featured on EMC Advanced Software Division’s rethinkstorage.com.  Head over there too for SDDC perspectives from our management team and product gurus.

If one of your New Year’s resolutions is to improve storage resource management and gain better visibility into your environment then you are in luck. EMC released Storage Resource Management Suite (SRM Suite) 3.0 on January 30, 2014. This is a major new product release with awesome enhancements and a complete consolidation of the software suite known affectionately as the SRM Suite.  Previous versions of SRM Suite consisted of three discrete products:

  • Watch4Net for single pane of glass monitoring and reporting across enterprise infrastructure
  • Storage Configuration Advisor for SAN and array compliance validation against EMC best practices and support matrices
  • ProSphere for end-to-end topology mapping with capacity trend analysis and chargeback

SRM Suite 3.0 now includes all of the above in a single interface, single agentless polling architecture, and a single back-end!  It even installs simply via a vApp OVF template.  Early access customers who previously used ECC now feel that they have a management solution to supersede ECC. SRM exceeds their needs by eliminating ongoing agent maintenance, providing end-to-end visibility with thorough alerting, and presenting a more intuitive interface.

With the shipment of SRM Suite 3.0, EMC has delivered on its promise to consolidate multiple products into one with the best features of each previous software component preserved in the new offering. Look for an installation and configuration post shortly but in the meantime, “These are a few of my favorite things…”

New Look and Feel
New dashboards guide you through typical operations, reporting, and troubleshooting workflows. You can now see combined information across different components in your infrastructure for complete end-to-end analysis because storage is not the only thing that matters in a private cloud.

simple dashboard workflow

Capacity Trend Analysis and Chargeback
You can now perform enterprise wide capacity trend analysis to determine storage growth rates and when new capacity will be needed.  Context sensitive tables allow you to drill-down into the details about LUN Consumers and Disk Contributors.  Now you can predict when virtual machines or physical hosts will run out of capacity plus you will avoid the frantic request to expand or create new datastores.

Gain rapid insight into VMs that have nearly reached their filesystem capacities

We all appreciate heat maps for quickly determining trouble areas.  This one shows which VM filesystems are running out of space.

Chargeback maps resource utilization to VMs, custom departments, or projects and can even include costs for each class of storage service.  This example shows the out-of-the-box chargeback details for projects provisioned via the self-service catalog in ViPR.

How much is a department or project costing you per month, last week, over the last quarter? This is at your fingertips and works out of the box with ViPR!


 

End-to-end Topology
Viewers quickly understand the relationship of a VM through the entire stack – VM, vSphere Host, SAN, and storage.

End-to-end topology lets you see the relationships of the VM, vSphere host, SAN fabric, and storage.


Drill into each component to determine what ports are being used.  Also, selecting a VM, host, or storage changes the context to reveal performance or capacity data about that particular component.

expanded topology view

Drill down into the details to determine what array or host ports are connected to the fabric ports.

Want to know where a particular virtual hard disk lives on your storage array?  Say goodbye to the time consuming processes of cross-referencing spreadsheets or viewing multiple management interfaces to map a VM back to the storage array.

Get a detailed mapping of VMDK to Datastore to Array LUN


Performance Trend Analysis
Oh the dreaded performance troubleshooting.  The DBA walks up to you and tells you his database is performing poorly.  Do you A) Tell him to stop complaining, B) Open all of the various points of management for your infrastructure, C) Summon SRM, or D) Blame the network guys?

Our suggestion is that you invoke SRM Suite 3.0 since it grants you the ability to execute rapid root cause analysis of performance issues at all levels.  Take a look at the database VM to see if disk IO is experiencing some kind of bottleneck.  Looks good?  Export any report to a PDF, image, or email it directly to the DBA, manager, and other curious people.  You can even create a custom view that contains tables, charts, and graphs from other reports by simply pinning them to build a dashboard.

Overview of VM performance and capacity


Performance Concern vm expanded

This VM appears to have a disk performance problem. Quick and easy root cause analysis for the entire stack.

Drill into the details of the virtual disk performance to get a better picture of what is happening. That’s odd, the latency is high but IOPS are low, then suddenly latency drops off and IOPS spike to over 30k.  It looks like VNX FAST Cache kicked in and began servicing the workload from flash in real-time.
VNX FAST Cache is a latency killer and IOPS master


There are a ton of canned reports that customers over the last few weeks have loved.  Here are a few others:

  • VM over/under sized – to help you understand if the CPU, Memory, or Disk are over/under sized
  • LUN not masked/mapped – find those pesky LUNs that are not being used and wasting space!
  • VMs running with snapshots
  • Datastore trend analysis with precise time that a Datastore will reach 90% full based on the growth rate – a life saver!
  • Pin view – grab any number of reports into one single view for a dashboard or to share via PDF with a co-worker.

Finally, here is a summary of other notable new features:

  • Quick installation of the product into Windows, Linux, and VMware environments with a vApp for vSphere
  • Windows host agent that collects data without requiring any credentials on the host or a completely agentless option for easy deployment and maintenance-free operations
  • Host capacity utilization reporting that allows the storage admin to see how existing capacity is being utilized at the host level
  • Management of SLAs and Chargeback reports in terms of FAST Policy or disk/array characteristics
  • A new use-case driven user interface
  • Performance monitoring and troubleshooting
  • Topology maps, end-to-end tabular views of the data
  • Consolidated monitoring of health, configuration, compliance breaches, and threshold-based alerts
  • Tabular summaries of information about zonesets, zones, and zone members
  • Global search across all discovered configuration items
  • A consolidated Alert console that includes:
    • Alerts for breaches of EMC’s eLab Support Matrix and configuration best practices
    • EMC PowerPath alerts
    • Threshold alerts based on performance data
    • Health alerts collected from Brocade switches, Cisco switches, VMware, and VNX and VMAX arrays

With the shipment of SRM Suite 3.0, I hope you fulfill at least one of your New Year’s resolutions!

ViPR: The Harmony Remote for Your Data Center

I find myself speaking more and more about ViPR with customers and partners.  The common thing amongst everyone is a general misunderstanding of what ViPR does for them.  In this post I will walk through my talk track which uses a simple analogy that I think we can all at some level relate back to our lives.

Data growth is explosive, while the budgets for maintaining that growth stay the same or shrink. Not only is data growing, but we are seeing all sorts of new data types like Hadoop and Object alongside traditional File and Block.  IDC labels the next generation of applications as Platform 3.  This next platform demands infrastructure agility that most enterprise environments cannot provide.  From a storage-only perspective you start to see the picture below: many storage platforms and vendors, each with its own element manager.  This complexity drives cost up nearly as fast as the data growth itself.

complexity

 

This leads to an example you can hopefully relate to.  I have a home theater that consists of many components – a TV, VCR (kidding!), Surround Receiver, DVD, Roku, Home Theater PC, and a cable box.  What happens when I want to watch cable TV or a DVD, or even better, when my wife wants to watch any of the above?  There are a series of steps that must be performed in a certain order, otherwise I might have audio but no video or vice versa.  There are many remotes that we juggle to control each device, and you must jump between each one to hit the buttons in the proper order.

remotes

My wife has never enjoyed this feature of our TV room.  Believe it or not, she does not like tinkering with the electronics like I do.  She just wants to watch House Hunters and does not care that the audio comes out in hi-def surround sound with 1080p quality through HDMI source 2.  A common scene at our house after wrestling with remote controls is below… This may be a slight exaggeration as she is pretty accommodating of my technology snafus.  For the purpose of this post, you may imagine IT is the guy and the biz is the wife.  🙂

D@mN home theater

There is a solution.  First, you can get a universal remote to consolidate all remotes down to one.

universal remote

This is great for giving one point of control but you still need to understand which buttons to press in which order to get everything to work properly.  If you go away for a week you will likely forget the order and maybe make a tragic mistake that gets in between you and Breaking Bad.  This error at home is easy to overlook, but what about in IT?

You were only thinking Harmony Remote because it is in the title.  Yes, the answer is a Logitech Harmony Remote, which not only consolidates all remotes down to one by abstracting the control away from the individual components, it also has powerful built-in macros that trigger events automatically in the same specific order every single time.

Happy wife, happy life.  Need I say more?

happy wife, happy life

 

ViPR Controller software (get it for FREE, no hardware required) is a universal remote control that abstracts the control plane of the underlying heterogeneous storage arrays, providing a single ridiculously simple management interface.  ViPR is also an automation engine that handles all tasks automagically for storage management – provisioning, replication, expansion, zoning, mounting a volume, snaps/clones, creating distributed VPLEX volumes across metro distances, RecoverPoint journals/source/target, and more.  All of this out of the box without any need to write custom scripts.  All accessible through a self-service web interface or REST APIs.  Powerful.  Did I mention we just launched it as FREE for non-production?
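
For a taste of the REST side, here is a hedged sketch of authenticating and listing discovered storage systems.  The port, header, and endpoint shown are typical of ViPR Controller releases from this era, but check the ViPR REST API reference for your version before relying on them:

    TOKEN=$(curl -sk -u root:<password> -D - https://<vipr-vip>:4443/login -o /dev/null \
      | awk '/X-SDS-AUTH-TOKEN/ {print $2}' | tr -d '\r')
    curl -sk -H "X-SDS-AUTH-TOKEN: $TOKEN" https://<vipr-vip>:4443/vdc/storage-systems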

ViPR slide

 

Consequently, IT transforms from labor intensive and slow processes to immediate access to low cost, elastic compute, storage & network infrastructure.  This is just scratching the surface of ViPR’s capabilities.  This is really exciting stuff and further makes my decision to become part of the EMC Advanced Software Division a clear win.  Those of you that are along for this ride and leveraging these automation and orchestration tools are building a killer resume.  Enjoy!  Come back for more soon on these other transformational functionalities:

  • Chargeback/Showback
  • Self-Service
  • Data Services – Object and Hadoop on your existing storage infrastructure.
  • Commodity storage
  • Project Nile
  • A fast installation guide for ViPR

Update 1/28/14: I just learned that @keithnorbie presented a similar concept at VMworld 2013 titled “How SDDC is like a harmony remote” for a vBrownbag session.  I guess it is true, great minds think alike. Check it out.

New title and role and more blogging

Many are aware at this point that I transitioned to a new role at EMC in the last few months. This month I round out the end of my third year at EMC. It has been a blast as I have had the awesome opportunity to work beside the best sales and engineering force on the planet. Everyone is always willing to lend a hand, share their experiences, and teach new things not to mention the education that EMC provides is way beyond what I had at any other job.

So, the new gig. It was time for a change. I made the decision to explore options within EMC to grow my career, peer network, and expertise. Among the options was the chance to start a new role as an Advanced Software Division Specialist for our MidMarket Division. The new buzzword these days is Software-Defined ‘Everything’ (data center, storage, networking). The Advanced Software Division is defining Software Defined Storage at EMC. My thinking was, hey, the industry is changing rapidly and moving to this whole Software Defined strategy so what better place for me to be than right at the front of it helping to pave the way at EMC…?? Bam, done.

Software consulting is not new to me but I have spent the last few years advising customers on primarily hardware storage platforms.  But wait, nowadays storage is just a bunch of software intellectual property executing on commodity x86 hardware platforms, right?  “What is ASD at EMC?” everyone asks.  The Advanced Software Division is comprised of everything SDDC at EMC including the new and exciting ViPR Software Defined Storage and the Storage Resource Management Suite for cloud monitoring and reporting, plus UIM, SAS, and more.  Uh, you didn’t think I was going to miss this opportunity to shamelessly plug my new tech, did you?

Finally, as a technical expert in this area, you should expect to see way more blogging and sharing of these technologies and related SDDC tech like Puppet Labs (go Portland!), OpenStack, Amazon AWS, and maybe a post or two about my somewhat-new 11 month old son or geek personal tech!  Looking forward to hearing from you for things you want to read or learn about now that I am “Keen on SDDC.”  First question – corny title?  Now go follow me on twitter dangit!  The first three new followers who mention this post get a starbucks gift card and you better be a human and not some annoying spammer!

Happy Holidays!  See you next year.

@jeremykeen

 

New venture at EMC for 2011

Today marked the beginning of a new venture as I logged my first day with EMC.  Thrilling, mentally tiring, and massive are words that describe my experience on the first day.  Thrilling as any new job should be, mentally tiring due to extreme information overload, and massive describes the new ecosystem I now belong to.

Here’s to 2011!