I’ve been working with vRealize Automation (vRA) for a few years now, and I’ve seen folks do some pretty impressive things with it.
In Part 1 of this series, we did a conceptual walk-through of three basic NSX + vRA7 Micro-Segmentation approaches. We compared the pros and cons of each approach, and talked about the situations in which you would be most likely to employ each one. In this article, we’ll actually be doing the hands-on setup for each approach. Also, since this article focuses solely on Micro-Segmentation functionality (the NSX Distributed Firewall), we will not need to configure any Logical Switches, VXLANs, or Edges/routing.
Recently I did a video illustrating virtual disk synchronization capabilities with Tintri SyncVM. The latest Tintri OS 4.0 takes SyncVM a step further by allowing file-level restores from snapshots. Currently this only works on VMware, but it is compatible with both Linux and Windows guests.
Let’s start by navigating to my Linux demo machine from the Tintri UI via the search option.
After searching for the ‘cl-linux-file’ demo machine, right-click on the VM and select ‘Restore VM/Files’.
Next, select the ‘Guest OS File’ radio button and choose the snapshot you wish to restore a file from in the drop-down menu. I chose to uncheck the ‘Auto detach disks in 48 hours’ option because I will manually detach the snapshot when I’m finished with the restore. Then click ‘Restore’.
You will see progress in the background as the snapshot is added as an additional disk. Once it reaches 100%, you can log in to the VM and mount the drive.
On my Linux VM, I have to do a rescan to detect newly added SCSI devices. This is handled by a very simple script (named scan.sh, as you can see in the illustration) that rescans the SCSI bus and then mounts the disk under a mount point I named ‘recover’.
After running scan.sh, you can see that I now have a new disk on /dev/sdb1 mounted under the ‘recover’ mount point.
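The post doesn’t show the contents of scan.sh, but here is a plausible sketch of what such a helper might look like. The device name /dev/sdb1 and the ‘recover’ mount point come from the walkthrough; the /sys/class/scsi_host rescan trigger is the standard Linux mechanism:

```shell
# scan.sh (hypothetical sketch; the post does not show the real script).
# Assumes the restored snapshot appears as /dev/sdb1, as in the demo.

rescan_scsi() {
  # Ask every SCSI host adapter to rescan its bus for newly attached disks.
  for host in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$host"
  done
}

mount_recover() {
  # Mount the first partition of the new disk read-only under /recover,
  # so files can be copied out of the snapshot without modifying it.
  mkdir -p /recover
  mount -o ro /dev/sdb1 /recover
}
```

You would source this as root and then call `rescan_scsi && mount_recover`.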
Now I can navigate to ‘/recover/home/clucas’ and restore the file named ‘large.file’ to ‘/home/clucas’ with a simple copy. Then navigate to ‘/home/clucas’ and verify the file is there.
Now that the file is recovered, I can unmount the drive and then detach the snapshot from my VM back in the Tintri UI.
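The restore itself is just a copy. Here is a runnable sketch of that step, using temporary directories to stand in for the snapshot mount (/recover/home/clucas) and the home directory from the demo:

```shell
# Simulate the restore: $snap stands in for the mounted snapshot
# (/recover/home/clucas) and $home for /home/clucas in the demo.
snap=$(mktemp -d)
home=$(mktemp -d)
echo "demo data" > "$snap/large.file"

# The actual restore step: copy the file out, preserving mode and times.
cp -p "$snap/large.file" "$home/"

# Verify the file arrived before unmounting and detaching the snapshot.
ls -l "$home/large.file"
```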
That’s it! It’s very simple to restore files directly within the guest OS using SyncVM file restore. The process is exactly the same on Windows, except you use Disk Management to bring the added disk online.
As most of you who know me are already aware, I left VMware around 5 months ago to join Tintri. VMware is a great company and I’m very grateful for the opportunity I had there. While at VMware, I had several customers that deployed Tintri storage appliances, and every one of them raved about it. When the opportunity presented itself, I was extremely excited to take on a new journey.
Having been on board now for a little over 5 months, I simply can’t believe how simple, high-performing, and feature-rich our product line is. The following demo illustrates a recently released feature known as SyncVM. Not only can you synchronize an entire VM to multiple points in time, you can also sync individual vDisks from other VMs.
This demonstration shows the simple process of synchronizing a production DB down to a test system, then reverting the test system to its previous state. Stay tuned, as even more advanced SyncVM features will be announced soon!
With VMworld fast approaching, some are eager to party and catch up with Twitter friends. Wading through crowds vying for free USB keys and lighted rubber balls excites them. Others wonder how they will endure serial small talk, or how they can make an appearance at an insanely loud, hot, alcohol-fueled soiree and still execute a perfectly-timed exit.
This post will deal with the latter. You know who you are. You’re the guy or gal who loves technology and wants to be knee-deep in a tech conference; you’d just prefer it with about 12,000 fewer people, engaging only in meaningful, passionate, technical conversations.
First things first: realize your introversion doesn’t make you a freak who prefers being in your shell at all times. It simply means you derive your energy from within. You recharge your internal batteries alone, digesting your thoughts with few distractions. Extroverts recharge their batteries right there on the show floor. While they yell over loud music and megaphones to tell people what they had for dinner, they’re gaining energy. You are the one analyzing a band’s musical prowess and tonal structures at a concert; extroverts are the ones kicking you in the face crowd-surfing at the same show.
Here are some ways introverts can make gigantic tech conferences easier to digest, and condition their batteries at the same time.
1. Talk to people
The popular stereotype is that introverts don’t like people, and don’t like to carry on conversations. This couldn’t be further from the truth. Introverts love to carry on conversations with anyone who will engage on a topic we feel passionate about. Once engaged, it is hard to stop some introverts from talking. This is one reason I love attending Tech Field Day events: you’re in a pre-selected group of your peers, who are guaranteed to have passionate opinions and want to engage on the topics you care about. Introverts generally think about a topic pretty deeply before discussing it. According to Susan Cain, in her book Quiet, this depth of thought “may also help explain why they’re so bored by small talk. If you’re thinking in more complicated ways, then talking about the weather or where you went for the holidays is not quite as interesting as talking about values or morality.”
Take advantage of vendors who are dying to tell you all about the intricacies of their products, and schedule one-on-one time with them. Find a few vendors you really want to learn more about. Most of the better ones will have times when you can sit for a one-on-one briefing, or a quick “Genius Bar”-type conversation with one of their engineers. I highly recommend doing this. It’ll get you engaged, and you’ll be talking to someone who is passionate and deeply technical (most of the time).
2. Get a hotel that’s as close as possible to the conference center
Sounds like a no-brainer, right? While all personality types want to minimize their walking distance and maximize their conference time, this is especially important for the introvert. It gives you the flexibility to head back to the room and catch your breath during the day if you need to. When you feel your batteries running down, go ahead and skip that session you had scheduled; it’s going to be online later anyway. Head back and wind down for an hour. Recharge, and veg out. This can make a dramatic difference in your day. If you’re stuck with all those people, constantly shuffling from session, to hall, to crowded meals, you’ll be completely wiped before dinner.
3. Go out for meals
Yeah, I know. Your company paid for a conference where meals were included. But unless you’re in Vegas, the catering is generally horrible anyway. Hit TripAdvisor or Yelp to find close restaurants you’d like to try, and get away from the crowds for a bit. If you find other conference attendees at these places, guess what: they’re likely doing the same thing as you, and if you end up getting into a conversation with them, it will be engaging. They likely hate small talk as well, and want to share some of their complex thoughts with an equally complex thinker.
4. Don’t skip parties
Make sure you go to at least a couple of parties. Most of the time, you can find fellow introverts hanging out, sipping slowly, drifting toward the door. If you do, execute a casual greeting, with all the tentativeness you’d want from them. If they do want to chat, it won’t be some asymmetrical, bombastic conversation where you’re competing for volume. It’ll likely be on a technical topic you can appreciate and will value. Exchange cards with that person. This is how we introverts can network without the high schmooze factor, and without wasting valuable energy.
5. Don’t forget labs
If you need a break to recharge, you can always go do some labs. Nobody will bother you there, and it’s pretty quiet. Don’t stress out about missing sessions you wanted to see. Again, they’ll be available online just a few weeks after the conference.
6. General sessions are great from the hang spaces
Most of the hang spaces at the conference will be broadcasting the general sessions live. If you’re not feeling up to the crowds, you don’t need to stand in a sea of people, waiting to get a decent seat at these. Just head over to the beanbags, and watch from there.
Most of all, have fun. Don’t try to take it ALL in. Prioritize. There’s too much at VMworld for even the most extroverted to experience it all.
Monitoring, analytics and predictive analysis are all necessary elements of any successful IT shop worldwide. As environments and businesses move towards 100% uptime and continue to drive forward, it’s absolutely critical to know the health of all elements that make up the IT portfolio. There are many different operations framework solutions on the market today. Many focus on the elements of ITIL and allow operations teams to respond to issues in a timely fashion and drive down the mean time to resolution. ITIL focuses on Incident, Problem, Change and Release Management. This blog post, however, will focus on the analytics and heuristic elements that surround the virtual infrastructure.
Organizations are driving further and further towards virtualizing everything with a “Virtual 1st” strategy. In fact, the vast majority of the shops I meet with on a daily basis are well down that path, moving towards 90%+ virtualized! It’s clear that VMware is the market leader in the hypervisor space, so the natural choice for these shops is to utilize native VMware tools. And no, I didn’t mean the VMware Tools you install in a guest VM 😉 Recently VMware renamed vCenter Operations Manager (vCOps) to vRealize Operations Manager (vROps).
Tintri + vRealize Operations Manager
At Tintri we focus on removing the challenges that conventional storage brings to the virtual infrastructure. We are the standard for running virtual workloads efficiently and reliably, while moving management from the traditional LUN or volume level down to the individual VM. This places us right in the wheelhouse for pulling all the rich Tintri visualizations and deep insights into vROps. It also means that everything Tintri knows about the virtual infrastructure, our customers’ main analysis engine will know too! You’ll correlate specific VMs and workloads to performance problems, identify sources of issues, and predict when and if an environment will potentially run out of resources. For all existing Tintri customers the Management Pack for vROps is 100% free! All that’s required is vROps 5.8 Advanced Edition and at least a single Tintri VMstore. Since the Management Pack uses the Tintri REST API, you’ll need to be at Tintri OS version 3.1.x. Just like everything Tintri does, it’s THAT SIMPLE 🙂
Let’s take a deeper look into the technology and features.
Tintri’s Management Pack for VMware vROps provides a holistic view and deep insight into the health and overall efficiency of the Tintri infrastructure. At first glance you’ll quickly notice that all of the same rich Tintri information that’s typically found in the VMstore UI or through Tintri Global Center can now be retrieved from vROps. One of the items that particularly excites me is the ability to have all of my Tintri performance and per-VM metrics available with the retention policy of vROps. I’ve typically seen environments set the retention to around 365 days, so now I have the ability to go back to last year and see exactly how my VMs were performing. Imagine trying to answer the “how did it look last year during peak holiday season?” question. Again, focusing on the analytics engine of vROps, I’m now empowered to make accurate and well-guided decisions.
Another benefit of leveraging the vROps toolset is the ability to utilize its unique badge identifiers. If you’re not familiar, badges allow you to quickly see the health of an individual vSphere object. Through the use of these badges, and whether an object is green (good), yellow (warning) or red (bad!), the given object is assigned a health score. The Tintri Management Pack brings forth Health, Workload and Capacity information.
So how do I get this coolness?
Follow me on Twitter @ClintWyckoff
Disaster recovery is something that’s very near and dear to my heart, all the way back to my years on the end-user side of the fence. The annual or semi-annual disaster recovery event is typically a very painful and long process with lots of lost sleep! Dating myself a bit, but BC/DR was even more of a challenge when we had to recover applications running on physical machines. System state restores to dissimilar hardware were never fun!
Virtualization has changed the industry in many ways, but one challenge IT departments still face is effectively protecting the business. What happens when we have that “smoking crater”? How do we know we’re protected? CIOs around the world are asking these questions, so preparation is key.
For years Tintri customers have had the ability to efficiently replicate on-premises VMs (yes, individual VMs, not LUNs or volumes) off premises. VMs that have differing Recovery Point Objectives can be managed individually.
Tintri ReplicateVM with vCenter Site Recovery Manager (SRM)
Tintri OS 3.1 further integrates the ReplicateVM engine with vCenter Site Recovery Manager (SRM) to provide an automated, non-disruptive way to centralize recovery plans for every virtualized application! Let’s take a deeper look.
First off, if you’re looking to implement Tintri VMstore with vCenter SRM, you’ll want to check out the Best Practices Guide or watch the SRM video above. These provide a step-by-step, soup-to-nuts explanation of exactly what’s required and how to get everything up and running without issue. Second, you’ll need to make sure you’ve got an appropriately set up infrastructure.
The requirements are rather basic. See illustration below.
Now that we’re all set up, grab the Tintri Storage Replication Adapter (SRA) from the Tintri Support Portal. After you download it, install the SRA on your vCenter Server on both the Protected and Recovery sides. It’s a straightforward install: next, next… finish.
Next, go through the normal SRM steps of creating mappings and setting up Tintri replication.
Mount the Tintri VMstore to your ESX hosts as normal (e.g., 192.168.1.1/Tintri). However, you need to create a sub-mount for each group of VMs you want to protect.
For instance, for my Gold RPO group (192.168.1.1/Tintri/Gold_RPO): browse the datastore within the vSphere console, create a folder /Gold_RPO, and then mount 192.168.1.1/Tintri/Gold_RPO to your ESX hosts. Then Storage vMotion the Gold RPO VMs to that datastore.
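If you prefer the ESXi command line to clicking through the vSphere console, the same sub-mount can be attached with esxcli. A sketch, wrapped in a function since it only runs in an ESXi shell (the address and names are the examples from this post):

```shell
# Attach the Gold RPO sub-mount as an NFS datastore on an ESXi host.
# The VMstore address and datastore names are the examples from this post.
add_gold_rpo_mount() {
  esxcli storage nfs add \
    --host=192.168.1.1 \
    --share=/Tintri/Gold_RPO \
    --volume-name=Gold_RPO
}
```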
This datastore now holds all the VMs that need to be recovered with the Gold Recovery Point Objective. Now jump over to the VMstore UI, navigate to the Virtual Machines tab, and click Service Groups. The easiest way to think of it: a Service Group in Tintri = a Protection Group in SRM.
For a more granular RPO, such as 15 minutes, click Custom, then Hourly, then click in the Minutes field and choose the required RPO. It’s also important to note the ability to take either crash-consistent or VM-consistent snapshots.
VM-consistent snapshots leverage the VMware Tools present inside the guest OS to quiesce applications like SharePoint, SQL Server, Exchange, DB2, Active Directory, etc.
To wrap up the setup, go through and create your Array Pair (choosing the Tintri SRA), Protection Group and Recovery Plan. All of these steps are illustrated in great depth in the video I created and in the Best Practices Guide.
One of the great parts of Tintri ReplicateVM + SRM is the ease of use and efficiency. ReplicateVM has always been extremely WAN-friendly! Like many things on a Tintri VMstore, replication is based on VMs and snapshots. When replicating VM snapshots, before sending a single block of data over the WAN, we exchange block fingerprints to determine which blocks are missing on the far side. Once identified, we send only those missing blocks, compressed and deduplicated, to ensure the latency-sensitive WAN is never taxed with unneeded blocks of data.
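To make the fingerprint exchange concrete, here is a toy illustration (not Tintri’s actual wire protocol): split a file into 8 KiB blocks, hash each block on both sides, and count how many blocks the replica is missing; only those would cross the WAN.

```shell
# Toy illustration of fingerprint-based replication (not Tintri's actual
# protocol): only blocks whose hashes the replica lacks would be sent.
work=$(mktemp -d)

# Source file: four 8 KiB blocks. The "replica" already has the first two.
head -c 32768 /dev/urandom > "$work/source"
head -c 16384 "$work/source" > "$work/replica"

# Split each side into 8 KiB blocks and fingerprint them.
split -b 8192 "$work/source"  "$work/src_"
split -b 8192 "$work/replica" "$work/dst_"
( cd "$work" && sha256sum src_* | awk '{print $1}' | sort > src.sums )
( cd "$work" && sha256sum dst_* | awk '{print $1}' | sort > dst.sums )

# Hashes present on the source but absent on the replica = blocks to send.
missing=$(comm -23 "$work/src.sums" "$work/dst.sums" | wc -l)
echo "blocks to send: $missing"   # prints "blocks to send: 2"
```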
With everything set up, it’s pretty easy to go through and perform a test recovery of the protected VMs. Within the SRM plugin, drill into the Recovery Plan that’s been created and mapped to the Protection Group. It’s worth reiterating that Protection Groups in SRM correlate directly to Service Groups in Tintri.
Right-click on the Recovery Plan and choose Test. One of the options you’re presented with is “Do you want to replicate recent changes to the Recovery Site?” This allows Tintri ReplicateVM to copy over the blocks of data that have changed since the last sync cycle. After the test, you’ll want to right-click and run the Cleanup task. Since a test is not an actual failover, the Protected side still retains the authoritative copy of the VMs, and Cleanup allows SRM to get everything back to the way it needs to be for normal replication to continue. Never perform a Recovery unless it’s a true failure situation; if you’re just looking to sanity-check yourself, use Test. Recovery moves all authoritative rights over to the Recovery side, and you’ll have to re-replicate everything back to the Primary.
With that I’ll leave you with the 3 key pillars and differentiators.
So what’s the takeaway? Again, Tintri continues to deliver disruptive technology that focuses on the largest and fastest-growing area of the modern data center: virtualization!
At Tintri I talk with a lot of customers and prospects about their virtualization environments and how they relate to their storage configurations. Virtual machine provisioning discussions come up quite a bit, so I thought I would write about some new features that Tintri just introduced.
The way we deploy virtual machines has certainly changed over the years on the storage side of the house. Thin Provisioned, Eager Zeroed Thick, Lazy Zeroed Thick; there has always been a long menu of choices when deciding how to deploy the virtual machines that support your applications. This has also created some confusion around “which choice is right for me when I deploy my virtual machine?” I have also noticed recently that many customers thought they had deployed thin-provisioned VMDKs, but in fact they were running thick due to default values being selected.
First, let me start off by saying Tintri is “pro virtual machine thin provisioning”. You might be saying: wait a second, you’re NFS on vSphere, you are thin provisioned by default! This is true, but with our VAAI implementation we can honor any of the other provisioning methods from VMware as well. Let’s say you do a Storage vMotion and move an inefficient thick-provisioned virtual machine from an existing block storage environment over to a Tintri VMstore. If VAAI is installed, we will observe the specifications of the existing format, retain that .vmdk format, and punch zeros (unless you decide to change the option when migrating).
Note that there is no need to use the older “thick” provisioning methods when deploying workloads on Tintri. Our VMstore operating system is designed to understand the workload of every virtual machine down to an 8KB block, and Tintri has QoS built into the datastore to adapt as your VMs change from a performance perspective.
With our new T800 platform, we have raised the bar on the value you get from your Tintri VMstore investment. We have enabled compression at rest on all of the new models to help drive your storage costs down, allowing your organization to run as efficiently as possible from a capacity perspective. The current shipping version of Tintri OS also adds some great capacity management features, which I will highlight below.
For illustration, I deployed a few empty VMs in the lab (no operating system). As you can see in the screenshot below, some are eager zeroed thick provisioned, one is lazy zeroed, and one is thin:
Here is the overall capacity of the VMstore prior to making changes to the virtual machine formatting:
In the example above you can see our compression ratio numbers are a little low, so let’s examine why. If a virtual machine is thick provisioned per VAAI, according to the specifications you must “hard back” the zeros, i.e., reserve the space inside the virtual machine. If you were to thin provision the .vmdk file, compression would allow us to reclaim the white space. This process typically involved doing a Storage vMotion so you could run the conversion. Not anymore!
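For contrast, the traditional route without this feature was either to Storage vMotion the VM and pick thin as the target format, or to clone the disk to thin with vmkfstools from an ESXi shell while the VM is powered off. A sketch, wrapped in a function since vmkfstools only exists on ESXi (the datastore and VM paths are made-up examples):

```shell
# Hypothetical sketch of the traditional ESXi-shell conversion. The
# datastore/VM paths are made-up examples; the VM must be powered off.
clone_to_thin() {
  vmkfstools -i /vmfs/volumes/datastore1/demo-vm/demo-vm.vmdk \
             -d thin \
             /vmfs/volumes/datastore1/demo-vm/demo-vm-thin.vmdk
}
```

After the clone completes, you would point the VM at the new thin VMDK and delete the original — extra steps and extra I/O that the in-UI conversion below avoids.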
Tintri has built in some great ways to examine and optimize your virtual infrastructure. In the example below you can see the “Provisioned Type” field on the far right, which I have exposed in our user interface to identify which VMs are thick provisioned.
Let’s go ahead and right-click and convert these VMs to thin disks within the Tintri user interface!
This conversion process is instantaneous, and you can now see in the Tintri user interface that we have converted our inefficient thick-provisioned VMs to thin without having to perform a Storage vMotion.
Below, you can see the vSphere Web Client now reflects accurate capacity savings on each virtual machine:
Below you can see the Tintri VMstore’s overall compression ratio has gone from 1.7x to 2.7x since we converted the virtual machines to thin-provisioned vDisks!
Tintri has taken this one step further to help our customers (and thank you, customers, for your continuous feedback; this feature is a result of it!). We now have a global option within the datastore settings to keep all virtual machines that get migrated to Tintri thin provisioned, regardless of their source format! No more going back to reclaim space on VMs that were accidentally migrated over thick.
I hope you found this write up useful, let me know if you have any questions!
In the movie Interstellar, one of the central themes throughout is gravity, and its effect on the passage of time. This time dilation is something that has long fascinated me, and it was great to see it fleshed out on the big screen by a master like Christopher Nolan. While not giving any spoilers, the basic scientific principle is that relative velocity and gravity both cause time to elapse at a much slower relative pace. So an astronaut could travel to Mars, and come back having aged 18 months, while someone on Earth had aged 19 months. Of course this is a dramatic oversimplification, but my goal is not to explain relativity here.
What does any of this have to do with virtualization, technology, or anything else?
One of my favorite tech podcasts these days is In Tech We Trust. This group of guys has great chemistry, and a broad array of technical experience. If you haven’t checked it out, I would encourage you to give a listen.
The past couple of episodes got me thinking about the relativity between those who are technology pioneers and the rest of the world. There were mentions of “peak virtualization”, and how, with technologies like Docker and everyone rushing to the public cloud, we could be seeing it now. And this is where I believe we can see some time dilation between the relative velocity of a handful of “astronauts”, heavily into technology on the bleeding edge, and reality down here on Earth. People who attend the multitude of different tech conferences, stream Tech Field Day events, and keep abreast of exciting developments in tech are not in the majority.
The majority is engaged every single day in making the technology they have execute their business goals. While we are musing about OpenStack, Docker, Swift, etc., these folks are grinding it out with programs that were written long ago, and will never be suitable for those types of cutting-edge deployments. I know companies right now who are planning projects to migrate core apps off mainframes. And you know how they’re doing it? They’re basically porting applications over so that they will run on open systems. They’re taking apps written in ALGOL or COBOL, and rewriting them the exact same way in a language they can sustain and deploy on open systems.
They’re not re-architecting the entire application, or the way they do business. They’re interested in satisfying a regulator, or auditor, who has identified a risk. They need to do it as inexpensively as possible, and they need to do it without introducing the risk of switching to object based cloud storage, or a widely distributed, resilient cloud model. They’re not concerned with the benefits they can glean from containerization, OpenStack, or whatever. They need to address this finding, or get off this old behemoth mainframe before it dies, or they have to spend millions on a new support contract.
In the real world, which runs on Earth time, the companies I deal with are not willing to entertain dramatic re-architecture of the core parts of their business, just to take advantage of something they don’t see a need for, or business case around. And if you happen to get an astronaut in a company like this, and he or she mentions something about cloud, or a hot new technology, the response is usually befuddlement, or outright dismissal. How can you blame the C level people? They’re constantly seeing stories about gigantic cloud providers taking massive amounts of downtime, and silly outages that affect the majority of the globe. They don’t need that. They need their bonuses.
Remember, many of these people only implemented virtualization because they couldn’t justify NOT doing it.
While many in our circle of astronauts have the luxury of ruminating on the end of virtualization, and the next big thing, the people who are still in the atmosphere have concerns that are far different. Predicting the future is definitely a fool’s errand, but based on what I can see down here, I’d have to guess that we are not yet at peak virtualization.