
Able-One Blog

What is Software-Defined Storage?

By Todd Cox, Chief Technology Officer

Recently, there has been a lot of buzz around the term “software-defined,” especially software-defined storage (“SDS”). At Able-One Systems, we’ve partnered with Atlantis Computing to deliver SDS solutions for our customers.

The obvious difference from traditional storage arrays is that a true software-defined solution is just that: software only. The solution does not force you to purchase proprietary hardware. You can run a software-defined solution on your own existing hardware, or on hardware from leading manufacturers like IBM, Lenovo and Dell. With Atlantis, you can also pool and abstract existing SAN- and NAS-based storage in addition to direct-attached disk.

This leads nicely into the next major difference: cost. Enterprise storage manufacturers spend staggering amounts of money developing their proprietary controllers and operating systems. A software-defined strategy runs on commodity hardware, so all of the R&D effort goes into the software itself. This drives the cost down considerably compared with traditional dual-controller solutions.

The real problem with traditional dual-controller storage systems is their finite scalability. Even the very high-end controllers can only cluster so far, so there is a very real ceiling on growth. Because software-defined storage uses a grid-based approach, its scalability is virtually limitless: when I want more capacity or performance, I add more commodity servers and storage.

One might think that using commodity components means that the level of availability and reliability would suffer, but it’s actually the opposite. Atlantis allows data to be striped across multiple drives located in multiple server nodes, for a much higher level of storage availability.
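To make the idea concrete, here is a minimal Python sketch of striping with a mirror copy. It is purely conceptual and not Atlantis's actual data path; the node names and chunk size are invented for illustration:

```python
# Conceptual sketch only -- not Atlantis's actual implementation.
# Stripes data chunks round-robin across server nodes and keeps a mirror
# copy of each chunk on a different node, so losing one drive or node
# does not take the data offline.

CHUNK_SIZE = 4096                                   # illustrative stripe unit, in bytes
NODES = ["node-a", "node-b", "node-c", "node-d"]    # hypothetical server nodes

def place_chunks(data: bytes) -> dict:
    """Return a placement map: chunk index -> (primary node, mirror node, size)."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    placement = {}
    for idx, chunk in enumerate(chunks):
        primary = NODES[idx % len(NODES)]           # round-robin striping
        mirror = NODES[(idx + 1) % len(NODES)]      # redundant copy on the next node
        placement[idx] = (primary, mirror, len(chunk))
    return placement

if __name__ == "__main__":
    for idx, (primary, mirror, size) in place_chunks(b"x" * 20000).items():
        print(f"chunk {idx}: {size} bytes -> primary={primary}, mirror={mirror}")
```

The point of the sketch is simply that redundancy lives in software across many nodes rather than inside a single dual-controller array.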

The new buzzword is hyper-converged, which goes hand in hand with software-defined. A hyper-converged architecture combines local SSDs and commodity servers with software-defined storage, putting all of your compute, storage and networking into one converged solution. Instead of buying storage and then buying servers to run your VMware environment, your hyper-converged architecture IS your VMware cluster and storage all in one. Imagine the flexibility and ease of management this brings to the table.

I’m excited to see where this technology is going. Clearly, software-defined storage and hyper-converged are more than just marketing buzz. Companies like Atlantis Computing are certainly leading the way to less reliance on large storage manufacturers.

Navigating software-defined solutions can be complex; if your company needs assistance, contact us today for a free consultation.

Top 3 factors: which IBM Storwize storage product is right for you?

by Todd Cox, Chief Technology Officer and Lead IT Architect 

Recently, I've been getting a lot of questions about IBM's new announcement of the Storwize V3700... essentially the little brother to the Storwize V7000. While at first glance the products seem very similar in terms of functionality, there are several important issues to consider when evaluating the two solutions:

External Virtualization

While the V3700 has the concept of internal virtualization and disk pools like its older brother, there is no external virtualization capability with the V3700. This means that I cannot front-end older Fibre Channel-based disk subsystems like I can with the V7000. This functionality really sets the V7000 apart from the competition, allowing my customers to extend the useful life of the aging storage assets already on the floor. Even if you don't buy into virtualizing older storage subsystems, this capability greatly aids data migration through built-in migration wizards.

 

Unified Option

One of the best features of the V7000 is the Unified option, which combines block and file-based storage into one full-function solution. The V7000 Unified also brings Active Cloud Engine (ACE): functionality that allows me to tier my files across multiple media types based on straightforward policies. ACE allows me to reduce the cost of my overall storage footprint by storing files on the right type of media based on file type or usage pattern. The V3700 does not have a unified option and is block-based storage only, so there is no Active Cloud Engine either.
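Purely as an illustration of what a policy-driven tiering decision looks like (this is not the Active Cloud Engine policy language; the tier names, file-type rules and thresholds below are invented), here is a short Python sketch:

```python
# Illustrative sketch of policy-based file tiering -- not Active Cloud Engine.
# Tier names, file-type rules and age thresholds are assumptions.
import os
import time

DAY = 86400  # seconds

def choose_tier(path: str) -> str:
    """Pick a storage tier from the file type and days since last access."""
    age_days = (time.time() - os.stat(path).st_atime) / DAY
    ext = os.path.splitext(path)[1].lower()

    if ext in {".iso", ".bak", ".zip"}:   # bulky, rarely re-read file types
        return "nearline"
    if age_days > 180:                    # untouched for roughly six months
        return "nearline"
    if age_days > 30:
        return "sas-10k"
    return "ssd"                          # hot files stay on the fastest media

if __name__ == "__main__":
    for name in os.listdir("."):
        if os.path.isfile(name):
            print(f"{name}: {choose_tier(name)}")
```

The product expresses the same idea through its own policies, but the economics are identical: colder files migrate to cheaper media.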



Scalability
 

Overall growth and scalability is limited on the V3700 as compared with its older sibling. I can scale out up to 4 controllers on the V7000, with a total of 960 drive bays. On the V3700 I only have 120 drive bays before I max out.

Don't get me wrong; the V3700 is an amazing entry-level addition to the Storwize family from IBM, with functionality like thin provisioning and remote mirroring that is typically reserved for enterprise-class storage. That said, it is important to understand the main differences between the two offerings before heading down a particular solution path.

 

Learn more about the right storage solution for your environment at our seminar on January 29th at 11 a.m.

Sign Me Up!  

 


Topics: Infrastructure

The Future of Storage - are you ready?

by Todd Cox, Chief Technology Officer and Lead IT Architect

When I ask Chief Information Officers what their biggest source of frustration is around storage and storage management, the answer is clear: managing the growth of storage and getting to the "true" price per terabyte on the floor. The true cost of enterprise disk has little to do with the price of the hardware, as most storage vendors buy their disk drives from the same manufacturers. The real value of storage in the data center today is the software functionality built into the vendors' offerings, which allows for truly efficient management of the storage environment. Functionality like compression, de-duplication and storage tiering becomes a critical strategy when looking to drive down the price per usable terabyte.
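As a back-of-the-envelope illustration of why those features matter (the capacity, price and reduction ratios below are assumptions, not vendor figures), here is a small Python calculation:

```python
# Back-of-the-envelope math; all numbers here are hypothetical.
def price_per_usable_tb(raw_tb: float, price_per_raw_tb: float,
                        compression_ratio: float = 1.0,
                        dedup_ratio: float = 1.0) -> float:
    """Effective cost per terabyte of data actually stored after data reduction."""
    usable_tb = raw_tb * compression_ratio * dedup_ratio
    return (raw_tb * price_per_raw_tb) / usable_tb

# 100 TB raw at a hypothetical $500/TB, with 2:1 compression and 1.5:1 dedup:
print(round(price_per_usable_tb(100, 500, compression_ratio=2.0, dedup_ratio=1.5), 2))
# ~166.67 per usable TB, versus 500.00 with no data reduction
```

The hardware price barely moves between vendors; the data-reduction ratios are where the per-terabyte number is actually won or lost.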

Big Data

Unstructured file system data, sometimes called "big data," is driving many of our customers' expanding storage needs. Often I see clients storing information that has not been accessed for 12 months or more, yet it is sitting on very expensive disk. Why do companies do this? Simply put, they have neither the tools to identify obsolete or orphaned data nor policies in place for dealing with this dilemma. There must be a better way, you say? You're right, there is. Through technologies like tiering and information lifecycle management (ILM), data can live on the appropriate media type, which ultimately drives down the overall cost of storage on the floor.
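The tooling half of that problem is tractable. As a simple illustration (a plain filesystem walk, not an ILM product; the 12-month threshold and the /data path are assumptions), here is a Python sketch that flags candidates for a cheaper tier or archive:

```python
# Illustrative only -- a basic scan for files not accessed in ~12 months.
import os
import time

TWELVE_MONTHS = 365 * 86400  # seconds

def stale_files(root: str):
    """Yield (path, size_in_bytes) for files untouched in the last 12 months."""
    cutoff = time.time() - TWELVE_MONTHS
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue                      # skip unreadable files
            if st.st_atime < cutoff:
                yield path, st.st_size

if __name__ == "__main__":
    total_bytes = 0
    for path, size in stale_files("/data"):   # hypothetical file share to scan
        total_bytes += size
    print(f"stale data found: {total_bytes / 1e12:.2f} TB")
```

Knowing how many terabytes are cold is the first step; a policy for where that data should live is the second.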

Virtualization

Virtualization is a common theme in the server space, and internal virtualization is certainly not new to the storage world; the concept of external virtualization, however, is much less common. What if you could "front-end" your existing storage with all of the new functionality and availability of your new enterprise storage purchase, thereby extending the useful life of assets you already own? Sounds too good to be true, right?

These concepts and technologies will be discussed in much more detail next Wednesday, December 12th, when I present the #TheFutureofStorage webinar. We will examine storage trends in the industry and talk frankly about driving costs out of enterprise storage. I hope you can join me!

 

Click here to register for the Future of Storage webinar.

Topics: Infrastructure, Cloud Computing and Hosting
