Storage Archives | eWEEK
https://www.eweek.com/storage/

IBM’s Diamondback Tape Library Focuses on Security for Hyperscalers
https://www.eweek.com/storage/ibms-diamondback-tape-library-focuses-on-security-for-hyperscalers/ | Oct. 28, 2022

Data storage innovation often gets short shrift in digital transformation discussions where it is simpler to focus on the advancements of silicon, chipset and system solutions. But the fact is that improvements in storage capabilities like capacity and data read/write speeds are comparable to – or even greater than – what compute performance has achieved.

Those and other issues make IBM’s recent introduction of its new Diamondback Tape Library both timely and intriguing.

Also see: Best Data Analytics Tools 

IBM’s Diamondback Tape Library

How does IBM’s new tape offering address these points? The company describes the Diamondback Tape Library as “a high-density archival storage solution that is physically air-gapped to help protect against ransomware and other cyber threats in hybrid cloud environments.”

The new solution was designed in consultation with more than 100 hyperscalers, including “new wave” organizations and the Big Five hyperscalers.

IBM notes that Diamondback is designed to provide hyperscalers the means to securely store hundreds of petabytes of data, including long-term archival storage with a significantly smaller carbon footprint and lower total cost of ownership than disk and flash solutions. According to the company, IBM Tape solutions are approximately one quarter the total cost of both spinning disk storage and public cloud archival services.

Individual IBM Diamondback Tape Libraries fit in the same 8 square feet of floor space as an Open Compute rack (a 42U, 19” rack). Systems can be ordered fully loaded with LTO-9 tape cartridges and are fully compatible with IBM Ultrium 9 tape drives, which can increase total capacity by up to 50 percent compared to IBM Ultrium 8 technology.

Systems can be deployed in less than 30 minutes, and individual libraries can support up to 27 PB of raw data or 69.5 PB of compressed data. Customers can also store exabytes of data across multiple Diamondback tape libraries using erasure code software available from IBM and as open source.
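Neither the erasure-coding layout nor its parameters are specified above, so the sketch below is only a rough illustration of how such a scheme trades raw capacity for fault tolerance; the 8+2 split and one-shard-per-library layout are assumptions made for the example, not IBM-published figures.

```python
# Minimal sketch of erasure-coding overhead across tape libraries.
# The 8+2 scheme and per-library capacity are illustrative assumptions,
# not IBM-published parameters.

def erasure_overview(k_data: int, m_parity: int,
                     raw_pb_per_library: float, libraries: int):
    total_raw_pb = raw_pb_per_library * libraries
    usable_fraction = k_data / (k_data + m_parity)   # share of raw capacity left for data
    usable_pb = total_raw_pb * usable_fraction
    return usable_pb, m_parity                        # m_parity shards can be lost

# Example: one shard per library across 10 Diamondback libraries at 27 PB raw each.
usable, tolerated = erasure_overview(k_data=8, m_parity=2,
                                     raw_pb_per_library=27, libraries=10)
print(f"Usable capacity: {usable:.0f} PB, tolerating the loss of any {tolerated} libraries")
# -> Usable capacity: 216 PB, tolerating the loss of any 2 libraries
```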

Like all IBM storage solutions, Diamondback Tape Libraries support IBM Spectrum storage applications, including IBM Spectrum Archive, and can also be equipped with data encryption and write-once-read-many (WORM) media for advanced security and regulatory compliance. IBM Services are available for deployment, management and support. IBM Diamondback Tape Libraries are generally available for purchase now.

Also see: IBM Storage: Squeezing Enterprise Value into Smaller Form Factors

Final Analysis

Storage media solutions from punch cards to solid state drives (SSDs) all have had their time in the sun, often simultaneously. Outside of specialized use cases, most earlier storage media technologies like punch cards, floppy drives and optical storage have largely fallen out of favor for business storage.

However, enterprise tape solutions, including tape drives, libraries and media have remained a steady and profitable business for well over half a century.

Why is that the case? Primarily because of continuing development and innovations by tape vendors, including IBM, FujiFilm and Sony. But it can also be argued that the flexibility and adaptability of tape storage systems and media have enabled vendors to craft highly effective tape solutions for emerging businesses and use cases.

IBM’s new Diamondback Tape Library is an excellent example of that process. The company has a long history of storage innovations, and robust, massively scalable tape storage has played a central role in IBM’s mainframe business for decades. IBM also has deep expertise in a wide range of enterprise computing processes and understands the business and technological needs of enterprise clients in ways that few vendors can match.

In other words, designing and building a tape storage solution powerful and capacious enough for organizations that regularly store, manage and access data in petabyte and exabyte volumes is hardly a stretch for IBM given its data storage experience and continuing R&D.

It is worth noting that the Diamondback Tape Library will also complement and benefit from the company’s other storage solutions and initiatives, from the IBM TS7700 Virtual Tape Library to the recent announcement that Red Hat’s storage portfolio and teams will transition to IBM Storage.

Overall, IBM’s Diamondback Tape Library qualifies as an example of what the company does best—create and supply new offerings that meet the often-daunting needs of traditional and emerging enterprises, including traditional and “new wave” hyperscalers.

Also see: IBM Storage Announces Cyber Vault, FlashSystem and SVC Solutions

IBM Storage Announces Cyber Vault, FlashSystem and SVC Solutions
https://www.eweek.com/storage/ibm-storage-announces-cyber-vault-flashsystem-and-svc-solutions/ | Feb. 25, 2022

Modern IT is undergoing a massive transformation, particularly in the realm of data storage. Adding more cybersecurity features and upgrading performance are both important moves for enterprise storage vendors like IBM, especially for their current customer base.

The recent additions IBM announced to its storage portfolio should address top-of-mind issues for many in IT. Let’s take a look.

Also see: How Database Virtualization Helps Migrate a Data Warehouse to the Cloud

IBM Cyber Vault for FlashSystem

IBM Cyber Vault is a new offering that uses IBM FlashSystem Safeguarded Copies to provide validation and verification of copy data, so IT can confirm the copies are good. Safeguarded copies are logically air-gapped snapshots of FlashSystem primary storage, providing immutable, incorruptible data copies.

IBM has a number of offerings in the cyber resilience market, including its Cyber Resilience Assessment professional service and the QRadar and Guardium software solutions that monitor for data threats from systems and humans. Cyber Vault rounds out the portfolio with validation and verification of data.

Cyber Vault is a blueprinted solution from IBM Labs that takes FlashSystem Safeguarded copies and uses them in a secure VM to provide analysis, scanning, and test/validation, as well as potentially forensic and diagnostic services for Safeguarded data.

FlashSystem Safeguarded copies are first copied to a secure Cyber Vault virtual machine environment. Once there, IT can verify and validate that data with whatever tests seem pertinent. Once done, IT knows whether its primary storage (as of the time of the Safeguarded copy) is good to use for recovery from a cyberattack.
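The paragraph above describes a sequence rather than a product API: copy, mount in isolation, test, then decide. The sketch below is only an illustrative rendering of that sequence in code; the mount, unmount and test callables are stand-ins for whatever tooling a site actually uses, and nothing here is an IBM interface.

```python
# Illustrative sketch of the Cyber Vault validation flow described above.
# The callables passed in (mount, unmount, tests) are placeholders for a
# site's real tooling; this is not an IBM API.

def validate_safeguarded_copy(copy_id, mount, unmount, tests):
    """Mount a Safeguarded copy in an isolated VM, run checks, report a verdict."""
    vm = mount(copy_id)                         # attach the copy to a secure sandbox VM
    try:
        return all(test(vm) for test in tests)  # e.g. malware scan, DB consistency check
    finally:
        unmount(vm)                             # always tear the sandbox down

# Toy usage with stand-in callables:
verdict = validate_safeguarded_copy(
    "copy-2022-02-24T02:00",
    mount=lambda cid: {"copy": cid},            # placeholder for the real mount step
    unmount=lambda vm: None,
    tests=[lambda vm: True, lambda vm: True],   # placeholder scan/consistency checks
)
print("good recovery point" if verdict else "copy failed validation")
```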

Cyber Vault can also be used at a remote disaster recovery site with replicated FlashSystem storage. And because IBM supports Spectrum Virtualize targets on Azure, this whole process could be done on the Microsoft Azure cloud.

Cyber Vault was already offered on mainframe systems; the service is now also available for open environments using FlashSystem Safeguarded copies.

Also see: What is Data Visualization

IBM FlashSystem Storage Upgrades

IBM has also released new FlashSystem 9500 and 7300 storage systems. These include:

  • Faster processors – four 24-core Intel Ice Lake CPUs for the 9500 and four 10-core Cascade Lake CPUs for the 7300 system.
  • New PCIe support – Gen 4 for the 9500 and Gen 3 for the 7300 system.
  • Larger capacities – 4.5PBe (PBe is effective capacity after data reduction) in 4U for the 9500 and 2.2PBe in 4U for the 7300 system.
  • New Gen3 FlashCore Module (FCM) – from 4.8TB to 38.4TB in a single module and ~70µsec latency.

All this means lower latency storage access, more storage bandwidth and, overall, 25-50% faster storage performance than prior generation storage. The FlashSystem 9500 also offers up to 48 32GFC ports and is 64GFC ready with new cards. The new FlashSystems deliver up to 2X faster read throughput for AI and in-memory database workloads, up to 50% more transactions per second for Oracle processing, and 4X better performance on VMware Horizon activity.
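For readers unfamiliar with the “PBe” notation used above, effective capacity is simply raw capacity multiplied by an assumed data-reduction ratio. The arithmetic below shows how such a figure is derived; the 3:1 ratio is chosen purely for illustration and is not an IBM specification for these systems.

```python
# Effective capacity (PBe) is raw capacity times an assumed data-reduction ratio.
# The 3:1 ratio below is illustrative, not an IBM spec for the 9500/7300.

def effective_capacity_pb(raw_pb: float, reduction_ratio: float) -> float:
    return raw_pb * reduction_ratio

print(effective_capacity_pb(raw_pb=1.5, reduction_ratio=3.0))   # -> 4.5 (PBe in 4U)
# Working backward: a 4.5PBe claim at 3:1 reduction implies about 1.5PB of raw flash.
```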

IBM also updated the SAN Volume Controller (SVC) appliance with two 24-core Intel Ice Lake CPUs to add more storage virtualization performance to SVC clusters.

Also see: IBM Extends “Tailored Fit” Pricing to Z Hardware

A Boost for Cybersecurity

One can see how IBM’s announcements incrementally improve and build upon past success, at least for cybersecurity. Performance, meanwhile, is a major competitive arena among storage vendors, one that no business can afford to ignore for long. FlashSystem 7300 and 9500 take both to the next level.

Despite recent quarterly progress, IBM’s storage business has struggled over the past few years. FlashSystem and SVC are not the only solutions in IBM’s storage business, and all of its offerings have a role to play in altering that trajectory. And the recent news is just the first of four quarterly announcements for IBM’s storage business.

We’d very much like to see IBM do more to address some of the other enterprise concerns, such as multi-cloud and how to get there. To many, this means Kubernetes, containerization and apps that run wherever it makes the most sense: in the cloud, on-prem, or on the other side of the world.

Furthermore, on the horizon are all the new AI and applied data solutions moving into the enterprise. How to become the major storage supplier for these new applications needs to be on every storage vendor’s mind.

We look forward to Q2 and beyond to see what IBM announces to raise the bar on these and the other major issues facing IT today.

Also see: Tech Predictions for 2022: Cloud, Data, Cybersecurity, AI and More

About the Author: 

Ray Lucchesi, President, Silverton Consulting

How Database Virtualization Helps Migrate a Data Warehouse to the Cloud
https://www.eweek.com/database/how-database-virtualization-helps-migrate-a-data-warehouse-to-the-cloud/ | Oct. 14, 2021

Database migrations are some of the most dreaded initiatives in IT. Bring up the subject of migrations with any IT executive and one gets a strong visceral reaction. Too many careers have been wrecked by failed migrations. Clearly, conventional techniques using code conversion just don’t work.

Especially when it comes to migrating an enterprise data warehouse, horror stories abound. Failed migrations that collapsed after three years of hard work are quite common. Migration projects that cost over $20 million before they fell apart are the norm. But ask any IT leader off-the-record and you might learn about much costlier disasters.

As enterprises move to the cloud, modernizing on-prem data warehouses to cloud-native technology is a top priority for every IT executive. So, what is an IT leader to do? How can enterprises avoid migration disasters when moving legacy data warehouses to the cloud?

Over the past year, several vendors brought to market the concept of database virtualization. The principle is quite simple. A virtualization platform lets existing applications run natively on the cloud data warehouse. No SQL changes, or only minimal ones, are required. So, how does database virtualization take the sting out of database migrations?

What is database virtualization?

Think of database virtualization as Hypervisor technology for database queries. The database virtualization platform sits between applications and the new destination data warehouse. Like any virtualization technology, it decouples two otherwise tightly bonded components. In this case, database systems and applications are abstracted from each other.

The database virtualization platform translates queries and other database statements in real-time. In effect, database virtualization makes a cloud data warehouse like Azure Synapse behave exactly like a Teradata or Oracle system. This is quite different from data virtualization. Data virtualization implements a new SQL dialect and requires all applications to be rewritten to this dialect first.

Compared to static code conversion, database virtualization is significantly more powerful. It can emulate complex constructs or data types. Even elements for which there is no equivalent in the new destination database can be emulated in real-time.

Applications originally written for one specific database can now run on a different database without having to change SQL. Instead of static code conversion with all its risks and challenges, database virtualization preserves existing applications. Instead of months of rewriting application logic, database virtualization makes the new database instantly interoperable with existing applications.
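As a toy illustration of the idea (and emphatically not Datometry’s implementation), the sketch below rewrites two Teradata-specific idioms into a generic ANSI-style form at query time. A real database virtualization platform works at the wire-protocol level, covers the full SQL surface and preserves semantics this toy ignores, such as the fact that SAMPLE draws random rows.

```python
import re

# Toy dialect-translation shim, for illustration only. A real database
# virtualization platform intercepts the wire protocol, covers the full SQL
# surface and preserves semantics (e.g., SAMPLE means *random* rows, which a
# plain LIMIT does not reproduce).

def translate_teradata_to_generic(sql: str) -> str:
    sql = re.sub(r"^\s*SEL\b", "SELECT", sql, flags=re.IGNORECASE)      # SEL -> SELECT
    sql = re.sub(r"\bSAMPLE\s+(\d+)\s*;?\s*$", r"LIMIT \1", sql,
                 flags=re.IGNORECASE)                                    # SAMPLE n -> LIMIT n
    return sql

print(translate_teradata_to_generic("SEL cust_id, revenue FROM sales SAMPLE 10"))
# -> SELECT cust_id, revenue FROM sales LIMIT 10
```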

Virtualization separates migrating to cloud from application modernization

Database migrations typically fail because of an explosive increase in scope while the project is underway. It starts as a simple mission where the goal is making existing applications work with a new database. However, once it becomes clear how extensive the rewrites will be, the scope of the project often changes.

Stakeholders may view the operation as a unique opportunity to modernize their application. If the application needs significant rewriting, they argue, why not make other important changes too? What started as a supposedly minimally invasive operation now turns into full-on open-heart surgery.

In contrast, database virtualization lets applications move as-is. All changes are kept to the absolute minimum. In practice, the extent of changes to applications is around 1%. With cloud data warehouse technology evolving rapidly, we expect the need for even those changes will be further reduced in the future.

Database virtualization changes the above dynamics quite significantly: move first, modernize applications afterward—and only if needed. Once the enterprise is cloud-native, a few select applications may be candidates for modernization. Separating move and modernization is critical to controlling the risk.

Virtualization overcomes the dreaded 80/20-nature of migration

No other IT problem is so often underestimated. We attribute the error in judgment primarily to the fact that it is an incredibly rare operation. Most IT leaders have never planned, nor executed, a major database migration. If they could help it, they made database migrations their successor’s problem.

Once a rewrite project is underway, the initial progress can be exhilarating. Within just a few weeks, the “easy” SQL snippets are converted rapidly. Many require only substituting a few keywords and similarly trivial changes. In true 80/20 fashion, the first 80% takes up very little time and almost no budget. Then comes the last 20%. This is where the hard problems are—and disaster strikes.

In contrast, database virtualization does not distinguish levels of perceived difficulty. Instead, progress is made uniformly. This is not to say there are no challenges to tackle. However, compared to code conversion, the effort needed to overcome those is typically an order of magnitude smaller.

Virtualization mitigates risks

Conventional migration is a high-risk undertaking. As we’ve seen, it starts with underestimating the effort needed. The limitations of rewrite-based approaches are impossible to know up front, despite best efforts. Yet, IT leaders are often asked to put their careers on the line in these projects.

Importantly, with rewrite-based approaches, IT is shouldering the responsibility mostly alone. They are tasked to complete the migration, and then the business gets to judge the outcome.

Compare this to database virtualization. From the get-go, applications can be tested side by side. IT signs up business units early on who can test drive their entire function using their existing tools and processes. Database virtualization promises to relieve IT from taking the risk of implementing something the business cannot use once complete.

On top of that, database virtualization comes with one rather obvious mechanism of risk mitigation. Until the old system is decommissioned, the organization can always move back to the old stack. Reverting requires no special effort, since all applications have been preserved in their original functionality.

Replatform IT to the public cloud

Major enterprises are about to replatform their IT to public cloud. However, so far only a fraction of on-prem systems and processes have been moved. The specter of database migrations is holding enterprises back as all critical workloads are tightly connected to database systems.

Database virtualization is therefore a powerful paradigm for IT leaders who are considering a database migration. While still a young discipline, database virtualization has proven its mettle with notable Global 2000 clients already. So far, its proof points are limited to enterprise data warehousing. However, little imagination is required to see how this technology could apply to operational databases as well.

Database virtualization should be viewed as a critical arrow in the IT leader’s quiver for attacking migration challenges whenever an efficient way to move data to the cloud is called for.

About the Author:

Mike Waas, Founder and CEO, Datometry

#eWEEKchat October 12: DataOps and the Future of Data Management
https://www.eweek.com/big-data-and-analytics/eweekchat-october-12-dataops-and-the-future-of-data-management/ | Oct. 1, 2021

On Tuesday, October 12, at 11 AM PT, @eWEEKNews will host its monthly #eWEEKChat. The topic will be “DataOps and the Future of Data Management,” and it will be moderated by James Maguire, eWEEK’s Editor-in-Chief.

We’ll discuss – using Twitter – important trends in DataOps, including market trends, key advantages, best practices, overcoming challenges, and the ongoing evolution of data management in today’s IT sector. DataOps is a “new-ish” idea, yet it’s an important emerging technology.

How to Participate: On Twitter, use the hashtag #eWEEKChat to follow/participate in the discussion. But it’s easier and more efficient to use the real-time chat room link at CrowdChat.

Instructions are on the DataOps Crowdchat page: log in at the top right, use your Twitter handle to register. The chat begins promptly at 11 AM PT. The page will come alive at that time with the real-time discussion. You can join in or simply watch the discussion as it is created.

Special Guests, DataOps and the Future of Data Management

The list of data storage experts in this month’s Tweetchat currently includes the following – please check back for additional expert guests:

Chat room real-time link: Go to the Crowdchat page. Sign in with your Twitter handle and use #eweekchat for the identifier.

The questions we’ll tweet about will include the following – check back for more/revised questions:

  1. DataOps is still a new-ish term — how do you briefly define it?
  2. Do you think that DataOps is a mainstream approach in today’s enterprise?
  3. Why is DataOps important in today’s data-intensive world?
  4. What’s DataOps’s greatest challenge: Cohesion between the teams? Process efficiency?  Diversity of technologies?
  5. Apart from the challenges listed above, is DataOps’s greatest challenge human or technological?
  6. Can a company “buy” DataOps or is it simply a process to implement? Will it adopt a SaaS model?
  7. So many vendors claim to do DataOps – and they approach it differently. Is the concept losing clarity?
  8. What industries do you see benefitting the most from DataOps?
  9. What do you see as a core best practice for DataOps?
  10. Any predictions for the future of DataOps?

Go here for CrowdChat information.

#eWEEKchat Tentative Schedule for 2021*

Jan. 12: What’s Up in Next-Gen Data Security
Feb. 9: Why Data Orchestration is Fast Replacing Batch Processing
March 9: What’s Next-Gen in Health-Care IT?
April 13: The Home as Enterprise Branch
May 11: Next-Gen Networking Products & Services
June 8: Challenges in AI
July 15: VDI and Enabling Hybrid Work
Aug. 17: DevOps & Agile Development
Sept. 14: Trends in Data Storage, Protection and Privacy
Oct. 12: DataOps and the Future of Data Management
Nov. 9: New Tech to Expect for 2022
Dec. 14: Predixions and Wild Guesses for IT in 2022

*all topics subject to change

 

Hitachi Vantara’s Radhika Krishnan on Data Fabric and Data Management
https://www.eweek.com/big-data-and-analytics/hitachi-vantaras-radhika-krishnan-data-fabric-data-management/ | Sept. 10, 2021

I spoke with Radhika Krishnan, Chief Product Officer for Hitachi Vantara, about the role of data fabrics, and how data storage and data analytics are merging.

Listen to the podcast:

Watch the video:

 

  • James Maguire on Twitter: https://twitter.com/JamesMaguire
  • eWEEK on Twitter: https://twitter.com/eWEEKNews
  • eWEEK on Facebook: https://www.facebook.com/eWeekNews/
  • eWEEK on LinkedIn: https://www.linkedin.com/company/eweek-washington-bureau

How CrowdStorage Built an Affordable Alternative to Amazon S3
https://www.eweek.com/storage/how-crowdstorage-built-an-affordable-alternative-to-amazon-s3/ | June 3, 2021

On-premises data storage is a lot like closet space: one can never seem to have enough! However, the arrival of cloud-based storage solutions has changed the dynamic. More storage is just a few clicks away, making data storage today more like a long-term storage facility, where you pay for the space needed and the amount of time you need that space.

However, there are a few caveats with that analogy, especially when it comes to calculating costs. Most storage-as-a-service solutions seem to have some hidden costs associated with them. Typically, users not only pay for a given amount of storage space, they also pay to access that data. Imagine that whenever you wanted to access something stored in a storage facility, you had to pay a fee on top of the agreed-upon rent, even just to take something out of storage.

Typically, cloud storage vendors charge for storage space as well as for accessing the data, via egress fees or charges for API requests. Case in point is Amazon S3, where users are charged per GB and then charged for data retrieval requests or other types of data access. Adding insult to injury, Amazon S3 also uses a somewhat complex formula to calculate those fees, making it difficult to budget storage and access costs.
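To make the point concrete, here is a back-of-the-envelope estimate of a monthly bill under an S3-style price structure. The rates are illustrative placeholders, not Amazon’s current price list, which varies by region, tier and volume.

```python
# Back-of-the-envelope S3-style bill: storage + request + egress charges.
# All rates are illustrative placeholders, not Amazon's published pricing.

def monthly_bill(stored_gb, get_requests, egress_gb,
                 storage_rate=0.023,       # $/GB-month (assumed)
                 get_rate=0.0004 / 1000,   # $/GET request (assumed)
                 egress_rate=0.09):        # $/GB transferred out (assumed)
    return (stored_gb * storage_rate
            + get_requests * get_rate
            + egress_gb * egress_rate)

# 10 TB stored, 5 million GETs, 1 TB downloaded in a month:
print(f"${monthly_bill(10_000, 5_000_000, 1_000):,.2f}")   # -> $322.00
```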

To its credit, Amazon S3 offers compatibility with numerous applications and services, making it quite simple for applications and services to use Amazon S3 as a primary method for storing and accessing data. It is those support and compatibility issues that drive many organizations to default to S3, despite concerns about costs.

Cloud storage vendor CrowdStorage offers a different take on the cloud storage cost conundrum with its Polycloud object storage service, which offers S3 compatible API and set pricing, without any hidden fees.

A Closer Look at Polycloud

CrowdStorage built Polycloud with several objectives in mind. The first was to build an alternative to existing cloud storage offerings, such as Amazon S3, Microsoft Azure Cloud Storage Services, and Google Cloud Storage. Other objectives focused on affordability, compatibility, and ease of use.

However, one primary goal was to establish a platform that could meet future needs as well as bring additional innovation into the cloud storage picture.

For example, the company has designed a method to store small chunks of data across multiple cloud connected storage devices, in essence creating cloud object storage that is distributed across hundreds, if not thousands of cloud connected storage devices, with data replicated across those devices.

The company has already put that cloud storage approach into practice for archival video files in a proprietary use case for a Fortune 5000 company. That use case leverages some 250,000 storage nodes, where 60-megabyte objects are stored as 40 shards on target devices, creating a highly resilient and secure distributed object storage network.

Hands On with Polycloud

Polycloud uses a “storage as a service” paradigm, where users can sign up for the service using a browser-based form. The service is priced using a pay-as-you-go model, where users only pay for what they use.

There are no egress or ingress fees, long-term contracts or licensing charges. Current costs are roughly $4 per TB per month. CrowdStorage offers a cloud pricing calculator that compares the cost of Polycloud to other storage providers. The company also offers a “try before you buy” free membership, which includes 10GB of storage.

Once an account is established, users can access storage using a browser-based interface. The browser-based console is rudimentary, and most users will probably only use it to set up storage buckets and upload or download files. That said, the browser-based interface proves useful enough for storing archival data in a bucket, or other data that is not directly associated with an application, such as backup files, logs, and so forth.

Once storage buckets are established, users can leverage CrowdStorage’s S3 compatibility. The company offers integration with numerous applications and makes it quite easy to create access keys to protect data. Integrations (via S3) are offered for numerous applications, including most of the AWS SDKs, meaning that custom software developed using those SDKs can also access storage buckets.
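Because the API is S3-compatible, pointing a standard AWS SDK at the service should simply be a matter of overriding the endpoint. The boto3 snippet below is a sketch of that general pattern; the endpoint URL, bucket name and credentials are placeholders, not documented CrowdStorage values.

```python
import boto3

# Sketch of using a standard AWS SDK against an S3-compatible service by
# overriding the endpoint. The endpoint URL, bucket and keys are placeholders,
# not documented CrowdStorage values.

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-polycloud.net",   # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.put_object(Bucket="backups", Key="logs/app-2021-06-03.log",
              Body=b"archived log contents")
for obj in s3.list_objects_v2(Bucket="backups").get("Contents", []):
    print(obj["Key"], obj["Size"])
```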

Native S3 integrations are offered for ARQ 7 Backup, CloudBerry Explorer, Commvault, QNAP, and many other third-party applications. Integrating applications is very straightforward: users just need to define a storage location and then provide the necessary credentials. Some applications, such as ARQ 7 Backup, provide wizard-like configuration, further easing setup.

Conclusions

Currently, Polycloud’s claim to fame is economy. In other words, CrowdStorage is offering Polycloud as a low-cost option for cloud data storage that is also S3-compatible. Those looking to significantly reduce cloud storage costs will be well served by Polycloud.

However, CrowdStorage is also evolving the Polycloud offering and will expand its storage options to include a distributed storage offering, where additional security, as well as even lower costs, will become available. The distributed storage model will offer increased resiliency, as well as increased uptime.

Polycloud’s distributed network combines unused storage and bandwidth resources that are already deployed and connected to the internet. Each storage device on the distributed network becomes a distinct node, with a combined capacity of over 400 petabytes. The distributed network consists of nodes that are geographically dispersed, and data shards are replicated across multiple nodes, increasing resiliency while also making the data more secure, since no single file is stored on a single device.

IBM Extends “Tailored Fit” Pricing to Z Hardware
https://www.eweek.com/it-management/ibm-extends-tailored-fit-pricing-to-z-hardware/ | May 26, 2021

Over the years, I’ve written quite a bit about the longevity and durability of IBM’s mainframe solutions, especially regarding the company’s ongoing efforts to keep its IBM Z platform current with enterprise computing trends and practices. However, it’s also worth considering how IBM has adapted mainframe offerings (and adjusted its own attitude) to stay aligned with other external forces.

That’s especially true in terms of hybrid cloud computing—an area where, as CEO Arvind Krishna noted in his recent IBM Think keynote, the company is “all in.”

IBM has essentially realigned its business model to support customers and partners in maximizing the value of cloud computing. So, it shouldn’t be surprising that those efforts are tangibly influencing and impacting its system portfolio. The recent announcement of Tailored Fit Pricing for IBM Z hardware highlights that strategy.

What is Tailored Fit Pricing for IBM Z?

Put simply, the new pricing model is designed to enable IBM customers to flexibly, transparently pay only for the mainframe resources they use. The company introduced a similar solution for IBM Z software when it launched the z15 platform and solutions in 2019. Since then, over 100 IBM customers of varying sizes, including fashion chain and online retailer Dillard’s, have deployed the solution.

This new announcement extends the same model to IBM Z hardware, as well. In essence, Tailored Fit Pricing for IBM Z provides instantaneous access to additional IBM mainframe compute capacity whenever it is needed. In order to employ the new offering, IBM Z customers obtain an always-on, fixed-price corridor of consumption capacity that sits atop the capacity they already own. That always-on corridor can be employed whenever it is needed at a predictable price while supporting optimal response times and complying with Service Level Agreements (SLAs).
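IBM has not published a billing formula here, so the sketch below only illustrates the general shape of a consumption corridor: capacity the customer already owns is unbilled, the always-on corridor carries a fixed charge, and usage inside the corridor is metered at a predictable unit rate. Every number is hypothetical, not an actual Tailored Fit Pricing term.

```python
# Hypothetical illustration of a "corridor" consumption model. None of these
# numbers or rules are IBM's actual Tailored Fit Pricing terms.

def corridor_bill(used_capacity, owned_capacity, corridor_size,
                  corridor_fixed_fee, unit_rate):
    overage = max(0.0, used_capacity - owned_capacity)   # demand beyond owned capacity
    metered = min(overage, corridor_size)                # consumption drawn from the corridor
    return corridor_fixed_fee + metered * unit_rate

# Owned 100 units, corridor of 30 units on top, workload spikes to 118 units this month:
print(corridor_bill(used_capacity=118, owned_capacity=100, corridor_size=30,
                    corridor_fixed_fee=5_000, unit_rate=400))   # -> 12200.0
```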

Why is that important? Because modern enterprise workload requirements have become increasingly unpredictable as they focus on supporting business transactions, including mobile payments and real time analytics to name just two. That volatility is likely to grow as companies increase their use of analytics, deploy new edge of network workloads and expose existing assets to new uses.

Aren’t tailored fit and hybrid cloud the same thing?

If Tailored Fit Pricing for IBM Z sounds a lot like cloud computing offerings, it should. In fact, when the company originally introduced the new offering in 2019, it was described as, “a simple cloud pricing model for today’s enterprise IT environment.”

But IBM has designed its solution to address elemental problems with public cloud, including the substantial sums that companies are spending on “wasted” cloud services. In fact, the company cited a recent report by Turbonomic that explores this issue and estimates that companies will spend some $21 billion (almost $2,400,000 an hour) on unused, idle or over-provisioned cloud resources in 2021.
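The hourly figure follows directly from the annual one:

```python
# Quick check of the cited figure: $21B per year spread over a year's worth of hours.
print(21e9 / (365 * 24))   # -> 2397260.27..., i.e. almost $2.4 million per hour
```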

That sum is obviously good for cloud computing vendors’ bottom lines, but wouldn’t it be better for enterprise customers to invest those resources in their own projects and strategies? To get a sense of how Tailored Fit Pricing for IBM Z hardware and software differs from cloud-based solutions, an IBM white paper compares the new solution to three common public cloud scenarios.

Final analysis

Superior technologies and performance have long been vital to IBM’s Z solutions. However, the company has always been careful to ensure that its new innovations also deliver dependable, measurable business value to enterprise customers. In the case of Tailored Fit Pricing for IBM Z hardware and software, those benefits include quickly responding to swiftly changing business needs and dynamic workload requirements while also supporting fully transparent, consumption-based costs.

IBM Moves into the Container-Native Storage Lane
https://www.eweek.com/storage/ibm-moves-into-the-container-native-storage-lane/ | May 5, 2021

Data storage stands at an odd crossroads today. From a technological perspective, storage vendors continue to deliver the goods in terms of solutions becoming increasingly speedy, capacious, and flexible. That is vital since modern businesses are creating and managing information in volumes that would have been unthinkable a few years ago.

At the same time, most storage media and systems have become thoroughly commoditized, driving prices and margins ever downward and bleeding dry many once-stalwart vendors. There are ways out of this blind alleyway, but they typically require proactive development and strategic efforts. The new Spectrum Fusion and updated Elastic Storage Systems announced last week by IBM offer insights into how one vendor is coping with these challenges.

IBM latest storage offerings

IBM’s announcement focused on entirely new and updated storage solutions. First, the company introduced Spectrum Fusion, a container-native software-defined storage (SDS) offering that integrates IBM’s general parallel file system, data discovery, and modern data protection technology into a single software solution to simplify accessing and securing information assets wherever they reside – within the data center, at the edge and across hybrid cloud environments.

When it comes to market later this year, IBM Spectrum Fusion will be available as a hyper-converged infrastructure (HCI) system that integrates compute, storage and networking. It will include Red Hat OpenShift to enable organizations to support environments for both containers and virtual machines, and provide software-defined storage functions for cloud, edge, containerized data assets and IT infrastructures. Additionally, IBM plans to release an SDS-only version of IBM Spectrum Fusion in early 2022.

The company also introduced updates to its IBM Elastic Storage System (ESS) family of scalable, easy-to-deploy high-performance solutions. The revamped ESS 5000 now delivers 10% greater storage capacity than prior generation systems, for a maximum total of 15.2PB. The new all-flash ESS 3200 doubles the read performance of its predecessor to 80 GB/second per node and also supports up to 8 InfiniBand HDR-200 or Ethernet-100 ports for high throughput and low latency. The ESS 3200 can support up to 367TB of capacity per 2U node.

Both the ESS 3200 and ESS 5000 feature system software and support for Red Hat OpenShift and Kubernetes Container Storage Interface (CSI), CSI snapshots and clones, Red Hat Ansible (for automated container deployment), Windows, Linux and bare-metal environments. The systems come with IBM Spectrum Scale built-in. The ESS 3200 and ESS 5000 also work with IBM Cloud Pak for Data, the company’s platform of integrated data and AI services, for integration with IBM Watson Knowledge Catalog (WKC) and Db2.

The value of practical storage innovations

What do these new and enhanced solutions mean for IBM customers? Let’s consider Spectrum Fusion first and the way it practically addresses both current and future business challenges.

Even as companies adopt hybrid cloud solutions and services, they are also planning to add or increase end-of-network assets in their IT infrastructures. That may sound simple on paper, but as IBM Storage GM Denis Kennelly notes, “It starts with building a foundational data layer, a containerized information architecture and the right storage infrastructure.”

The integration of Red Hat OpenShift in IBM Spectrum Fusion is designed to support workloads utilizing containers and virtual machines, and to enable effective data management across the hybrid cloud, data center, edge, and containerized environments. The new solution’s incorporation of a fully containerized version of IBM’s general parallel file system, data discovery, and modern data protection software should significantly ease data discovery, data resilience, and storage tiering processes.

In contrast to conventional processes where making duplicate data copies is required to move application workloads, Spectrum Fusion enables customers to create and utilize single copies of data by storing them in a local cache where they can be easily accessed. That eliminates the clutter of multiple redundant copies of data, simplifying information management, reducing storage CAPEX and OPEX, and streamlining analytics and AI processes. Single copies of data can also help bolster compliance functions, a vital point for businesses that need to follow regulatory frameworks like HIPAA and GDPR, and also help reduce security exposures.

The IBM ESS 3200 and 5000 solutions’ performance and capacity enhancements should appeal to customers facing continually growing data asset challenges. Plus, the ESS 3200 and 5000’s containerized system software, support for numerous OS and virtual machine technologies and integration of key IBM Cloud Pak and Watson platforms make them highly flexible solutions for a variety of business computing scenarios.

Final analysis

In essence, IBM’s new Spectrum Fusion and the updated ESS 3200 and ESS 5000 solutions will be useful for modern businesses’ current and future needs. Given their built-in hardware and software capabilities and optional features, IBM’s offerings are likely to generate immediate interest among large-scale retailers and manufacturers (including pharma and biochem), and companies in compliance-sensitive sectors, including healthcare and financial services. Over time, however, IBM Spectrum Fusion and ESS 3200 and ESS 5000 should gain ground with a much wider audience of enterprise organizations.

How Fauna Delivers Data-as-Utility in a Serverless World
https://www.eweek.com/innovation/how-fauna-delivers-data-as-utility-in-a-serverless-world/ | April 23, 2021

The idea behind Fauna is both radical and obvious. Your applications shouldn’t care where your data is physically located, just that it’s available when needed. If you could do without the complexity of a traditional database, along with all of its data management and its servers, and simply deliver the data when an API sends a query, you would. The complexity doesn’t add anything to your operations beyond latency.

Fauna, which is billed as a serverless database, attempts to deliver this data-as-a-utility concept from the network edge. Just as you don’t care which of many generators at your electric utility provides the electricity that runs your office, Fauna believes that you shouldn’t need to be concerned where your data is being stored – it should just be available when needed.

For this to work, there must be servers that store the data, but they can be distributed on the edge of the network near where they’re most likely to be needed. They respond to a properly authenticated client with the data that’s requested. But most of the computing takes place on the client, using a web browser. And there can be multiple places where the data you need is stored, with the closest responding with the required data.

Data stores must be synchronized

For this to work properly, the data stores need to be kept synchronized, and there needs to be a means of authentication. Fauna supports a number of third-party authentication providers, such as Auth0 and Okta, to help secure access to the database. Fauna is designed for collaboration within development teams and features a number of security features, including multi-factor authentication and varying access levels.

A key feature for Fauna is its performance. The serverless model helps performance by keeping the data near the user, as well as keeping much of the compute requirements on the user’s browser. Fauna also features what the company calls real-time database streaming, which allows data to move in and out of the database in real time. This avoids latency-inducing polling as found in some other databases.

This focus on serverless data and on performance has its roots in Twitter. Founder Evan Weaver, currently Fauna’s CTO, was employee No. 15 at the social network. Weaver said he worked to scale the site and move to distributed storage. As you can imagine, Twitter depends on performance and reliability to meet the needs of its users, and Weaver brought that understanding to the design of Fauna.

Weaver said that as businesses depend more and more on data, they’ve learned that database workloads are not predictable. Instead, they need to scale to meet the needs of the organization at the time. He said that he saw the need for a data API early on, as well as the need for a global interface.

Creating a new tech stack

“The serverless movement created a new tech stack,” Weaver said.

The Fauna database is intended to offer developers a data platform that’s reliable and secure while also offering simplicity. Notably, getting started with Fauna is intended to demonstrate that simplicity. The company offers a free signup to get started and free database creation with a security key; it lets prospective users get started from there.

Once you’ve created your database and picked a query language, Fauna replicates your data globally to make sure latency stays low. This also allows the creation of globally dispersed development teams and users. Operation is intended to be easy from the beginning, eliminating most database operations work. It supports web-native secure access and the ability to create any number of child databases, nested to unlimited depth.
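As a sketch of what that first step can look like from code, the snippet below uses the FaunaDB Python driver (FQL) to create a document and read it back; the secret and the “notes” collection are placeholders for values created in your own account, and the exact driver version may differ.

```python
from faunadb import query as q
from faunadb.client import FaunaClient

# Sketch using the FaunaDB Python driver; the secret and the "notes" collection
# are placeholders for values created in your own Fauna dashboard.
client = FaunaClient(secret="YOUR_FAUNA_SECRET")

doc = client.query(
    q.create(q.collection("notes"), {"data": {"title": "hello", "body": "world"}})
)
fetched = client.query(q.get(doc["ref"]))   # read the document back by reference
print(fetched["data"]["title"])             # -> hello
```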

If all of this sounds unusual, that’s because Fauna is the first of a new breed of databases. Its serverless nature, its focus on performance and security and its planned ease of use are all unusual when it comes to database operations. While the design of Fauna is definitely new, it’s the serverless environment underneath it all that makes it possible.

Spectra Logic: Product Overview and Insight
https://www.eweek.com/storage/spectra-logic-product-overview-and-insight/ | April 14, 2021

Company: Spectra Logic (data storage and data management solutions)

Company Description: Spectra Logic develops data storage and data management solutions that solve the problem of digital preservation for organizations dealing with exponential data growth. Spectra enables affordable, multi-decade data storage and access by creating new methods of managing information in all forms of storage—including archive, backup, cold storage, private cloud and public cloud.

Markets: Spectra Logic is a 40-year-old global organization that sells directly to midsize and large organizations as well as through a worldwide network of resellers and distributors who offer Spectra solutions to customers in a wide range of industries, including media and entertainment, education, government, finance, energy, health care, scientific research and high-performance computing environments, among others. Spectra Logic also has established strong strategic partnerships with Fortune 50 organizations and key technology partners to ensure interoperability and compatibility.

Product and Services

Spectra Logic’s agile and inventive approach has led to more than 125 patents and an expanding portfolio of solutions that have been deployed in more than 80 countries. Spectra offers a wide solution set that includes disk, object storage, tape and hybrid cloud storage in addition to storage lifecycle management software. StorCycle is the company’s flagship storage lifecycle management software that automatically identifies and moves inactive data from primary storage to a lower cost tier that includes cloud, object storage disk, network-attached storage and tape.

Key Features

StorCycle is a storage lifecycle management software that ensures data is stored on the right tier throughout its lifecycle for greater IT and budgetary efficiencies. More than 80 percent of data is being stored on the wrong tier, costing organizations millions of dollars a year. StorCycle storage lifecycle management software can reduce the overall cost of storing data by up to 70 percent by enabling organizations to efficiently scan primary storage and migrate inactive data and finished projects to a lower cost tier of storage for long-term preservation and access.

StorCycle delivers four key elements of data storage lifecycle management:

  • Identification: A scan of an active source file system compiles and presents real-time analytics, revealing an actionable view of the data landscape. Scans can be scheduled, throttled as necessary and reused as needed;
  • Migration: Automated migration on the basis of past or upcoming scheduled scans, or project-based migration of entire data sets or directories. After migrating data, StorCycle accurately maintains directory structures and Access Control Lists;
  • Protection: Makes and tracks multiple copies on a variety of targets, adding both geographic and genetic diversity to data protection plans;
  • Access: HTML links or symbolic links and a web-based search keep data easily accessible to users in a semi-transparent or transparent manner. The software activates archived data, allowing users to apply new technologies.

Interoperable with Linux, Mac and Windows, StorCycle identifies inactive files on primary storage based on policies set by the administrator and migrates those files to a lower cost tier of storage, which includes any combination of cloud storage, object storage disk, network-attached storage and tape. Users also can move entire completed data sets, such as machine-generated data, scientific output and finished videos, with the Project Archive method. This reduces the amount of data residing on expensive primary storage, shrinking backup windows, increasing performance and reducing more primary storage purchases.

Additionally, StorCycle protects data through end-to-end encryption on all storage targets and through storage of multiple copies of data on multiple storage mediums. It is fully ADFS-compliant, meaning file permissions remain intact regardless of where data is stored. StorCycle enables organizational data to be stored in two geographically separate locations, for example on cloud and on local NAS.

The scheduled delete feature enables users to configure automatic deletions of migrated data after it has been retained on a storage target for a preset period of time. Other features enable users to prioritize restore jobs, activate one-click job reruns, archive and restore user-generated symbolic links, obtain CIFS/SMB support with Linux, and attain improved file search via background database indexing.

The latest version of StorCycle exposes a RESTful API that gives users access to core features, including scanning, migrating and restoring data, so they can build integrations and applications on top of StorCycle’s storage lifecycle management capabilities. The exposed API is an excellent tool for advanced users who wish to integrate StorCycle into wider workflows. In addition to providing core commands to configure storage locations, the API helps users build applications to better manage jobs or perform bulk actions without using the web interface.
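Spectra does not publish endpoint details in this overview, so the snippet below is purely illustrative of driving such an API from a script; the base URL, endpoint paths, payload fields and token header are hypothetical stand-ins rather than documented StorCycle endpoints.

```python
import requests

# Purely illustrative: drive a storage-lifecycle REST API from a script.
# The base URL, endpoint paths, payload fields and auth header below are
# hypothetical stand-ins, not documented StorCycle endpoints.

BASE = "https://storcycle.example.local/api"           # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}    # placeholder token

# Kick off a migration job for a finished project directory:
job = requests.post(f"{BASE}/migrate", headers=HEADERS, json={
    "source": "/projects/seismic-run-42",
    "target": "azure-archive",        # e.g. a configured cloud storage target
    "delete_after_days": 365,         # hypothetical scheduled-delete setting
}).json()

# Poll the job's status:
status = requests.get(f"{BASE}/jobs/{job['id']}", headers=HEADERS).json()
print(status)
```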

StorCycle also now extends cloud support to Microsoft® Azure®, including both the standard (Hot/Cool) and Archive tiers. Azure® can be used as a storage target for migration jobs, helping organizations leverage the cost-effectiveness and ease of cloud storage. This is in addition to StorCycle’s existing support for Amazon S3 Standard and Glacier tiers.

Insight and Analysis

There are no user reviews of Spectra Logic on any of the major software review sites, including TechnologyAdvice, G2Crowd, Gartner Peer Reviews, IT Central Station, Capterra and Serchen.

Delivery: Direct from Spectra and through the company’s global network of value-added resellers and distributors

Pricing: Annual subscriptions. For pricing or information, call 1-720-301-0153 or email sales@spectralogic.com

Contact information: sales@spectralogic.com

eWEEK is building an IT products and services section that encompasses most of the categories that we cover on our site. In it, we will spotlight the leaders in each sector, which include enterprise software, hardware, security, on-premises-based systems and cloud services. We also will add promising new companies as they come into the market. Here is a list of examples: https://tinyurl.com/EW-product-overview
