As I write this, I am flying back home to the United States after another great show for Spectra at the International Broadcasting Convention (IBC) in Amsterdam. The show brings together almost 60,000 visitors from across the media and entertainment industry worldwide. The conference’s program of panels, discussions and keynotes features more than 400 speakers and is complemented by a comprehensive exhibition of close to 1,700 exhibitors, each showcasing the state of the art in technology. I am exhausted but also energized from all the great conversations we had with customers and partners.
By Hossein ZiaShakeri, Spectra senior vice president of business development & strategic alliances
In today’s dynamic market, where technology shifts along with priorities and business models, there is tremendous confusion and industry consolidation. It is crucial for media and entertainment (M & E) entities to adapt to this fast-paced market to stay on top, keep a competitive edge and remain successful. Re-evaluating one’s strategies and looking forward is essential – hence the phrase ‘adapt or die’. To stay relevant, entertainment businesses must be agile, efficient and innovative in the way they produce, store, manage and distribute their content. As M & E organizations try to find the right infrastructure to manage content throughout its life cycle, they face not only the question of which strategy will provide the greatest return on investment (ROI) and drive positive business outcomes, but also which company they can trust with this business-critical decision.
The Spectra Difference
Spectra has recognized and evolved alongside these industry shifts over the past decade. Its technology portfolio has grown to include solutions in line with technology trends and the business needs of the media and entertainment space. Modern storage solutions have been a focal point for Spectra and continue to show greater relevancy and adaptation to these market changes. Spectra has shown its perseverance in this space with its continued success and growth.
Serving content owners worldwide, the organization’s forward-looking technology solutions are underpinned by its commitment to research and development (R & D), its unique culture, and its financially robust, privately owned status. This has allowed Spectra to continue to invest in R & D and bring more modern advances to its suite of solutions. Spectra invests 15% to 18% of its annual revenue in R & D, maintaining its tradition of bringing innovative storage solutions to market. Spectra’s dedicated M & E business development team works closely with customers, ecosystem partners and channel partners alike to plan, develop and deliver the company’s offerings and bring forth new opportunities in its largest business segment.
Fiscal stability, sustained profitability and a strong balance sheet exemplify Spectra as a solid and trusted storage solution provider. As end users look to financially stable organizations, Spectra prides itself on maintaining a significant cash balance, along with its untapped credit line. This provides customers with the long-term security they require when investing in storage solutions – knowing they are forming a business partnership with a financially stable company that will remain resilient to economic shocks and shifts in the market. Under NDA, Spectra will provide financial details to any customer.
Spectra’s focus on the M & E market has proved fruitful, with most of its M & E business growth fueled by new solutions – Spectra’s BlackPearl® and ArcticBlue®. The company’s standing in other vertical markets, such as general IT and high performance computing, remains strong, and Spectra foresees continued growth and greater relevancy of its modern solutions into the future. In the coming months, Spectra will announce additional solutions that bring greater efficiency and cost savings to workflows, and even tighter integration with the next generation of cloud-based workflows at both the local and global level. These solutions will help enterprises worldwide keep their workflows modern and relevant.
Hurricane Florence is expected to strike the southeastern portion of the United States very soon. In fact, governors of North and South Carolina, Maryland and Virginia have already declared states of emergency far ahead of the approaching storm.
As Spectra Logic’s customers along Hurricane Florence’s path prepare to protect their homes and families, our team is ramping up to support their business operations wherever possible by initiating our Storage Crisis Lifeline program, which was first introduced in 2005 during Hurricane Katrina.
By Fred Moore, President, Horison Information Strategies
The data protection industry has evolved from backing up data to providing recovery from hardware and network failures, software bugs and human errors, as well as fighting a mounting wave of cybercrime. Over the years, the reliability and resiliency of hardware and software have significantly improved. Cybercrime, however, has now become a bigger threat to data protection than accidental deletion, bit rot or drive failure. And the stakes are getting higher as anonymous cybercriminals seek to profit from the valuable digital data of others. With a ceasefire in the cybercrime war unlikely, we are witnessing the convergence of data protection and cybersecurity to counter rapidly growing cybercrime threats, including ransomware.
In my last blog, I discussed types of “data movers” including backup, archive, HSM and migration. These are ways to move data, but where are you moving the data to? Disk? Tape? Cloud? A combination of those? The significance of “data mover” applications to an organization’s workflow is closely tied to other elements of their data storage ecosystem, namely storage targets.
Over time, storage targets have evolved from repositories completely “unaware” of the data they hold to hyper-intelligent data management platforms. On one end of the spectrum, more typically found in tier-1 storage, are Storage Area Networks (SANs). These repositories, which are unaware of the data they hold, offer up blocks of storage which a server’s file system or a database administrator must configure into usable storage. With the advent of Network Attached Storage (NAS), storage devices use their own processing power and file system to lay data across blocks – presenting the storage as a folder of files which can be accessed by multiple servers, and even servers with varying operating systems and/or file systems, across the network. SAN and NAS are commonly found in tier-1 data storage, although they can be used in tier-2 storage as well.
Object storage – commonly used in cloud storage – is the next step in moving “storage intelligence” closer to the storage target, and has revolutionized modern storage.
Earlier this month, we presented an introduction to object storage in a blog called, “Mind the Tipping Point: Object versus File Storage”. One of its key characteristics, the unique object ID, deserves further examination. When a piece of information (usually in the form of a file) is moved to object storage, it has to be converted into an object through a “gateway” or “engine,” where each piece of data receives a unique object ID. Unlike specific blocks mapped out by SAN or hierarchical files in NAS, this unique object ID is not tied to a physical location. This specific attribute has had a significant impact on data movement, and that’s where the magic comes in.
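A minimal sketch in Python makes the point concrete; the `ObjectStore` class, its tier names and its methods are illustrative inventions, not any vendor’s API:

```python
import uuid

class ObjectStore:
    """Toy object store: data is addressed by an ID, not a location."""

    def __init__(self):
        self._objects = {}   # object ID -> (tier, bytes)

    def put(self, data: bytes, tier: str = "disk") -> str:
        object_id = str(uuid.uuid4())          # location-independent handle
        self._objects[object_id] = (tier, data)
        return object_id

    def move(self, object_id: str, new_tier: str) -> None:
        # Data can change tiers; the ID the client holds never changes.
        _, data = self._objects[object_id]
        self._objects[object_id] = (new_tier, data)

    def get(self, object_id: str) -> bytes:
        _, data = self._objects[object_id]
        return data

store = ObjectStore()
oid = store.put(b"render frame 0001", tier="disk")
store.move(oid, "tape")                        # behind-the-scenes relocation
assert store.get(oid) == b"render frame 0001"  # same ID still works
```

Because clients hold only the ID, the store is free to relocate data at will – and that freedom is exactly where the magic comes in.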
The analogy commonly used in describing object storage is valet parking. You go out to dinner, give your car to the parking attendant, and they give you a parking receipt. Maybe it’s a nice car, so they park it right in front of the restaurant. But while you’re eating, an even nicer car pulls up to park – your car gets moved to parking across the street and the new car is parked in its place out front. As the night progresses, the spaces in front of the restaurant and the parking across the street are filled. Your car has been sitting there for hours so they move it down the block to underground parking. When you finally pick up your car, you give your parking receipt to valet parking, and they give you your vehicle back.
Intelligent object storage platforms can automatically move data based on storage policies which reflect the “value” of the storage tier in which it resides and the amount of total time since it was last used. When a user or application needs to retrieve a specific object, the unique object ID allows them to seamlessly retrieve the file, none the wiser as to which storage tier it was last located in.
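A policy engine of this kind can be sketched as a simple rule over last-access times; the tier names and thresholds below are made-up values for illustration, not any product’s defaults:

```python
# Illustrative tier order (fastest first) and made-up demotion thresholds.
TIERS = ["flash", "disk", "tape"]
DEMOTE_AFTER_SECONDS = {"flash": 3600, "disk": 86400}

def next_tier(current: str) -> str:
    i = TIERS.index(current)
    return TIERS[min(i + 1, len(TIERS) - 1)]

def apply_policy(objects: dict, now: float) -> dict:
    """Demote any object idle longer than its tier's threshold allows."""
    for meta in objects.values():
        tier = meta["tier"]
        idle = now - meta["last_access"]
        if tier in DEMOTE_AFTER_SECONDS and idle > DEMOTE_AFTER_SECONDS[tier]:
            meta["tier"] = next_tier(tier)   # the object ID never changes
    return objects

objects = {
    "asset-1": {"tier": "flash", "last_access": 0},     # idle for 2 hours
    "asset-2": {"tier": "flash", "last_access": 7000},  # touched recently
}
apply_policy(objects, now=7200.0)
assert objects["asset-1"]["tier"] == "disk"
assert objects["asset-2"]["tier"] == "flash"
```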
As mentioned above, object-based storage is commonly used in cloud storage. When users back up their data to the cloud, they don’t know (exactly) where their data is. Cloud users are typically unaware of whether their data is stored on disk or tape, or online or offline. Cloud providers offer various service level agreements that agree to return data in a specific amount of time, and users may be able to specify that they want it stored in a separate geographic region for disaster recovery (DR) purposes, but that’s about the extent of it.
As data-driven organizations consider cloud storage, they should keep in mind that the infrastructure of the cloud (http and object storage) is a large part of what makes it so appealing. When implemented locally in data centers, this infrastructure can truly modernize on-premise, long-term data storage. Spectra’s BlackPearl® Converged Storage System incorporates a feature-rich object storage engine. The innovative solution combines multiple storage targets into a simple and affordable self-managing, cloud-enabled object-based platform that provides organizations with the openness, scalability, efficiency and control they demand to easily grow and adapt to changing business models.
BlackPearl can retrieve data from the fastest and/or most affordable storage when multiple copies are kept, checksum the data before returning it – and if it’s no longer good – find another copy to return while also replacing the failed copy. For environments with tape libraries as a storage target, BlackPearl can compact tapes as certain data sets age off to maximize free space and even migrate data from older versions of LTO tape to newer versions of LTO tape without disrupting operations.
While there are “data mover” software packages that can deploy object storage by writing data to the cloud, they don’t always play well with other forms of storage. By allowing these data movers to send data to BlackPearl, configurable data policies can send copies to the cloud, archive disk, tape or replicate the entire setup at another site – all in object format. This “valet parking” approach allows users to select the appropriate storage target for access and cost based on the data’s business value. As the value of that data changes over time, BlackPearl can move the data to different storage areas accordingly.
Organizations can and should use object storage in their data centers. As more applications adopt object-based interfaces, object storage will become the de facto standard for data movement and management of long-term storage.
It’s 1985. You’re the last one to leave the office that afternoon. You go into the server room, put the 8-inch floppy disk in the drive, go back to your office, type a backup command via the “terminal” on your desk, and now you’re free to go home. Your data is being backed up and is safe. One last thing… don’t forget to take yesterday’s backup floppy with you in case the building burns down overnight. OK, maybe you worked for a slightly larger company than I did in 1985, but it worked pretty much the same. There was storage, and backup for that storage. We didn’t refer to “primary” storage. Disk was storage, and floppies, CDs and tape were backup for that storage.
As storage and access to storage has evolved, so have the methods to protect it and place it in an appropriate storage tier for cost and access. Hence, the discussion of Backup, Archive, Hierarchical Storage Management (HSM) and Migration. While the terms are often used interchangeably, there are key differences, and arguably they can be quite significant. Understanding the variances is key to creating a fail-safe data protection scheme as well as an efficient and affordable storage infrastructure.
Many backup options have been introduced over the years – snapshots, fulls/incrementals, incremental-incrementals, disk-to-disk, etc. But backup is inherently a simple concept. Data is created or captured on some form of storage medium. If it’s the only copy in existence, it’s vulnerable to accidental deletion, storage medium failure, natural disasters or (assuming it’s still online) some type of cyberattack. So a duplicate copy is made on a storage medium that can be taken offline and offsite for protection. The original data stays where it was, and a second copy is stored somewhere else. Data also continues to be backed up on a regular basis, so users have multiple copies to turn to in case one becomes corrupt or otherwise inaccessible. This procedure covers the “big four” threats mentioned above.
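As a toy illustration of the incremental flavor of backup, the sketch below copies only files that are new or changed since the last run. It is a deliberately simplified model, not a production backup tool:

```python
import shutil
import tempfile
from pathlib import Path

def incremental_backup(source: Path, backup: Path) -> list:
    """Copy only files that are new or modified since the last run."""
    copied = []
    for src in sorted(source.rglob("*")):
        if not src.is_file():
            continue
        dest = backup / src.relative_to(source)
        if not dest.exists() or src.stat().st_mtime > dest.stat().st_mtime:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)            # copy2 preserves the mtime
            copied.append(str(src.relative_to(source)))
    return copied

source = Path(tempfile.mkdtemp())
backup = Path(tempfile.mkdtemp())
(source / "ledger.csv").write_text("q1 totals")
assert incremental_backup(source, backup) == ["ledger.csv"]  # first pass
assert incremental_backup(source, backup) == []              # nothing changed
```

Note that the original stays put: the backup target is a second copy, which in practice would then be taken offline and offsite.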
The 13th annual Cost of Data Breach Report declared that the average cost of a data breach is $3.86 million, which is a 6.4% increase year-over-year.
Here’s a simple way to look at backup when comparing it to the other data management approaches – the backup is a copy of the original, and should be stored offline and offsite. If not, while the data may be protected in some other way, it’s not “backed up”.
Archive is very similar to backup. The main difference is that the original data no longer resides in its original location. While that may sound too simple to mention, there are significant implications. If data is deleted from its original location after being copied, users will need a way to find it when it’s needed. The original path or file system will no longer see it after it’s moved and deleted. A new database needs to be referenced to find the data, and its format and features will vary per application.
Archives are great for large amounts of infrequently accessed or fixed data that can be associated with a project or grouped in some way. Fixed data includes content such as last year’s final financials; a completed movie, a sports event or news event; a large data download or output from research. There are numerous data sets that would qualify and are relatively simple to identify for recall based on a larger grouping versus looking for a single file. Once data has been safely archived, it’s no longer backed up. Archiving data is a great way to decrease the amount of primary or active data that needs to be backed up on a regular basis and, at its core, is a data management process that enables cost efficiency.
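The “new database” an archive depends on can be pictured as a simple catalog table. The sketch below (an illustration, not any product’s schema) moves a file out of its original location and records where it went:

```python
import shutil
import sqlite3
import tempfile
from pathlib import Path

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE catalog (original TEXT PRIMARY KEY, archived TEXT)")

def archive_file(path: Path, archive_dir: Path) -> None:
    """Move a file into the archive and record the move in the catalog.
    Afterward the original path is gone; the catalog is the only map
    back to the data."""
    dest = archive_dir / path.name
    shutil.move(str(path), str(dest))
    db.execute("INSERT INTO catalog VALUES (?, ?)", (str(path), str(dest)))
    db.commit()

def locate(original: Path) -> Path:
    row = db.execute("SELECT archived FROM catalog WHERE original = ?",
                     (str(original),)).fetchone()
    return Path(row[0])

work = Path(tempfile.mkdtemp())
vault = Path(tempfile.mkdtemp())
project = work / "fy2017_finals.dat"
project.write_text("fixed content")
archive_file(project, vault)
assert not project.exists()                          # gone from its old home
assert locate(project).read_text() == "fixed content"
```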
If the data is going to be accessed semi-frequently, HSM or Migration are better approaches.
Hierarchical Storage Management (HSM)
HSM is a concept that allows organizations to tier their data, keeping the most business-critical, frequently accessed data on the most responsive (and expensive) tier of storage, and moving less critical data to more affordable storage, including disk or tape. Here’s the big differentiator for HSM: when data is moved from its original location, a “stub” file is typically left in its place. The stub file contains some of the data that’s been moved. This seems like an ideal way to move data because the application or user can go to the original location to retrieve the data even though it’s been moved. When the user or application requests the data, its return starts immediately from the stub file while the remainder is recalled from the new location. Depending on the storage target and its recall capabilities, users may notice a slight delay, or the HSM application may time out.
HSMs can be complex. In addition to dealing with time-out errors, the HSM is also the only way to retrieve the moved data; its proprietary format means that if the HSM goes down, so does access to the data. If a tremendously large file has been moved in this way, a recall may find there is no longer enough primary storage to hold it. Many HSM solutions have come and gone, but the HSM applications that have stood the test of time show up most often in the high-end High Performance Computing (HPC) world and are capable of integrating tape as a storage tier accessible by users or applications.
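The stub mechanism can be sketched in a few lines; the stub size and file layout here are illustrative, not how any particular HSM product works internally:

```python
import tempfile
from pathlib import Path

STUB_BYTES = 16   # illustrative; real HSMs choose stub sizes carefully

def hsm_move(path: Path, tier2: Path) -> None:
    """Move a file to secondary storage, leaving a stub in its place.
    The stub keeps the head of the file so a read can start at once."""
    data = path.read_bytes()
    (tier2 / path.name).write_bytes(data)
    path.write_bytes(data[:STUB_BYTES])

def hsm_read(path: Path, tier2: Path) -> bytes:
    """Serve the stub immediately, recall the remainder from tier 2."""
    stub = path.read_bytes()
    rest = (tier2 / path.name).read_bytes()[len(stub):]
    return stub + rest

primary = Path(tempfile.mkdtemp())
tier2 = Path(tempfile.mkdtemp())
f = primary / "footage.mov"
f.write_bytes(b"0123456789" * 10)        # 100 bytes of sample data
hsm_move(f, tier2)
assert f.stat().st_size == STUB_BYTES    # only the stub remains in place
assert hsm_read(f, tier2) == b"0123456789" * 10
```

The `hsm_read` step is where the time-out risk lives: if tier 2 is slow tape, the “recall the remainder” part can take longer than the caller is willing to wait.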
Migration holds some of the most interesting possibilities for truly opening up the world of storage options – from flash to tiered disk to tape to cloud. Most migration applications use symbolic links instead of stub files. When moving data to Tier 2 disk or highly responsive cloud, the symbolic link is left in the data’s original location to redirect the application to the new destination where the data can be accessed directly. Cloud can even become a part of semi-active data recall if users have contracted for an appropriate data access speed. Other migration applications may actually function as a second file system and sit directly in the path of the data. While that approach is more complex to implement, it can make data recall easier for both users and applications. All migration applications still have to deal with time-out issues if the data has been moved to a slower response tier such as long-term cloud storage or tape. That is where having object storage on the back end can be very helpful, a topic we’ll touch on in my next blog.
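The symbolic-link approach is easy to picture in code; this sketch assumes a POSIX filesystem and is a simplified model, not a specific migration product:

```python
import shutil
import tempfile
from pathlib import Path

def migrate(path: Path, tier2_dir: Path) -> Path:
    """Move a file to a cheaper tier, leaving a symbolic link behind.
    Unlike an HSM stub, the link simply redirects the reader to the new
    location; no proprietary recall layer sits in the data path."""
    dest = tier2_dir / path.name
    shutil.move(str(path), str(dest))
    path.symlink_to(dest)
    return dest

hot = Path(tempfile.mkdtemp())
cold = Path(tempfile.mkdtemp())
report = hot / "annual_report.pdf"
report.write_text("2017 results")
migrate(report, cold)
assert report.is_symlink()                    # the old path still resolves
assert report.read_text() == "2017 results"   # a read follows the link
```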
Today’s “data mover” applications allow a mix of storage media and approaches to be implemented. IT professionals now have many more options than the 8-inch floppy disk, but each option comes with caveats that must be examined. By determining how much data can be archived, IT professionals can significantly decrease the amount of active data they deal with daily. By applying a migration approach to the remaining data, the cost and performance of each storage tier can be matched to business needs. And as a final thought: no matter how much archiving and/or migration we implement, backing up active, mission-critical data to an offline medium stored offsite is still the best way to avoid a system shutdown due to cyberattack, ransomware or natural disaster.
Data is a central part of our day-to-day as a society, from our personal electronic devices, like smart phones and tablets, to our professional lives. Major industries like agriculture, transportation, energy, healthcare and finance, to name a few, all rely on it. Whether you’re working behind a computer or at a job site building houses, digital information is an integral part of your environment. The intrinsic value of data, be it the intelligence, communication or analysis of it, points to a revolution in how we access, manage and consume it. To enable organizations to thrive amid the unsurpassed data growth of modern times, storage technologies have evolved accordingly.
Spectra Logic has released version 5.0 of its BlackPearl software. The update, announced in a recent press release, introduces several major enhancements, including object versioning and staging, chunk aggregation, and a host of intelligent object management attributes. The feature-rich software is a key component of Spectra’s BlackPearl® Converged Storage System, a purpose-built storage platform that integrates directly with “data mover” software applications to simplify workflows and seamlessly manage large volumes of content across a variety of storage targets, including disk, tape and public cloud. The latest software advancements provide modern data centers with lower storage costs, time savings, and improved capacity and data integrity.
The first of these components, BlackPearl S3 Object Versioning, is a data resiliency feature that allows multiple versions of the same file to be uploaded and saved on BlackPearl, ensuring customers do not lose data. By default, a client recall of the file restores the most recent version, but any version can be retrieved if the version ID is specified in the request. Object versioning also protects data from accidental deletion by retaining all versions of an object when a traditional DELETE command is received.
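The retrieval rules described above follow the S3 versioning model; the toy class below illustrates the default-to-newest and get-by-version-ID behavior (a model of the behavior, not BlackPearl’s implementation):

```python
class VersionedBucket:
    """Toy model of S3-style object versioning: every PUT keeps prior
    versions; GET returns the newest unless a version ID is requested."""

    def __init__(self):
        self._versions = {}          # key -> list of (version_id, data)

    def put(self, key, data):
        versions = self._versions.setdefault(key, [])
        version_id = len(versions)
        versions.append((version_id, data))
        return version_id

    def get(self, key, version_id=None):
        versions = self._versions[key]
        if version_id is None:
            return versions[-1][1]   # default: most recent version
        return dict(versions)[version_id]

bucket = VersionedBucket()
v0 = bucket.put("report.csv", b"draft")
bucket.put("report.csv", b"final")
assert bucket.get("report.csv") == b"final"             # newest by default
assert bucket.get("report.csv", version_id=v0) == b"draft"
```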
Another important enhancement is BlackPearl’s newly-developed Data Staging capability, which allows pre-staging of data from tape to disk. Data staging reduces wait times to use assets while taking advantage of tape’s cost savings. In environments where data sets can be quite large, or where data has been gathered over long periods of time and archived daily to tape, staging from tape to disk spares expensive compute storage. Along the same lines, the system’s new Chunk Aggregation feature reduces tape mounts and improves tape throughput performance by identifying and grouping files together when writing to or reading from tape. By automatically scanning across all active data transfer operations and intelligently grouping those that are going to be written to or read from the same storage target, BlackPearl can increase the number of files in each operation, allowing for continuous write or read without pausing.
While all of these enhancements play their part to improve data integrity, drive down costs, and maximize performance, Intelligent Object Management (IOM) is perhaps the most significant new development. As the name implies, IOM is a suite of features that allows BlackPearl to more intelligently manage objects. Its four main components are: self-healing, automatic tape compaction, data policy modification and migration.
With the Self-Healing feature of IOM, BlackPearl can automatically repair all copies of a file across all media if a problem is detected. As long as one good copy exists, any bad copy detected – whether during a random recall or a scheduled object verification – will be automatically recreated through self-healing.
On the capacity front, BlackPearl’s new Automatic Tape Compaction feature provides self-activated consolidation of data on fragmented tapes, reclaiming deleted tape storage space. Once Automatic Tape Compaction is enabled, administrators can then set the threshold by which it will begin. When the amount of space wasted by deleted objects exceeds the defined threshold, BlackPearl will move the active data to new tapes and reformat the old tape for reuse. This feature optimizes overall capacity and reduces the challenges of using tape media in data storage environments where files are regularly deleted.
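The threshold logic can be sketched as a simple ratio test; the field names and 50% threshold below are illustrative, not BlackPearl’s actual defaults:

```python
def needs_compaction(tape, threshold=0.5):
    """Flag a tape when deleted objects waste more than `threshold`
    of its written space."""
    written = tape["live_bytes"] + tape["deleted_bytes"]
    return written > 0 and tape["deleted_bytes"] / written > threshold

def compact(tape):
    """Copy live data to a fresh tape; the old tape is reformatted."""
    fresh = {"live_bytes": tape["live_bytes"], "deleted_bytes": 0}
    tape["live_bytes"] = tape["deleted_bytes"] = 0   # reclaimed for reuse
    return fresh

old = {"live_bytes": 300, "deleted_bytes": 700}
assert needs_compaction(old)                 # 70% wasted > 50% threshold
new = compact(old)
assert new == {"live_bytes": 300, "deleted_bytes": 0}
assert old == {"live_bytes": 0, "deleted_bytes": 0}  # ready for reuse
```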
In addition, the Data Policy Modification feature in IOM allows customers to modify BlackPearl data policies after they have been put into use. New data entering BlackPearl follows the modified policy, and BlackPearl also reads all previously written data in the background and conforms it to the modified policy.
Finally, as tape and disk storage technologies evolve, customers need an easy way to migrate their data to new storage technologies. IOM’s Media Migration capability allows data to be migrated from one storage type to another quickly and transparently, which ensures data integrity while reducing costs. The feature automatically migrates data in the background, allowing users to continue to archive and restore data without being interrupted and making it easy to upgrade to new technologies without disruptions.
BlackPearl’s feature-rich software delivers a modernized approach to storage. To learn more about the BlackPearl Converged Storage System, visit the product page here.
Spectra Logic expands LTO’s functionality with its Time-Based Access Order System (TAOS)
When Spectra announced new enhancements for tape storage systems earlier this month, it became clear to anyone paying attention that tape technology is not a thing of the past. Between the Tape Storage Council’s 2018 tape technology memo and recent articles on how tape is changing the game, tape storage systems have generated a lot of buzz. With decades of tape expertise, Spectra continues to release features that leverage tape technology to advance modern workflows. The latest example of this is TAOS or Time-Based Access Order System for LTO drives.
Historically, recalling multiple small files from tape has taken longer than necessary because of the linear way files are read. Retrieving non-consecutive files from a tape can mean long seek times between reads due to the potentially long physical distances between files. Because data is recorded in serpentine passes down and back the length of the tape, an LTO-8 cartridge behaves like a single track roughly 200 km long.
Spectra Logic developed TAOS to create an optimal ordering of read requests, yielding up to a four-times improvement in overall recall time for files smaller than 100MB.
Over the last few years, enterprise tape technologies like IBM’s TS11x0 and Oracle’s T10000 have used similar functionalities to optimize recall times, but this type of tape enhancement has never before been present in LTO-based automated tape systems. “This is a feature which has been on enterprise tape drives – with a two to three times price premium, but has never been available in open systems tape before,” said Matt Starr, Spectra CTO.
“Before TAOS, recalls from LTO tape were read in the same linear pattern in which they were written,” explains David Feller, vice president of product management and solutions engineering at Spectra Logic. “TAOS enables the software to get the files off of tape in the most logical order. When the system receives a read request, it first determines where each file starts and ends physically on the tape to create an optimal ordering of reads.”
For example, recalling 300 100MB files from an LTO-8 tape previously took approximately 2 hours and 45 minutes. With TAOS, the same recall takes about 55 minutes on the exact same LTO-8 drives and media, making total tape access and data recall roughly three times faster.
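The core idea – reordering reads by physical position rather than request order – can be sketched as a sort. The serpentine model below is a simplified illustration, not Spectra’s actual TAOS algorithm:

```python
def order_reads(requests, layout):
    """Sort read requests by physical tape position rather than request
    order. `layout` maps file -> (wrap, position); even-numbered wraps
    are read in the forward direction, odd wraps in reverse, mirroring
    a serpentine recording pattern."""
    def key(name):
        wrap, pos = layout[name]
        return (wrap, pos if wrap % 2 == 0 else -pos)
    return sorted(requests, key=key)

# Four files scattered across two wraps of a hypothetical tape.
layout = {"a": (0, 500), "b": (1, 200), "c": (0, 100), "d": (1, 900)}
assert order_reads(["a", "b", "c", "d"], layout) == ["c", "a", "d", "b"]
```

Reading the files in this order lets the head sweep forward down wrap 0 and backward up wrap 1 in a single pass, instead of shuttling back and forth for each request.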
In addition to improving the performance of restore operations, TAOS reduces wear on tape media and tape drives, with up to 13 times less tape traveling across the head in the drive for the same set of recalled files. The less tape that runs past the head, the longer the tape drive and media will last, improving overall system reliability and reducing costs.
This and the many other tape innovations in development demonstrate the long-term commitment to tape archive technology that Spectra Logic provides to its enterprise customers worldwide. TAOS is a proprietary Spectra Logic feature that will be included with the Spectra® T950 and TFinity® ExaScale Tape Libraries. For existing customers, the feature will also be available through a simple library software update, requiring no additional hardware. TAOS will initially be released with IBM’s High Performance Storage System (HPSS) and HPE’s Data Management Framework (DMF) storage software later this year, with other ISV software packages to follow.
Spectra Logic held its new fiscal year Sales Kickoff meeting last week at its Boulder, Colorado headquarters. With more than 300 Spectra associates in attendance, the three-day event engaged team members from all over the world and enabled Spectra management to share our corporate vision for the company’s future.
Team members watch a presentation at the Spectra Logic events center
“Working closely with sales is crucial to the success of our marketing initiatives and enables Spectra to achieve its overarching company objectives to satisfy our customers,” said Spectra Vice President of Corporate Marketing Betsy Doughty. “Holding a Kickoff meeting allows the entire organization to come together to intentionally launch the next 12 months. Team members leave empowered with a renewed sense of commitment.”
Day one focused on discussions about customers and markets, the company’s industry position, and our performance to date. The end of the day highlighted the organization’s top achievers, acknowledging their accomplishments for the year. Day two was dedicated to customer and partner presentations, offering Spectra’s Sales and Marketing organizations a deeper understanding of the unique aspects of these core relationships. The Partner Showcase followed, with team members connecting one-on-one with technology and channel partners, enabling deeper conversations about joint initiatives.

The third and final day featured updates from the Spectra Product Management team, along with flash talks on recent sales success stories in which representatives shared how Spectra has helped customers around the globe solve their most challenging data storage dilemmas. The team also built camaraderie at after-hours gatherings, and an all-employee meeting and picnic wrapped up the week. Nathan Thompson, Spectra’s CEO, provided an overview of the year and reiterated the company’s mission to develop, sell and deploy the highest quality and most secure data storage solutions to help our customers collect, use, monetize and preserve society’s digital information forever.
At the Partner Showcase, Spectra Sales Operations Specialist Brandon Daniels catches up with Brad Painter, vice president of worldwide sales at Strongbox Data Solutions
“With such a rapidly changing market it is incredibly valuable to get the whole team together to talk about what issues customers are facing and what tools we have to help them,” says David Feller, vice president of product management and solutions engineering at Spectra Logic. “As a nimble company, Spectra is constantly adapting to meet the critical needs of our customers and lead the way into the future. With such an incredible set of products and world-class people to back them, it’s a good time to be at Spectra.”