Direct Attached Storage: 1. Introduction To DAS
in cache, the file system then makes a request to the disk controller software, which retrieves the data from its disk or RAID array and returns the data to the file system to complete the I/O process.
Highlights
- Upgradeable to RAID with the swap of a module
- Redundant data paths with dual-ported Fiber drives and dual Fiber Channel loops
- Quad Loop feature provides over 700 MB/s from a single subsystem
- Enhanced enclosure services (SES) monitoring and reporting
- Intuitive, comprehensive management with Adaptec Storage Examiner
1.5. Connectivity
Direct-attached storage refers to a storage device, such as a hard drive or tape drive, that is directly connected to a single computer. These connections are usually made by one of the following methods:

- Enhanced Integrated Drive Electronics (EIDE)
- Small Computer Systems Interface (SCSI)
- Fiber Channel
EIDE connects internal Advanced Technology Attachment (ATA) storage to a computer, SCSI provides a means to connect both internal and external storage to a computer, and Fiber Channel connects external storage to a computer. Fiber Channel is most often used with external storage in a SAN. Although Fiber Channel can be used for direct-attached storage, less expensive SCSI storage can offer similar performance, but it works only over limited distances due to the physical limitations of the SCSI bus. When external direct-attached storage devices are located more than twelve meters away from a server, Fiber Channel must be used.

Direct-attached storage retains its high popularity because of its low entry cost and ease of deployment. The simple learning curve associated with direct-attached storage technologies is also a factor many organizations consider. Direct-attached storage also makes it easy to logically and physically isolate data, because the data can only be directly accessed through a single server. Although it is simple to deploy, there are other management considerations to take into account with direct-attached storage:

- Direct-attached storage can be more expensive to manage because you cannot redeploy unused capacity, which results in underutilization.
- Having storage distributed throughout the organization makes it difficult to get a consolidated view of storage across the organization.
- Disaster recovery scenarios are limited because a disaster will cause both server and storage outages.
- For data backup and recovery, you need to choose whether to attach local backup devices to each server, install dual network adapters in each server and back up the data over a separate LAN, or back up the server over the corporate LAN. Large organizations have found that placing stand-alone tape drives in individual servers can quickly become expensive and difficult to manage, especially when the number of servers in the organization grows into the hundreds. In this situation, it is often best to back up servers over a network to a storage library, which offers backup consolidation and eases management.
1.5.1.1 PATA
Parallel ATA is the primary internal storage interconnect for the desktop, connecting the host system to peripherals such as hard drives, optical drives, and removable magnetic media devices. Parallel ATA is an extension of the original parallel ATA interface introduced in the mid-1980s and maintains backward compatibility with all previous versions of this technology. The latest revision of the Parallel ATA specification accepted by the ANSI-supported INCITS T13 committee, the governing body for ATA specifications, is ATA/ATAPI-6, which supports up to 100 Mbytes/sec data transfers. Development of the ATA/ATAPI-7 specification, an update of the parallel bus architecture that provides up to 133 Mbytes/sec, was recently finalized.
1.5.1.2 SATA
SATA is the next-generation internal storage interconnect, designed to replace parallel ATA technology. SATA is the proactive evolution of the ATA interface from a parallel bus to a serial bus architecture. This architecture overcomes the electrical constraints that are increasing the difficulty of continued speed enhancements for the classic parallel ATA bus. SATA will be introduced at 150 Mbytes/sec, with a roadmap already planned to 600 Mbytes/sec, supporting up to 10 years of storage evolution based on historical trends. Though SATA will not be able to directly interface with legacy Ultra ATA hardware, it is fully compliant with the ATA protocol and thus is software compatible.
Serial ATA
- No master/slave, point to point
- Up to 39-inch (1-meter) cable
- Thin cable (1/4-inch)
- 7-wire differential (noise canceling)
- Blade and beam connector (snap in), 1/2-inch-wide data connector
- First-party DMA support
- Low voltage (.25V) tolerance
- Intelligent data handling
- Hot swap
- CRC on data, command, status
SATA Advantages

- Ease of use
- Ease of integration
- Improved system airflow
- Eliminates data integrity problems
- Performance enhancement
- Design improvement
- Enhanced data protection
Another comparison is that SATA devices require much less power than PATA. Chip core voltages continue to decline and, because of this, PATA's 5-volt requirement is increasingly difficult to meet. In contrast, SATA only requires 250 mV to operate effectively. SATA is also hot-swappable, meaning that devices can be added or removed while the computer is on. The last, and most important, difference is the maximum bandwidth of the two technologies. The true maximum transfer rate of PATA is 100 MB/sec, with bursts up to 133 MB/sec. With the first introduction of SATA, the maximum transfer rate is 150 MB/sec. This is expected to increase every 3 years, with a maximum transfer rate of 300 MB/sec in 2005 and 600 MB/sec in 2008. Finally, SATA doesn't require any changes to existing operating systems for implementation. SATA is 100% software compatible and, with SATA adapters, some hardware doesn't have to be immediately replaced.
| Feature | PATA | Serial ATA |
|---|---|---|
| Maximum transfer rate | 100 MB/s (133 MB/s burst) | 150 MB/s currently; 300 MB/s by 2005 and 600 MB/s by 2008 |
| Maximum cable length | 18 inches | 1 meter (about 40 inches) |
| Data connector pins | 40 | 7 |
| Power connector pins | 4 | 15 |
| Data transfer wires used | 26 | 2 |
| Power consumption | 5V | 250 mV |
| Hot swappable? | No | Yes |
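As a rough illustration of what those interface ceilings mean in practice, the Python sketch below compares nominal transfer times for a hypothetical 4 GB file; the rates are theoretical interface maximums, so real drives will always be slower.

```python
# Compare nominal transfer times at the interface rates from the table above.
# These are theoretical maximums, not drive benchmarks.

def transfer_seconds(size_mb: float, rate_mb_per_s: float) -> float:
    """Seconds to move size_mb megabytes at the interface's nominal rate."""
    return size_mb / rate_mb_per_s

file_mb = 4096  # a hypothetical 4 GB file
for name, rate in [("PATA (ATA/100)", 100), ("PATA (ATA/133 burst)", 133),
                   ("SATA 150 MB/s", 150), ("SATA 300 MB/s (roadmap)", 300)]:
    print(f"{name:26s} {transfer_seconds(file_mb, rate):6.1f} s")
```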
Fig. 1.5.1.5.1 - These pictures show the difference in size of PATA and SATA connectors.
Furthermore, a look at figure 1.5.1.5.2 shows a PATA cable on the left and an SATA cable on the right. As is easily apparent, the SATA cable is much more builder friendly and can be easily routed out of the way in a case due to its length and flexibility.
Fig. 1.5.1.5.2 - SATA is the undisputed champion in terms of size and flexibility of cables.
Figure 1.5.1.5.3 shows an SATA power adapter with a 15-pin connector, as opposed to the customary 4-pin connectors in parallel ATA. The new 15-pin connector may sound as though it would be a hindrance in comparison to the older ones, but the two connectors measure almost the same width. The reason for the 15-pin connector is so that different voltages are supplied to the appropriate places. In addition to the customary 5V and 12V wires, new 3.3V wires are included for the new devices. Nine of the pins provided are for the positive, negative, and ground contacts for each voltage. The remaining six pins are for the hot-swappable feature of SATA, designating an additional two contacts per voltage for this.
Fig. 1.5.1.5.3 - As seen in the picture above, SATA power connectors are still the same size as current power connectors even though they have a total of 15 contacts.
As discussed earlier in this article, SATA to PATA adapters are currently available to allow existing hard drives to be used with new motherboards or controller cards and one is shown below in figure 1.5.1.5.4.
The package, made by Soyo, includes the SATA to PATA adapter, 1 SATA cable and a short instructional manual. To connect this to a hard drive, simply connect the 40-pin PATA adapter to the connector on the drive as shown in figure 1.5.1.5.6. Also, 7 jumpers will have to be set according to the instructions shown in the manual.
Then, connect one end of the serial cable to the adapter and the other end to a motherboard or controller card. Finally, connect a power connector to both the hard drive and the SATA adapter. This device can be used to connect a PATA drive to a SATA connector on a motherboard or controller card, connect a SATA drive to a PATA connector on a motherboard or, with the use of two adapter kits, connect a PATA drive to a PATA connector on a motherboard using an SATA cable. Figure 1.5.1.5.7 below shows a comparison of the inside of a computer case with a PATA cable connected to a hard drive and a SATA cable connected to a hard drive.
Fig. 1.5.1.5.7 - It's quite easy to distinguish the winner here: SATA takes the gold without a doubt.
1.5.2. SCSI
1.5.2.1. Advantages of SCSI
- It's fast -- up to 160 megabytes per second (MBps).
- It's reliable.
- It allows you to put multiple devices on one bus.
- It works on most computer systems.
| Name | # of Devices | Bus Width | Bus Speed | MBps |
|---|---|---|---|---|
| SCSI-1 | 8 | 8 bits | 5 MHz | 4 MBps |
| SCSI-2 | 8 | 8 bits | 5 MHz | 5 MBps |
| Wide SCSI-2 | 16 | 16 bits | 5 MHz | 10 MBps |
| Fast SCSI-2 | 8 | 8 bits | 10 MHz | 10 MBps |
| Fast Wide SCSI-2 | 16 | 16 bits | 10 MHz | 20 MBps |
| Ultra SCSI-3 | 8 | 8 bits | 20 MHz | 20 MBps |
| Wide Ultra SCSI-3 | 8 | 16 bits | 20 MHz | 40 MBps |
| Ultra-2 SCSI | 8 | 8 bits | 40 MHz | 40 MBps |
| Wide Ultra-2 SCSI | 16 | 16 bits | 40 MHz | 80 MBps |
| Ultra-3 (Ultra160) SCSI | 16 | 16 bits | 40 MHz | 160 MBps |
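The MBps column follows from simple arithmetic: throughput is the bus width in bytes multiplied by the bus speed in MHz, with Ultra160 doubling the result by clocking data on both signal edges (double transition). The 4 MBps asynchronous SCSI-1 rate is the one entry that predates this formula. A quick check, as a sketch:

```python
# Throughput = (bus width in bytes) x (bus speed in MHz, one transfer per
# clock), doubled for Ultra160's double-transition clocking.

def scsi_throughput(width_bits: int, speed_mhz: float, ddr: bool = False) -> float:
    transfers_per_clock = 2 if ddr else 1
    return (width_bits / 8) * speed_mhz * transfers_per_clock

print(scsi_throughput(8, 5))             # SCSI-2: 5 MBps
print(scsi_throughput(16, 10))           # Fast Wide SCSI-2: 20 MBps
print(scsi_throughput(16, 40))           # Wide Ultra-2 SCSI: 80 MBps
print(scsi_throughput(16, 40, ddr=True)) # Ultra-3 (Ultra160): 160 MBps
```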
Differential Cables:
Differential cables connect up to eight differential drivers and receivers. A 50-conductor cable or 25-signal twisted-pair cable shall be used. The maximum cable length shall be 25 meters (primarily for connection outside of a cabinet). A stub length of no more than 0.2 meters is allowed.
Differential:
Allows up to 10 MB per sec., and cable lengths up to 25 meters (about 82.5 feet). Requires more powerful drivers than single-ended SCSI. Ideal impedance match is 122 Ohms.
Impedance Requirements
IDEAL characteristic impedance:
- Single-ended cables: 132 ohms
- Differential cables: 122 ohms

However, cables with such high characteristic impedance are not usually available. As a result, the SCSI standard requires the following:

* For unshielded flat or twisted-pair cables: 100 ohms +/- 10%
* For shielded cables: 90 ohms or greater

Somewhat lower characteristic impedance is acceptable since few cable types meet the above requirements. Trade-offs in shielding effectiveness, characteristic impedance, cable length, number of loads, transfer rates, and cost can be made to some limited degree. Note: To minimize discontinuities and signal reflections, cables of different impedances should not be used in the same bus.
This means that the two devices you connect with a SCSI cable must be compatible, or of the same type, in terms of each of the five features listed above. For instance, both devices should be differential or both should be single-ended; at the same time, both should have a "hard" RESET or a "soft" RESET, and so on, for each of the five features above.
Both HVD and LVD normally use passive terminators, even though the distance between devices and the controller can be much greater than 3 ft (1 m). This is because the transceivers ensure that the signal is strong from one end of the bus to the other.
| Model | Bus Type | Bus Speed (MHz) | Ports |
|---|---|---|---|
| ISP12160A | PCI | 66/33 | 2 |
| ISP12160 | PCI | 33 | 2 |
| ISP10160 | PCI | 33 | 1 |
| ISP1040C | PCI | 33 | 1 |
| ISP1040B | PCI | 33 | 1 |
server to a storage system. This reduces the number of storage systems required but substantially increases complexity and cost due to the switches. Not surprisingly, both methods provide an almost mutually exclusive set of benefits, but an intermediate solution, DAS supporting multiple servers using FC without switches, becomes a sensible and desired alternative. Fortunately, innovative FC-based DAS solutions are now available to fill the void between traditional SCSI-based DAS and FC-based SAN. This white paper explores how FC DAS solutions apply the benefits of Fiber Channel to reduce SCSI storage costs without requiring SAN switches.
Reduced Costs
SCSI DAS storage systems are available in a broad range of configurations and prices. Even so, there are two general types based on where their controllers reside. Internal RAID types are DAS systems that require RAID controllers to be installed inside their server. DAS systems with RAID controllers outside the server are external RAID types. In any event, SCSI DAS storage systems can cost $10,000 each or more depending on their configuration. Storage costs are reduced significantly by consolidating the purchase of multiple SCSI DAS storage systems into an FC DAS storage solution. Four external RAID systems can cost the same as, or more than, an FC DAS solution, without the added benefits that Fiber Channel provides. Moreover, the Total Cost of Ownership (TCO) for the FC DAS solution will be far less than external RAID, and in some cases internal RAID, due to the far greater management and maintenance costs of supporting multiple storage systems instead of a consolidated one.
Faster Performance
Fiber Channel is a newer and faster technology than SCSI. As such, storage systems utilizing FC technology are generally more advanced and feature-rich than those utilizing SCSI. This can result in an FC DAS storage solution providing much faster performance. FC DAS storage solutions often provide performance similar to several SCSI-based storage systems combined. This results in greatly improved performance for every server attached to an FC DAS storage solution.
Better Scalability
Consolidating the storage requirements of several servers will surely increase the storage capacity requirements of the storage system in use. Fortunately, this is another area in which FC DAS storage solutions are far superior to internal RAID and external RAID alternatives. Each Fiber Channel disk connection supports a far greater number of disks than SCSI can, so FC DAS storage solutions often scale to extremely large storage capacities. It is unlikely that servers connected to an FC DAS storage solution will outgrow the supported storage capacity, unless their requirements are highly unusual.
Improved Utilization
The storage consolidation provided by an FC DAS storage solution provides far superior storage utilization. FC DAS storage solutions allow adding capacity one disk at a time and allocating the new storage capacity to one or multiple servers. Increasing storage capacity when using internal RAID or external RAID requires adding one or more disks per system. For example, adding storage capacity to four servers would require a minimum of four disks when using internal RAID or external RAID (one for each storage system). FC DAS storage solutions can provision storage to servers as needed, so it could require as little as one disk to increase the storage available to four servers.

An even more basic aspect of storage utilization involves the unusable disk capacity required for RAID protection and spare disks. With internal RAID and external RAID, each system includes an independent set of disks configured for RAID protection. Using RAID 5 protection results in one disk lost to parity overhead and potentially one additional disk for use as a spare. If there are four such systems in use, each wastes two disks, for a total of eight disks.
An FC DAS storage solution would provide storage to all four servers using one set of disks configured for similar RAID 5 protection and one spare disk. The number of disks made unusable for user storage is reduced by 75% in this example. Moreover, storage capacity is more precisely allocated using FC DAS storage solutions, since any portion of the added capacity can be allocated to any server. The alternative is to add storage to internal RAID or external RAID in exact increments of one disk per server. The efficiency and advantages of an FC DAS storage solution grow as the number of servers increases.
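The overhead arithmetic above is easy to verify. A minimal Python sketch of the four-system RAID 5 example, assuming one parity disk and one hot spare per system as described:

```python
# Disks lost to parity and spares: four independent RAID 5 systems versus
# one consolidated FC DAS system protecting the same servers.

def overhead_disks(num_systems: int, parity_per_system: int = 1,
                   spares_per_system: int = 1) -> int:
    return num_systems * (parity_per_system + spares_per_system)

separate = overhead_disks(num_systems=4)      # 8 disks unusable
consolidated = overhead_disks(num_systems=1)  # 2 disks unusable
print(separate, consolidated)
print(f"reduction: {100 * (separate - consolidated) / separate:.0f}%")  # 75%
```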
More Dependability
The common measure of dependability for storage systems is Reliability, Availability and Serviceability (RAS). Reliability reflects how infrequently the storage system will experience a component failure, regardless of the effects of that failure. Availability describes the likelihood that the storage system will remain usable over time. Serviceability describes the ability to perform maintenance on the storage system without removing it from service. Together with uptime and downtime ratings, they provide common factors for comparing products.
Supported Platforms
Confirm that the FC storage system under consideration can support multiple operating systems simultaneously and can do so without requiring expensive software options. Also, ensure all features are available for every supported server platform and operating system.
Sufficient Performance
Sharing an FC storage system among servers will result in sharing its performance as well. Fortunately, FC storage systems are now available where the performance provided is greater than the performance provided by several SCSI storage systems combined. Look for these for best results.
Dependability
Reliability, availability, and serviceability (RAS) become critical with FC storage systems, since any disruption can affect multiple servers at once. Ask for documentation to support any RAS claims, and avoid products without proof of five 9s (99.999%) uptime or better.
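To put uptime figures such as five 9s in perspective, the short sketch below converts a number of nines into the downtime it permits per year:

```python
# Downtime allowed per year for a given "number of nines" of uptime.

MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in (3, 4, 5):
    availability = 1 - 10 ** -nines
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} uptime -> {downtime_min:8.2f} minutes/year")
# Five 9s works out to roughly 5.3 minutes of downtime per year.
```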
Management Software
A comprehensive storage management suite greatly simplifies storage set-up, configuration and monitoring. Ideal FC storage systems offer software that supports all popular server operating systems at no cost or low cost. The availability of multi-pathing and load balancing software is a plus.
Scalability
Exactly what is required to scale the FC storage system? Many require substantial hardware and software upgrades as they scale, which creates costly barriers in the future. This is rather common for product families that have many outwardly similar models.
- Faster data response times and application speeds
- Higher availability and reliability
- Y2K compliance
- Enhanced migration of existing data
- Scalability
Analysts at International Data Corporation (IDC) recommend NAS to help IT managers handle storage capacity demand, which the analysts expect will increase more than 10 times by 2003. Says IDC: "Network-attached storage (NAS) is the preferred implementation for file serving for any organization currently using or planning on deploying general-purpose file servers. Users report that better performance, significantly lower operational costs, and improved client/user satisfaction typically result from installing and using specialized NAS appliance platforms."
Pros

Cons

- Doesn't alleviate LAN bandwidth issues
- Not appropriate for block-level data storage applications
Network Appliance, EMC, Auspex, HP-Compaq, IBM, Procom Technology, Quantum-DSS, Maxtor, Spinnaker, Connex, Tricord Systems, BlueArc, Panasas
EMC, IBM, Sun, Hitachi Data Systems (HDS), HP-Compaq, Dell, TrueSAN, XIOTech Corporation
Emerging players in the NAS/SAN landscape can be broadly categorized as developing systems that can potentially scale to thousands of terabytes, far exceeding anything available today. This scalability, however, comes at the expense of available features. Network Appliance and EMC, on the other hand, have feature-rich software capabilities for their platforms but do not yet have the scalability that some of the emerging companies claim to offer.
Networked file systems originally gained popularity after Sun Microsystems' Network File System (NFS) was placed in the public domain and most UNIX-based systems adopted the protocol for networked file access. Today, in some circles, NAS systems may still be referred to as NFS servers, even though products like E Disk NAS support multiple protocols, including Microsoft's SMB/CIFS (Common Internet File System), HTTP, and FTP. Keeping files on a server in a format that is accessible by different users on different types of computers lets users share data and integrate various types of computers on a network. This is a key benefit of NAS systems. Because NAS systems use open, industry-standard protocols, dissimilar clients running various operating systems can access the same data. So it does not matter whether there are Windows users or UNIX users on the network. Both can utilize the NAS device safely and securely.
- Higher overall storage resource utilization, which leads to decreased costs, since enterprise storage needs can be met with fewer, and sometimes less expensive, storage assets.
- Improved storage management capabilities and processes, as all of an organization's storage assets can be placed under centralized, automated control.
- The ability to more effectively and flexibly scale storage resources to meet the demands of business processes and related applications.
- Potentially lower ongoing costs through enhanced resource utilization and reduction in the number of discrete storage management activities.
- Decoupled NAS, offering an effective model/methodology for storage consolidation initiatives across the enterprise.
2.11. Drawback
Bottlenecks: The major issue that NAS does not address is the LAN bandwidth requirements. The fact that NAS appliances are connected directly to the messaging network can contribute to its congestion and create bottlenecks.
- Snapshot: 250 snapshots per volume.
- Backup: Native utility for use over SAN, LAN, WAN and Internet.
- Replication: Native utility for use over SAN, LAN, WAN and Internet.
- Unix Management: Over 300 CLI tools, Unix shell.
- Manage 3rd-party storage: Turn your legacy storage into a NAS, regardless of the manufacturer.
- FileStorm NAS appliances are cluster capable!
User
- User experience is exactly the same as accessing a standard file server
- Supports multiple network operating environments and various protocols at the same time
Security
- NAS can include RAID functionality for security of data
- Existing client backup software will work with a NAS device
Benefits
Network Appliance NAS appliances deliver the lowest total cost of ownership of any storage approach, together with enterprise-level performance, scalability, and availability.
Problems Solved
Direct-attached storage works well in environments with an individual server or a limited number of servers, but the situation rapidly becomes unmanageable if there are dozens of servers or significant data growth. Storage for each server must be managed separately and cannot be shared. Performance and scalability are often limited, and storage resources cannot be efficiently allocated. The data management needs of today's enterprise IT environments are typically much better served by a networked storage approach. NAS has considerable advantages over direct-attached storage, including improved scalability, reliability, availability, and performance. In addition, NetApp NAS solutions provide true heterogeneous data sharing and deliver unparalleled ease of use, enabling IT organizations to automate and greatly simplify their data management operations.
Product Description
NAS was initially designed for data sharing in a LAN environment and incorporates file system capabilities into the storage device. In a NAS environment, servers are connected to a storage system by a standard Ethernet network and use standard file access protocols such as NFS and CIFS to make storage requests. Local file system calls from the clients are redirected to the NAS device, which provides shared file storage for all clients. If the clients are desktop systems, the NAS device provides "serverless" file serving. If the clients are server systems, the NAS device offloads the data management overhead from the servers. The first NAS devices were general-purpose file servers. However, NetApp redefined NAS storage with its "filer" appliances: special-purpose storage systems that combine high performance, exceptional reliability, and unsurpassed ease of use. The key to these capabilities is the combination of a modular hardware architecture with the NetApp Data ONTAP storage operating system and Write Anywhere File Layout (WAFL) software, which enable the industry's most powerful data management features.
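From the client's perspective this redirection is invisible: once an NFS or CIFS export is mounted, it is accessed with ordinary file I/O. A minimal sketch, assuming a hypothetical mount point (the path below is illustrative, not a product default):

```python
from pathlib import Path

share = Path("/mnt/nas/projects")   # hypothetical NFS or CIFS mount point
report = share / "quarterly.txt"

# The same open/read/write calls used for any local file; the operating
# system's NFS or CIFS client forwards the operations to the filer.
report.write_text("consolidated on the filer\n")
print(report.read_text())
```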
Network Appliance has dramatically enhanced how storage networks are deployed, enabling customers with large-capacity environments to simplify, share, and scale their critical storage networking infrastructures.
Host Processor
- Standard Solaris platform: UltraSPARC-IIi, 300 MHz
- 512 MB ECC system memory
- PCI expansion
- Dual, redundant root drives
- No direct involvement in data delivery
- Full-featured management environment
- Coordinates and monitors I/O nodes
- Compute power can be leveraged for enterprise system and network management
Network Interfaces
- 10/100 Ethernet (4 ports per device, up to 2)
- Gigabit Ethernet (up to 2)
- ATM OC-12 (up to 1; XR systems only)
Network Software
- EtherBand (High Speed Trunking for Fast Ethernet)
- NetGuard (NIC Failover for Gigabit and Fast Ethernet)
- Implementation of Virtual IP support
approximately 135,000 NFS operations per second to a blistering 300,000 operations per second. Scale from 4 to 8 clustered X-Blades, increasing your usable capacity from 48 to 112 terabytes. Unmatched availability ensures non-stop access to vital data and applications. Celerra NSX features advanced N+1 clustering to keep availability at its highest. Dual power and dual control stations with redundant Ethernet connections further enhance availability, reliability, and serviceability, as does a dual-managed UPS (uninterruptible power supply). Expand on the fly by adding additional X-Blades, without operational delays or disruption. It's just another way that Celerra NSX protects your investment.
The main benefit of NAS gateways is clear: you can leverage your current investment in storage while adding new capabilities and improving consolidation. NS Series/Gateway solutions are compatible with CLARiiON CX or Symmetrix DMX storage, ensuring a seamless, integrated solution.
The Sun StorEdge 5310 NAS Appliance supports UNIX and Microsoft Windows, simplifying file sharing between disparate platforms. To protect your data and keep your business running smoothly, the Sun StorEdge 5310 NAS Appliance combines advanced business-continuity functions such as file system journaling, checkpointing, remote mirroring, clustering*, and full system redundancy with a full 2-gigabit Fiber Channel (FC) RAID array to deliver very high levels of availability and performance in almost any open environment. Available in single and dual clustered NAS server configurations, the Sun StorEdge 5310 NAS Appliance provides quick deployment, simple manageability, seamless integration, and flexible policy-based services. The Sun StorEdge 5310 NAS Appliance is easy to operate and effortless to manage, and installs in less than 15 minutes, thanks to its highly intuitive wizard. Designed to grow along with your business, this powerful NAS appliance can easily be scaled to 65 terabytes of raw FC or 179 terabytes of raw SATA* RAID-protected storage.
Key Features
- Easy Storage Platform to Deploy and Manage
- Cross-Protocol Client Support and Management
- Journaled File System with Checkpoint Capability
- Remote Replication/Data Mirroring and Remote Monitoring
- Clustering Capability*
- Sun StorEdge Compliance Archiving Software
- Investment Protection with Common Storage Modules
Specification
- Processor: One 3.06-GHz Intel Xeon processor with 512 KB of Level 2 cache
- Memory: 4 GB in 6 DIMM slots, registered DDR-266 ECC SDRAM
- Fiber Channel: 1 or 2 dual-port 2-Gb Fiber Channel (FC) HBAs
- Capacity: Scales to 65 TB of FC or 179 TB of SATA RAID-protected storage*
- Mass storage: Up to 28 expansion units (7 per RAID expansion unit)
Simplicity
The affordable, plug-and-play Sun StorEdge 5310 NAS Appliance provides simple manageability, quick deployment, seamless integration of UNIX and Windows, effortless configuration, and flexible policy-based data services to match your unique IT requirements. The Sun StorEdge 5310 NAS Appliance is easy to operate, effortless to manage, and installs in less than 15 minutes, thanks to its highly intuitive wizard.
Multi-Protocol
The highly flexible Sun StorEdge 5310 NAS Appliance supports the Common Internet File System (CIFS), NFS, and FTP protocols, cross-protocol file sharing, and cross-protocol file locking.
High Performance
The Sun StorEdge 5310 NAS Appliance is a powerful NAS filer with fully optimized NAS heads and a full 2-gigabit FC RAID back-end array for the fast response times critical to computational or content-creation applications, including technical computing and oil and gas exploration.
Scalability
Designed to grow along with your business, the powerful Sun StorEdge 5310 NAS Appliance is easily expandable and highly scalable. Non-disruptively add storage capacity as you grow this appliance to as much as 65 terabytes of RAID-protected storage.
tasks. By taking advantage of the simple, centralized, and automated control that StorNext SM provides, IT managers can fully support enterprise data access and protection needs.

All data is "critical" when it's needed. Enterprises want to know that their critical data is always accessible with reliable data integrity, despite any resource constraints. Through user-defined policies, StorNext SM balances access needs with available capacity by storing critical data on high-performance media and lower-priority data on slower media. For data integrity, StorNext SM provides vital data protection options, such as versioning, file replication, and media copy.

Data growth continues to soar. As data volumes grow, the pressure on enterprises to better utilize storage resources is increasing. By using StorNext SM's policies to manage data movement between disk and tape systems, based on the Quality of Service (QoS) levels needed over time, enterprises can plan out the life cycles of different data classes. The result is a system that scales easily and allows you to handle growing volumes of data with maximum flexibility and minimal disruption.
The creation of an independent SAN further enhances the workflow of information among storage devices and other systems on the network. Additionally, moving storage-related functions and storage-to-storage data traffic to the SAN relieves the front end of the network, the Local Area Network (LAN), of time-consuming burdens such as restore and backup. SANs are often contrasted with NAS, but NAS actually falls under the "storage network" umbrella. The major difference is that the SAN is channel-attached, and the NAS is network-attached.

- NAS: Primarily designed to provide access at the file level. Organizations working on LANs consider NAS the most economical addition to storage.
- DAS or SAN: Optimized for high-volume, block-oriented data transfers.

Storage Management Solutions
Traditionally, computers are directly connected to storage devices. Only the computer with the physical connection to those storage devices can retrieve data stored on those devices. A SAN allows any computer to access any storage device, as long as both are connected to the SAN. A SAN is usually built using Fiber Channel technology. This technology allows devices to be connected to each other over distances of up to 10 kilometers. Devices connected using Fiber Channel can be set up in a point-to-point, loop, or switched topology.
The most complex topologies use one or more Fiber Channel switches with multiple storage management applications. These configurations are made possible by using a technique commonly called zoning, in which the Fiber Channel network is partitioned to create multiple, smaller virtual SAN topologies. By doing this, the Fiber Channel network looks like a simple SAN configuration to the host storage management application. Zoning techniques and tools vary widely at this point, but are available from virtually every Fiber Channel vendor.
Switches: Used in data-intensive, high-bandwidth applications such as backup, video editing, and document scanning. Due to their redundant data paths and superior manageability, switches are used with large amounts of data in high-availability environments.

Are there reasons to use switches instead of hubs in a SAN? Switches provide several advantages in a SAN environment:

- Failover capabilities: If a single switch fails in a switched-fabric environment, the other switches in the fabric remain operational. A hub-based environment typically fails if a single hub on the loop fails.
- Increased manageability: Switches support the Fiber Channel Switch (FC-SW) standard, making addressing independent of the subsystem's location on the fabric and providing superior fault isolation along with high availability. FC-SW also allows hosts to better identify subsystems connected to the switch.
- Superior performance: Switches facilitate "multiple-transmission data flow," in which each fabric connection can simultaneously maintain 100 MB/sec throughput. A hub offers a single data flow with an aggregate throughput of 100 MB/sec.
- Scalability: Interconnected switches provide thousands of connections without degrading bandwidth. A hub-based loop is limited to 126 devices.
- Availability: Switches support the online addition of subsystems (servers or storage) without requiring re-initialization or shutdown. Hubs require a Loop Initialization (LIP) to reacquire subsystem addresses every time a change occurs on the loop. A LIP typically takes 0.5 seconds and can disable a tape system during the backup process.
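The performance point above reduces to simple arithmetic: an arbitrated loop shares a single 100 MB/sec data flow among all devices, while a switched fabric gives each connection its own. An illustrative sketch:

```python
# Per-device bandwidth: shared on a hub-based loop, dedicated on a fabric.

def per_device_bandwidth(devices: int, link_mb_s: float = 100.0,
                         switched: bool = False) -> float:
    if switched:
        return link_mb_s          # each fabric connection gets the full rate
    return link_mb_s / devices    # loop bandwidth is divided among devices

for n in (2, 8, 32):
    print(f"{n:2d} devices: hub {per_device_bandwidth(n):6.2f} MB/s each, "
          f"switch {per_device_bandwidth(n, switched=True):6.2f} MB/s each")
```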
3.9. TruTechnology
Most SAN storage area network solutions utilize Fiber Channel technology, which provides higher speeds and greater distances. SCSI devices, however, can function on a SAN by utilizing a SCSI-to-Fiber Channel bridge.
3.9.1. TruFiber
We call INLINE Corporation's Fiber Channel storage solutions TruFiber because they feature Fiber Channel technology from the host connections, through the controllers, to the drives. Many other Fiber Channel storage providers take you down to slower SCSI, even in their high-end solutions. With INLINE TruFiber you know you are getting Fiber Channel throughout.
3.9.2. TruCache
When it comes to performance, the single most important factor for any storage system is how well it makes use of higher-speed cache memory to enhance disk I/O operations. Cache is used to increase the speed of read and write operations as well as to allow dual operation writes in applications such as mirroring. While the use of cache provides an incredible performance gain, there is also an incredible risk associated with it: file system corruption and lost data can result if the cache is not managed and maintained properly. For this reason, INLINE Corporation utilizes our TruCache technology in high-availability, redundant controller configurations. When you deploy a dual controller system from INLINE, you are assured cache integrity because the system simultaneously mirrors all cache and maintains complete coherency. In fact, INLINE Corporation differentiates itself from most other vendors by offering independent paths from two different controllers to the same disk simultaneously, while supporting reads and writes from both controllers. TruCache ensures high performance and data integrity when you operate in an Active/Active (multi-controller) mode of operation.
3.9.3. TruMap
When offering multiple ports to hardware RAID controllers, one often-overlooked feature is port control. On the SanFoundation and MorStor product lines you have up to 128 two-gigabit host connections. Flexibility in mapping the ports on each controller makes management infinitely easier. TruMap gives you the ability to map each port on a controller using one of three methods: One-to-One, One-to-Any, or Any-to-One. You can choose the appropriate mapping scheme based on your needs, such as security, bandwidth provisioning (QoS), and function or network segregation. This allows you to maintain bandwidth for mission-critical and sensitive applications as well as ensure minimum or maximum data rates to a specific LUN.
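The three mapping schemes can be pictured as rules over (port, LUN) pairs. The sketch below is purely illustrative; the function and names are hypothetical and do not come from INLINE's firmware or management tools:

```python
# Toy model of One-to-One, One-to-Any, and Any-to-One port-to-LUN policies.

def expand_mapping(scheme: str, ports: list, luns: list) -> set:
    """Return the set of (port, lun) pairs a scheme permits."""
    if scheme == "one-to-one":      # each port sees exactly one LUN
        return set(zip(ports, luns))
    if scheme == "one-to-any":      # a single port sees every LUN
        return {(ports[0], lun) for lun in luns}
    if scheme == "any-to-one":      # every port sees a single LUN
        return {(port, luns[0]) for port in ports}
    raise ValueError(f"unknown scheme: {scheme}")

ports, luns = ["P0", "P1"], ["LUN0", "LUN1"]
for scheme in ("one-to-one", "one-to-any", "any-to-one"):
    print(scheme, sorted(expand_mapping(scheme, ports, luns)))
```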
3.9.4. TruMask
Network security has never been more important than it is now. Because today's storage implementations are often on a network, in addition to being directly attached, storage arrays must have their own level of security to ensure data integrity and privacy. To answer this need, INLINE Corporation offers our TruMask option to protect your valuable data. TruMask gives you control over which arrays, and even LUNs, can be viewed by individual hosts and storage management applications. Because TruMask works down at the LUN level, you have the ability to mix different data security classifications within a single array. When TruMask is invoked, the storage array looks at each computer connected and instantaneously determines which LUNs a computer can see as well as access. TruMask is a key component in a successful SAN installation.
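Conceptually, LUN masking is an access table the array consults before reporting LUNs to a host. A minimal sketch with hypothetical hosts and LUNs, illustrating the general idea rather than TruMask's actual implementation:

```python
# The array filters which LUNs each connected host is allowed to discover.

MASK_TABLE = {
    "host-finance": {"LUN0", "LUN1"},   # sees only its own LUNs
    "host-web":     {"LUN2"},
}

def visible_luns(host: str, all_luns: set) -> set:
    """LUNs the array reports to a given host after masking."""
    return all_luns & MASK_TABLE.get(host, set())

ARRAY_LUNS = {"LUN0", "LUN1", "LUN2", "LUN3"}
print(visible_luns("host-finance", ARRAY_LUNS))  # {'LUN0', 'LUN1'}
print(visible_luns("host-unknown", ARRAY_LUNS))  # set() -- default deny
```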
3.9.5. TruSwap
Today's data storage solutions need to be not only highly available and highly reliable, but also easily maintained. INLINE Corporation, realizing this need, has designed all of our data storage arrays to be truly user friendly, even when it comes to maintenance. Our TruSwap technology allows hot-swap removal and replacement of components while the array is operational. During normal operation, an INLINE array can be serviced and maintained without ever shutting down the array and interrupting data access to the users. Every component involved in data integrity is hot-swappable and can be removed and replaced in less than 5 minutes. INLINE solutions are quite different from other arrays because they do not require special tools, complex cabling, a specially trained engineer, or, worst of all, downtime. Most of the servicing can be performed quickly and easily by untrained personnel. With INLINE Corporation's TruSwap you stay online while you replace the necessary component in less than 5 minutes.
enterprises that anticipate significant growth in information storage requirements. And unlike direct-attached storage, excess capacity in SANs can be pooled, resulting in a very high utilization of resources.
Disaster Recovery
SANs allow greater flexibility in Disaster Recovery. They provide a higher data transfer rate over greater distances than conventional LAN/WAN technology. Therefore, backups or recovery to/from remote locations can be done during a relatively short window of time. Since storage devices are accessible by any server attached to
the SAN, a secondary data center could immediately recover from a failure should a primary data center go offline.
Scalability
In today's computing environments, the demand for large amounts of high-speed storage is increasing at phenomenal rates. This demand brings new problems to IT departments. Of major concern is the physical location of the storage devices. The traditional connection of storage is through SCSI connections. However, SCSI has physical distance limitations that could make it impossible to connect the necessary storage devices to the servers. SAN technology breaks this physical distance limitation by allowing you to locate your storage miles away, as opposed to only a few feet.
Manageability
Many organizations have groups whose tasks are dedicated to specific functions. It is common to find NT Administrators, Novell Administrators, or Unix Administrators all in the same company. All of these administrators have two things in common: they all use a network to communicate with the clients they serve, and they all require disk storage. For an organization's networking needs, you will often find a Network Manager or Network Group. They maintain the installed base of hubs, switches, and routers. The Network Manager ensures the network is operating effectively and makes plans for future growth.

Few organizations have groups whose responsibilities include managing the storage resources. It is ironic that a company's most crucial resource, data storage, generally has no formal group to manage it effectively. As is, each type of system administrator is required to monitor the storage attached to their servers, perform backups, and plan for growth. Storage management in a SAN environment could offload the responsibility of maintaining the storage devices to a dedicated group. This group could perform backups over the SAN, alleviating LAN/WAN traffic for all types of servers. The group could allocate disk space to any server, regardless of type. The SAN managers could actively monitor the storage systems of all platforms and take immediate corrective action whenever needed.
Cost Effectiveness
Each server requires its own equipment for storage devices. The storage cost for environments with multiple servers running either the same or different operating systems can be enormous. SAN technology allows an organization to reduce this cost through economies of scale. Multiple servers with different operating systems can access storage in RAID clusters. SAN technology allows the total capacity of storage to be allocated where it is needed. If requirements change, storage can be reallocated from devices with an excess of storage to those with
too little storage. Storage devices are no longer connected individually to a server; they are connected to the SAN, from which all devices gain access to the data.
Storage Pool
Instead of putting an extra 10GB on each server for growth, a storage pool can be accessed via SAN, which reduces the total extra storage needed for projected growth.
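The saving is straightforward to quantify. In the hypothetical figures below, per-server headroom must assume every server might grow, while the shared pool only needs to cover the aggregate growth actually expected:

```python
# Headroom comparison: dedicated per-server spare capacity vs. one shared
# SAN pool. All numbers are illustrative.

servers = 20
per_server_headroom_gb = 10          # DAS: 10 GB reserved on every server
expected_total_growth_gb = 60        # realistic aggregate growth estimate

das_reserved = servers * per_server_headroom_gb   # 200 GB sits idle
san_reserved = expected_total_growth_gb           # 60 GB in one shared pool
print(f"DAS headroom: {das_reserved} GB, SAN pool: {san_reserved} GB")
```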
Summary
SANs connect storage devices and provide high-speed, fault-tolerant access to data. A SAN is different from a LAN/WAN in that a LAN/WAN is solely a communications highway. SANs are usually built with Fiber Channel, and are set up in either a point-to-point, loop, or switched topology.
The Future
The potential of SAN technology is limitless. Advances in both cabling and Fiber Channel technology occur on a regular basis. Unlike any other existing data transport mechanism, fiber-optic technology offers a substantial increase in bandwidth capacity. Fiber-optic cabling transmits data through optical fibers in the form of light. A single hair-thin fiber is capable of supporting 100 trillion bits per second. Currently, SAN backbones can support 1.0625 Gbps throughput; 2 Gbps throughput is going to be available shortly, and exponential leaps will occur more frequently in the next few years. As bandwidth becomes a commodity, data exchange will be liberated from size constraints, and storage will soon be measured in petabytes (a petabyte is equal to 1,000 terabytes). To meet the demand for fiber interfaces, storage vendors are now designing their products with fiber backplanes, controllers, and disk modules. Future offerings include "serverless" backup technology, which liberates the traditional server interface from backup libraries to enable faster backups. Currently, heterogeneous platforms can only share the physical storage space within a SAN. As new standards and technologies emerge, UNIX, NT, and other open systems will enable data sharing through a common file system. Some major vendors in the SAN field are presently developing products designed for 4 Gbps throughput.
The interconnect of choice in today's SAN is Fiber Channel, which has been used as an alternative to SCSI in creating high-speed links among network devices. Fiber Channel was developed by ANSI in the early 1990s, specifically as a means of transferring large amounts of data very quickly. Fiber Channel is compatible with SCSI, IP, IEEE 802.2, ATM Adaptation Layer for computer data, and Link Encapsulation, and it can be used over copper cabling or fiber-optic cable. Currently, Fiber Channel supports data rates of 133 Mbits/sec, 266 Mbits/sec, 532 Mbits/sec, and 1.0625 Gbits/sec. A proposal to bump speeds to 4 Gbits/sec is on the drawing board. The technology supports distances of up to 10 kilometers, which makes it a good choice for disaster recovery, as storage devices can be placed offsite.

SANs based on Fiber Channel may start out as a group of server systems and storage devices connected by Fiber Channel adapters to a network. As the storage network grows, hubs can be added, and as SANs grow further in size, Fiber Channel switches can be incorporated. Fiber Channel supports several configurations, including point-to-point and switched topologies. In a SAN environment, the Fiber Channel Arbitrated Loop (FCAL) is used most often to create this external, high-speed storage network, due to its inherent ability to deliver any-to-any connectivity among storage devices and servers.

An FCAL configuration consists of several components, including servers, storage devices, and a Fiber Channel switch or hub. Another component that might be found in an arbitrated loop is a Fiber Channel-to-SCSI bridge, which allows SCSI-based devices to connect into the Fiber Channel-based storage network. This not only preserves the usefulness of SCSI devices but does so in a way that lets several SCSI devices connect to a server through a single I/O port on the server. This is accomplished through the use of a Fiber Channel Host Bus Adapter (HBA). The HBA is actually a Fiber Channel port. The Fiber Channel-to-SCSI bridge multiplexes several SCSI devices through one HBA.
The FCAL provides not only a high-speed interconnection among storage devices but also strong reliability. In fact, you can remove several devices from the loop without any interruption to the data flow. The major benefit of a SAN is its ability to share devices among many servers at high speeds and across a variety of operating systems. This is particularly true in a centralized data center environment. However, SANs are expensive, difficult to configure, and costly to manage. The costs of SAN implementation would make it prohibitive in a geographically diverse branch-office or retail environment.
Fig. 3.17.1.2. - Network speed and performance suffer as backup traffic increases.
Fig. 3.17.1.3. - SAN backup protects LAN performance and scales easily and cost-effectively.
Overview
Storage growth continues to escalate, yet IT departments have to manage more data with constant or declining resources. Sun helps you meet this challenge with a comprehensive set of products and services that eases storage area network (SAN) management and consolidates storage resources on the network. The Sun StorEdge Open SAN architecture delivers on the promise of SANs by simplifying SAN management, optimizing resource utilization, and driving down total cost of ownership (TCO).
Flexibility
The Sun StorEdge Open SAN architecture has flexibility designed in to allow it to meet a wide range of customer requirements. Whether your SAN needs are small or large, simple or more challenging, local or worldwide, the Sun StorEdge Open SAN architecture can support your design today and grow with you in the future.
Manageability
Sun has taken a leadership role in designing, promoting, and adopting open-standards-based SAN management. Taken together with existing management interfaces and tools, Sun is able to deliver simple-to-use heterogeneous management software as well as enable third-party software vendors to provide additional choice for our customers.
Compatibility
The Sun StorEdge Open SAN architecture places a particular focus on implementing and taking advantage of open standards. Whether through early adoption of management standards, use of SCSI and Fiber Channel standards, or ensuring that switches interoperate, openness is a key design goal and practice throughout the architecture.
Availability
The Sun StorEdge Open SAN architecture enables extreme levels of availability. From the component level through best practices, the architecture is capable of meeting your availability requirements.
Performance
The Sun StorEdge Open SAN architecture offers very high performance. The architecture supports 1-Gb and 2-Gb Fiber Channel today and will incorporate 10-Gb Fiber Channel in the future. Trunking capabilities between switches, a high-performance shared file system, and load balancing on hosts are some of the means that provide a powerful set of building blocks to construct a SAN capable of world-record performance.
- Highly scalable performance. Performance can be scaled easily while ensuring availability.
- Provides enhanced productivity and faster information distribution.
- Dramatically reduces the risk of data loss in the event of an outage and greatly improves mean time to recovery.
- Improves total cost of ownership (TCO), as large amounts of data can be accessed from lower-cost media.
- Reduces the cost of server memory and storage capacity.
- Lowers TCO by consolidating resources, simplifying system management, and minimizing administrator training.
Data Protection
VERITAS data protection solutions deliver robust, scalable storage management, backup, and recovery, from the desktop to the data center, for heterogeneous environments. Organizations of every size rely on VERITAS for comprehensive data protection. With our data protection solutions, there's no need to use multiple backup products for UNIX, Windows, and database backup. And you'll never have to rely on end users to copy critical corporate data from desktops and mobile laptops onto a networked file server. VERITAS data protection solutions streamline, scale, and automate backup throughout your organization. VERITAS products safeguard the integrity of all corporate data on all platforms and in all databases. VERITAS is the world's most powerful data protection solution for fast, reliable, enterprise-wide backup and recovery.
Disaster Recovery
Disaster recovery is a business essential. Companies large and small need their data protected, accessible, and uninterrupted in the event of a disaster. VERITAS disaster recovery solutions are based on software products that work together efficiently and seamlessly across all platforms and applications. And our solutions are flexible enough to grow along with your business. As you build your disaster recovery plan, VERITAS can provide you with a layer of protection at every stage.
High Availability
Maintaining high levels of access to information across heterogeneous environments without compromising a quality user experience can challenge any IT organization. VERITAS high availability solutions protect the user experience from servers to storage. IT staff can use VERITAS products to build higher levels of availability throughout the data center, even at levels once thought too expensive, complex to install, or difficult to manage.
Peripheral sharing
According to a June 1999 Dataquest survey, 56% of respondents reported using less than 50% of RAID capacity due to the inability to share the devices among many servers. As a result, they estimate an IT manager in a distributed storage environment can manage only one-third the storage capacity managed in a centralized environment. The most obvious way in which SANs help reduce costs is by facilitating the sharing of sophisticated peripherals between multiple servers. External storage is commonplace in data centers, and sophisticated peripherals are generally used to provide high performance and availability. An enterprise RAID system or automated tape library can be 5 to 10 times more expensive than a single server, making it prohibitively expensive to use a one-to-one device-attach approach. Even with multiple channel controllers in the peripheral, the cost equation is often not attractive.

Fiber Channel-based storage networking provides three key features to facilitate peripheral sharing. First, flexible many-to-many connectivity using Fiber Channel hubs and switches improves the fan-out capabilities of a peripheral, allowing multiple servers to be attached to each channel. Second, the increased distance capabilities of fiber-optic cables break the distance restrictions of SCSI, allowing servers to be located up to 10 km from the peripheral. Finally, Fiber Channel hubs and switches support improved isolation capabilities, facilitating nondisruptive addition of new peripherals or servers. This avoids unnecessary downtime for tasks such as installing a new I/O card in a server.

However, storage management software is also required in combination with Fiber Channel networks to deliver true SAN functionality. Software tools are used to allocate portions of an enterprise RAID to a server in a secure and protected manner, avoiding data corruption and unwanted data access. Storage management software can also provide dynamic resource sharing, allocating a tape drive in an automated tape library to one of many attached servers during a backup session on an as-needed basis.
Capacity Management
With traditional locally attached storage, running out of disk space means that new storage must be physically added to a server, either by adding more disks to an attached RAID or by adding another I/O card and a new peripheral. This is a highly manual and reactive process, and it leads IT managers to deploy large amounts of excess capacity on servers to avoid downtime due to re-configuration or capacity saturation. SANs allow many online storage peripherals to be attached to many servers over an FC network. Using tools to monitor disk quotas and free space, administrators can detect when a server is about to run out of space and take action to ensure storage is available. Using storage allocation software, free space on any RAID can be allocated to a hot server, putting the storage where it's needed most. As existing SAN-attached peripherals become saturated, new peripherals can be added to the SAN hubs or switches in a non-disruptive way, allowing free space to be allocated as needed.
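The monitor-and-allocate flow described above can be sketched in a few lines. Everything here is hypothetical; a real tool would query servers and call the array's management interface rather than mutate dictionaries:

```python
# Watch free space per server; when one drops below a threshold, provision
# additional capacity from the shared SAN pool.

THRESHOLD = 0.10                       # act when less than 10% is free

servers = {                            # hypothetical capacity/used figures (GB)
    "db01":  {"capacity": 200, "used": 185},
    "web01": {"capacity": 100, "used": 40},
}

def rebalance(servers: dict, pool_free_gb: float, grow_gb: float = 50) -> float:
    for name, s in servers.items():
        free_fraction = 1 - s["used"] / s["capacity"]
        if free_fraction < THRESHOLD and pool_free_gb >= grow_gb:
            s["capacity"] += grow_gb   # allocate from the shared pool
            pool_free_gb -= grow_gb
            print(f"grew {name} by {grow_gb} GB")
    return pool_free_gb

remaining = rebalance(servers, pool_free_gb=500)   # grows db01, leaves web01
print(f"pool remaining: {remaining} GB")
```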
storage peripherals to be added without breaking a SCSI chain. However, the server application is still unaware of this new storage, since it must be stopped and re-started to access new volumes. Storage virtualization software, such as an advanced logical volume manager, can allow an existing application volume to dynamically grow to include the new SAN-attached storage. This completes the process of adding new storage to a server without disrupting application uptime. With logical volume management, an application volume can physically exist in one or more peripherals or peripheral types. Virtualizing physical storage into logical volumes is key to minimizing disruptions.

SANs will also allow a large number of varying types of storage to be available to a server farm. Available storage will vary in terms of cost, performance, location, and availability attributes. By virtualizing physical SAN-attached storage in terms of its attributes, administrators will be able to add and re-configure storage based on its properties rather than performing configuration through device-level mapping tools. Allowing administrators to dynamically reconfigure and tune storage while applications are online improves application performance and dramatically reduces the likelihood of unplanned downtime. In addition, these attributes allow administrators to set policies that automatically allocate unused storage to servers and applications where necessary.
However, as shown in the diagram above, implementing a SAN can allow server farms to share access to a storage farm. With storage management tools, applications can be moved to different servers and still have access to their data. For read-only applications, a single copy of data can be shared between multiple application servers, removing the need to replicate data. And because this can all be done while applications are online, productivity losses are minimized.
SAN architectures can also accommodate multi-dimensional growth. Capacity management techniques can be used to ensure new storage is added continuously, so server applications always have the storage capacity they need. If more processing power is needed, more servers can be added to the SAN to provide better access to stored data. For higher read performance, multiple copies of data can be created on the SAN, eliminating bottlenecks at a single disk.
VERITAS Volume Manager (VxVM) provides Dynamic Multi-Pathing for non-disruptive path-level fail-over and load balancing over multiple Fiber Channel links between a server and a storage peripheral. VxVM can perform all of these operations for both JBOD and RAID peripherals on a SAN today, and can even mix and match between peripheral types. By building applications on top of VxVM, these intrinsic virtualization features can be made available without the server application being aware of the physical SAN configuration. This includes other VERITAS applications, such as VERITAS File Server and the Foundation Suite Editions for Oracle, as well as other third-party applications.
Like LAN-free backup, HSM over a SAN also increases automation by intelligently scheduling HSM sessions to shared tape drive resources.
Features:
Flexible Storage Architecture
Open and Interoperable Solutions
Exploit Data Assets
Rapid Access to Data
Growth and Capacity Management
Decrease server CPU utilization
Disaster tolerance: offer remote vaulting/mirroring over 10 km
Provide no single point of failure (SPOF)
Increase availability, including automatic path selection/failover
Enhance load balancing
Features, Advantages and Benefits:
Single file system - uses the native file system on the MDC or any other Tivoli SANergy-enabled third-party computer - eliminates the need to manage multiple file systems, regardless of the number of computers connected to the SAN
Utilizes any LAN hardware and software - continues using your existing LAN to handle metadata traffic and low-bandwidth data
Utilizes any SAN hardware and software - works equally well with Fiber Channel, SCSI, SSA, iSCSI or InfiniBand SANs with components from any manufacturer
Supports true file sharing across heterogeneous networks - works with the mix of computers and operating systems used today
Enables management control through the Web and SNMP - enables immediate control through most SAN management consoles
Enterprise-ready Availability
Tivoli SANergy High Availability is an add-on feature to the Windows NT and Windows 2000 versions of Tivoli SANergy. It ensures that critical data remains available in the event of an MDC failure. If a Tivoli SANergy MDC for Windows NT or Windows 2000 fails, a spare MDC running SANergy High Availability seamlessly assumes the duties of the failed MDC. MDC-dependent Tivoli SANergy hosts running Windows NT, Windows 2000, and UNIX automatically remap their drives, and most network-aware applications, including database servers, carry on without interruption. Tivoli SANergy High Availability is an essential component for SANs supporting corporate databases, Web servers, and other business-critical applications.
Enterprise-ready Management
In addition to using native and HTML-based interfaces, administrators can use any SNMP management console to manage Tivoli SANergy. A custom SANergy management information base (MIB) is included to support the use of consoles such as Tivoli NetView, HP OpenView, or SunNet Manager.
3.18.3.1. Point-to-Point
This topology uses Fiber Channel without loop overhead to increase performance and simplify cabling between a RAID storage box and a host. In a point-to-point configuration there are only two devices, connected directly to each other. It is used where the physical storage must be located away from the server; reasons for such a configuration include security or environmental concerns.
This topology allows you to attach up to 127 nodes without hubs and switches. FC-AL is a time-shared, full-bandwidth, distributed topology in which each port includes the minimum necessary connection function. Depending on the distance requirements, workstations or servers can be connected to a single disk or a disk loop with either optical fiber or copper media. To understand a loop configuration, picture a circle with several points around it; each point represents a device on a Fiber Channel loop. Devices connected in this manner are said to be in a Fiber Channel Arbitrated Loop (FC-AL). In this configuration, each device is connected to the next device and is responsible for repeating data from the device before it to the device after it. Should a device on an FC-AL fail, no device on the FC-AL will be able to transmit data.
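The last point can be made concrete with a toy Python model. This is purely illustrative (the node names are invented and no real FC-AL signalling is modelled): because arbitration traffic must circulate the entire ring, one dead node without a port-bypass circuit takes the whole loop down.

    # Toy model of an arbitrated loop: every node must repeat traffic onward.
    nodes = ["host", "disk1", "disk2", "tape"]
    failed = {"disk2"}                       # one node goes down

    def loop_operational(nodes, failed):
        # Arbitration primitives have to travel the full circle, so a single
        # failed node (absent a bypass circuit) silences the entire loop.
        return all(n not in failed for n in nodes)

    print(loop_operational(nodes, failed))   # False: no device can transmit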
In fact, multiple terabytes of Fiber Channel-interfaced storage are installed every day! Fiber Channel works equally well for storage, networks, video, data acquisition, and many other applications. It is ideal for reliable, high-speed transport of digital audio/video, and aerospace developers are using it for ultra-reliable, real-time networking. Fiber Channel is a fast, reliable data transport system that scales to meet the requirements of any enterprise; today, installations range from small post-production systems on a Fiber Channel loop to very large CAD systems linking thousands of users into a switched Fiber Channel network. Fiber Channel is ideal for these applications:
High-performance storage
Large databases and data warehouses
Storage backup and recovery systems
Server clusters
Network-based storage
High-performance workgroups
Campus backbones
Digital audio/video networks
Fig. 3.18.9 Fiber Channel systems are built from familiar elements
IT systems today require an order of magnitude improvement in performance. High-performance, gigabit Fiber Channel meets this requirement. Fiber Channel is the most reliable, scalable, gigabit communications technology today. It was designed by the computer industry for high-performance communications, and no other technology matches its total system solution.
Fiber Channel vs. Gigabit Ethernet vs. ATM:
Technology application: Fiber Channel - storage, network, video, clusters; Gigabit Ethernet - network; ATM - network, video
Topologies: Fiber Channel - point-to-point, loop hub, switched; Gigabit Ethernet - point-to-point, hub, switched; ATM - switched
Baud rate: Fiber Channel - 1.06 Gbps; Gigabit Ethernet - 1.25 Gbps; ATM - 622 Mbps
Scalability to higher data rates: Fiber Channel - 2.12 Gbps, 4.24 Gbps; Gigabit Ethernet - not defined; ATM - 1.24 Gbps
Guaranteed delivery: Fiber Channel - yes; Gigabit Ethernet - no; ATM - no
Congestion data loss: Fiber Channel - none; ATM - yes
Frame size: Fiber Channel - variable, 0-2KB; ATM - fixed, 53B
Flow control: Fiber Channel - credit based; ATM - rate based
Physical media: Fiber Channel - copper and fiber; ATM - copper and fiber
Protocols supported: Fiber Channel - network, SCSI, video; ATM - network, video
The fastest way to back up a server's internal disk drives is to attach a backup device directly to the server. This method is known as local or distributed backup. The figure below shows a group of systems in a typical distributed backup configuration.
For small environments, distributed backup works very well. As the number of servers requiring backup increases, however, distributed backup starts exhibiting problems. The first problem is cost; a second and far more serious problem is managing the backup process. Distributed backup requires IT technicians to physically touch each system to perform backup operations, and if the server data exceeds the tape capacity, someone must monitor the operation and load new tapes at the proper time. In larger organizations, distributed backup is not viable due to the lack of centralized management and the high administrative cost of managing multiple, discrete backup operations.
The major advantage of a centralized method is ease of management. Advanced backup management products allow multiple backups to be scheduled in advance and to proceed without operator intervention; backups can generally occur during slower weekend periods. For small to medium environments that do not have heavily loaded LANs, conventional centralized backup is probably the most cost-effective and easily managed backup method.
This concept of a dedicated storage network is known as a Storage Area Network or SAN. Backup methods based on SANs offer all of the management advantages of centralized backup, coupled with the high data transfer rates generally associated with directly attached or distributed backup. SANs offer great promise but are relatively new to the market. In addition to increasing backup efficiency, SANs allow storage to be decoupled from the server. Decoupling storage from the server allows IT
individuals more flexibility in controlling storage resources. Currently the only practical interconnect for a SAN is Fiber Channel. The figure below shows a LAN-free backup implementation based on a Fiber Channel SAN. Here, a backup operation involves copying data from internal server storage and writing it to the tape library; data is copied only once before being written to tape. Since backup data does not traverse the network stack, CPU utilization is much lower than with the centralized backup method. Given that the maximum transfer rate of a 1 Gigabit Fiber Channel interconnect is around 100 MBytes/sec, the limiting factor for SAN backup performance is now the tape drive transfer rate. An FC-based SAN can fully back up a 500 GB site in about 15 hours using a two-drive tape library; with a four-drive tape library, the backup can be done in about 7.5 hours. The figure also shows a Fiber Channel-to-SCSI router: since native FC tape libraries are relatively new, this enables the use of legacy SCSI tape libraries.
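The backup-window figures above follow from simple arithmetic, as the short Python check below shows. The per-drive rate is an assumption (roughly 4.6 MB/s of sustained throughput per tape drive, consistent with the figures quoted):

    GB = 1024                      # MB per GB
    site_mb = 500 * GB             # a 500 GB site, expressed in MB
    drive_rate = 4.6               # assumed sustained MB/s per tape drive

    for drives in (2, 4):
        hours = site_mb / (drives * drive_rate) / 3600
        print(f"{drives}-drive library: ~{hours:.1f} hours")
    # ~15.5 and ~7.7 hours -- in line with the 15 and 7.5 hour figures above,
    # and far below the ~100 MB/s a 1 Gigabit Fiber Channel link can carry.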
With Fiber Channel providing 100 MBytes/sec today (moving to 200 MBytes/sec in the near future), there is more than enough backup application bandwidth. The high bandwidth of Fiber Channel also allows external storage to be shared. Figure 5 shows a SAN configuration with external storage and an attached tape library. There are numerous advantages to having storage external to the servers, including storage sharing, the ability to scale storage independently, and easier storage manageability; a full discussion of these advantages is beyond this document's scope.
Having storage external to the server introduces the possibility of performing a server-less backup. In a serverless backup, the server issues a SCSI third party copy command to the backup device. The backup device then becomes a SCSI initiator and copies the data directly from the storage elements. This has the advantage of not requiring servers to copy data from the storage element and send it to the backup device. The server is not part of the data movement and can therefore devote all its compute cycles to serving applications.
3.18.12. Conclusion
A Fiber Channel, LAN-free backup solution offers all the management advantages of a centralized backup scheme coupled with the high performance of distributed backup. For cost-sensitive solutions, a Fiber Channel hub can replace the switch; hubs are less expensive than switches but do not scale well for configurations that involve external storage. LAN-free backup using Fiber Channel is an excellent solution for environments that have a heavily congested LAN and need to perform system backups without impacting LAN performance. LAN-free backup is a first step into SAN technology. With the addition of external storage, the true power of SANs can be realized: applications such as storage centralization, virtualization, and clustering allow IT environments to reach new levels of reliability, scalability, and maintainability.
Ease of installation
A recipe book that includes an installation guide and user's manual makes installation easy. Installation and technical support are available through a single point of contact.
Flexibility of design
As storage demand grows, component selection is not limited to one brand.
In this design, the tape server can stream data directly from the storage to the bridge device at 85 to 90 MB/sec. The only bottlenecks are the speed of the tape library and the realized throughput of the tape server itself.
Typically, there are several major elements to a server-less backup solution. First is the hardware infrastructure deployed for LAN-free backup. Second, a bridge device such as the ATTO FiberBridge, capable of acting as a copy device or independent data movement unit, is needed to actually move the data. Finally, special control software, such as Legato's Celestra, issues commands to the copy device and ensures smooth operation of the system. A tape server is still necessary, but it acts more as a place to house the control software than as a system device dedicated to moving data. The copy device follows a philosophy similar to that of network computing devices sometimes referred to as network appliances: it is a specialized device with sufficient, specialized resources to perform a specific rather than general activity within a network or SAN. In the case of server-less backup, the copy device needs enough compute power and memory to support the movement of large blocks of data. It must also support connections to the other devices involved in moving the data, in this case disk drives and tape libraries. Finally, the device must provide a software interface that allows it to interact with the applications that control, manage, and track the movement of data in the SAN. Currently, the Extended Copy command interface is the most popular interface for these types of applications. In general, the market has looked to bridge devices, since many of them, including the ATTO FiberBridge, have these attributes.
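The division of labour can be sketched in a few lines of Python. This is a schematic of the data-mover role only: the real Extended Copy interface is a SCSI command whose parameter list carries target and segment descriptors, not a Python API, and the class and method names below are invented for illustration.

    class CopyDevice:
        """Toy data mover: copies blocks disk -> tape without touching the server."""

        def __init__(self, disk, tape, chunk=64 * 1024):
            # 'disk' and 'tape' stand in for SAN-attached devices the copy
            # device can reach directly as a SCSI initiator.
            self.disk, self.tape, self.chunk = disk, tape, chunk

        def extended_copy(self, offset, length):
            # The application server only issues this request; the copy device
            # then moves every byte itself, leaving the server's CPU free.
            moved = 0
            while moved < length:
                block = self.disk.read(offset + moved,
                                       min(self.chunk, length - moved))
                self.tape.write(block)
                moved += len(block)
            return moved   # progress reported back to the control software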
3.19. iSCSI
3.19.1 Introduction of iSCSI
With the release of Fiber Channel and the SANs built on it, the storage world staked its future on network access to storage devices, and almost everyone announced that the future belonged to storage area networks. For several years the FC interface was the only standard for such networks, but today many realize that this is no longer so. FC-based SANs have some disadvantages, chiefly price and the difficulty of access to remote devices. Several initiatives now being standardized are meant to solve or diminish these problems; the most interesting of them is iSCSI.
The word iSCSI often appears in the press and in the ads of leading storage device makers. However, views differ widely: some consider iSCSI the indisputable leader for data storage systems in the near future, while others had given it up for lost even before it was born. iSCSI (Internet Small Computer System Interface) is a TCP/IP-based protocol for establishing and managing connections between IP-based storage devices, hosts and clients.
Scalability
The switched architecture of SANs enables IT managers to expand storage capacity without shutting down applications.
Among the tasks that can already be implemented effectively with modern methods are:
Consolidation of data storage systems
Data backup
Server clustering
Replication
Disaster recovery
New capabilities that can be implemented effectively with IP Storage include:
Geographic distribution of SANs
QoS
Security
In addition, new storage systems for which iSCSI is native provide further advantages:
A single technology for connecting storage systems, servers and clients within the LAN, WAN and SAN
The industry's great experience with Ethernet and SCSI technologies
The possibility of substantial geographic remoteness of storage systems
The ability to use the management tools of TCP/IP networks
To transfer data to storage devices with an iSCSI interface, it is possible to use not only the media, switches and routers of existing LANs/WANs but also ordinary network cards on the client side. This, however, costs considerable processor power on the client using such a card: according to the developers, a software iSCSI implementation can reach Gigabit Ethernet data rates only at a significant (about 100%) CPU load. It is therefore recommended to use special network cards that offload the CPU from TCP stack processing. At present (June 2002), such cards are produced by Intel. The Intel PRO/1000T IP Storage Adapter, offered at 700 USD, contains a powerful XScale processor and 32 MB of memory, and moves iSCSI and TCP/IP processing and TCP/IP checksum calculations onto the integrated processor. According to the company, it can deliver up to 500 Mbit/s at a 3-5% CPU load on the host system.
3.19.6. Applications that can take advantage of these iSCSI benefits include:
Disaster recovery environments, where stored data needs to be mirrored or recovered in a remote location, can take advantage of the distance extensions that iSCSI enables over an IP network.
Fiber Channel server and storage extensions.
Storage backup over an IP network enables systems to maintain backups online, always ready and available to restore data.
Storage virtualization and storage resource management applications can create a shared storage environment for all users on a global IP network.
Any application can now take advantage of data from remote sites accessible over an IP network, expanding the usefulness of this data to e-commerce applications.
Here, each server, workstation and storage device supports the Ethernet interface and an iSCSI protocol stack; IP routers and Ethernet switches are used for network connections. The SAN makes it possible to use the SCSI protocol in network infrastructures, providing high-speed data transfer at the block level between multiple elements of data storage networks. The Internet Small Computer System Interface likewise provides block data access, but over TCP/IP networks. The architecture of SCSI itself is based on the client/server model: a client (for example, a server or workstation) initiates read or write requests to a target (for example, a data storage system). Commands sent by the client and processed by the target are placed in a Command Descriptor Block (CDB); the target executes the command and signals completion. Encapsulation and reliable delivery of CDB transactions between initiators and targets over the TCP/IP network is the main function of iSCSI, which must operate in a medium untypical of SCSI: the potentially unreliable medium of IP networks. Below is a model of the iSCSI protocol layers, which gives an idea of the encapsulation order of SCSI commands for their delivery over a physical carrier.
The iSCSI protocol controls the transfer of data blocks and confirms that I/O operations are truly completed; this, in turn, is carried over one or more TCP connections.
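The encapsulation order can be illustrated with a small Python sketch. The READ(10) CDB layout below is standard SCSI; the surrounding header, however, is deliberately reduced (a real iSCSI Basic Header Segment is 48 bytes with many more fields), so treat this as a diagram in code, not a wire-accurate PDU:

    import struct

    def read10_cdb(lba, blocks):
        """Standard 10-byte SCSI READ(10) CDB: opcode 0x28, 4-byte LBA,
        2-byte transfer length (plus flags, group number and control bytes)."""
        return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

    def iscsi_pdu(cdb, task_tag):
        # Simplified stand-in for the iSCSI header: opcode 0x01 (SCSI Command)
        # plus a task tag, followed by the CDB. TCP/IP framing is added below
        # this layer, exactly as the protocol model above describes.
        return struct.pack(">BI", 0x01, task_tag) + cdb

    pdu = iscsi_pdu(read10_cdb(lba=2048, blocks=8), task_tag=1)
    print(pdu.hex())   # the byte string that rides inside the TCP stream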
An iSCSI node is an identifier for the SCSI devices (in a network entity) available through the network. Each iSCSI node has a unique iSCSI name (up to 255 bytes), formed according to the rules adopted for Internet nodes.
For example:
iqn.com.ustar.storage.itdepartment.161
Such a name has an easily readable form and can be processed by the Domain Name System (DNS). An iSCSI name provides correct identification of an iSCSI device irrespective of its physical location, while for the actual data transfer between devices it is more convenient to use the combination of an IP address and a TCP port, which a Network Portal provides. Together with iSCSI names, the iSCSI protocol supports aliases, which appear in administration systems so that system administrators can identify and manage devices more easily.
At the end of a transaction the initiator sends or receives the last data and the target sends a response confirming that the data was transferred successfully. The iSCSI logout command is used to complete a session; it carries information on the reason for completion, and after a connection error it can indicate which connection should be closed so that troublesome TCP connections can be shut down.
Here is the hierarchy of error handling and recovery after failures in iSCSI:
At the lowest level, errors are detected and data recovered at the SCSI task level, for example by retransmitting a lost or damaged PDU.
At the next level, the TCP connection carrying a SCSI task may fail; in that case an attempt is made to recover the connection.
Finally, the iSCSI session itself can be damaged. Terminating and recovering a session is usually not required if recovery works correctly at the other levels, but the opposite can happen. Such a situation requires all TCP connections to be closed, all tasks and unfulfilled SCSI commands to be completed, and the session to be restarted via a repeated login.
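In control-flow terms, the hierarchy amounts to nested retries. The sketch below is a minimal Python illustration under invented names (the Session class is a scripted stand-in, not a real initiator, which would follow the RFC-defined recovery classes):

    class PduError(Exception): pass       # level 1: lost or damaged PDU
    class LinkError(Exception): pass      # level 2: failed TCP connection

    class Session:
        """Stand-in iSCSI session, scripted to fail twice and then succeed."""
        def __init__(self):
            self.faults = [PduError, LinkError]
        def send_task(self):
            if self.faults:
                raise self.faults.pop(0)()
            return "status: good"
        def reconnect(self):
            print("recovering TCP connection")
        def restart(self):
            print("closing all connections, repeating login")

    def do_io(session):
        for _ in range(3):
            try:
                return session.send_task()    # normal completion
            except PduError:
                continue                      # level 1: retransmit the PDU
            except LinkError:
                session.reconnect()           # level 2: recover the connection
        session.restart()                     # level 3: restart the session
        return session.send_task()

    print(do_io(Session()))   # recovers twice, then prints "status: good"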
3.19.11. Security
Since iSCSI can be used in networks where data can be accessed illegally, the specification allows for different security methods. Encryption means such as IPSec, which operate at lower layers, require no additional negotiation because they are transparent to higher layers, including iSCSI. Various solutions can be used for authentication, for example Kerberos or private key exchange, and an iSNS server can be used as a repository of keys.
iSCSI Snap Servers, storage arrays and HBAs are flexible, cost-effective and easy-to-manage. Ideal for building iSCSI-based networked storage infrastructures for remote offices, email and other databases, or as primary storage for data that doesn't require the high performance of Fiber Channel SANs, they provide a high-ROI storage option for businesses of all sizes.
3.19.12.2. HBAs
Adaptec 7211C (Copper) - 1 Gb ASIC-based iSCSI copper adapter with full protocol offload
Adaptec 7211F (Fiber Optic) - 1 Gb ASIC-based iSCSI fiber optic adapter with full protocol offload
Highlights
The premier choice for connectivity
High-speed iSCSI SAN connectivity with minimal CPU utilization
Fully offloads protocol processing from the host CPU
Enables any enterprise that uses standard Ethernet technology to consolidate storage, increase data availability, and reap the benefits of SANs
Ideal for environments where storage consolidation, LAN-free backup, and remote replication are required
Benefits
Delivers outstanding iSCSI performance using familiar, affordable technology
Ideal for environments where storage consolidation, LAN-free backup, and remote replication are required
Database, e-mail, and disaster recovery applications are perfectly suited to iSCSI SANs with iSCSI HBAs
Fully offloads protocol processing from the host CPU
High-speed iSCSI SAN connectivity with minimal CPU utilization
Enables any enterprise that uses standard Ethernet technology to consolidate storage, increase data availability, and reap the benefits of SANs
Enables low-latency SCSI "blocks" to be transported via Ethernet and TCP/IP
3.19.13. Conclusion
I'm quite sure that Fiber Channel will not disappear in the near future and that the FC SAN market will keep developing. At the same time, the IP Storage protocols will make it possible to use storage area networks effectively in applications for which FC cannot provide an effective implementation. With the FCIP and iFCP protocols, data storage networks can be geographically distributed, and iSCSI will bring the advantages of the SAN to spheres that popular technologies have so far served poorly or not at all.
3.19.13.1. P.S.
The concept of the World Wide Storage Area Network is based on this rapid development of data storage networks. WWSAN provides an infrastructure to support high-speed access and storage of data
distributed all over the world. The concept is very close to the WWW but is based on different services. One example is serving a manager who travels around the world giving presentations: the WWSAN transparently moves such "mobile" data as its owner travels, so wherever the manager happens to be, he always has high-speed access to the data he needs, without the complicated, inefficient synchronization that the WWW would require. The concept of building the World Wide Storage Area Network fits neatly into the development of modern IP Storage technologies.
FCIP effectively solves the problem of geographic distribution and of integrating SANs over large distances. The protocol is entirely transparent to existing FC SANs and uses the infrastructure of modern MAN/WAN networks: to merge geographically remote FC SANs with this new functionality, you need only an FCIP gateway and a connection to the MAN/WAN. A geographically distributed
SAN based on FCIP is seen by SAN devices as an ordinary FC network, and by the MAN/WAN it is connected to as ordinary IP traffic.
3.19.14.3. iFCP
The Internet Fiber Channel Protocol provides FC traffic delivery over TCP/IP transport between iFCP gateways. In this protocol the FC transport layer is replaced with the transport of the IP network, and traffic between FC devices is routed and switched by means of TCP/IP. The iFCP protocol allows existing FC storage systems to connect to an IP network, with support for the network services these devices require.
www.wilshiresoft.com info@wilshiresoft.com
Page 78 of 123
iSCSI sends SCSI commands over an IP network. As long as the machine requesting data and the machine serving the data both understand iSCSI, the requesting machine will see drives and data on the server as "local." This lets you expand the data in your data server (or group of servers) rather than throwing disks into every network app server. In iSCSI parlance, an initiator is a device or software that maps SCSI into IP: it wraps SCSI commands in an IP packet and ships them to an iSCSI target. The target machine unwraps iSCSI packets from IP and acts upon the iSCSI commands. It returns an iSCSI response or multiple responses, which are usually blocks of data. The server is your application server, and the storage box is the machine serving up iSCSI drives. (We're using storage box to represent anything from a Linux software iSCSI target to a full-blown SAN with iSCSI support.)
You need a gigabit copper network for an iSCSI SAN. If you try running iSCSI over a 100-Mbps network, you'll be disappointed. Because iSCSI has a request/response for every packet transferred, and because network performance degrades before 100 percent saturation, the best performance you'll get from 100 Mbps is about 6.25 MBps of throughput--a rough estimate that includes time to wrap and unwrap data packets and respond to each packet--and realistic disk transfer is closer to 5 MBps. Bottom line: that is not good, considering that most drives run in the 40- to 320-MBps transfer range. Besides, Gigabit Ethernet is affordable: gigabit adapters start at $60; switches, $120. And Gigabit Ethernet has the throughput for treating iSCSI as a local drive.
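The arithmetic behind those numbers runs roughly as follows. The 50% efficiency factor is an assumption standing in for the per-packet request/response turnarounds and wrapping overhead described above:

    def iscsi_throughput(link_mbps, efficiency=0.5):
        """Raw link rate in Mbps -> usable iSCSI transfer in MB per second."""
        return link_mbps / 8 * efficiency

    # 100 Mbps Ethernet: 12.5 MB/s raw, ~6.25 MB/s after protocol overhead,
    # and closer to 5 MB/s once the network backs off before full saturation.
    print(iscsi_throughput(100))     # 6.25
    print(iscsi_throughput(1000))    # 62.5 -- why gigabit is the practical floor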
Don't put your iSCSI SAN on your regular IP network, either. There's plenty of traffic running over that, and iSCSI is bandwidth-intensive. Also consider whether your servers have enough CPU power to handle iSCSI: unwrapping and reassembling iSCSI packets can take a lot of CPU time, since iSCSI needs its packets in order and TCP delivers that ordering only by buffering and reassembling them--intensive TCP/IP processing that adds load. So if your server CPU is moderately utilized, you'll need an HBA (Host Bus Adapter) or TOE (TCP Offload Engine). These devices take care of some, or all, of the iSCSI and TCP processing without burdening the CPU. HBAs are storage-only cards that connect your machine to a target using iSCSI. TOEs are TCP-only cards that off-load TCP processing from the CPU. A TOE is useful in iSCSI because of the high volume of packets transferred, while an HBA also processes SCSI commands--another data-intensive task. HBAs cost $600 to $1,200 each, so add them only to machines that need the CPU relief. And check with your HBA vendor to ensure that its product supports regular TCP/IP communications (most don't); if it doesn't, buy a separate gigabit NIC for any machine that will handle management for your storage network. Ideally, the NIC should sit on a separate machine on the gigabit SAN--one not participating in storage work--for network management.
3.19.16. Setup
Part of iSCSI's appeal is that you don't need specialized networking knowledge--as you do with Fiber Channel SANs--to set it up. It's relatively simple to build and configure. First, set up your IP network. Install Ethernet cards or HBAs, and remember to put one in your storage server if you have a target that requires a card or blade to make it iSCSI-ready. You have several options. At the low end is the UNH-iSCSI open-source project, which builds a target for Linux: you can install it on any Linux machine, customize the config file, fill the box with drives and use it as your storage box. This is a good place to start if your budget is tight. Alternatively, you can buy premade storage boxes that are iSCSI targets with plenty of room for drives; you'll need to choose the number of drives, the type of drive (SCSI, ATA or Fiber Channel), how much expandability you need in the device, and the amount of throughput. Another option is to make your existing Fiber Channel SAN and NAS equipment iSCSI-compatible, with iSCSI cards for SANs and iSCSI gateways for NAS products. Next, run cables to your gigabit switch. Remember, you're creating a separate IP network from your backbone; IP networking is much the same no matter the medium--configure the network using your OS guidelines.
Technical Highlights
A 16-port switch delivers an industrial-strength framework for enterprise SAN fabrics.
Each port delivers 100 MB/sec full-duplex line speed.
Offers superior interoperability with a wide range of servers and storage devices.
Fabric OS provides powerful fabric management capabilities.
Provides swappable, redundant power supplies and cooling fans for high reliability, availability, and serviceability.
Rack-mount, desktop or drop-in.
General
Supports seamless connectivity to Fiber Channel Arbitrated Loop and full switch fabric configurations. Supports disk, tape and removable devices.
Technical Highlights
Single GBIC Fiber Channel port.
Dual independent SCSI buses.
RS-232, Ethernet and Fiber Channel in-band configuration, management and monitoring.
Support for full duplex and Class 2 transfers.
Rack-mount, desktop or drop-in.
General
Reliably attaches SCSI devices to Fiber Channel arbitrated loop and fabric infrastructures. Supports disk, tape and removable devices.
SCSI Connectivity
Two independent SCSI buses.
5. Emerging Technologies
5.1. Introduction to InfiniBand
InfiniBand is a new high-speed, enterprise-wide I/O technology. It provides high-performance I/O for networked computing platforms and defines the requirements for creating an InfiniBand network. The benefits of InfiniBand over existing technologies include more room for growth, higher-speed data transfer and easy integration with legacy systems. Today's bus-based architecture is limited in its ability to meet the needs of the evolving data center: the speed of the Peripheral Component Interconnect (PCI) bus, the 'gateway' between external communications (the Internet) and the CPU, has not increased in tandem with CPU speed and Internet traffic, creating a bottleneck. InfiniBand (Infinite Bandwidth) promises to eliminate this bottleneck. InfiniBand, a switched-fabric architecture for I/O systems and data centers, is an open standard that implements a network for I/O connectivity, thereby decoupling the I/O path from the computing elements of a configuration (the CPU and memory). InfiniBand allows for improvements in network performance, processor efficiency, reliability, and scalability. Despite these compelling benefits, the enormous investment in PCI-based architectures will make a phased implementation of InfiniBand necessary.
Even in its slowest configuration, InfiniBand's throughput is on par with the fastest PCI bus, SCSI, Gigabit Ethernet, and Fiber Channel technology; implementation of the highest-class InfiniBand architecture will thus increase throughput by twelve times or more. InfiniBand enables systems to keep up with ever-increasing customer requirements for reliability, availability, and scalability, increased bandwidth, and support for Internet technology.
Processor Efficiency -- InfiniBand's channel adapters are intelligent, which allows them to offload much of the communications processing from the operating system and CPU. InfiniBand shifts the burden of processing I/O from the server's CPU onto the InfiniBand network, freeing the CPU for other work.
Reliability -- Reliability is superior to today's PCI model because data can take many paths across the InfiniBand architecture. For example, a processor could have two ports, each connected to one of two switches; if one link failed, all traffic could be rerouted over the other. By building a network of redundant pathways using multiple switches, reliability can be achieved.
Scalability -- In an InfiniBand architecture, the center of the Internet data center shifts from the server to a switched fabric. Servers, networking, and storage all access a common fabric, and each of these devices can scale independently based on the needs of the data center.
The data rates and pin counts for these links are shown in the table below.
1X link: 4 signals; 2.5 Gb/s signaling rate; 2.0 Gb/s data rate
4X link: 16 signals; 10 Gb/s signaling rate; 8 Gb/s data rate
12X link: 48 signals; 30 Gb/s signaling rate; 24 Gb/s data rate
Note: The bandwidth of an InfiniBand 1X link is 2.5 Gb/s. The actual raw data bandwidth is 2.0 Gb/s (data is 8b/10b encoded). Due to the link being bi-directional, the aggregate bandwidth with respect to a bus is 4 Gb/s. Most products are multi-port designs where the aggregate system I/O bandwidth will be additive. InfiniBand defines multiple connectors for out of the box communications. Both fiber and copper cable connectors are defined as well as a backplane connector for rack-mounted systems.
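The note's figures follow directly from the encoding and the link widths, as this quick Python check shows:

    # 8b/10b encoding: 8 data bits ride in every 10 signalling bits, and the
    # links are bi-directional, so aggregate bandwidth doubles the data rate.
    for width, name in ((1, "1X"), (4, "4X"), (12, "12X")):
        signalling = 2.5 * width          # Gb/s per direction
        data = signalling * 8 / 10        # raw data bandwidth
        print(f"{name}: {signalling} Gb/s signalling, {data} Gb/s data, "
              f"{2 * data} Gb/s aggregate")
    # 1X -> 2.5 / 2.0 / 4.0, matching the note above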
Packets
There are two types of packets within the link layer: management packets and data packets. Management packets are used for link configuration and maintenance; device information, such as Virtual Lane support, is determined with management packets. Data packets carry up to 4 KB of transaction payload.
Switching
Within a subnet, packet forwarding and switching are handled at the link layer. All devices within a subnet have a 16-bit Local ID (LID) assigned by the Subnet Manager, and all packets sent within a subnet use the LID for addressing. Link-level switching forwards packets to the device specified by the Destination LID in the Local Route Header (LRH), which is present in all packets.
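Functionally, link-level switching is a table lookup keyed by the 16-bit Destination LID. A minimal Python sketch (toy data structures, not the actual IBA forwarding-table format):

    # Forwarding table: Destination LID -> output port, populated by the
    # Subnet Manager when it assigns LIDs to the devices in the subnet.
    forwarding_table = {0x0001: 1, 0x0002: 3, 0x0003: 2}

    def switch_packet(packet):
        dlid = packet["lrh"]["dlid"]       # Destination LID from the LRH
        return forwarding_table[dlid]      # the port the packet leaves on

    print(switch_packet({"lrh": {"dlid": 0x0002}}))   # -> port 3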
QoS
QoS is supported by InfiniBand through Virtual Lanes (VL). These VLs are separate logical communication links which share a single physical link. Each link can support up to 15 standard VLs and one management lane (VL 15). VL15 is the highest priority and VL0 is the lowest. Management packets use VL15 exclusively. Each device must support a minimum of VL0 and VL15 while other VLs are optional. As a packet traverses the subnet, a Service Level (SL) is defined to ensure its QoS level. Each link along a path can have a different VL, and the SL provides each link a desired priority of communication. Each switch/router has a SL to VL mapping table that is set by the subnet manager to keep the proper priority with the number of VLs supported on each link. Therefore, the IBA can ensure end-to-end QoS through switches, routers and over the long haul.
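The SL-to-VL mapping that the subnet manager programs into each switch can likewise be pictured as a small table. The values below are illustrative only; a real fabric derives them from the number of VLs each link actually supports:

    VL_MGMT = 15                          # VL15 carries management packets only

    # This link supports four data VLs, so the sixteen service levels
    # fold down onto VL0-VL3 (a per-port table set by the subnet manager).
    sl_to_vl = {sl: sl % 4 for sl in range(16)}

    def vl_for(service_level, is_management=False):
        return VL_MGMT if is_management else sl_to_vl[service_level]

    print(vl_for(9))                      # SL 9 travels on VL1 over this link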
Data integrity
At the link level there are two CRCs per packet, the Variant CRC (VCRC) and the Invariant CRC (ICRC), which ensure data integrity. The 16-bit VCRC includes all fields in the packet and is recalculated at each hop; the 32-bit ICRC covers only the fields that do not change from hop to hop. The VCRC provides link-level data integrity between two hops and the ICRC provides end-to-end data integrity. In a protocol like Ethernet, which defines only a single CRC, an error can be introduced within a device, which then recalculates the CRC; the check at the next hop would reveal a valid CRC even though the data has been corrupted. InfiniBand includes the ICRC so that when a bit error is introduced, the error will always be detected.
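The same point can be demonstrated with ordinary CRC-32 standing in for both checks (InfiniBand's real VCRC is a 16-bit polynomial and the ICRC a different 32-bit one; the logic, not the polynomial, is what matters here):

    import zlib

    payload = b"application data"
    icrc = zlib.crc32(payload)        # end-to-end check, never recalculated

    # A faulty hop corrupts the payload and then, like a per-hop VCRC or a
    # single-CRC protocol such as Ethernet, recomputes its hop-level check:
    corrupted = b"applicatiom data"
    vcrc = zlib.crc32(corrupted)

    print(vcrc == zlib.crc32(corrupted))   # True  -- per-hop CRC misses the error
    print(icrc == zlib.crc32(corrupted))   # False -- invariant CRC catches it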
5.5.2 Switch
Switches are the fundamental component of an InfiniBand fabric. A switch contains more than one InfiniBand port and forwards packets from one of its ports to another based on the LID contained within the layer-two Local Route Header. Other than management packets, a switch does not consume or generate packets. Like a channel adapter, switches are required to implement an SMA (Subnet Management Agent) to respond to Subnet Management Packets. Switches can be configured to forward either unicast packets (to a single location) or multicast packets (addressed to multiple devices).
5.5.3 Router
InfiniBand routers forward packets from one subnet to another without consuming or generating packets. Unlike a switch, a router reads the Global Route Header to forward the packet based on its IPv6 network layer address. The router rebuilds each packet with the proper LID on the next subnet.
InfiniBand Architecture has created an opportunity for server design innovation, including dense server-blade implementations. It draws on existing technologies to create a flexible, scalable, reliable I/O architecture that interoperates with any server technology on the market. With broad adoption, InfiniBand is transforming the industry.
Racks of servers that can be managed as one autonomous unit
Servers that can share I/O resources
True "plug-and-play" I/O connectivity
Through connectivity to the InfiniBand fabric, data center managers can react more quickly to fluctuations in traffic patterns, upswings in data center processing demand, and the need to retool to meet changing business needs. The net result is a more agile data center with the inherent flexibility to tune performance to an ever-changing landscape.
All DLT and LTO tape products write linear serpentine data tracks parallel to the edge of the tape (Figure 1). In these technologies, half-inch tape moves linearly past head assemblies that house the carefully aligned read and write heads. To create the serpentine pattern on the tape, the head assembly moves up or down to precise positions at the ends of the tape. Once the head assembly is in position, tape motion resumes and another data track is written parallel to, and in between, the previously written tracks. Both DLT and LTO technologies position the read heads slightly behind the write heads to accomplish a read-while-write verify. Older DLT and LTO technologies use the edge of the tape or a pre-written servo track as a tracking reference during read and
write operations. The new Super DLT technology, however, uses an optical-assist servo technology, called Pivotal Optical Servo, to align its heads to the proper tracks.
The Use of Azimuth to Increase Linear Capacity
Azimuth is defined as the trajectory of an angle measured in degrees going clockwise from a base point. In many tape and disk applications, azimuth has long been used to increase storage densities. When azimuth is used, tracks can be pushed together on a tape, eliminating the guard bands that used to be required between adjacent tracks. The guard bands were eliminated, for example, in DLT's transition from the DLT 4000 to the DLT 7000 and DLT 8000 technologies. The DLT 4000 used normal linear recording, in which the head assembly operated in one position perpendicular to the tape, writing data blocks in a true linear pattern. The DLT 7000 and DLT 8000 incorporated a modified linear serpentine method called Symmetrical Phase Recording (SPR), which allows the head assembly to rotate into three different positions, thereby allowing data blocks to be written in a herringbone (SPR) pattern, as shown in Figure 2 below. This method yields higher track density and higher data capacity, eliminating the space wasted on guard bands. A third, vertical head position (zero azimuth) allows the DLT 7000 and DLT 8000 drives to read DLT 4000 tapes.
Fig. 6.1.1.2 - Logical diagram of normal Linear and SPR Linear Recording.
Read heads are positioned just behind the write heads, allowing a read-while-write verify that ensures the data integrity of each data stripe. A special servo head on the drum and a servo track on the tape are used for precise tracking during subsequent read operations. All helical-scan tape drives use azimuth to maximize the use of the tape media: rather than moving the head assembly itself as linear devices do, helical recording creates azimuth by mounting the heads at angles with respect to each other.
Media load and file access times are important factors to consider as per-tape capacities rise or when tape drives are integrated into robotic tape libraries. Media load time is defined as the amount of time between cartridge insertion and the drive becoming ready for host system commands. File access time is defined as the time between when the drive receives a host-system command to read a file and the time when the drive begins to read the data. File access times are typically expressed as averages, since the requested file might be located in the middle of the tape or at either end. Times are usually specified as the time required to reach the middle. Drive vendors typically state specifications for both media load and file access. The specifications for the four mid-range tape technologies are shown in the following table.
Tape drives compared for media load and file access times: Quantum Super DLT; HP LTO Surestore Ultrium 230; IBM LTO 3580 Ultrium; Seagate Viper 200 LTO Ultrium; Sony AIT-1; Sony AIT-2; Sony AIT-3.
* Times obtained from the drive manufacturers' published information. The Sony AIT drives offer a much faster media load time and file access time, making these technologies an obvious choice for applications requiring fast data retrieval. The AIT time advantage is due in part to the unique Memory In Cassette (MIC) feature, which consists of an electrically erasable programmable read-only memory chip (Flash EEPROM) built into the Sony AME tape cartridge. The flash memory stores information previously kept in a hidden file written before a tape's Logical Beginning Of Tape (LBOT). Through the MIC feature, Sony's AIT drives reduce wear and tear on mechanical components during the initial load and offer faster file access. MIC technology is now being used in today's LTO tape drives.
Compression algorithms and assumed ratios:
Mammoth: IDRC, 2:1
Mammoth-2: ALDC, 2.5:1
DLT 8000: DLZ, 2:1
Super DLT: DLZ, 2:1
LTO Ultrium: ALDC, 2:1
AIT: ALDC, 2.6:1
* Data compression figures obtained from the drive manufacturers' published information. Native and compressed capacities for each type of tape are shown in the table below; the comparisons are based on the maximum tape lengths available at the time of this writing.
Native and compressed capacities and transfer rates:
Mammoth: 20 GB native, 40 GB compressed; 3 MB/sec native, 6 MB/sec compressed
Mammoth-2: 60 GB native, 150 GB compressed; 12 MB/sec native, 30 MB/sec compressed
DLT 8000: 40 GB native, 80 GB compressed; 6 MB/sec native, 12 MB/sec compressed
Super DLT 220: 110 GB native, 220 GB compressed; 11 MB/sec native, 22 MB/sec compressed
HP Surestore Ultrium 230: 100 GB native, 200 GB compressed; 15 MB/sec native, 30 MB/sec compressed
IBM 3580 Ultrium: 100 GB native, 200 GB compressed; 15 MB/sec native, 30 MB/sec compressed
Seagate Viper 200 Ultrium: 100 GB native, 200 GB compressed; 16 MB/sec native, 32 MB/sec compressed
Sony AIT-1: 35 GB native, 91 GB compressed; 3 MB/sec native, 7.8 MB/sec compressed
Sony AIT-2: 50 GB native, 130 GB compressed; 6 MB/sec native, 15.6 MB/sec compressed
Sony AIT-3: 100 GB native, 260 GB compressed; 12 MB/sec native, 31.2 MB/sec compressed
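Each compressed figure is simply the native value scaled by the vendor's assumed ratio, as a quick Python check of the AIT rows (2.6:1 ALDC) confirms:

    # compressed = native x the vendor's assumed compression ratio
    for native_gb, native_mbs in ((35, 3), (50, 6), (100, 12)):   # AIT-1/2/3
        print(round(native_gb * 2.6), "GB,", round(native_mbs * 2.6, 1), "MB/sec")
    # -> 91 GB, 7.8 MB/sec; 130 GB, 15.6 MB/sec; 260 GB, 31.2 MB/sec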
6.3 Reliability
In general, tape drive reliability can mean many things to many people. Tape drive vendors have notoriously slanted tape technology specifications to lure users into their technology. Following are two sets of reliability specifications often used when comparing mid-range tape technologies.
One measure of tape drive reliability is Mean Time Between Failure (MTBF), a statistical value describing how long, on average, the drive mechanism will operate without failure. In reality, drive reliability varies greatly and cannot be accurately predicted from a manufacturer's MTBF specification: environmental conditions, cleaning frequency, and duty cycle all significantly affect actual reliability, manufacturers usually don't include head life in the MTBF specification, and their duty-cycle assumptions vary. Tape drive manufacturers often add a disclaimer that MTBF figures should be used only for general comparison. Head-life specifications (in hours) are subject to some of the same interpretation problems as MTBF, but combined with other reliability specifications they offer a good comparison of performance in high duty-cycle environments. The table below shows how the reliability specifications compare.
MTBF *: 250,000 hours @ 20% duty cycle; 300,000 hours @ 20% duty cycle; 250,000 hours @ 100% duty cycle (five drives); 250,000 hours @ 40% duty cycle (two drives); 400,000 hours @ 100% duty cycle
Head Life: 30,000 hours; 50,000 hours; 50,000 hours; 30,000 hours; **; 60,000 hours; **; 50,000 hours; 50,000 hours; 50,000 hours
Approximate AFR: 2.5%; 4.5%; **; 1.5%
Media Type: AME; MP; MP; LTO; MP; AME
Media Uses: 10,000; 15,000; 17,850; **; 15,000
* Rates obtained from the drive manufacturers' published information.
** Not published by the manufacturer.
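One reason these figures resist comparison is that the annualized failure rate implied by an MTBF depends entirely on the duty-cycle assumption behind it. The conversion below is a deliberate simplification (vendors' own models differ, which is exactly the problem described above):

    HOURS_PER_YEAR = 8760

    def approx_afr(mtbf_hours, duty_cycle):
        """Annualized failure rate implied by an MTBF at a given duty cycle."""
        return HOURS_PER_YEAR * duty_cycle / mtbf_hours

    print(f"{approx_afr(250_000, 1.0):.1%}")   # 3.5% at 100% duty
    print(f"{approx_afr(250_000, 0.2):.1%}")   # 0.7% at 20% duty -- same MTBF,
                                               # a five-fold swing in implied AFR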
When an MP tape is read by the Mammoth drive, the drive will not accept another tape until a cleaning cartridge is inserted. Cleaning is required because the MP media binder chemistry is prone to leave debris on the heads and in the tape path. This raises a reliability question for Mammoth drives reading MP tapes on a consistent basis; Exabyte has not published any specifications or test reports that quantify reliability when using the Mammoth drive in this mode. The implications of cleaning are even less appealing when using the drive with a mixed media set in a tape library environment where backup software does not recognize the difference in media types. It is perhaps more realistic for Mammoth users to transition to AME media and avoid the problems associated with MP media.
As tape technologies evolve, a drive manufacturer must weigh the size of its installed base and the willingness of that base to switch to a new media type as new tape drives are introduced. In general, new tape drives utilize new media types to take advantage of the latest head and media components. Unfortunately, compression algorithms and media types have been carried along past their useful life just to extend the installed base's backward-read (and sometimes write) capabilities.
Sony's third-generation AIT product, AIT-3, is the first tape drive to double the transfer rates of previous-generation media. For example, an AIT-1 cartridge in an AIT-3 drive will achieve double the transfer rate of that same cartridge in an AIT-1 drive. (That transfer rate is higher than an AIT-2 cartridge in an AIT-2 drive, but still not as high as an AIT-3 cartridge in an AIT-3 drive.) An AIT-2 cartridge in an AIT-3 drive, however, will match the transfer rate of AIT-3 cartridges in AIT-3 drives. Other technologies have always forced previous-generation speeds when using older media. So while it is appealing to read older tapes with newer drives, most customers have ended up transitioning their media pool to the newer tapes, because backup windows become unpredictable when new and old media are mixed inside an automated tape library. Tape library manufacturers like Spectra Logic, however, now provide solutions in which a user can logically partition old and new media in one tape library; such partitioning helps leverage the end user's original investment in the older tapes.
Sony stands alone as having full ownership control over its deck manufacturing, head technology, and media; several companies, by contrast, have been very dependent upon other companies to release their next product. Three generations have historically been the industry norm for tape drive evolution: evolving semiconductor technologies, compression algorithms, heads, and media processes have made it very difficult for drive vendors to extend older technologies past three generations while remaining competitive with newer drive products and backward compatible with the existing installed base.
Roadmaps of Drive Performance (Native Transfer Rates) *
AIT: AIT-1-XL, 3 MB/sec (1998); AIT-2, 6 MB/sec (1999); AIT-3, 12 MB/sec (2001); AIT-4, 24 MB/sec (2003); AIT-5, 48 MB/sec (2005); AIT-6, 96 MB/sec (2007)
DLT: DLT 7000, 5 MB/sec (1998); DLT 8000, 6 MB/sec (1999); Super DLT 220, 11 MB/sec (2001); Super DLT 320, 16 MB/sec (2003); Super DLT 640, 32 MB/sec (2005); Super DLT 1280, 50 MB/sec (2006); Super DLT 2400, 100 MB/sec (2007)
Mammoth: 3 MB/sec (1998, 1999); 12 MB/sec (2000); ** thereafter
HP LTO: Surestore Ultrium, 15 MB/sec (2001)
IBM LTO: 3580 Ultrium, 15 MB/sec (2001)
Seagate LTO: Viper 200 Ultrium, 16 MB/sec (2001)
* Highest data transfer rates of tape drive technologies as publicly stated by drive vendors.
** No information published.
Roadmaps of Drive Capacity (Native) *
AIT: AIT-1-XL, 35 GB (1998); AIT-2, 50 GB (1999); AIT-3, 100 GB (2001); AIT-4, 200 GB (2003); AIT-5, 400 GB (2005); AIT-6, 800 GB (2007)
DLT: DLT 7000, 35 GB (1998); DLT 8000, 40 GB (1999); Super DLT 220, 110 GB (2001); Super DLT 320, 160 GB (2003); Super DLT 640, 320 GB (2005)
Mammoth: 20 GB (1998, 1999); 60 GB (2000)
HP LTO: Surestore Ultrium 230, 100 GB (2001)
IBM LTO: 3580 Ultrium, 100 GB (2001)
Seagate LTO: Viper 200 Ultrium, 100 GB (2001)
* Highest native capacities of tape drive technologies as publicly stated by drive vendors.
** No information published.
This typically leaves engineers with a problem concerning backward compatibility. Oftentimes, backward compatibility issues make it difficult to remain competitive with the other technologies of the day. In the early years of DLT technology, capacity and transfer rate doubled between DLT generations; now that the technology is mature, the jump from DLT 7000 to DLT 8000 yielded an incremental increase of only 5 GB in capacity and 1 MB/sec in transfer rate.
Quantum Corporation recently launched its next-generation DLT product: Super DLT. Super DLT technology incorporates more channels, new thin-film M-R heads, a new optical servo system, and advanced media formulations. This new DLT product required significant engineering innovation; the major challenges that created on-schedule delivery difficulties include the new servo positioning architecture, a new head design, new media formulations, and much higher internal data rates than the previous DLT architecture. Pressure to maintain backward read and write compatibility only increased the engineering complexity, and the first Super DLT drives did not offer backward compatibility with previous DLT generations.
With AIT, Sony remains at the forefront of all mid-range tape technologies, holding the highest capacity and performance specifications for the last several years. Sony has continued to drive the cost of AIT drives down, offering users the best cost-for-performance figures in this class. The December 2001 release of AIT-3 marks the third generation of Sony's AIT technology, and Sony has published a roadmap extending through AIT-6 that expects to double capacity and performance every two years.
Exabyte's Mammoth drive experienced some lengthy production delays but is shipping in volume quantities today. Mammoth showcased numerous industry firsts and was the company's first attempt at designing and manufacturing a deck mechanism and head assemblies without Sony's expertise. During the production delays, Exabyte allowed Quantum's DLT and Sony's AIT to capture Mammoth's previous-generation customers, whose needs increased while no new Exabyte products were being offered. The company's financial woes continued to grow, and Exabyte very recently decided to merge with Ecrix Corporation.
In today's marketplace, companies that deliver solid products on schedule have gained market share and become standards. Exabyte delivered a number of products from 1987-1992 and gathered more than 80 percent of the mid-range market share. Those products included the EXB-8200, EXB-8500, EXB-8200C, EXB-8500C, EXB-8205, EXB-8505, and EXB-8505XL. Exabyte owes its key success to those initial products, which offered higher performance at a moderate price in a market with very little competition. However, Exabyte's inability to deliver Mammoth until nearly three years after announcing the product opened the door for other technologies. Quantum's DLT drives were able to deliver better throughput at a time when storage capacities were exploding; the DLT 2000, DLT 2000XT, and DLT 4000 drives offered better capacity, performance, and reliability than the first Exabyte products, allowing them to capture the market share previously owned by Exabyte. Again,
www.wilshiresoft.com info@wilshiresoft.com Wilshire Software Technologies Ph: 2761-2214 / 6677-2214 / 6452-6173 Rev. Dt: 15-Oct-08 Version: 3
delivering a product in a landscape with little competition allowed Quantum to gain more than 80 percent of the market between 1992 and 1996. Availability and engineering delays for DLT 7000 and follow-up DLT products have now opened the door for newer technologies.
6.7.1 DAT
6.7.1.1 HP DAT 72 Tape Drive
Overview
The HP StorageWorks DAT 72 tape drive is the fifth generation of HP's popular DDS tape drives, built on the success of four previous generations of DDS technology and providing unprecedented capacity and reliability at a low cost of ownership. The DAT 72 delivers a capacity of 72 GB on a single data cartridge and a transfer rate of 21.6 GB/hr (assuming a 2:1 compression ratio). This drive reads and writes DAT 72, DDS-4, and DDS-3 formats, making it the perfect upgrade from earlier generations of DDS. The StorageWorks DAT 72 tape drive is the ideal choice for small and medium businesses, remote offices, and workgroups. The DAT 72 drive comes in four models -- internal, external, hot-plug, and offline hot-swap array module -- plus it fits in HP's 3U rack-mount kit, making it compatible with virtually any server environment.
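As a quick sanity check on those figures: the 72 GB cartridge and the 21.6 GB/hr rate follow directly from the drive's native numbers (36 GB and 3 MB/s, per the specification below) under the assumed 2:1 compression ratio. A minimal sketch in Python:

    # Back-of-the-envelope check of the DAT 72 figures quoted above.
    NATIVE_CAPACITY_GB = 36    # native capacity per cartridge
    NATIVE_RATE_MBPS = 3       # sustained native transfer rate, MB/s
    COMPRESSION = 2.0          # assumed 2:1 compression ratio

    capacity_gb = NATIVE_CAPACITY_GB * COMPRESSION                  # 72 GB
    rate_gb_per_hr = NATIVE_RATE_MBPS * COMPRESSION * 3600 / 1000   # 21.6 GB/hr
    hours_per_cartridge = capacity_gb / rate_gb_per_hr              # ~3.3 h

    print(f"{capacity_gb:.0f} GB at {rate_gb_per_hr:.1f} GB/hr "
          f"-> {hours_per_cartridge:.1f} h per full cartridge")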
HP One-Button Disaster Recovery (OBDR): restores your entire system at the touch of a button, without the need for system disks or software CDs
Small, half-height form factor: fits easily into most servers and workstations, including HP ProLiant and AlphaServers with hot-plug drive bays
Wide choice of models: comes in internal, external, hot-plug, offline hot-swap array module, and rack-mount configurations, providing a suitable option for any server
Automatic head cleaner: minimizes the need for manual cleaning with a cleaning cartridge
Lowest media price of any tape technology: reduces the overall cost of ownership
Broad compatibility with a wide range of servers, operating systems, and backup software: suits almost every operating environment
HP StorageWorks Library and Tape Tools utilities: help make installation, management, and troubleshooting a breeze
Includes TapeWare XE: provides a complete, easy-to-use backup solution that includes disaster recovery capabilities
Specification
Capacity: up to 36 GB native on a single tape; 72 GB at 2:1 compression
Media: DAT 72 media; 170 m, 4 mm tape, Metal Particle (MP++++) formulation; blue cartridge shell for ease of identification in mixed-media archives where older versions of DDS media may be in use; DDS-4 read and write compatibility; DDS-3 read and write compatibility
Media format: recording method: 4 mm helical scan; recording format: DAT 72, DDS-4, DDS-3 (ANSI/ISO/ECMA); data compression: Lempel-Ziv (DCLZ); error detection/correction: Reed-Solomon; data encoding method: Partial Response Maximum Likelihood (PRML); buffer size: 8 MB
Performance: sustained transfer rate (native): 3 MB/s; sustained transfer rate (with 2:1 data compression): 6 MB/s; burst transfer rate: 6 MB/s (asynchronous), 40 MB/s (synchronous); data access time: 68 s; average load time: 15 s; average unload time: 15 s; rewind time: 120 s (end to end); rewind tape speed: 1.41 m/s
Reliability: MTBF: 125,000 hours at 100% duty cycle; uncorrected error rate: 1x10^-17 bits read
Interface: Wide Ultra SCSI-3 (LVD/SE); SCSI connectors: internal: 68-pin wide HD LVD; external: 68-pin wide LVDS, thumbscrew; array module: 80-pin SCA (SCSI and power)
Termination: no terminator is required for the internal model (assumes use of a terminated cable); the external model requires termination with a multimode terminator (included with product); the array module requires termination with a multimode terminator (ordered separately, p/n C2364A)
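To put the 1x10^-17 uncorrected bit error rate into perspective, the expected number of uncorrected errors in one full read of a native cartridge is tiny. A short worked check in Python, using the figures from the table above:

    # Expected uncorrected errors per full native-cartridge read.
    tape_bits = 36e9 * 8          # 36 GB native = 2.88e11 bits
    ber = 1e-17                   # uncorrected error rate, per bit read

    expected_errors = tape_bits * ber        # ~2.9e-6 per full pass
    passes_per_error = 1 / expected_errors   # ~350,000 full reads per error

    print(f"~1 uncorrected error per {passes_per_error:,.0f} full tape reads")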
6.7.1.2 T9940 Tape Drives

Benefits
Reduced batch and backup windows
With its native data transfer rate of 30 megabytes per second, or up to 70 megabytes per second with compression, the T9940B drive helps you store more data in less time to meet your shrinking production batch and backup windows.
Increased productivity
The high-capacity T9940 tape drives minimize cartridge mounts, require fewer cartridges to manage for disaster recovery, and improve automation efficiency.
Standard Features
Tape compatibility
The T9940B drive provides backward read compatibility with 9940A cartridges. It can rewrite StorageTek 9940A tape cartridges with three times more data, for extended investment protection.
Multi-platform connectivity
T9940 drives run on today's popular operating environments. The T9940B supports two-gigabit FICON, ESCON, and two-gigabit Fiber Channel connectivity. The T9940A drive supports ESCON, SCSI, and one-gigabit Fiber Channel connectivity.
SAN-readiness
A native two-gigabit fabric-aware Fiber Channel interface makes the T9940B drive ready for the demands of high-speed SAN environments and storage server networks.
Specification
Tape load and thread to ready: 18 sec (formatted)
Average file access time (first file): 41 sec
Average access time: 59 sec
Maximum/average rewind time: 90/45 sec
Unload time: 18 sec
Data transfer rate, native (uncompressed): 30 MB/sec
Data transfer rate (compressed): 70 MB/sec
Capacity, native (uncompressed): 200 GB
Interface: 2 Gb Fiber Channel, ESCON, ESCON for VSM, 2 Gb FICON for FICON and FICON Express channels
Burst transfer rate (Fiber Channel): 200 MB/sec (maximum instantaneous)
Interface (Fiber Channel): N and NL port; FC-PLDA (hard and soft AL-PA capability); FC-AL-2; FCP-2; FC-TAPE
Read/write compatibility: proprietary format
Emulation modes: Native, T9940A, 3490E, 3590
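Those rates translate directly into cartridge fill times. A rough sketch in Python, with rates and capacity taken from the list above; the host-data figure simply assumes the drive sustains its 70 MB/sec compressed rate for the whole pass:

    # Time to fill one T9940B cartridge at native speed.
    CARTRIDGE_GB = 200      # native capacity
    NATIVE_MBPS = 30        # native transfer rate
    COMPRESSED_MBPS = 70    # host-side rate with compressible data

    fill_s = CARTRIDGE_GB * 1000 / NATIVE_MBPS   # ~6,667 s (~1.9 h)
    host_gb = COMPRESSED_MBPS * fill_s / 1000    # ~467 GB of host data

    print(f"Fill time: {fill_s / 3600:.1f} h; "
          f"~{host_gb:.0f} GB of host data per cartridge at 70 MB/sec")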
6.7.2 DLT
6.7.2.1 Tandberg DLT 8000 Autoloader
10-Cartridge Version
The Tandberg DLT Autoloader brings you the productivity and security of an automated tape solution, as well as the proven reliability and scalability of DLT technology.
Special Functions
Up to 800 GB* storage capacity
Up to 43 GB*/hr transfer rate
Available with Tandberg DLT8000 drive
Fits easily on a desk, in a rack, or on top of a server
Removable magazine for easy storage management
Optional barcode reader for fast cartridge inventory and data retrieval
Added security with TapeAlert
Supported by all major software suppliers
Data capacity, native: 400 GB
Data capacity, compressed (2:1): 800 GB
Transfer rate, native: 6 MB/s (360 MB/min, 21.6 GB/hr)
Transfer rate, compressed (2:1): 12 MB/s (720 MB/min, 43.2 GB/hr)
SCSI interface: SCSI-2, Fast/Wide LVD/SE
Tape capacity: 10 cartridges
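The capacity and throughput figures above follow from the DLT8000's per-cartridge and per-second numbers. A minimal sketch in Python; the 40 GB native cartridge capacity is an assumption based on the DLT8000 drive family:

    # Deriving the autoloader's headline figures from per-drive numbers.
    CARTRIDGES = 10
    NATIVE_GB_PER_CART = 40   # DLT8000 native cartridge capacity (assumed)
    NATIVE_MBPS = 6
    COMPRESSION = 2.0

    native_gb = CARTRIDGES * NATIVE_GB_PER_CART       # 400 GB
    compressed_gb = native_gb * COMPRESSION           # 800 GB
    native_gb_hr = NATIVE_MBPS * 3600 / 1000          # 21.6 GB/hr
    compressed_gb_hr = native_gb_hr * COMPRESSION     # 43.2 GB/hr

    print(native_gb, compressed_gb, native_gb_hr, compressed_gb_hr)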
6.7.2.2 SUN StorEdge L8500 Tape Library

Features
Upgradable to support future drive types
Future ability to connect multiple libraries via redundant Pass-Through Ports (PTPs)
Conversion kit for some customer-owned drives
Ease of service
Sun FC drives support FCP-2 error recovery
Small footprint, high slot density (can exceed 50 slots/sq. ft.)
Service/operator areas limited to front and back
Remote monitoring via TCP/IP or optional local touch-screen panel
Supports true mixed media and drives, including 9840B/C, 9940B, and LTO 2
Multiple robots
Benefits
Protects customer investment; can accommodate growth without scheduled downtime, supporting the high-availability demands of enterprise customers
Protects your current customer investment
Near-zero scheduled downtime
No interruption in backup performance, which is transparent to the user
Conserves valuable data center floor space
Ease of management
Customers can select the appropriate drives for their application and migrate to new drive types without having to manage physical partitions, so there's only one library to manage
Reduces the queuing effect found in libraries with single robots; multiple robots can handle more requests in parallel
Specification
Availability: non-disruptive serviceability; standard N+1 power for drives, robotics, and library electronics, allowing replacement while the library is operating; 2N power is optional
Capacity and performance:
Number of cartridge slots: 1,448 customer-usable slots (minimum) to 6,500 customer-usable slots (maximum)
Number of tape drives: up to 64 drives of any combination
Cartridge access port (CAP): standard 39-cartridge-slot CAP; optional 39 additional slots (78 total)
Hardware: Sun Blade 1000, 1500, 2000, 2500; Sun Fire V210, V240, V250, 280R, V440, V480, V880, V1280, E2900; Sun Fire 4800, 4810, E4900, 6800, E6900; Sun Fire 12K, 15K, E20K, E25K; Netra 240, 440, 1280; Ultra 60 and 80; Sun Enterprise 220R, 250, 420R, 450, x500, 10000
Management:
Media management: full mixed media; any cartridge can be placed in any cell; no required partitions
Digital vision system: unique digital vision camera system performs continuous calibration and reads bar codes
Operator panel: standard remote monitoring and control; touch-screen is optional
Automatic clean: dedicated cleaning-cartridge slots for tape drive cleaning for multiple drive types, by library or software command
Automatic self-discovery: auto-discovery and auto-configuration for all drive and media types, slots, and cartridge access ports
Continuous automation calibration: no periodic maintenance or alignment required
Performance:
Throughput per hour, native (uncompressed), per drive: 9840C: 30 MB/sec; 9940B: 30 MB/sec; LTO-2: 30 MB/sec
Throughput per 64 drives: 9840C: 6.9 TB/hr; 9940B: 6.9 TB/hr; LTO-2: 6.9 TB/hr
Average cell-to-drive time: 6.25 sec per robot
Mean Time To Repair (MTTR): 30 minutes or less
Mean Exchanges Between Failures (MEBF): 2,000,000 exchanges
Mean Time Between Failures (MTBF), drives:
9840C FC: power on: 290,000 hr @ 100% duty cycle; tape load: 240,000 hr @ 10 loads/day (100,000 loads); tape path motion (TCM): 216,000 hr @ 70% TCM duty cycle; head life: 8.5 yr @ 70% TCM duty cycle
9840B FC: power on: 290,000 hr @ 100% duty cycle; tape load: 240,000 hr @ 10 loads/day (100,000 loads); tape path motion (TCM): 196,000 hr @ 70% TCM duty cycle; head life: 8.5 yr @ 70% TCM duty cycle
LTO-2 FC: MTBF: 250,000 hr @ 100% duty cycle; MCBF: 100,000 cartridge load/unload cycles; head life: 60,000 tape motion hours
Software:
Operating system: Solaris 8 U4 Operating System or later; Solaris 9 Operating System
Supported software (Sun enterprise and application): Sun StorEdge Enterprise Backup Software 7.1 and later; Sun StorEdge Utilization Suite (SAM-FS) Software 4.1 and later; Sun StorEdge SFS 4.4 and later
Supported software (third-party): VERITAS NetBackup 5.0 and later; ACSLS 7.1 and later
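The 6.9 TB/hr aggregate is simply the 30 MB/sec per-drive rate scaled to 64 drives, and the slot count sets the capacity ceiling. A quick check in Python; the 200 GB per cartridge used for the ceiling is an assumption matching 9940B native media:

    # Aggregate throughput and native-capacity ceiling for a full build-out.
    PER_DRIVE_MBPS = 30
    DRIVES = 64
    MAX_SLOTS = 6500
    GB_PER_CART = 200     # e.g. 9940B native cartridge capacity (assumed)

    aggregate_tb_hr = PER_DRIVE_MBPS * DRIVES * 3600 / 1e6   # ~6.9 TB/hr
    native_pb = MAX_SLOTS * GB_PER_CART / 1e6                # ~1.3 PB native

    print(f"{aggregate_tb_hr:.2f} TB/hr across {DRIVES} drives; "
          f"~{native_pb:.1f} PB native at {GB_PER_CART} GB/cartridge")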
6.7.2.3 HP MSL6000 Tape Libraries

The MSL6000 Tape Libraries are easily managed through an intuitive GUI control panel and integrated remote web management, allowing simple management from any remote or on-site location. In addition, each library is available with HP's world-class diagnostic tool, HP Library and Tape Tools, at no additional charge. Fully tested and certified in HP's Enterprise Business Solutions (EBS), the MSL6000 tape libraries can be up and running quickly in a wide range of fully supported configurations. The MSL6000 Tape Libraries provide growth without limits by offering maximum investment protection through scalability. To move from a direct-attach to a network-attached storage configuration, a simple installation of a Fiber Channel interface card makes the conversion a snap. In addition, the MSL6000 Tape Libraries scale to larger configurations by enabling a single library to grow and change with capacity and technology as needs require. Not only will the MSL6000 Tape Library scale within the family, but it can also be scaled with MSL5000 Tape Libraries using a pass-through mechanism for up to 16 drives and 240 slots.
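To give a feel for what that pass-through ceiling implies, here is a rough native-capacity estimate in Python; the 400 GB and 300 GB per-cartridge figures for Ultrium 960 and SDLT 600 media are assumptions based on those drive families:

    # Rough native-capacity ceiling for a maximally scaled MSL6000 complex.
    MAX_SLOTS = 240
    NATIVE_GB = {"Ultrium 960": 400, "SDLT 600": 300}  # per cartridge (assumed)

    for drive, gb in NATIVE_GB.items():
        print(f"{drive}: {MAX_SLOTS * gb / 1000:.0f} TB native "
              f"in {MAX_SLOTS} slots")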
Features
Scalable: multi-unit stacking allows the library to grow with your storage requirements. You can start with a direct-attach configuration and easily change to a network storage environment with only an interface card upgrade.
Flexible: available with a broad choice of tape technology, including Ultrium 960, Ultrium 460, and SDLT 600, and with either a SCSI or Fiber Channel interface. Upgrade to new technology with easy-to-install upgrade kits.
Manageable: user-friendly GUI control panel and web interface make library management easy from any remote or local location.
Reliable: tape libraries provide consistent backup and automatically change tapes, with a robotics rating of 2 million mean swaps before failure.
Compact: 5U and 10U modules offer the highest storage density in their class.
Affordable: buy only the storage you need now and add more later.
Evolutionary: drives can be upgraded as technology progresses.
Compatible: all MSL6000 libraries work with industry-leading servers, operating systems, and backup software, and are fully tested through the HP Enterprise Business Solutions group for complete certification.
Benefits
Flexible: investment protection through instant interface and drive technology upgrades without hassle
Manageable: manage the library from any local or remote location, reducing administrative burden
Scalable: investment protection through seamless capacity enhancement
6.7.2.4 EXABYTE 430 Tape Library

86.5 to 173 GB/hr transfer rate
Accommodates either VXA-2 or M2 tape drives
Optimized for rack-mount installations
Impressive! The 430 tape library is the most affordable mid-range automated data storage solution designed for mid-size data centers running IBM and HP/Compaq servers. The power of mid-range automation now comes with a choice. The 4-drive, 30-slot 430 library can be configured to meet your unique system needs with either VXA-2 or M2 tape drives for up to 5 TB of data storage. Don't pay for more than you need. The 430 library with VXA-2 is designed to meet the needs of organizations limited by both budget and network bandwidth. Running at speeds up to 173 GB/hr, the 430 with VXA-2 has adequate performance for many mid-range data center environments, priced thousands less than the nearest competitor. If your data center system architecture is optimized for speed, the 430 configured with M2 tape drives delivers the advantages of a higher-performance tape drive.
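Those headline numbers can be reconstructed from per-drive figures. A sketch in Python; the 80 GB native cartridge and 6 MB/s native rate for VXA-2 are assumptions based on that drive family, taken at 2:1 compression:

    # Where "up to 5 TB" and "86.5 to 173 GB/hr" roughly come from.
    DRIVES, SLOTS = 4, 30
    VXA2_NATIVE_GB = 80    # per cartridge (assumed)
    VXA2_NATIVE_MBPS = 6   # per drive (assumed)
    COMPRESSION = 2.0

    capacity_tb = SLOTS * VXA2_NATIVE_GB * COMPRESSION / 1000   # ~4.8 TB
    native_gb_hr = DRIVES * VXA2_NATIVE_MBPS * 3600 / 1000      # ~86.4
    compressed_gb_hr = native_gb_hr * COMPRESSION               # ~172.8

    print(f"~{capacity_tb:.1f} TB compressed; "
          f"{native_gb_hr:.1f}-{compressed_gb_hr:.1f} GB/hr with 4 drives")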
6.7.2.5 Scalar 10K by ADIC

The Scalar 10K's unique capacity-on-demand scalability lets you scale your storage capacity more easily and quickly than you can with any other library. Capacity-on-demand systems ship with extra capacity that you can activate, in 100-tape increments, using a software key. You pay only for the capacity you use. For high-capacity or mixed-media needs, the Scalar 10K offers traditional library configurations. These maximum-capacity models have up to 15,885 tape slots and allow you to combine LTO, SDLT, and AIT technology in the same chassis. The Scalar 10K is the first library to offer integrated storage network support, with certified interoperability that means seamless operation in new or existing SANs. The system supports multiple protocols and heterogeneous fabrics at the same time. Integrated SAN management services, such as serverless backup and data-path conditioning, provide better backup in storage networks. The Scalar 10K's high-availability architecture, which includes true 2N power and dual data paths, is designed to meet the reliability demands of data consolidation. Features that ensure maximum system uptime include autocalibration, self-configuration, and magazine-based loading of up to 7.9 TB (native) at once. For more information on the Scalar 10K, please see the Scalar 10K microsite.
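The capacity-on-demand model amounts to rounding licensed capacity up to the next 100-tape increment. A minimal sketch of that accounting in Python; the function and its names are illustrative, not ADIC's actual interface:

    import math

    def licensed_slots(tapes_in_use: int, increment: int = 100,
                       max_slots: int = 15885) -> int:
        """Slots that must be activated (via software key) to hold the tapes."""
        needed = math.ceil(tapes_in_use / increment) * increment
        return min(needed, max_slots)

    print(licensed_slots(250))  # -> 300: pay for three 100-tape increments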
6.7.3 LTO (Linear Tape Open)

6.7.3.1 HP Ultrium 960 Tape Drive

The Ultrium 960 supports the industry's most comprehensive list of compatible hardware and software platforms. Each drive option includes a single-server version of HP Data Protector (license) and Yosemite TapeWare (CD) backup software, as well as support for HP StorageWorks One-Button Disaster Recovery (OBDR) and HP StorageWorks Library and Tape Tools (L&TT). The Ultrium 960 Tape Drive is fully read and write compatible with all second-generation Ultrium media, and adds a further degree of investment protection with the ability to read all first-generation Ultrium media as well. The Ultrium 960 also represents HP's first tape drive solution to deliver support for Write-Once, Read-Many (WORM) media. This feature allows customers to easily integrate a cost-effective solution to secure, manage, and archive compliant data records to meet stringent industry regulations. HP customers can now manage all of their backup and archiving data protection needs with just one drive.
Features
800 GB capacity: the Ultrium 960 tape drive is a high-capacity drive that stores 800 GB on a single cartridge with 2:1 compression.
160 MB/s performance: the world's fastest tape drive, with sustainable data transfer rates up to 160 MB/s at 2:1 compression.
Data Rate Matching (DRM): allows the tape drive to dynamically and continuously adjust the speed of the drive, from 27 MB/s to 80 MB/s, matching the speed of the host or network.
LTO open standard: drive technology based on an open standard that provides media compatibility across all brands of LTO Ultrium products.
Server compatibility: qualified on HP ProLiant, Integrity, 9000, NonStop, and AlphaServer platforms, as well as many servers from other leading vendors such as Dell, IBM, and Sun.
Software compatibility: extensive list of supported backup and archiving software applications from HP, CA, VERITAS, Yosemite, Legato, Tivoli, and many more.
Support for WORM media: able to read and write to new Write-Once, Read-Many (WORM) HP Ultrium data cartridges.
Management and diagnostics software included: HP StorageWorks Library and Tape Tools software provides a single application for managing and troubleshooting your tape drive, media, and configuration.
Backup software included: includes a single-server version of Yosemite TapeWare XE (CD) and HP OpenView Data Protector (license).
One-Button Disaster Recovery (OBDR) supported: firmware-based disaster recovery feature that can restore an entire system using a single Ultrium 960 tape drive and data cartridge.
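Data Rate Matching is essentially a clamp: the drive streams at the host's supply rate whenever that rate falls inside its mechanical range, avoiding the stop-rewind-restart ("backhitch") cycles that occur when a streaming drive outruns its data source. A simplified sketch of the idea in Python; the function is illustrative, not HP firmware:

    def drive_speed(host_rate_mbps: float,
                    min_mbps: float = 27.0, max_mbps: float = 80.0) -> float:
        """Pick the streaming speed closest to the host's supply rate.

        Inside [27, 80] MB/s (native) the drive tracks the host exactly;
        below 27 MB/s it drops to its slowest speed and may still backhitch."""
        return max(min_mbps, min(host_rate_mbps, max_mbps))

    for host in (10, 27, 55, 80, 120):
        print(f"host {host:>3} MB/s -> "
              f"drive streams at {drive_speed(host):.0f} MB/s")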
Benefits
High-capacity drive allows customers to back up more data with fewer data cartridges: reduces the costs associated with data protection by requiring fewer data cartridges to complete backups.
Ultra-fast performance backs up more data in less time: allows customers to scale their backup capacities without having to increase their backup windows.
Data Rate Matching optimizes performance while reducing tape and media wear: optimizes the performance of the tape drive by matching the host server or network's data transfer rate, putting less stress on the tape drive and media.
LTO open standard provides customers with more choices: ensures compatibility across all brands of Ultrium tape drives, giving customers a greater choice of Ultrium solutions without losing investment protection.
Comprehensive hardware and software qualification increases customers' agility to adapt to new environments as needed: support for heterogeneous hardware and software platforms provides customers with a single tape drive solution for all environments.
Investment protection through backward write and read compatibility: backward read compatibility ensures that files from generation-one and generation-two Ultrium data cartridges can be recovered using the HP Ultrium 960 tape drive; backward write compatibility allows the customer to create backups on second-generation Ultrium media, maximizing ROI for previously purchased media.
Easily integrate a secure method for archiving compliant records using Ultrium WORM media: with a single HP Ultrium 960 tape drive and HP's comprehensive support for hardware and software platforms, customers can easily integrate a WORM-based archiving solution into their current data protection strategy using LTO Ultrium solutions.
Complete set of management and diagnostics tools included with each tape drive option and available via free download from HP.com: tape drive management, performance optimization, and troubleshooting are made simple using the HP StorageWorks Library and Tape Tools application included with the HP Ultrium 960 tape drive.
Complete hardware and software solution in the box with each HP Ultrium 960 tape drive: HP Ultrium 960 tape drives ship with a choice of single-server backup software applications (HP OpenView Data Protector and Yosemite TapeWare), tape drive media, and SCSI cables, providing the customer with a complete data protection solution in the box.
Simple and fast disaster recovery with One-Button Disaster Recovery (OBDR) included in the drive firmware: HP Ultrium 960 tape drives include an HP-exclusive disaster recovery feature, One-Button Disaster Recovery, that allows the customer to simply and quickly recover a server's operating system, software applications, and data using a single HP Ultrium data cartridge.
6.7.3.2 IBM 3584 Tape Library

The 3584 Tape Library offers the IBM TotalStorage 3592 Tape Drive and the new IBM TotalStorage 3588 Ultrium Tape Drive Model F3A, utilizing Linear Tape-Open (LTO) Ultrium 3 tape drive technology designed to provide high capacity, throughput, and fast access performance.
A variety of drive technology offerings helps increase storage density while protecting your technology investment, supporting LTO Ultrium 1, LTO Ultrium 2, LTO Ultrium 3, and IBM 3592 tape drives and media within the same library.
A second library accessor, the 3584 High Availability Frame Model HA1, is designed to help increase library availability and reliability.
Built-in storage management functions are designed to help maximize availability and allow for dynamic management of both cartridges and drives.
Designed to provide multi-petabyte capacity, high performance, and reliability in an automated tape library that is scalable to 192 tape drives and over 6,200 cartridges for midrange to enterprise open systems environments.
Patented Multi-Path Architecture is designed to help increase configuration flexibility with logical library partitioning while enabling system redundancy for high availability.
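The "multi-petabyte" claim follows from the slot count and the cartridge capacity. A quick check in Python; the 400 GB native figure for LTO Ultrium 3 cartridges is an assumption based on that generation:

    # Sanity check on the 3584's multi-petabyte native capacity.
    CARTRIDGES = 6200
    LTO3_NATIVE_GB = 400   # per cartridge (assumed LTO Ultrium 3)

    native_pb = CARTRIDGES * LTO3_NATIVE_GB / 1e6
    print(f"~{native_pb:.1f} PB native across {CARTRIDGES} cartridges")  # ~2.5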
6.7.3.3 Comparison IBM LTO Ultrium versus Super DLT Tape Technology
Introduction
This white paper is a comparison of Super DLTtape™ technology with the Ultrium technology developed by the Linear Tape Open (LTO) technology providers: Seagate, HP, and IBM. Its focus is on the merits of the two technologies from a customer point of view, and as such it compares the features and benefits of the SDLT 220 drive with the three different implementations of Ultrium technology, taking into account the key factors a customer considers when choosing a data protection solution. It draws on secondary data from respected industry analysts such as IDC and Dataquest, independent third-party test data, as well as extensive primary research conducted with IT managers in departmental and enterprise IT environments.
Technology Overview
Super DLTtape is the latest generation of the award-winning DLTtape™ technology. The SDLT 220 drive is a single-reel, half-inch magnetic tape drive with a native capacity of 110 GB and a native transfer rate of 11 MB/sec. It is manufactured by Quantum Corporation and by Tandberg Data, and is sold and marketed by most leading vendors of servers and automated backup systems. It is backward read compatible with all DLTtape IV media written on DLT 4000, DLT 7000, and DLT 8000 tape drives. Ultrium tape drives are the single-reel implementation of LTO technology, a new platform developed by Seagate, HP, and IBM. They also use half-inch magnetic media, have a native capacity of 100 GB, and are specified with transfer rates of 15 MB/sec or 16 MB/sec. They are sold by HP's and IBM's captive server and automation divisions, as well as by a subset of other vendors. Ultrium drives are not compatible with any previous tape technology.
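One concrete way to compare the two platforms is time-to-fill a cartridge at native speed, using the figures above. A sketch in Python:

    # Time to write one full cartridge at native speed, per the figures above.
    drives = {
        "SDLT 220":          (110, 11),  # (native GB, native MB/sec)
        "Ultrium @ 15 MB/s": (100, 15),
        "Ultrium @ 16 MB/s": (100, 16),
    }

    for name, (gb, mbps) in drives.items():
        hours = gb * 1000 / mbps / 3600
        print(f"{name}: {hours:.1f} h per cartridge")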
Open Standards
DLTtape drives and media have served the world's mid-range backup and archiving needs for much of the last ten years. With an installed base of over 1.7 million drives and over 70 million cartridges shipped to customers, DLTtape systems are recognized as the de facto industry standard for mid-range backup. IDC's latest reported market share numbers indicate that DLTtape had a market share of 73% in the mid-range tape segment [1]. The chart below summarizes the installed bases of various competing mid-range tape technologies.
6.7.3.5 Scalar 1000 AIT by ADIC

To provide better backup in storage networks, the Scalar 1000 features management services that ease installation and diagnostics, enhance security and availability, and make data management more efficient. These tools include serverless backup, single-view connectivity, a built-in SAN firewall, and data-path conditioning utilities that increase backup performance and reliability.
The Scalar 1000 supports LTO, SDLT/DLT, and AIT technologies in single- or mixed-media configurations. It also offers up to 16 virtual library partitions.
INDEX
1. Introduction to DAS ..... 1
1.1. Advantages of DAS ..... 1
1.2. Direct Attached Storage (DAS) Model ..... 1
1.3. Ideal Situations for DAS ..... 2
1.4. Adaptec Direct Attached Storage SANbloc 2GB JBOD ..... 2
1.5. Connectivity ..... 2
1.5.1. Enhanced IDE ..... 3
1.5.1.1 PATA ..... 4
1.5.1.2 SATA ..... 4
1.5.1.3 Advantages of SATA over PATA ..... 4
1.5.1.4. PATA vs. SATA ..... 4
1.5.1.5. Hardware, Configurations & Pictures ..... 5
1.5.2. SCSI ..... 8
1.5.2.1. Introduction
1.5.2.2. Advantages of SCSI ..... 8
1.5.2.3. Comparison of SCSI Technologies ..... 9
1.5.2.4. Single-Ended vs. Differential ..... 9
1.5.2.5. SCSI Devices that do not work together ..... 10
1.5.2.6. SCSI Termination ..... 10
1.5.2.7. Adaptec Ultra320 SCSI ..... 11
1.5.2.8. SCSI Controllers ..... 11
1.5.3. Fiber Channel ..... 11
1.5.3.1. Introduction ..... 11
1.5.3.2. Advantages of Fiber Channel ..... 12
1.5.3.3. Comparing FC DAS Storage Solutions ..... 13
2.16.8. NAS by ADIC ..... 29
2.16.8.1. Benefits of Using a SAN Behind a NAS Storage Network ..... 29
2.16.8.2. ADIC / Network Appliance Solution Overview ..... 29
2.16.8.3. Benefits for ADIC-Network Appliance NAS backup solution to an enterprise ..... 30
2.17. StorNext Storage Manager ..... 30
2.17.1. Benefits of StorNext Storage Manager ..... 31
2.17.2. Features of StorNext Storage Manager ..... 31
3. Introduction of SAN ..... 33
3.1. Advantages of Storage Area Networks (SANs) ..... 34
3.2. Advantages of SAN over DAS ..... 34
3.3. Today's SAN Topologies ..... 35
3.4. Difference between SAN and LAN ..... 37
3.5. Difference between SAN and NAS ..... 37
3.6. How do I manage a SAN? ..... 37
3.7. What is a SAN Manager? ..... 37
3.8. When should I use a Switch vs. a Hub? ..... 37
3.9. TruTechnology ..... 38
3.9.1. TruFiber ..... 38
3.9.2. TruCache ..... 38
3.9.3. TruMap ..... 38
3.9.4. TruMask ..... 39
3.9.5. TruSwap ..... 39
3.10. Features of a SAN ..... 39
3.11. SANs: High Availability for Block-Level Data Transfer ..... 39
3.12. Server-Free Backup and Restore ..... 40
3.13. Backup Architecture Comparison ..... 40
3.14. SAN approach for connecting storage to your servers/network ..... 40
3.15. Evolution of SANs ..... 42
3.16. Comparison of SAN with Available Data Protection Technologies ..... 43
3.17. SAN Solutions ..... 44
3.17.1. SAN Hardware Solutions ..... 44
3.17.1.1. ADIC SAN Solutions ..... 44
3.17.1.2. SAN by SUN ..... 45
3.17.1.3. Features of SUN StorEdge ..... 46
3.17.1.4. Benefits of SUN StorEdge ..... 46
3.17.2. SAN Management Software Solutions ..... 47
3.17.2.1. SAN by VERITAS ..... 47
3.17.2.2. Veritas SAN Applications ..... 47
3.17.2.3. Example for Increasing Availability Using Clustering ..... 49
3.17.2.4. VERITAS SAN Solutions ..... 50
3.17.2.5. VERITAS SAN 2000: The Next Generation ..... 52
3.17.2.6. Tivoli Storage Manager ..... 52
3.17.2.7. Tivoli SANergy ..... 53
3.17.2.8. SAN-speed sharing for Application Files ..... 54
3.18. Fiber Channel ..... 55
3.18.1. Introduction of Fiber Channel ..... 55
3.18.2. Advantages of Fiber Channel ..... 55
3.18.3. Fiber Channel Topologies ..... 55
3.18.3.1. Point-to-Point ..... 55
3.18.3.2. Fiber Channel Arbitrated Loop (FC-AL) ..... 55
3.18.3.3. Switched Fabric ..... 56
3.18.4. How do SCSI tape drives connect to a Fiber Channel SAN? ..... 56
3.18.5. What is an Interconnect? ..... 56
3.18.6. Scalable Fiber Channel Devices ..... 57
3.18.7. Features of Fiber Channel ..... 57
3.18.8. Why Fiber Channel? ..... 57
3.18.9. Fiber Channel System ..... 58
3.18.10. Technology Comparisons ..... 59
3.18.11. LAN Free Backup using Fiber Channel ..... 60
3.18.11.1. Distributed Backup ..... 60
3.18.11.2. Centralized Backup ..... 61
3.18.11.3. SAN Backup ..... 62
3.18.12. Conclusion ..... 64
3.18.13. LAN Free Backup Solution Benefits ..... 64
3.18.14. Fiber Channel Strategy for Tape Backup Systems ..... 64
3.18.14.1. Stage - 1 (LAN Free Backup) ..... 64
3.18.14.2. Stage - 2 (Server-Less Backup) ..... 65
3.18.14.3. Suggested Deployment Strategy ..... 67
3.19. iSCSI ..... 67
3.19.1. Introduction of iSCSI ..... 67
3.19.2. Advantages of iSCSI ..... 68
3.19.3. Advantages of iSCSI on SAN ..... 68
3.19.4. iSCSI describes ..... 69
3.19.5. How iSCSI Works ..... 70
3.19.6. Applications that can take advantage of these iSCSI benefits include ..... 70
3.19.7. iSCSI under a microscope ..... 71
3.19.8. Address and Naming Conventions ..... 72
3.19.9. Session Management ..... 72
3.19.10. Error Handling ..... 73
3.19.11. Security ..... 73
3.19.12. Adaptec iSCSI ..... 73
3.19.12.1. Storage Systems ..... 74
3.19.12.2. HBAs ..... 74
3.19.12.3. Adaptec 7211F (Fiber Optic) ..... 74
3.19.13. Conclusion ..... 74
3.19.13.1. P.S. ..... 74
3.19.13.2. Terms and abbreviations ..... 75
3.19.14. Others (iFCP, FCIP) ..... 75
3.19.14.1. Fiber Channel over IP ..... 76
3.19.14.2. FCIP IETF IPS Working Group Draft Standard specifies ..... 77
3.19.14.3. iFCP ..... 77
3.19.15. How to Build an iSCSI SAN ..... 77
3.19.16. Setup ..... 79
3.19.17. Pain-Free Initiation ..... 79
3.19.18. SAN Components ..... 79
6.4.2 Media and Backward Compatibility ..... 102
6.5 Drive Cleaning ..... 103
6.6 Technology Roadmaps ..... 103
6.7 Tape Technologies ..... 106
6.7.1 DAT ..... 106
6.7.1.1 HP DAT 72 Tape Drive ..... 106
6.7.1.2 T9940 Tape Drives ..... 108
6.7.2 DLT ..... 109
6.7.2.1 Tandberg DLT 8000 Autoloader ..... 109
6.7.2.2 SUN StorEdge L8500 Tape Library ..... 110
6.7.2.3 HP MSL6000 Tape Libraries ..... 112
6.7.2.4 EXABYTE 430 Tape Library ..... 113
6.7.2.5 Scalar 10K by ADIC ..... 114
6.7.3 LTO (Linear Tape Open) ..... 115
6.7.3.1 HP Ultrium 960 Tape Drive ..... 115
6.7.3.2 IBM 3584 Tape Library ..... 117
6.7.3.3 Comparison IBM LTO Ultrium versus Super DLT Tape Technology ..... 117
6.7.3.4 AML/2 LTO by ADIC ..... 118
6.7.3.5 Scalar 1000 AIT by ADIC ..... 119
6.7.3.6 AML/J by ADIC ..... 120