Storage Networking: SANs Expert(s): part1
> We have about 7.5TB on a Dell/EMC FC4700 directly attached to a single W2K server with dual HBAs, with about 3-4TB being added per year. We are thinking about getting a switch (or two, for redundancy) so we can put the LTO tape library that is currently directly attached to the same server on the SAN and make the backup throughput rate tolerable. Here are several questions:
1. Is it a bad idea to have one server own all of the LUNs on the FC4700, for a variety of reasons -- single point of failure, too much disk space for one server to manage, etc.?
2. Does implementing switches sound justified just to relieve the unacceptably slow backups? Right now the throughput rate is about 3-4MB/s with hardware compression -- the LTO drives are rated at 15MB/s WITHOUT compression.
3. If switches are implemented and more servers are added, can the same LUN be made available to more than one server without using special software to create snapshots and clones? For example, if there is a file server assigned to LUN1 and a SQL server assigned to LUN2, can the SQL server access the files in LUN1? Can two nodes in an active cluster access the same LUN? What part of the SAN provides that capability -- zoning of the switches?
This question posed on 22 September 2003
I'll take each number in your question in turn:
1. A cluster may be the answer to using a single server for 7.5TB of storage. Without knowing which application needs all that space, it is hard for me to give you a more specific answer. Your capacity-to-HBA ratio is quite high, so you may be able to speed things up a bit by adding more HBAs to the server and load balancing across them. With a W2K cluster, you can spread the access load across servers by assigning the disk resources in the cluster different owners. This not only lets you load balance across servers but also lets you do maintenance on a server by failing its disks over to the other node and bringing it down for maintenance. The FC4700 has four physical connections, so you could do this without buying a switch.
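The load balancing mentioned above is handled by multipathing software in the I/O stack, but the idea can be sketched simply: outstanding I/Os are rotated across the available HBA paths instead of all queuing behind one adapter. A minimal illustration (the path names are invented):

```python
from itertools import cycle

# Hypothetical round-robin load balancing across HBA paths.
# Real multipath drivers do this inside the I/O stack; names are made up.
hba_paths = ["hba0->SP-A", "hba1->SP-B"]
_next_path = cycle(hba_paths)

def dispatch(io_request: str) -> tuple:
    """Send each outstanding I/O down the next path in rotation."""
    return next(_next_path), io_request

for n in range(4):
    print(dispatch(f"io{n}"))  # alternates between the two paths
```

With more HBAs in the server, more paths go into the rotation and the per-adapter load drops accordingly.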
2. A switch makes sense if you implement serverless backup through the SAN. You can connect a data router to the switch and use the extended copy (E-copy) command to move data directly from the FC4700 to your SAN-connected tape device through the router. Your router must be E-copy capable and your backup software must support serverless backup. Adding a switch (or switches, for dual pathing) will also allow you to add more than two servers to share the data load. Most of the latest versions of backup software support serverless backup, and I believe Dell sells data routers that support E-copy (call your Dell rep).
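The arithmetic behind the backup pain is easy to check using the figures from the question (7.5TB at today's roughly 3-4MB/s versus the LTO drives' 15MB/s native rate). A quick sketch, using 3.5MB/s as the midpoint of the reported range:

```python
def backup_hours(capacity_tb: float, rate_mb_s: float) -> float:
    """Hours needed to stream capacity_tb terabytes at rate_mb_s MB/s."""
    megabytes = capacity_tb * 1024 * 1024  # TB -> MB (binary units)
    return megabytes / rate_mb_s / 3600    # seconds -> hours

# Figures from the question: 7.5TB at ~3.5 MB/s today vs. LTO's 15 MB/s native
print(round(backup_hours(7.5, 3.5), 1))   # ~624.2 hours
print(round(backup_hours(7.5, 15.0), 1))  # ~145.6 hours
```

Even at the drives' full native rate a full backup of 7.5TB takes days, which is why moving the data path off the single server (serverless backup) matters as much as the raw drive speed.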
3. The same LUN CAN be assigned to more than one server in a SAN; this is how clusters in a SAN work. ALL the cluster members have access to all the disk resources in the SAN, although only a single server owns each disk resource at any one time. For true concurrent access to a single LUN you would need specialized software to provide lock management for write access to the same LUN. This is why most folks use NAS for applications that need to share the same file: NAS allows concurrent access to the same file through the CIFS or NFS protocol over IP. Some database applications do allow concurrent access, such as Oracle clusters; if you're using Oracle, contact them for more information. Digital (HP) VMS was the first clustering solution that allowed concurrent access to disk resources, but applications have to be cluster-aware for VMS clustering. Other solutions that allow concurrent access use a SAN-based Global File System (GFS), which allows access to the same data in the SAN at SAN speeds (not over IP). Veritas, SGI, IBM, HDS, EMC, SUN, Microsoft, etc. all have, or are working on, solutions in this space. (I'm sure you have heard of CXFS from SGI, SANergy from IBM, and SAMfs from SUN.)
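The "lock management" those specialized products provide boils down to serializing write access to shared storage so two hosts never scribble on the same region at once. A toy sketch of the idea (real cluster file systems use a distributed lock manager; this single-process model is purely illustrative):

```python
class ToyLockManager:
    """Illustrative only -- shows the idea of serializing write access
    to a shared LUN's blocks. Real solutions use a distributed lock
    manager spanning all cluster nodes."""

    def __init__(self):
        self._owners = {}  # block number -> host holding the write lock

    def acquire(self, host: str, block: int) -> bool:
        owner = self._owners.setdefault(block, host)
        return owner == host  # granted only if free or already ours

    def release(self, host: str, block: int) -> None:
        if self._owners.get(block) == host:
            del self._owners[block]

dlm = ToyLockManager()
print(dlm.acquire("fileserver", 42))  # True  -- first writer wins
print(dlm.acquire("sqlserver", 42))   # False -- must wait its turn
dlm.release("fileserver", 42)
print(dlm.acquire("sqlserver", 42))   # True  -- lock is free again
```

Without this coordination, two servers writing the same LUN will corrupt each other's file system structures, which is exactly why a plain shared LUN is not enough for concurrent access.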
…if there is a file server assigned to LUN1 and an SQL server assigned to LUN2, can the SQL server access the files in LUN1?
Yup, but over IP, not through the SAN, unless you use something like SANergy from IBM. A SAN file system -- in IBM's SANergy implementation, anyway -- allows CIFS or NFS metadata access to SAN-based files through a metadata server. Basically, requests for access to data go over an IP connection, while access to the data itself is redirected through the SAN connection at SAN speeds. Since each server in the SAN has both a SAN connection and an IP connection to the metadata server, any SAN client can use CIFS or NFS metadata calls to request access to and share files based in the SAN.
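That split between the metadata path (IP) and the data path (SAN) can be sketched as follows. All class names and the file layout are invented for illustration; they are not SANergy's actual interfaces:

```python
class MetadataServer:
    """IP side: knows which SAN blocks hold each file (layout is invented)."""
    def __init__(self, layout):
        self.layout = layout
    def lookup(self, name):
        return self.layout[name]  # small CIFS/NFS-style metadata call

class SanFabric:
    """FC side: raw block reads at SAN speed."""
    def __init__(self, blocks):
        self.blocks = blocks
    def read_block(self, n):
        return self.blocks[n]

def open_file(name, mds, san):
    extents = mds.lookup(name)  # metadata request travels over IP
    return b"".join(san.read_block(b) for b in extents)  # bulk data over the SAN

mds = MetadataServer({"report.doc": [0, 1]})
san = SanFabric({0: b"hello ", 1: b"world"})
print(open_file("report.doc", mds, san))  # b'hello world'
```

The point is that only the small lookup crosses the IP network; the heavy block transfers stay on the Fibre Channel fabric.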
Can two nodes in an active cluster access the same LUN? What part of SAN provides that capability - zoning of the switches?
Yes, but not at the same time (at least in a W2K cluster, except for the quorum disk resource). LUN security in the storage array allows access to the same LUN in the SAN at a hardware level. This is done by assigning LUN access to the World Wide Names (WWNs) of the Host Bus Adapters in each server. Say, for example, you have two servers with two HBAs in each server. Using LUN security, you would assign that LUN access to all four WWNs (two for each server).
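Conceptually, that LUN security (often called LUN masking) is just an access table kept by the array, keyed by HBA WWN. A minimal sketch of the two-server example above, with all WWNs invented:

```python
# Toy model of array-based LUN masking: the array keeps a table of which
# HBA World Wide Names may see which LUNs. All WWNs here are invented.
lun_masking = {
    "LUN1": {
        "10:00:00:00:c9:aa:aa:01", "10:00:00:00:c9:aa:aa:02",  # server A's HBAs
        "10:00:00:00:c9:bb:bb:01", "10:00:00:00:c9:bb:bb:02",  # server B's HBAs
    },
}

def can_access(wwn: str, lun: str) -> bool:
    """The array only presents a LUN to WWNs on its access list."""
    return wwn in lun_masking.get(lun, set())

print(can_access("10:00:00:00:c9:aa:aa:01", "LUN1"))  # True  -- masked in
print(can_access("10:00:00:00:c9:ff:ff:99", "LUN1"))  # False -- not on the list
```

Switch zoning restricts which ports can talk to each other on the fabric, but it is this array-side masking that grants both cluster nodes visibility of the same LUN.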
TechTarget
Posted on Sunday, 05 October 2003 @ 05:35:00 EDT by phoenix22