Wednesday, 26 December 2012

Windows Image Backup doesn't seem to work

All three of my recent attempts to recover a system image using Windows Setup failed. Usually, I received a message that there was no suitable disk to restore the backup to. Even when I created a system image file, replaced the disk with an identical blank one, and immediately tried to restore that image file on the same PC, it still failed.

So, if you are using System Image in Windows 7 or Windows Vista as your primary backup tool, you should test your recovery procedure carefully.

Saturday, 22 December 2012

MSI 7681 P67A-GD65-B3 4 short beeps and reboot loop

If you have an MSI 7681 P67A-GD65-B3 mainboard and it emits four (4) short beeps, then shuts down and goes into a reboot loop before the BIOS screen even appears, consider the following:

1. There are two generations of Intel iX socket 1155 CPUs, called Sandy Bridge and Ivy Bridge; Ivy Bridge is the newer of the two.
2. The MSI 7681 mainboard supports Sandy Bridge CPUs out of the box. Ivy Bridge CPUs require a BIOS update.

Verify that you aren't trying to install an Ivy Bridge CPU into a motherboard without the BIOS update. If you just unpacked the board, you almost certainly need the update. Temporarily install an older Sandy Bridge CPU, update the BIOS, and you should be fine.

Now a word of caution: it appears from Internet forums that installing the update prevents the board from starting with an older-generation CPU.

Tuesday, 18 December 2012

Reliability of Storage Spaces


Over the years, there has been a variety of tools for organizing disk space on PCs running Windows: native ones, such as Drive Extender (now extinct) in Windows Home Server, Logical Disk Manager (LDM) in Windows 7, and Storage Spaces in Windows 8, and third-party tools like StableBit or DriveBender.
Now, less than two months after the release of Windows 8 with Storage Spaces, it would seem that everything is bad - people on various sites cry that Storage Spaces is buggy and doesn't work. However, surprising as it may be, software developed by Microsoft is more reliable and better tested than any third-party tool.
By now, Storage Spaces obviously has a larger user base, in terms of installations, number of disks, capacity of stored data, and disk-hours online, than all the third-party tools combined over their entire existence.

Thursday, 13 December 2012

Downside of having a single large storage pool

The downside of having a single large storage unit, be it a Storage Spaces pool or a plain RAID, is that if there is a problem with the storage, you have nowhere to copy the data to.

With a pool of 15+ disks, there is a pressing need to repair the pool in-place; otherwise, the price tag for a disk set to copy the data off, DIY-style, would be like $1500 in drives alone (15x 2TB hard drives at $100 a pop), not counting controller and power supply requirements.

However, even if a drive purchase is involved, DIY would probably still be cheaper than a recovery service. As a positive side effect, you end up with a disk set that can hold a backup copy of the data.

Tuesday, 27 November 2012

Probably the world's first Storage Spaces recovery

Today we performed a Storage Spaces recovery of live data using a prototype of ReclaiMe Storage Spaces Recovery. Most likely, we are the first in the world to have done this.

Our client had a computer with 5 hard disks on which he created Storage Spaces volumes: parity, simple, and mirror. A month later the pool failed, and the client lost his entire archive of family photos. He asked Microsoft for help, to no avail. Then he found us, and we decided to test our prototype on a real case.

The prototype gave good results both in recovery quality and in performance. The recovery took about two days on a computer with two processors and 6 GB of memory.
So, we congratulate ourselves and the client on the world's first successful Storage Spaces recovery.

Sunday, 18 November 2012

Weird illustration

In PCPro magazine, January 2013, the article "Getting it taped" describes the current state of tape drives and the data storage devices using them. All would be well but for the illustration, which is quite weird: it would be logical to see those tape drives; instead, the readers are offered a typical NAS device.

What's more, on close inspection, this NAS that claims to be a Tandberg DPS 2000 turns out to be a dead ringer for a four-bay QNAP device. So, go for the original - buy QNAP.

Monday, 12 November 2012

The current status of ReclaiMe Storage Spaces Recovery


At this moment, the situation with the software prototype is as follows:
  • Scanning and analyzing four 2 TB disks on a machine with two CPUs takes a day to complete and requires 6 GB of memory. Adding two more processors can probably speed up the process by a factor of 1.5.
  • Although the prototype can successfully recover a Storage Spaces configuration on small LUNs (approx. 16 GB per LUN), it cannot yet cope with large LUNs.
  • The prototype is capable of recovering only Storage Spaces volumes formatted with NTFS; ReFS support is still to be implemented.

Monday, 5 November 2012

RAID5 on two disks


When recovering data from a client's NETGEAR ReadyNAS device, we saw a rather strange layout - a RAID5 of two disks - which appeared once ReclaiMe File Recovery had processed the md-raid records. After looking at it for a while, we realized that this is, surprisingly, possible, provided we ignore the requirement for a minimum number of disks. So let's see what happens when placing data and parity blocks on two disks in a RAID5 layout where a stripe contains, say, three sectors (D stands for data, P for parity):

  Disk 1   Disk 2
    D1       P1
    P2       D2
    D3       P3

Now, what is in the parity blocks of such a layout? To get even parity over a row of two elements, both elements must be the same, so the content of each parity block is identical to the content of its corresponding data block. Therefore, we are dealing with a typical RAID1 (mirror) layout. Despite all the wildness of the layout, it meets the basic criteria of a RAID5:
  • survives a single disk failure,
  • the disk space overhead equals the capacity of one member disk.
As far as data recovery goes, such a RAID5 layout is better than a typical one, since it doesn't require recovering the RAID configuration parameters should the md-raid setup fail. It is enough to take each disk in turn, recover data as in a regular hard drive recovery, compare the results, and choose the best.
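The parity argument above can be checked in a few lines. This is a sketch with made-up block contents, not actual md-raid code: with only one data block per stripe, the XOR parity is simply a copy of that block.

```python
# Sketch with made-up block contents (not actual md-raid code): RAID5 parity
# is the byte-wise XOR of the data blocks in a stripe. With a single data
# block per stripe, as in a two-disk RAID5, the parity equals the data.
from functools import reduce

def xor_parity(blocks):
    """Byte-wise XOR of all data blocks in a stripe."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Normal case: parity is derived from two or more data blocks.
assert xor_parity([b"\x01", b"\x03"]) == b"\x02"

# Two-disk case: one data block per stripe, parity is an exact copy.
data = b"\x01\x02\x03"
assert xor_parity([data]) == data   # RAID1 (mirror) in disguise
```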

Monday, 29 October 2012

Storage Spaces Recovery system requirements


Based on the first real-world test of ReclaiMe Storage Spaces Recovery, we found that Storage Spaces recovery is generally limited by CPU power rather than by memory or disk throughput. Thus, the new projected system requirements look like this:
  • no less than 512 MB of RAM per disk;
  • although there is no minimum CPU requirement, it is desirable to have one core per every two disks.
As long as the number of processor cores is less than half the number of disks, adding cores increases performance linearly. We know for certain that once the number of cores equals the number of disks, adding more cores no longer makes sense.
In the intermediate case, with one core per two or fewer disks, it is impossible to predict the speed gain accurately.
So a system with four cores and 8 GB of memory should cover most needs. In other words, you can still get away with high-end desktop hardware.
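These figures can be folded into a quick sizing helper. The constants below come from the bullet points above; everything else is our own back-of-the-envelope arithmetic, not an official requirement.

```python
# Back-of-the-envelope sizing from the projected requirements above:
# 512 MB of RAM per member disk, one CPU core per every two disks.
def projected_requirements(n_disks):
    ram_gb = n_disks * 512 / 1024       # 512 MB of RAM per disk, in GB
    cores = max(1, (n_disks + 1) // 2)  # one core per two disks, rounded up
    return ram_gb, cores

# A 15-disk pool would call for roughly 7.5 GB of RAM and 8 cores.
print(projected_requirements(15))   # (7.5, 8)
```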

Monday, 22 October 2012

Storage Spaces vs. Drive Extender


People who want Drive Extender back instead of Storage Spaces usually test Storage Spaces using a parity layout. Then they complain that there is no rebalancing, that the speed is low, and so on. However, in all this testing they forget that Drive Extender operates at the file level, so parity is out of the question for it. Test Storage Spaces on the mirror or simple layouts and you will get better results than with Drive Extender.

Monday, 15 October 2012

Too complex for wide acceptance


Storage Spaces is too complex a technology for practical use without special training. A typical computer reviewer who is not well versed in RAID technology spends a week on average (as between this post and that one) to realize that it is impossible to continue writing data to a RAID 5 array of three disks once the smallest member disk is full. For those who know how RAID is organized, the reason is immediately evident: there is no disk space left to write parity to.
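For those not fluent in RAID, the capacity rule fits in one line. This is the classic RAID5 rule (a simplification of ours; Storage Spaces allocates in slabs, but the three-disk scenario above behaves the same way): every stripe needs a block on every member, so usable space is capped by the smallest disk.

```python
# Classic RAID5 capacity rule (simplified model): each stripe needs one
# block on every member disk, so once the smallest disk is full there is
# nowhere left to write parity, and extra space on larger disks sits idle.
def raid5_usable(sizes_tb):
    return (len(sizes_tb) - 1) * min(sizes_tb)

# Three disks of 1, 2 and 3 TB give only 2 TB of usable space.
print(raid5_usable([1, 2, 3]))   # 2
```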

Wednesday, 10 October 2012

Combination of Storage Spaces and Dynamic Disk RAIDs


It doesn't make sense to combine several mirrored virtual disks created in Storage Spaces into a RAID0 using Disk Management, Dynamic Disks, and LDM as described here, at least when not too many disks are involved. Apparently, "too many" means more than eight. With eight disks or fewer, the Storage Spaces driver allocates data in such a way that you get a RAID10 of up to four disk pairs. It is pointless to impose one more RAID layout on top of this configuration; doing so may even make matters worse due to alignment issues.
Speaking of alignment, note that a stripe on a Storage Spaces virtual disk is 256 KB in size and starts at the beginning of the virtual disk.

Monday, 1 October 2012

Helium hard drives revisited


An article from Xyratex about the use of helium in hard disks proposes a rather aggressive change. Helium is absolutely inert: it interacts neither with the platter surfaces nor with other disk components.
Thus, if the disk is filled with helium, the protective platter coating can be made thinner or even omitted entirely.
However, a side effect may arise: if the disk loses its hermetic seal and the helium escapes, the platter surfaces will oxidize quite quickly and the data will be lost. To kill a regular hard disk, you need to open it and fill it with dust. For a disk without a protective coating, it is enough to leave it without helium for a month. In that case, it is impossible to put the disk aside and try to recover data from it, say, a year later.

Monday, 24 September 2012

Helium hard drives


Hitachi (HGST) is going to create hard disks filled with helium. In theory, this improves thermal conduction and also allows fitting more platters into the same form factor. Obviously, such disks will be more difficult to repair, if only because the distance between platters decreases. Less obviously, the vent hole (which equalizes pressure inside typical air-filled disks as the temperature changes) must be sealed.
Perhaps the resulting pressure changes will be compensated by the lower viscosity of helium; however, more severe temperature limits might be introduced. Additionally, it is known to be more difficult to hermetically seal a volume of helium than of air, because helium percolates through gaskets more readily. Thus, we cannot conjecture about the lifetime of helium disks.

Monday, 17 September 2012

Seek errors in RAID recovery


Theoretically, data recovery tools are read-only, which means their use cannot cause any damage. In practice, however, when recovering data you may observe effects as if the hard disks were being mechanically destroyed.

For example, we took four disks from a NAS, connected them to a PC, and launched the RAID recovery tool. Immediately, the S.M.A.R.T. monitoring software (Cropel) raised an alarm because of a spike in seek errors. These seek errors were caused by vibration, provoked by the RAID recovery tool commanding all the disks to move their heads simultaneously. To be fair, a NAS device does the same when reading data from an array; however, a typical NAS is equipped with more vibration-resistant drive mounts and fastenings.

So, when transferring disks from the device managing the RAID array to a regular PC, you may get alerts from S.M.A.R.T. monitoring software reporting that the values of the Seek Error Rate attribute have changed significantly. Don't worry about these alerts; after a while, the values of this S.M.A.R.T. attribute will settle at a new level and the alarms will stop.

Monday, 10 September 2012


According to this article, NETGEAR sold 108,876 devices in 2011, holding a 16.4% share of the market.
Assuming that a NAS device contains three disks on average and that the disk attrition rate is 4% per year (which corresponds to the Google data in the article about disk reliability), we get that about 200 disks fail per day. Note that this counts only NAS devices sold in 2011.
Hard drive sales run to at least 600 million units per year. Applying the same attrition rate, we get
600,000,000 * 0.04 / 365 / 24 ~ 2700 disk failures per hour
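The arithmetic can be reproduced as follows. The one assumption of ours is that the 16.4% share is used to scale NETGEAR's unit count up to the whole 2011 NAS market; the remaining figures are from the text above.

```python
# Reproducing the failure-rate estimates above. We assume the 16.4% market
# share is used to scale NETGEAR's sales to the entire 2011 NAS market.
netgear_units = 108_876
market_share = 0.164
disks_per_nas = 3          # three disks per NAS on average
attrition = 0.04           # ~4% of disks fail per year (Google data)

total_nas_units = netgear_units / market_share
nas_failures_per_day = total_nas_units * disks_per_nas * attrition / 365
print(round(nas_failures_per_day))   # 218, i.e. "about 200 disks per day"

drive_sales_per_year = 600_000_000
failures_per_hour = drive_sales_per_year * attrition / 365 / 24
print(round(failures_per_hour))      # 2740, i.e. "~2700 failures per hour"
```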

Saturday, 8 September 2012

TRIM and Storage Spaces


The filesystems you are likely to use on Storage Spaces, namely NTFS and ReFS, are known to support TRIM. We have no information about FAT, but it is unlikely that anyone would use FAT in conjunction with Storage Spaces.
So, if you delete a file on a volume located on a virtual Storage Spaces disk, the filesystem driver sends a TRIM command to the Storage Spaces driver. The latter uses the TRIM mechanism to free slabs, i.e. to return slabs that the filesystem marks as unused to the pool of free slabs. A slab is returned to the pool if:
  • the file being deleted fully occupies one or several slabs; or
  • the file being deleted is the only file in the slab, meaning that after its deletion the slab contains no data at all.
Once a slab is returned to the pool of free slabs, it can no longer be easily identified - what volume it belonged to, or where in the volume it was located. Therefore, regular filesystem-side data recovery will not work, because some slabs are simply absent. In this case you need a full-scale Storage Spaces recovery that matches virtual slabs to physical ones.

Note that all the above applies to virtual disks that use thin provisioning, not to fixed ones. When creating a fixed virtual disk, the Storage Spaces driver assigns physical slabs to the virtual disk and then never takes them away.
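The two reclamation conditions boil down to a single test: a slab goes back to the pool only if no surviving file keeps data in it. A toy model (our simplification, working in slab indices rather than byte extents):

```python
# Toy model of slab reclamation (our simplification, not driver code):
# a slab touched by the deleted file is returned to the pool only if no
# surviving file still keeps data in it.
def freed_slabs(deleted_file_slabs, surviving_slabs):
    """Slab indices that TRIM releases back to the pool of free slabs."""
    return set(deleted_file_slabs) - set(surviving_slabs)

# Deleted file occupied slabs 3 and 4; another file still uses slab 4.
print(sorted(freed_slabs({3, 4}, {4, 7})))   # [3]
```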

Wednesday, 5 September 2012

Data recovery time in different filesystems


In the FAT filesystem, the structures describing directories are spread over the data area and are therefore mixed with the contents of files. If a directory is deleted, easily accessible information about its location is no longer available. In this case, a full scan of the data area is necessary to be sure that all directories are found. Thus, data recovery time on FAT is proportional to the size of the disk and is mainly determined by the time needed to read the entire disk.

NTFS stores metadata densely, at a more or less known location; when recovering data from an NTFS volume, data recovery software can look at this small area instead of scanning the entire disk. Data recovery time on NTFS is mainly limited by the computing resources required to recreate the file table. The total time doesn't depend on the disk capacity, but it does depend on the number of files actually stored on the disk.

ReFS again spreads its metadata over the disk, mixing it with the content of files to save disk head movement during read/write operations. From the recovery point of view, it means that you need to scan the entire disk even if you just want to find a single deleted file.

From the stability point of view, a filesystem with spread-out metadata wins over one with localized metadata. However, this doesn't apply to FAT, since its file allocation tables are stored compactly at the beginning of the volume.

Tuesday, 4 September 2012

Windows filesystems and TRIM


On NTFS, the process of file deletion is not limited to the work of the filesystem driver, such as zeroing pointers in the MFT; the physical or virtual store takes part in this process as well. The filesystem driver sends a TRIM command to the store driver, informing the storage device that the blocks containing the file data are no longer in use and can therefore be erased.
Depending on the type of underlying device, TRIM can lead to different results:
  • for a volume located on a regular hard drive, TRIM has no effect;
  • for a volume created in Storage Spaces, TRIM leads to unexpected consequences depending on how the files are located in relation to 256 MB slabs of Storage Spaces;
  • for an SSD, for which the TRIM command was introduced in the first place, the blocks that are no longer in use are erased immediately.
However, it should be noted that NTFS never frees blocks containing metadata, so the NTFS driver never sends a TRIM command for them. This peculiarity has one interesting consequence: since NTFS stores the content of small files (resident files) along with the metadata, these small files can be recovered even when TRIM is in use.
As for the ReFS filesystem, it doesn't use resident files, and therefore nothing can be recovered once TRIM has been applied on a ReFS volume.

Monday, 3 September 2012

Symmetrical vs. asymmetrical disk arrays


There are symmetrical (for example RAID5) and asymmetrical (like RAID4) RAID arrays.
RAID 5 (rotating parity):
  Disk 1   Disk 2   Disk 3
    1        2        p
    3        p        4
    p        5        6

RAID 4 (dedicated parity disk):
  Disk 1   Disk 2   Disk 3
    1        2        p
    3        4        p
    5        6        p
As load increases, the performance of an asymmetrical array hits a limit at one specific point. For example, in RAID 4, during write operations the parity disk saturates first. In a symmetrical RAID 5 array, all member disks are loaded equally; therefore, there is no single disk that limits performance.

From this, two consequences follow:

1.   Write performance of a symmetrical RAID 5 array can be increased by adding disks. Write performance of an asymmetrical RAID 4 doesn't change as the number of drives grows, because parity is still written to a single disk.

2.   If you add one fast rotational disk or an SSD to a RAID 5 array, you will not get a noticeable speed-up. In RAID 4, replacing the parity disk with an SSD increases performance significantly, because the "bottleneck" of parity updates is removed.

All this applies in a similar manner to the symmetrical RAID 6 and RAID 1E, and to the asymmetrical RAID3 and RAID-DP.
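Consequence 1 can be illustrated by counting where the parity updates land. A short sketch (hypothetical workload; it counts parity writes only and ignores the accompanying data writes):

```python
# Sketch (hypothetical workload): count which disk absorbs the parity
# update for each stripe write. RAID4 funnels every update to one disk;
# RAID5 rotates parity, spreading the load evenly.
from collections import Counter

def parity_writes(n_disks, n_stripes, raid_level):
    load = Counter()
    for stripe in range(n_stripes):
        parity_disk = n_disks - 1 if raid_level == 4 else stripe % n_disks
        load[parity_disk] += 1
    return dict(load)

print(parity_writes(4, 12, raid_level=4))   # {3: 12} - one saturated disk
print(parity_writes(4, 12, raid_level=5))   # {0: 3, 1: 3, 2: 3, 3: 3}
```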

Friday, 31 August 2012

Fixing out-of-space Storage Spaces pool


If you used the "thin provisioning" feature when creating a Storage Spaces volume, then sooner or later you will face a situation where the physical space for your virtual hard drive runs out. Then, if you need to write data, the Storage Spaces driver can no longer allocate a new slab; it simply disables the virtual drive, i.e. takes it offline, interrupting the last write operation. To resolve this, you need to add disk space to the pool. However, that is not always possible; for example, with a four-disk array you would have to bring in four disks at once. So if for some reason you cannot add disks immediately but need access to the data, you can go to Windows Disk Management, bring the disk online again, and then delete unnecessary files and folders. This should be enough for a while, but eventually you will have to add disks.

However, this doesn't always work. It may happen that whenever Windows Disk Management brings the disk online, the Storage Spaces driver immediately takes it offline again, and you simply don't have time to delete files to free disk space. The solution is to create several small virtual hard drives (.vhd) - say, five GB each, but no less than four GB - and add them to the pool.

Tuesday, 28 August 2012

Write speed in Storage Spaces


People note that the write speed of mirror and parity volumes in Storage Spaces is slower compared to a traditional RAID1 or RAID5. Indeed, parity and mirror volumes in Storage Spaces are not identical to a regular RAID1 or RAID5 in terms of maintenance costs. The ReFS driver that manages the filesystem structures on Storage Spaces volumes must also calculate checksums over the data to avoid the write hole issue.

Thursday, 16 August 2012

Getting used to it...



Four copies of Lowvel erasing four test hard drives in parallel, in preparation for another Storage Spaces test.

Tuesday, 14 August 2012

Cost-saving

They say that once Storage Spaces is officially released and wins users' confidence, the prices of NASes like Drobo, QNAP, or ReadyNAS will drop. We suppose this is quite possible, because a bunch of external USB 3.0 disks connected to a USB 3.0 hub and held together with duct tape may become a cheap and effective alternative.

Let's compare these two options based on NewEgg prices:

$528 QNAP TS-419PII-US
or
$535 NETGEAR ReadyNas Ultra 4-bay

against the set of
$25 Rosewill RHB-610 (RIUH-11001) 4-Port USB 3.0 hub
$164 4x SAT3510BU3 3.5" USB 3.0 SATA Enclosure w/ Fan
$1 Duct Tape
for a total of $190

All the additional functions available in QNAP or other NASes, for example torrent downloads or the ability to access the data over the network from multiple computers, Windows 8 can provide as well (though you may need to use free software like uTorrent).

Sunday, 12 August 2012

Storage Spaces complaints


At homeservershow we came across pcdoc's story about a Storage Spaces failure. However, based on the description, we would suppose that Storage Spaces worked properly in this case, and the culprit was the HPT 2680 controller. A RAID controller beeps in only two cases: when the controller itself has failed, or when one of the disks connected to it has failed. If a Storage Spaces pool placed on disks connected to a controller is about to fail, the controller will not know about it.

Since no drivers were installed, it makes no sense to expect detailed diagnostics from the controller. We can assume the disks were not to blame, since checking detected nothing and the disks had good SMART attributes. So the only remaining possibility is that a controller failure occurred. It is impossible to pinpoint the reason for the failure without access to the controller itself. Having looked over user comments on NewEgg, we would guess it was an overheating issue.

Friday, 10 August 2012

Storage Spaces Recovery project


We are currently developing a tool that will be able to recover data from a deleted Storage Spaces pool. The recovery has to rely only on the data stored in the pool, not on the Storage Spaces database.

We already understand that the tool will have quite steep system requirements and will be slow, since two passes over the disks are needed and all the data has to be read on each pass. The chance of getting an absolutely correct recovery is not very good, at least at this stage of design and testing. It cannot be ruled out that about ten percent of the data will be lost irreversibly.

We plan to release the first preview version of the tool at the end of this year (2012). The website for the project is here.

Tuesday, 31 July 2012

Hindsight in filesystem design


NT4 (NTFS 1.2) did not store the numbers of MFT records. If the MFT became fragmented, you ended up with millions of file records linked into a tree by record numbers, while it was difficult to establish which number belonged to which record. Algorithms for detecting record numbers were complex, unstable, and resource-hungry. Starting with Windows XP, the MFT entry number is stored inside the entry itself, which, as planned, facilitates data recovery.

When a volume was upgraded, for example when a volume created in NT4 was connected to Windows XP for the first time, all records remained in the old format, without numbers. Records were converted to the new format only when a file they referred to was changed. So we kept encountering hybrid volumes for several years thereafter. When users finally stopped formatting disks under NT4 and all records were numbered, data recovery software eventually dropped support for the old NTFS version.

The old ext2 implementation had a similar quirk: a superblock did not contain the number of the group to which it belonged. Thus, the redundancy the designers had planned could not be used: if the first superblock failed, it was possible to find all the remaining superblocks, but even knowing all of them it was impossible to determine which one was first, and therefore where the files were. In the newer version of ext2, a superblock contains its group number, which simplifies data recovery significantly.

Thursday, 26 July 2012

Storage Spaces

Been doing some research on Storage Spaces recovery lately, learned two important things,

  1. If you delete a Storage Space pool, the pool layout information is well and truly gone, overwritten with zeros.
  2. Once the pool layout is lost, if you've been using thin provisioning, you're most certainly dead in the water, and if not, recoverability depends on how the volumes and filesystems were created/deleted.
We are trying to make a working prototype that recovers thin-provisioned volumes from a deleted Storage Space pool, but it is still weeks away from any usable result.

Friday, 13 July 2012

24x 1TB RAID

RAID 10 or RAID6? - from this topic at hardware analysis.

The question is which one I would choose for total data protection. Quite typically, the answer is neither, although that option never came up in the original thread.

What would be the correct setup and why? The correct setup would be to split the drives between two systems with separate power and a LAN connection between them. The first system is hardware RAID5 (with 11+1 disks) or RAID 10 (with 2x6 disks), depending on what you need; the second is a backup machine with 12 drives on software RAID (because, per the original question, only one hardware controller is available).

Now, if the backup system is normally shut off with the plug removed from the wall, and is only connected once a week while the backup is underway, the data is protected against most threats (save for fire and natural disasters).

As far as boot time goes, no: you would rather design the system to boot from a RAID 1 or maybe even a plain vanilla drive.

Thursday, 5 July 2012

Reducing opportunity for human error

Based on the number of incidents where people mistake RAID 5 for RAID 10 and vice versa (and that happens even to people who are supposed to know better), we will soon be adding automatic detection for these cases. This way, the possibility of confusion will be eliminated.
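One possible detection heuristic (an assumption on our part, not necessarily what the shipping version will use): RAID10 members come in identical mirrored pairs, whereas for a correct RAID5 set the blocks of each stripe XOR to zero.

```python
# Sketch of one possible automatic check (our assumption, not necessarily
# the shipping method): RAID10 members come in identical mirrored pairs,
# while RAID5 stripe blocks XOR to zero.
from functools import reduce

def classify(stripe_blocks):
    # identical members point to mirroring, i.e. RAID10
    for i in range(len(stripe_blocks)):
        for j in range(i + 1, len(stripe_blocks)):
            if stripe_blocks[i] == stripe_blocks[j]:
                return "RAID10?"
    # otherwise test the RAID5 invariant: XOR over the stripe is zero
    folded = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripe_blocks))
    return "RAID5?" if not any(folded) else "unknown"

print(classify([b"\x01\x02", b"\x01\x02", b"\x04\x08", b"\x04\x08"]))  # RAID10?
print(classify([b"\x01\x02", b"\x04\x08", b"\x05\x0a"]))               # RAID5?
```

In practice one would sample many stripes before deciding, since a single stripe can satisfy either invariant by coincidence.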

Sunday, 1 July 2012

Probably the first real-life ReFS recovery

We probably got the first real-life (as in, this is not a drill) data recovery involving the ReFS filesystem. There are still some issues to work out as far as speed is concerned, but ReclaiMe did remarkably well. Remember that the filesystem itself has not reached production status yet; it is still a release candidate.

Saturday, 9 June 2012

Terabyte and TeraStation

When talking about storage capacity, the correct term is Terabyte, with a single R. The incorrect form Terrabyte would have something to do with ground and soil, as in terra firma.

Along the same lines, the Buffalo product is called TeraStation, meaning a "station where a terabyte stops", not Terrastation (which would probably mean a spaceport, a subway station, or some other ground facility).

This is especially annoying when folks in data recovery mix up the two. They are, like, supposed to know better.

NTFS Deduplication

If you use the NTFS Deduplication feature on Windows 8 Server (Windows 2012, build 8400), the deduplicated data is not recoverable by regular NTFS recovery software. If there are two copies of the same file on the disk and the file gets deduplicated, neither of the copies is recoverable with the traditional approach.

All existing data recovery software working with NTFS will need an upgrade to be able to recover deduplicated files.

Actually, deduplication makes a filesystem more fragile.

For a regular file to be recoverable, you need to find its inode and intact file data.

To recover either of two identical regular files, you need either of the "inode + file data" pairs. Note that the file data exists in two copies.

To recover two identical deduplicated files, you need a long list of items, all in good condition:
  • the inode of either of the files;
  • the inodes of the deduplication folder structure, intact;
  • the contents of the deduplication tables, intact;
  • the single copy of the data, intact;
  • last but not least, the capability to put everything back in the proper order. Typically, this involves some reverse engineering effort.
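The fragility can be put in numbers with a toy model. Assume (our assumption, with a made-up survival probability) that each required on-disk item independently survives damage with probability p:

```python
# Toy fragility model (our assumption, made-up probability p): every
# required on-disk item survives independently with probability p.
def p_regular_pair(p):
    # two independent "inode + file data" pairs; either pair is enough
    pair = p * p
    return 1 - (1 - pair) ** 2

def p_dedup_pair(p, extra_items=3):
    # either inode survives, AND the dedup folder inodes, the dedup
    # tables and the single data copy (extra_items of them) all survive
    either_inode = 1 - (1 - p) ** 2
    return either_inode * p ** extra_items

p = 0.9
print(round(p_regular_pair(p), 3))   # 0.964 - two regular copies
print(round(p_dedup_pair(p), 3))     # 0.722 - same files, deduplicated
```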

Thursday, 7 June 2012

ReFS build 8400

ReFS build 8400 still has its cluster size locked at 64KB.

Thursday, 31 May 2012

Group-based filesystems and JBODs.

As of now, there is no automatic recovery of JBOD parameters. At least, I'm not aware of any automatic software that really works.

With a filesystem that stores its metadata all in one place, close to the start of the partition, like FAT or NTFS, you only get the data from the first JBOD member. All the files on the second and subsequent members are lost.

However, with a group-based filesystem, like the Linux EXT series, you can get much of the data back just by feeding the JBOD members to data recovery software in turn. In group-based filesystems, metadata is spread evenly across the partition, and file contents are placed close to their corresponding metadata. So, if you scan the JBOD members separately and then combine the results (skipping empty files and correcting for the loss of the folder tree), you can get most of the files out. You cannot recover files whose contents and metadata sit on two different JBOD members, and parent-child relationships crossing a disk boundary are also lost. Still, this is much better than the NTFS or FAT behavior.
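The member-by-member approach can be pictured with a toy simulation (made-up layout, not a real EXT scanner): a file is recoverable from a lone JBOD member only when its metadata and its contents happen to sit on the same member.

```python
# Toy simulation (made-up layout, not a real EXT scanner): a file survives
# a member-by-member scan only if its metadata and contents are on the
# same JBOD member; files straddling the boundary are lost.
def recover_per_member(files):
    """files: list of (name, metadata_member, data_member) tuples."""
    return [name for name, meta_m, data_m in files if meta_m == data_m]

files = [
    ("a.txt", 0, 0),   # fully on member 0 -> recoverable
    ("b.txt", 1, 1),   # fully on member 1 -> recoverable
    ("c.txt", 0, 1),   # crosses the disk boundary -> lost
]
print(recover_per_member(files))   # ['a.txt', 'b.txt']
```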

Sunday, 6 May 2012

NETGEAR ReadyNAS

Q: What is the best feature of NETGEAR ReadyNAS?

A: A carrying handle at the back of the unit.

Ease of setup? There isn't one.

Performance? We got, like, 3 MB/sec out of it. The network is Gigabit Ethernet, served by D-Link switches. Not the best there is, but a QNAP manages to pump data at least ten times faster on the same network.

What model? ReadyNAS NV+ v2.

Overall impression? Not quite good.

Tuesday, 1 May 2012

Fake hard drive

Now there is a fake rotational hard drive. OK, we've seen fake flash drives already, but this one is new.

Note the nuts and bolts added so that the weight and balance match the original. The only way to quickly tell there is something fishy about it is to plug it into USB: it does not actually spin up, because there is nothing to spin.

Saturday, 28 April 2012

When is the image file of the entire RAID array useful?

Creating an image might be useful if one of these conditions is met:

  1. The drives are connected via USB and power failures occur often. The ReclaiMe + RAID recovery pair operates using "disk paths", and paths for USB drives may change on reboot, so you would need to redo the RAID part after every reboot.
  2. One of the RAID member disks has a bad sector, or is otherwise damaged. In this case, it is best to create an image of the suspect disk first, even before you start with RAID recovery.
  3. The filesystem on the array is damaged, and you want to try some other data recovery tool on it.

Other than that, an image of the whole array is a waste of disk space. With a large array, you have to be careful about where you place the image file and be acutely aware of its size. While most filesystem specifications say a 12 TB single file is acceptable, in practice you will find that only a limited set of configurations accepts files that big. By the way, this is where ReFS comes in handy, even though it is still in beta.

Saturday, 14 April 2012

QNAP reboot

Always remember that

as the uptime grows, the chance of not being able to restart successfully increases.

That is, unless you take some special precautions.

Had a UPS fail on our QNAP, like a couple of days ago. Before that, the uptime was something like several months. Lo and behold, the QNAP failed to come back online. It just sat there showing "SYSTEM BOOTING >>>" with all the drive LEDs blinking, no ping, nothing, and looked very much like a power supply failure. Still, with all those LEDs blinking, it generally looked very busy doing something for the roughly 30 hours I let it sit. When I finally got around to connecting a VGA monitor, there was nothing to be seen.

Then, a couple of hard reboots took care of the problem, at least for the time being.
Looks like next time the PSU has to be replaced.

Monday, 9 April 2012

ReFS recovery

We now have ReFS recovery capability in our ReclaiMe data recovery software (www.ReclaiMe.com). Obviously, this is not the final version, because the filesystem is not officially released yet. However, ReclaiMe works fairly well against the current Windows 8 Server beta build.

Wednesday, 28 March 2012

Storage Spaces

Based on this MS blog post, once you lose a storage space configuration, it is just lost.

At the moment, there is a significant difficulty recovering JBODs automatically. What the Storage Spaces subsystem does is, in effect, create JBODs of 256MB blocks. So in the end you can have just a plain JBOD, a RAID 1 over JBOD, or a RAID 5 over JBOD configuration. The capability to put 256MB blocks back into the pool and then use them again as needed, probably for another volume, leads to fragmentation on the pool level. That is, the volume itself gets fragmented, and obviously alignment goes out the window.
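To see why this wrecks automatic recovery, consider a toy model of the slab mapping (illustrative only, not the actual Storage Spaces on-disk format): each 256MB virtual slab can land on any disk, at any physical slab, in any order.

```python
# Toy model of 256 MB slab allocation in a pooled volume.
SLAB = 256 * 2**20

class PoolVolume:
    def __init__(self):
        self.slabs = []   # virtual slab index -> (disk_id, physical_slab_no)

    def extend(self, disk_id, physical_slab_no):
        # The pool hands out whatever slab happens to be free, on
        # whatever disk; consecutive virtual slabs need not be
        # consecutive (or even on the same disk) physically.
        self.slabs.append((disk_id, physical_slab_no))

    def to_physical(self, virtual_offset):
        i, within = divmod(virtual_offset, SLAB)
        disk_id, phys = self.slabs[i]
        return disk_id, phys * SLAB + within

vol = PoolVolume()
vol.extend(0, 10)   # first slab reused from some deleted volume
vol.extend(1, 3)    # second slab lands on another disk entirely
```

Without the mapping table, a recovery tool would have to guess this slab order, which is exactly the fragmentation problem described above.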

This means no RAID recovery on Storage Spaces unless some significant breakthroughs are made. Even simple volumes will not be recoverable if they got fragmented.

Saturday, 24 March 2012

ReFS, first impression

ReFS

1. Has more disk space overhead, both per-filesystem and per-file, than probably any other filesystem in existence.

2. Does not have a CHKDSK, which they might have to correct later.

3. Looks like it cannot achieve 32767 Unicode characters in a file name, stopping short by ten characters or so; however, we have not tested that yet.

4. Has several single points of failure, regardless of what they might say.

Monday, 12 March 2012

RAID levels explained

Once again an explanation of RAID levels, this time a fun one




and, by the way, if anyone knows the author, drop me a note.

Tuesday, 6 March 2012

Hot swap and hot spare revisited.

There are three levels of hot-something hardware repair capability in a storage system
  1. cold swap
  2. hot swap
  3. hot spare

differing in two properties
  1. whether downtime is required to perform the repair
  2. whether human intervention is required

Hence, the levels break down as follows:
  1. Cold swap: both downtime and human intervention required
  2. Hot swap: human intervention is required but no downtime
  3. Hot spare: no downtime and no human intervention


The price of the system increases with the level of hotness, while the maintenance cost decreases. In certain applications, where the maintenance cost is high, it is cheaper to put in enough "hot spare" parts for the entire useful life of the system than to use a hot-swap and human intervention. While an unmanned spacecraft is the extreme example of the high-cost maintenance, we will probably see maintenance-free end-user systems soon after hard drive production finally comes back to proper rates.

Wednesday, 15 February 2012

Photos

When making a photo of a PCB, make sure there is plenty of ambient light and the flash is off. Otherwise, the most significant feature of the photo would be that damned reflection of the flash on PCB.

Monday, 6 February 2012

Bug reports

Do we follow up the suspected bug reports?

Sometimes.

The most likely case where we'd skip the bug is when, from the customer's description, we can tell that the damage is too severe. This means whatever the problem is, it is likely not fixable anyway.

Sunday, 5 February 2012

ReFS

At the moment, most of the available information about ReFS reads like "our new filesystem will be better than our old one, and it will contain this-and-that cool features".

We still don't know whether we are going to get a more recoverable or a less recoverable filesystem. Most if not all of the improvements in ReFS (compared to NTFS) are aimed at
  • better resistance to imperfect hardware (which does not follow fail-stop model), and
  • better handling of large files and/or files in large numbers.


The recoverability depends significantly on implementation details. For example, B-Tree is fast, but often happens to be a highly vulnerable failure point, difficult to rebuild if lost.

Also, steps to improve fault prevention and ensure continuous operation often degrade the recovery capability. Once it comes to recovery, you are often better off with a filesystem that crashes earlier and more easily. To put it the other way round, filesystems that crash quickly tend to be easily recoverable. A typical example is the ext journal: for the journaling to work, unused inodes must be zeroed immediately, which adversely affects your undelete capability.

Wednesday, 25 January 2012

S.M.A.R.T.

Looks like our S.M.A.R.T. tool at www.cropel.com might be ready for slow-ish initial release.

Friday, 13 January 2012

Simultaneous disk failure

Modern hard drives have an annual failure rate of about 5%. With this sort of failure probability, you cannot realistically talk about simultaneous hard drive failures, even if the window of simultaneity spans several hours. A simultaneous failure of two drives is just not probable enough. The catch is, this statistic only applies to independent drives. Common-mode failures, when several drives fail because of a single cause, are not accounted for.
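The back-of-envelope arithmetic is worth spelling out. Assuming independence (the whole point is that real failures often aren't independent), the chance of two specific drives both failing within the same day is tiny:

```python
# With a 5% annual failure rate, how likely is it that two specific,
# independent drives both fail within the same 24-hour window?
AFR = 0.05
p_day = AFR / 365            # ~1.4e-4 per drive per day
p_both_same_day = p_day ** 2 # independence assumed; ~1.9e-8
# That is roughly one chance in fifty million - negligible.
# Common-mode causes (shared PSU, firmware bug, operator error)
# are what actually take out two drives at once.
```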

As a consequence, in fault-tolerant RAID applications more effort should be put into eliminating possible single points of failure. Realistically, this means the most effective thing you can do is eliminate the operator, eh.

Monday, 2 January 2012

Why don't we use RAID 8?

All RAID types are built using three elements:
  1. Striping data blocks across several disks.
  2. Writing redundant data (usually the result of calculating a particular function).
  3. Writing multiple copies of data (usually two copies are written).
Different combinations of these elements produce data placement patterns (RAID levels) which provide the desired balance between speed, reliability, and price.

If we only use striping, we get a RAID 0. Using a single function to calculate redundant data gets us a RAID 5. If we add one more set of redundant data to a RAID 5, it becomes a RAID 6. We get a RAID 1 from identical copies, and if we combine striping with exact copies, we get a RAID 10. These data placement patterns are the most widespread and well-known, forming the so-called RAID triangle.

Besides these RAID types, we can also see exotic combinations.

For example, if in RAID 10 we use a number of disks which is not an integral multiple of the number of data copies, we get a significantly less symmetrical array called RAID 1E. Also, in RAID 1 we can use three copies of data instead of two: fault tolerance increases, and the price of the device increases as well. In the same way, if you add a third set of redundant data to a RAID 6, fault tolerance increases even more, along with the price, while performance decreases.

Exotic RAID types are rarely used because they have too much of an imbalance between price, speed, and fault tolerance. RAID 6 is still an acceptable tradeoff, but RAID 8 is not.