Galvanization of a corpse: how to revive a broken HDD for storing something unnecessary

A broken external hard drive recently came into my hands... Well, "came into my hands" — I bought it on the cheap myself.

The drive looked like any other: a metal box with a USB-to-SATA controller inside and a 1 TB Samsung laptop drive. According to the seller's description, the USB controller was buggy: at first, he claimed, it writes and reads fine, then gradually starts to slow down and eventually drops off altogether. That is quite common for external drives without extra power, so of course I believed him. And anyway, it was cheap.

So I gleefully take the box apart, pull the drive out, and plug it into an adapter proven by time and hardship. The disk powered up, spun up, was detected, and even mounted under Linux. On it were an NTFS file system and a dozen films. No, not about spicy adventures, quite the opposite: all sorts of "Leviathans". It would seem: hooray! But no, that was just the beginning.

But SMART showed a disappointing picture: the Raw Read Error Rate attribute had dropped to 1 (with a threshold of 51), which can mean only one thing: the disk has something very, very wrong with reading from the platters. The rest of the attributes were within limits, but that made it no easier.
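Those values come from SMART; on Linux they can be read with smartmontools (the device name below is an assumption):

          # smartctl -A /dev/sdc

An attribute is considered failed when its normalized value drops to or below its threshold, which is exactly what happened to Raw Read Error Rate here (1 against 51).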

An attempt to format the disk led to the expected result: a write error. One could, of course, build a list of bad sectors with the standard badblocks utility and then feed that list in when creating the file system. But I rejected this idea as impractical: it would take too long to wait for the result. And, as it turned out later, such a compiled list would be useless anyway: in the damaged areas the sectors are unstable, so what reads fine once may produce a read error the next time.
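For reference, that rejected approach would have looked roughly like this; the -b block size must match between the two commands so the block numbers line up (device name and block size here are assumptions):

          # badblocks -sv -b 4096 -o bad-blocks.txt /dev/sdc
          # mke2fs -b 4096 -l bad-blocks.txt /dev/sdc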

Having played around with all sorts of utilities, I found out the following:

  1. There are many bad sectors, but they are not scattered randomly across the disk; they sit in dense groups. Between these groups there are fairly large areas where reading and writing work without any problems.
  2. An attempt to cure a bad sector by overwriting it (so that the controller remaps it to a spare one) does not work. Sometimes the sector reads afterwards, sometimes not. Moreover, sometimes an attempt to write to a bad sector makes the disk "drop off" from the system for a few seconds (apparently the drive's own controller resets). Reads cause no resets, but an attempt to read a bad sector takes half a second or even more (a probe sketch follows this list).
  3. The "broken areas" are fairly stable. The very first of them begins around the 45th gigabyte from the start of the disk and stretches quite far (exactly how far, I could not determine offhand). By trial and error I also managed to find the beginning of a second such area somewhere in the middle of the disk.

The thought immediately arose: what if we split the disk into two or three partitions so that the "broken fields" fall between them? Then the disk could be used to store something not very valuable ("watch once" films, for example). Of course, for this you first need to find out the boundaries of the "good" and "broken" areas.

No sooner said than done. A utility was quickly knocked together that reads from the disk until it hits a bad sector. After that, the utility marks an entire area of a given length as failed (in its own table, of course). The marked area is then skipped (why check it - it is already marked as bad) and the utility continues reading sectors beyond it. After a couple of experiments, I settled on marking failed areas of 10 megabytes: large enough for the utility to work quickly, yet small enough that the loss of disk space does not become too large.
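The real utility lives in the repository linked at the end of this article; the bash sketch below only illustrates the same skip-ahead idea (device and output file names are assumptions). A real implementation would read in large chunks and narrow down on a failure; this sketch shows just the marking-and-skipping logic.

          #!/bin/bash
          # Sketch: scan a device sector by sector; on a read error, record the
          # spot and jump a whole 10 MB window ahead instead of re-probing it.
          DEV=/dev/sdc
          WINDOW=$((10 * 1024 * 1024 / 512))   # 10 MB window, in 512-byte sectors
          TOTAL=$(blockdev --getsz "$DEV")     # device size in 512-byte sectors
          s=0
          while [ "$s" -lt "$TOTAL" ]; do
              if dd if="$DEV" of=/dev/null bs=512 skip="$s" count=1 \
                    iflag=direct status=none 2>/dev/null; then
                  s=$((s + 1))                 # good sector: step to the next one
              else
                  echo "$s" >> broken-areas.txt   # record where a broken area starts
                  s=$((s + WINDOW))            # skip the whole window without re-probing
              fi
          done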

For clarity, the result of the run was recorded as a picture: white dots are good sectors, red are bad, gray is the discarded area around the bad sectors. After almost a day of work, the list of broken areas and a clear picture of their location were ready.

Here it is, that picture:


Interesting, isn't it? There turned out to be far more broken areas than I had imagined, but the undamaged areas still clearly account for more than half of the disk space. It seemed a shame to lose so much space, but I had no desire to juggle a dozen small partitions either.

But we have long been living in the 21st century, the age of new technologies and disk arrays! Indeed, you can glue these small partitions into a single array, create a file system on it, and know no grief.

Based on the map of broken areas, a mega-command was put together to create the partitions. I used GPT so as not to worry about which partitions should be primary and which extended:

          # parted -s -a none /dev/sdc unit s mkpart 1 20480 86466560 mkpart 2 102686720 134410240 mkpart 3 151347200 218193920 mkpart 4 235274240 285306880 mkpart 5 302489600 401612800 mkpart 6 418078720 449617920 mkpart 7 466206720 499712000 mkpart 8 516157440 548966400 mkpart 9 565186560 671539200 mkpart 10 687595520 824811520 mkpart 11 840089600 900280320 mkpart 12 915640320 976035840 mkpart 13 991354880 1078026240 mkpart 14 1092689920 1190871040 mkpart 15 1205288960 1353093120 mkpart 16 1366794240 1419919360 mkpart 17 1433600000 1485148160 mkpart 18 1497927680 1585192960 mkpart 19 1597624320 1620684800 mkpart 20 1632808960 1757368320 mkpart 21 1768263680 1790054400 mkpart 22 1800908800 1862307840 mkpart 23 1872199680 1927905280 mkpart 24 1937203200 1953504688

The command ran for quite a while (several minutes). The result: 24 (!) partitions, each of a different size.

Partitions

          # parted /dev/sdc print
          Model: SAMSUNG HM100UI (scsi)
          Disk /dev/sdc: 1000GB
          Sector size (logical/physical): 512B/512B
          Partition Table: gpt

          Number  Start   End     Size    File system  Name  Flags
           1      10.5MB  44.3GB  44.3GB               1
           2      52.6GB  68.8GB  16.2GB               2
           3      77.5GB  112GB   34.2GB               3
           4      120GB   146GB   25.6GB               4
           5      155GB   206GB   50.8GB               5
           6      214GB   230GB   16.1GB               6
           7      239GB   256GB   17.2GB               7
           8      264GB   281GB   16.8GB               8
           9      289GB   344GB   54.5GB               9
          10      352GB   422GB   70.3GB               10
          11      430GB   461GB   30.8GB               11
          12      469GB   500GB   30.9GB               12
          13      508GB   552GB   44.4GB               13
          14      559GB   610GB   50.3GB               14
          15      617GB   693GB   75.7GB               15
          16      700GB   727GB   27.2GB               16
          17      734GB   760GB   26.4GB               17
          18      767GB   812GB   44.7GB               18
          19      818GB   830GB   11.8GB               19
          20      836GB   900GB   63.8GB               20
          21      905GB   917GB   11.2GB               21
          22      922GB   954GB   31.4GB               22
          23      959GB   987GB   28.5GB               23
          24      992GB   1000GB  8346MB               24

The next step is to assemble a single disk out of them. The perfectionist in me suggested that the most correct thing would be to build some kind of fault-tolerant RAID6 array. The practitioner objected that there would be nothing to replace a failed partition with anyway, so plain JBOD would do just fine - why waste space for nothing? The practitioner won:

          # mdadm --create /dev/md0 --chunk=16 --level=linear --raid-devices=24 /dev/sdc1 /dev/sdc2 /dev/sdc3 /dev/sdc4 /dev/sdc5 /dev/sdc6 /dev/sdc7 /dev/sdc8 /dev/sdc9 /dev/sdc10 /dev/sdc11 /dev/sdc12 /dev/sdc13 /dev/sdc14 /dev/sdc15 /dev/sdc16 /dev/sdc17 /dev/sdc18 /dev/sdc19 /dev/sdc20 /dev/sdc21 /dev/sdc22 /dev/sdc23 /dev/sdc24
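At this point it is worth checking that the linear array actually assembled and concatenated all 24 members in order; both commands below are stock mdadm/procfs, nothing specific to this setup:

          # cat /proc/mdstat
          # mdadm --detail /dev/md0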

That's it. All that remains is to create a file system and mount the revived disk:

          # mkfs.ext2 -m 0 /dev/md0
          # mount /dev/md0 /mnt/ext
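To have the array come back by itself after a reboot, it can be recorded explicitly (depending on the distribution, udev may also assemble it automatically; the config path below is the Debian-style default, an assumption):

          # mdadm --detail --scan >> /etc/mdadm/mdadm.conf
          # echo '/dev/md0 /mnt/ext ext2 defaults 0 0' >> /etc/fstab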

The disk turned out to be quite capacious: 763 gigabytes (i.e., 83% of the disk's capacity could be used). In other words, only 17% of the original terabyte went to waste:

          $ df -h
          Filesystem      Size  Used Avail Use% Mounted on
          rootfs          9.2G  5.6G  3.2G  64% /
          ...
          /dev/md0        763G  101G  662G  14% /mnt/ext

A trial set of throwaway films was uploaded to the disk without errors. True, the write speed was low and floated between 6 and 25 megabytes per second. Reading was stable at 25-30 MB/s, i.e. limited by the adapter connected over USB 2.0.

Of course, such a perversion cannot be used to store anything important, but it can be useful as entertainment. So when the question is whether to take a disk apart for the magnets or to torment it first, my answer is: "torment it, of course!"

Finally, a link to the repository with the utility: github.com/dishather/showbadblocks
