DOS ain't dead

CLINT

Tennessee,
09.06.2022, 11:37
 

Big Disks, Large Registers (Users)

Can FreeDOS handle 16-terabyte disks?

Can I write 64-bit register instructions to run under FreeDOS?

I want to write some stuff that will allow me to access specific disk sectors.

In my case I want to verify that the sectors actually exist.

e.g.,

- Read "N" bytes of data at disk sector XYZ
- Write a different string of "N" bytes back onto sector XYZ
- Read that same data from that same sector back into a different buffer
- Compare the two
- If good, put the original data back into its place
- (Else, If Bad, worry about that later)
- Repeat this process for higher and higher disk sectors

RayeR

CZ,
09.06.2022, 19:08

@ CLINT
 

Big Disks, Large Registers

On what filesystem? FreeDOS natively supports only FAT32 so 2TB limit. Of course you can write your own low-level code to R/W entire disk...

---
DOS gives me freedom to unlimited HW access.

Oso2k

09.06.2022, 21:02

@ RayeR
 

Big Disks, Large Registers

> On what filesystem? FreeDOS natively supports only FAT32 so 2TB limit. Of
> course you can write your own low-level code to R/W entire disk...

NTFS can handle 16EB disks [0]. Newer BIOSes will have support for LBA48 [1] and 144PB drives. You can access drives directly via BIOS [2] or maybe try NTFS4DOS [3]?

marcov

09.06.2022, 22:19

@ RayeR
 

Big Disks, Large Registers

> On what filesystem? FreeDOS natively supports only FAT32 so 2TB limit. Of
> course you can write your own low-level code to R/W entire disk...

Or just use eight 2TB partitions? Nobody said it had to be a single partition/volume :-) , just a 16TB "disk"

mceric

Germany,
09.06.2022, 22:35

@ CLINT
 

Big Disks, Large Registers

> Can FreeDos handle 16 TeraByte disks ?

Depends on the definition of "handle". As mentioned earlier in this thread, the BIOS will support this if it has LBA48. You do not need 64-bit registers, because you pass the BIOS a data structure in RAM which contains a 64-bit sector number, of which you can use the low 48 bits if the BIOS supports LBA48. See RBIL about INT 0x13 functions 0x42 and 0x43. For 16 terabytes, even with 512 bytes per sector, you only need 35 bits for the sector numbers.

> Can I write 64 Bit Register Instructions to run under FreeDos ?

You can use FPU registers, which can be 64, 80, 128 bits... I have lost track of all those weird extensions for parallel computation on the floating-point units of modern processors. You probably can NOT use 64-bit main-CPU integer registers in plain DOS. For that, you need to run in a task which is in 64-bit long mode. But some bold DOS programmers are working on DOS extenders for long mode, if you want to give it a try. This also lets you use more than 4 GB of RAM without having to switch between different areas and without having to copy stuff around into buffers in the first 4 GB.

> I want to write some stuff that will allow me to access specific disk
> sectors.

You can do that easily with the help of the BIOS.

> In my case I want to verify that the sectors actually exist.

Sounds easy enough. Also read about INT 0x13 function 0x48 and similar, to get info about how large your disk claims to be.

Note that FreeDOS does not yet understand GUID partition tables, nor sectors larger than 512 bytes, so you cannot partition disks above 2 TB in a way which FreeDOS would understand. But for example USB disk drivers can handle the partitioning themselves to free DOS from having to understand it ;-)

In your specific example, you do not seem to need any understanding of partition tables anyway, as you plan to only access raw sectors.

Note that writing 16 TB of data to a disk just to see whether you can write it will take a very long time. Also, if anything goes wrong, you will have invented a disk wiper instead of a disk tester.

---
FreeDOS / DOSEMU2 / ...

tom

Germany (West),
09.06.2022, 23:20

@ mceric
 

Big Disks, Large Registers

> Note that writing 16 TB of data to a disk just to see whether you can write
> it will take very long.

Note that clever people worked out algorithms like

write single sector at ( 1 2 4 8 16 TB)
...


or even ask the disk drive about maximum capacity

which are a bit more efficient than your dumb approach

mceric

Germany,
10.06.2022, 02:00

@ tom
 

Big Disks, Large Registers

> > Note that writing 16 TB of data to a disk just to see whether you can
> write
> > it will take very long.
>
> Note that clever people worked out algorithms like
>
> write single sector at ( 1 2 4 8 16 TB)
> ...
>
>
> or even ask the disk drive about maximum capacity
>
> are a bit more efficient than your dump approach

We both know that INT 13h function 0x48 already answers the question of how large a disk is, without probing it. I had guessed that the original poster wanted to test-write the disk to see whether writes are possible everywhere, for example meaning that there is no damage in a specific area and the disk info has not lied about the size.

However, doing a "disk surface" test that way for the whole disk would be very slow, which is what I was referring to.

Also, if you just test-write sectors to see whether the disk actually is as large as it claims, the disk could simply show you the same few megabytes over and over by ignoring the higher sector-number bits, and you would not notice. So the original poster would need a more clever way to test the disk if the goal is to check whether the disk info has lied about the disk size ;-)

---
FreeDOS / DOSEMU2 / ...

CLINT

Tennessee,
11.06.2022, 19:06

@ tom
 

Big Disks, Large Registers

Regarding...
>
>
> or even ask the disk drive about maximum capacity
>
> are a bit more efficient than your dump approach

No

That isn't an answer; that's the source of the problem.

They have written dishonest code into their firmware to lie to the host.

Windows reports 1,99''''(whatever)''''' bytes

I spent days copying 760 gigs of data onto the 2TB drive; it reported that all went well.

I copied that file back onto another 4TB drive, and compared the two.

At a certain address, which I haven't identified, a perfect compare changes immediately and permanently to failure, with the remainder of the file holding 0xFF bytes

Liar's poker bet here appears to be something between 8GB and 32GB

But again, that's just me guessing. What I know for certain is that it isn't 2TB

glennmcc

North Jackson, Ohio (USA),
11.06.2022, 19:53
(edited by glennmcc, 11.06.2022, 20:29)

@ CLINT
 

Big Disks, Large Registers

> Regarding...
> >
> >
> > or even ask the disk drive about maximum capacity
> >
> > are a bit more efficient than your dump approach
>
> No
>
> That isn't an answer, that's the source of the problem
>
> They have written dishonest code into their firmware to lie to the host.
>
> Windows reports 1,99''''(whatever)''''' bytes
>
> I spent days copying 760 Gigs of data onto the 2TB drive; he reported all
> things went well.
>
> I copied that file back onto another 4TB drive, and compared the two.
>
> At a certain address of which I am unaware, a perfect compare changes
> immediately and permanently to fails against the remainder of the file
> holding 0xFF bytes
>
> Liar's poker bet here appears to be something between 8GB and 32GB
>
> But again, that's just me guessing. What I know for certain is that it
> isn't 2TB

I ran into a similar situation with a USB flashdrive which was supposedly 2TB

After copying over 500GB onto that drive
(several thousand files ranging from a few KB to several GB in size),
many of the files were corrupt even though no errors were reported during
the copying process.

One in particular that I recall was a 6.6GB MP4 video that was so completely
corrupted that it would not play at all in any of several player programs.

So, I returned that USB flashdrive to the seller and got a full refund.

---
--
http://glennmcc.org/

tom

Germany (West),
12.06.2022, 13:56

@ CLINT
 

Big Disks, Large Registers

> Regarding...
> >
> >
> > or even ask the disk drive about maximum capacity
> >
> > are a bit more efficient than your dump approach
>
> No
>
> That isn't an answer, that's the source of the problem
>
> They have written dishonest code into their firmware to lie to the host.
>
> Windows reports 1,99''''(whatever)''''' bytes
>
> I spent days copying 760 Gigs of data onto the 2TB drive; he reported all
> things went well.
>
> I copied that file back onto another 4TB drive, and compared the two.
>
> At a certain address of which I am unaware, a perfect compare changes
> immediately and permanently to fails against the remainder of the file
> holding 0xFF bytes
>
> Liar's poker bet here appears to be something between 8GB and 32GB
>
> But again, that's just me guessing. What I know for certain is that it
> isn't 2TB

It would have been helpful if you had given this context when you started the discussion.

In this situation, you need a test that covers the entire disk, something like
https://www.heise.de/download/product/h2testw-50539

I don't see how this is related to anything 64-bit or not.

glennmcc

North Jackson, Ohio (USA),
12.06.2022, 19:55
(edited by glennmcc, 12.06.2022, 20:21)

@ tom
 

Big Disks, Large Registers

> > Regarding...
> > >
> > >
> > > or even ask the disk drive about maximum capacity
> > >
> > > are a bit more efficient than your dump approach
> >
> > No
> >
> > That isn't an answer, that's the source of the problem
> >
> > They have written dishonest code into their firmware to lie to the host.
> >
> > Windows reports 1,99''''(whatever)''''' bytes
> >
> > I spent days copying 760 Gigs of data onto the 2TB drive; he reported
> all
> > things went well.
> >
> > I copied that file back onto another 4TB drive, and compared the two.
> >
> > At a certain address of which I am unaware, a perfect compare changes
> > immediately and permanently to fails against the remainder of the file
> > holding 0xFF bytes
> >
> > Liar's poker bet here appears to be something between 8GB and 32GB
> >
> > But again, that's just me guessing. What I know for certain is that it
> > isn't 2TB
>
> it would have been helpful if you had given this context when you started
> the discussion.
>
> in this situation, you need a test that tests the entire disk, something
> like
> https://www.heise.de/download/product/h2testw-50539
> .
>
> I don't see how this is related to anything 64 bit or not.

I see that it's only available for Windows.

Do you know of a similar testing program, preferably for DOS?

Here are some alternatives for Linux.

https://alternativeto.net/software/h2testw/?platform=linux

root@glennmcc-i7:~/build/f3# f3probe --destructive --time-ops /dev/sdc
F3 probe 8.0
Copyright (C) 2010 Digirati Internet LTDA.
This is free software; see the source for copying conditions.

WARNING: Probing normally takes from a few seconds to 15 minutes, but
it can take longer. Please be patient.

Good news: The device `/dev/sdc' is the real thing

Device geometry:
*Usable* size: 3.68 GB (7725056 blocks)
Announced size: 3.68 GB (7725056 blocks)
Module: 4.00 GB (2^32 Bytes)
Approximate cache size: 0.00 Byte (0 blocks), need-reset=no
Physical block size: 512.00 Byte (2^9 Bytes)

Probe time: 3'06"
Operation: total time / count = avg time
Read: 810.2ms / 4812 = 168us
Write: 3'03" / 3530817 = 52us

---
--
http://glennmcc.org/

Laaca

Czech republic,
12.06.2022, 22:05

@ glennmcc
 

Big Disks, Large Registers

> Do you know of a similar testing program preferably for DOS ?
>

Maybe HDAT2

---
DOS-u-akbar!

Zyzzle

14.06.2022, 18:14

@ glennmcc
 

Big Disks, Large Registers

> Do you know of a similar testing program preferably for DOS ?

Spinrite *used* to work well, but it has been broken for a decade and does not support drives > 750 GB, due to buggy code. No new version has been released, despite promises for 15 years!

HDAT2 seems to have only a limited scan ability; it doesn't have a sector editor / tester.

tom

Germany (West),
15.06.2022, 09:36

@ Zyzzle
 

Big Disks, Large Registers

> > Do you know of a similar testing program preferably for DOS ?
>
> Spinrite *used* to work well,

Spinrite was a very good program indeed. However it would have never detected this - criminal - disk drive.

Spinrite is read-only and tries to repair 'unreadable' sectors by reading them many times and using some semi-documented tricks.

A disk drive that simply ignores the high bits of the sector address but returns 'OK' for all sectors wouldn't flag any error in Spinrite at all.

glennmcc

North Jackson, Ohio (USA),
15.06.2022, 18:18

@ tom
 

Big Disks, Large Registers

> > > Do you know of a similar testing program preferably for DOS ?
> >
> > Spinrite *used* to work well,
>
> Spinrite was a very good program indeed. However it would have never
> detected this - criminal - disk drive.
>
> Spinrite is read-only and tries to repair 'unreadable' sectors by reading
> them many times and using some semi-documented tricks.
>
> a disk drive that simply ignores the high bits of the sector address, but
> returns 'OK' for all sectors wouldn't flag any error in Spinrite at all.

Speaking of 'criminal'...

Back in April, I bought a 2TB external USB hard drive which is defective.

When I copied 500GB of files onto it, most of them were corrupted.

Contacted Amazon and am getting a refund.

So, I decided to pop open the case and see if there was a bad connection or some such.

Well, it turns out that it's NOT a hard drive after all, but rather a
USB flash drive put into a hard drive case.

Googled the part number printed on the chip and it's only 32GB but
the ROM has been 'masqueraded' with false data to show that it's 2TB

So, I deleted the 2TB partition and made a new one of only 32GB, and it seems to be working fine with its REAL size instead of the FAKE size.

http://glennmcc.dynu.com/my-stuff/images/Fake_2TB-REAL_32GB.jpg

---
--
http://glennmcc.org/

Zyzzle

16.06.2022, 15:32

@ glennmcc
 

Big Disks, Large Registers

> Googled the part number printed on the chip and it's only 32GB but
> the ROM has been 'masqueraded' with false data to show that it's 2TB
>
> So, deleted the 2TB partition and made a new one of only 32GB and it seems
> to be working fine with its REAL size instead of the FAKE size.
>
> http://glennmcc.dynu.com/my-stuff/images/Fake_2TB-REAL_32GB.jpg

Fake flash drives are an infamous, legendary, and nearly universal scam. They're ridiculously easy to "create" by unscrupulous third-party scammers: just destroy the original file allocation tables and rewrite a fake FAT32 partition of 512GB, 1TB, or whatever ridiculous fake size onto your cheap, slow 16GB or 32GB USB flash media. And presto! You've got some sucker on Amazon willing to buy this phony product, further enabling the ridiculous charade. I have heard that Amazon and eBay et al. have attempted to crack down on this awful swindle, but it's so rampant and easy to do that it will probably never be eradicated.

How many people actually verify their newly-purchased flash media in full? Probably close to nil. It takes a long time to verify every sector, and some of these drives are criminally slow (single-digit MBs-per-second read and write speeds), especially the fake ones. They could take hours to verify, and who the heck wants to do that? Hence the rampant, easy criminal activity of peddling them to suckers...

glennmcc

North Jackson, Ohio (USA),
16.06.2022, 22:45

@ Zyzzle
 

Big Disks, Large Registers

> > Googled the part number printed on the chip and it's only 32GB but
> > the ROM has been 'masqueraded' with false data to show that it's 2TB
> >
> > So, deleted the 2TB partition and made a new one of only 32GB and it
> seems
> > to be working fine with its REAL size instead of the FAKE size.
> >
> > http://glennmcc.dynu.com/my-stuff/images/Fake_2TB-REAL_32GB.jpg
>
> Fake flash drives are infamous, legendary, and nearly universal scam.
> They're ridiculously easy to "create" by unscrupulous third-party scammers;
> just destroy the original File allocation tables, rewrite a fake FAT32
> partition of 512GB, 1TB, or whichever ridiculous fake size to your cheap,
> slow 16GB or 32GB USB flash media. And presto! you've got some sucker on
> Amazon willing to buy this phony product, to further enable this ridiculous
> charade. I have heard that Amazon and Ebay, et al have attempted to crack
> down on this awful swindle, but it's so rampant and easy to do, that it
> will probably never be eradicated.
>
> How many people actually verify in toto their newly-purchased flash media?
> Probably close to nil. It takes a long time to verify every sector, and
> some of these drives are criminally slow (single-digit, low MBs per second
> read and write speeds), especially the fake ones. They could take hours to
> verify, and who the heck wants to do that? Hence, the rampant, easy
> criminal activity of pandering them to suckers...


In this situation, it's even worse 'cus they put it in an HDD case
so they could pass it off as a hard drive instead of a flash drive. :(

---
--
http://glennmcc.org/

CLINT

Tennessee,
11.06.2022, 19:15

@ tom
 

Big Disks, Large Registers

> write single sector at ( 1 2 4 8 16 TB)


Not precisely my thoughts, but mighty close

tom

Germany (West),
09.06.2022, 23:12

@ CLINT
 

Big Disks, Large Registers

> Can FreeDos handle 16 TeraByte disks ?
No. FreeDOS (as of now) is limited to the 2 TB MBR partitioning scheme.

although
> Can I write 64 Bit Register Instructions to run under FreeDos ?

Absolutely yes. FreeDOS doesn't interfere with that. AT ALL.

It only communicates with disks via INT 13h, and doesn't care at all about register-level access.

glennmcc

North Jackson, Ohio (USA),
10.06.2022, 01:14

@ CLINT
 

Big Disks, Large Registers

> Can FreeDos handle 16 TeraByte disks ?
>
> Can I write 64 Bit Register Instructions to run under FreeDos ?
>
> I want to write some stuff that will allow me to access specific disk
> sectors.
>
> In my case I want to verify that the sectors actually exist.
>
> e.g.,
>
> - Read "N" bytes of data at disk sector XYZ
> - Write a different string of "N" bytes back onto sector XYZ
> - Read that same data from that same sector back into a different buffer
> - Compare the two
> - If good, put the original data back into its place
> - (Else, If Bad, worry about that later)
> - Repeat this process for higher and higher disk sectors

IMO, doing all of that with a DOS program while booted into FreeDOS
would be pretty much the same as this little 'experiment'
of mine from a few months ago.

Namely.... not for any actual useful purpose
but rather simply to prove it could be done.

https://www.linuxquestions.org/questions/slackware-14/just-to-prove-it-*could*-be-done-4175709646/

;-)

---
--
http://glennmcc.org/

tkchia

10.06.2022, 14:40

@ CLINT
 

Big Disks, Large Registers

Hello CLINT,

> Can FreeDos handle 16 TeraByte disks ?
>
> Can I write 64 Bit Register Instructions to run under FreeDos ?

:confused: As mentioned, you can work with 64-bit sector numbers without having to use 64-bit CPU registers (or even a 64-bit CPU).

You can quite easily do basic 64-bit arithmetic with 32-bit — or even 16-bit — CPU registers.

Thank you!

---
https://gitlab.com/tkchia · https://codeberg.org/tkchia · 😴 "MOV AX,0D500H+CMOS_REG_D+NMI"

tkchia

10.06.2022, 15:24

@ CLINT
 

Big Disks, Large Registers

Hello CLINT,

Also, it is probably a bad idea to write stuff to a hard disk just to "probe" for things. If the power on your PC goes out in the middle of a write, then you end up with a corrupted hard disk with no easy way to undo the corruption.

Thank you!

---
https://gitlab.com/tkchia · https://codeberg.org/tkchia · 😴 "MOV AX,0D500H+CMOS_REG_D+NMI"

bretjohn

Rio Rancho, NM,
10.06.2022, 20:47

@ tkchia
 

Big Disks, Large Registers

> Also, it is probably a bad idea to write stuff to a hard disk just to
> "probe" for things. If the power on your PC goes out in the middle of a
> write, then you end up with a corrupted hard disk with no easy way to undo
> the corruption.

When using INT 13h functions for reading and writing disks, you usually only provide the number of blocks (sectors) in the INT 13h call. The size of each block/sector is CRITICAL. Most disks, at least the smaller ones, use 512 bytes per sector, but the bigger the disk, the more likely the size is something like 2048 or 4096 instead. You'll end up with all kinds of problems and corruption if the two sides don't agree on how big the blocks/sectors are. You shouldn't just always assume 512.

Disks can also have issues with caching/buffering that can screw up some things in certain circumstances.

There are also similar issues with checking/testing memory (whether RAM or MMIO) and I/O. I won't go into the details here since it's not really relevant to the original question. For now, let's just say "it's messy" and there are lots of gotcha's that you need to worry about. You usually don't need to worry much about power interruptions when messing with memory or I/O like you do with disks, though.

marcov

11.06.2022, 12:27

@ bretjohn
 

Big Disks, Large Registers

> something like 2048 or 4096 instead of 512.

If the drive is labeled as "advanced format", then this is so:

https://en.wikipedia.org/wiki/Advanced_Format
