DOS ain't dead

lucho

30.07.2007, 17:38
 

Ultimate DOS kernel file copy speed test (Miscellaneous)

The eternal question: Which DOS to choose? Speed is also an important criterion! Today I did the following simple test: I copied a 68 MB file from one FAT32 drive with 16 KB clusters to another FAT32 drive with 4 KB clusters. The machine is a Celeron PIII-1.2GHz, ATA-66 HDD, 512 MB of RAM, with Jack's drivers loaded. The times in seconds (rounded to an integer for clarity) are as follows:

MS-DOS: 17
PC-DOS: 17
EDR-DOS: 33
PTS-DOS: 47
FreeDOS: 80
ROM-DOS: test failed resulting in hang-up and lost clusters.

(As to the alleged future in FreeDOS: No, it's past. All DOS is past. Let's face it. The only past that is future too is Unix. Eternal like the Egypt pyramids... Still alive and evolving in Mac OS X, Solaris, the BSD family, Linux and so on.)

rr

Berlin, Germany,
31.07.2007, 09:20

@ lucho

Ultimate DOS kernel file copy speed test

> clusters. The machine is a Celeron PIII-1.2GHz, ATA-66 HDD, 512 MB of RAM,
> with Jack's drivers loaded. The times in seconds (rounded to an integer

What versions? What options?

> MS-DOS: 17

MS-DOS 7.x? (for FAT32)

> PC-DOS: 17

No surprise here. ;-)

> EDR-DOS: 33

Still room for improvements.

> FreeDOS: 80

Ouch! :-(

> ROM-DOS: test failed resulting in hang-up and lost clusters.

Does ROM-DOS support FAT32 natively?

> (As to the alleged future in FreeDOS: No, it's past. All DOS is past.
> Let's face it. The only past that is future too is Unix. Eternal like the
> Egypt pyramids... Still alive and evolving in Mac OS X, Solaris, the BSD
> family, Linux and so on.)

OK, then "Good bye!" to you. I don't want you to live in the past.

---
Forum admin

Rugxulo

Usono,
31.07.2007, 10:12
(edited by Rugxulo, 31.07.2007, 12:20)

@ lucho

Ultimate DOS kernel file copy speed test

> The eternal question: Which DOS to choose? Speed is also an important
> criterion! Today I did the following simple test: I copied a 68 MB file from
> one FAT32 drive with 16 KB clusters to another FAT32 drive with 4 KB
> clusters.

Copying a huge file isn't exactly the most speed-sensitive task around. (Granted, I do greatly appreciate such informal benchmarks.) BTW, I read somewhere that someone said (MS-DOS?) XCOPY is faster than normal COPY, so maybe you should try that. :)

P.S. You think FreeDOS' COPY is slow now? You should try "beta8" (esp. copying from floppy) and see how much it could really chug!!! :-P

> (As to the alleged future in FreeDOS: No, it's past. All DOS is past.
> Let's face it. The only past that is future too is Unix. Eternal like the
> Egypt pyramids... Still alive and evolving in Mac OS X, Solaris, the BSD
> family, Linux and so on.)

For home desktop use, yes, most people want all the modern Internet conveniences, GUIs, hardware support, etc. DOS probably will not ever be at the forefront of that. However, DOS is far from dead: virtualization (QEMU, Bochs), embedded systems (digital cameras, .mp3 players), recycled legacy PCs (dedicated gaming PCs, point-of-sale machines), a bootstrap OS for other OSes (OctaOS or DexOS or DOS-Minix) or very low-level stuff (e.g. upgrading your BIOS), emulation (DOSBox, DOSEMU), or maybe it will spawn yet another OS (e.g. FreeDOS-32), etc. :-P

Yes, Linux is good and popular. Linux Linux Linux Linux Linux Linux Linux Linux Linux ... (as if it needs the advertising, sheesh!)

---
Know your limits.h

lucho

31.07.2007, 17:14

@ rr

Ultimate DOS kernel file copy speed test

> Copying a huge file isn't exactly the most speed-sensitive task around.

For FAT processing (searching and allocation of clusters), it is. And actually that's the most important job of the DOS kernel here, not copying data itself.

> (Granted, I do greatly appreciate such informal benchmarks.) BTW, I read
> somewhere that someone said (MS-DOS?) XCOPY is faster than normal COPY,
> so maybe you should try that. :)

I used 4DOS COPY /B, but as I've already found, XCOPY or almost any other application doesn't make a significant difference here.

> What versions? What options?

Latest versions, same configuration for all kernels. There's no need to post my CONFIGs, is there?

> MS-DOS 7.x? (for FAT32)

Yes, if you re-read my previous post, you'll see that both drives are FAT32.

> Does ROM-DOS support FAT32 natively?

Yes, ROM-DOS 7.10 which I tested does. It also can support LFNs natively.

> OK, then "Good bye!" to you.

You drive me, an innocent and even unregistered user of your forum, away?! OK!

> I don't want you to live in the past.

Thank you for your care, but perhaps I myself prefer living in the past?!

OK, before I go away as you want, I'll confess: DOS, Mac OS X, Solaris, *BSD and Linux all have something very important in common: they have nothing to do with MS-Windows! And that's why I like them all, despite some disappointments.

rr

Berlin, Germany,
31.07.2007, 17:34

@ lucho

Ultimate DOS kernel file copy speed test

> > Copying a huge file isn't exactly the most speed-sensitive task around.

I didn't write this.

> > (Granted, I do greatly appreciate such informal benchmarks.) BTW, I
> read
> > somewhere that someone said (MS-DOS?) XCOPY is faster than normal COPY,
> > so maybe you should try that. :)

I didn't write this.

> > What versions? What options?
>
> Latest versions, same configuration for all kernels. There's no need to
> post my CONFIGs, is there?

Why not? Maybe others would like to run some benchmarks and compare results.

> > MS-DOS 7.x? (for FAT32)
>
> Yes, if you re-read my previous post, you'll see that both drives are
> FAT32.

That's why I wrote "(for FAT32)". IIRC the FAT32 TSR from DR-DOS works in MS-DOS 6.22 as well.

> Yes, ROM-DOS 7.10 which I tested does. It also can support LFNs natively.

Ah, OK. :-)

> > OK, then "Good bye!" to you.
>
> You drive me, an innocent and even unregistered user of your forum, away?!

Men are never innocent. Ask some girl about it. ;-)

No, but from your previous message I got the impression that you wish to leave the DOS community.

> > I don't want you to live in the past.
>
> Thank you for your care, but perhaps I myself prefer living in the past?!

Then that's something completely different.

---
Forum admin

lucho

31.07.2007, 19:07

@ Rugxulo

Who said that DOS was dead?

> However, DOS is far from dead:

No need to say that to a DOS maniac like me :-D
(Not all things of the past are dead, and even some "dead" things are still used)

> Yes, Linux is good and popular. Linux Linux Linux Linux Linux Linux
> Linux Linux Linux ... (as if it needs the advertising, sheesh!)

Why not "Mac OS X Mac OS X Mac OS X Mac OS X Mac OS X Mac OS X Mac OS X Mac OS X Mac OS X (and it doesn't need advertising :-D

As to "sheesh", my English isn't good enough to understand interjections :-(

By the way, thanks for your 4DOS announcement ;-)

Steve

US,
31.07.2007, 19:50

@ lucho

Who said that DOS was dead?

> > Yes, Linux is good and popular. Linux Linux Linux Linux Linux Linux
> > Linux Linux Linux ... (as if it needs the advertising, sheesh!)
>
> Why not "Mac OS X Mac OS X Mac OS X Mac OS X Mac OS X Mac OS X Mac OS X
> Mac OS X Mac OS X (and it doesn't need advertising :-D

Not free. Not even cheap.

> As to "sheesh", my English isn't good enough to understand interjections
> :-(

Definition at http://www.yourdictionary.com/ahd/s/s0330750.html

rr

Berlin, Germany,
31.07.2007, 20:17

@ lucho

Who said that DOS was dead?

> > However, DOS is far from dead:
>
> No need to say that to a DOS maniac like me :-D

Then you're welcome, of course! :yes:

---
Forum admin

Rugxulo

Usono,
01.08.2007, 04:13

@ lucho

Who said that DOS was dead?

> > However, DOS is far from dead:
>
> No need to say that to a DOS maniac like me :-D
> (Not all things of the past are dead, and even some "dead" things are still
> used)
>
> > Yes, Linux is good and popular. Linux Linux Linux Linux Linux Linux
> > Linux Linux Linux ... (as if it needs the advertising, sheesh!)
>
> Why not "Mac OS X Mac OS X Mac OS X Mac OS X Mac OS X Mac OS X Mac OS X
> Mac OS X Mac OS X (and it doesn't need advertising :-D
>
> As to "sheesh", my English isn't good enough to understand interjections
> :-(

All I meant was, "Yikes, does Linux still need to be announced, even as a footnote??" The big Linux users are always on top of it anyway. It's more useful to people like me, IMO, to mention newer versions of popular tools. ;-)

---
Know your limits.h

lucho

01.08.2007, 14:55

@ rr

Who said that DOS was dead?

> > > However, DOS is far from dead:
> >
> > No need to say that to a DOS maniac like me :-D
>
> Then you're welcome, of course! :yes:

Danke schön!

DOS386

02.08.2007, 15:26

@ lucho

Ultimate DOS kernel file copy speed test || Slowest FreeDOS

Lucho wrote:

> MS-DOS: 17 :surprised:
> PC-DOS: 17
> EDR-DOS: 33
> PTS-DOS: 47
> FreeDOS: 80 :no:
> ROM-DOS: test failed resulting in hang-up and lost clusters.

FreeDOS was somewhat slow in my tests as well ... but not as extremely slow as you report here :crying:

My opinion about the existence of an "MS-DOS" supporting FAT32 should be sufficiently well known already :lol3:

Finally, I hope that the EDR-DOS issues, both technical and non-technical, will get solved one day ... maybe I'm dreaming too much? :-|

---
This is a LOGITECH mouse driver, but some software expect here
the following string:*** This is Copyright 1983 Microsoft ***

DOS386

02.08.2007, 15:28

@ Rugxulo

Ultimate DOS kernel file copy speed test LINUX LINUX ... ...

> Yes, Linux is good

Do you use it? It was unusable for me when I tested it some years ago :no:

> and popular.

:-|

> Linux Linux Linux Linux Linux Linux Linux Linux Linux

Reserve 100 GiB of space for that many Loonixes :lol3:

---
This is a LOGITECH mouse driver, but some software expect here
the following string:*** This is Copyright 1983 Microsoft ***

Rugxulo

Usono,
03.08.2007, 05:09

@ DOS386

Ultimate DOS kernel file copy speed test LINUX LINUX ... ...

> > Yes, Linux is good
>
> Do you use it? It was unusable for me when I tested it some years ago :no:

It depends on the distro. Honestly, I've hardly ever tried it (a bit confusing and easy to not do what you want), but recently I've been trying a few LiveCDs (e.g. FreeDOS [didn't really work], ReactOS [fully broken], Slax "Kill Bill" [good, RAM hungry], NetBSD Live! 2007 [good, RAM hungry]) on an old P6 333 MHz w/ 128 MB RAM (on which my bro installed Damn Small Linux).

> > and popular.
>
> :-|

PRO: Good auto-detection. Good compatibility. Lots of software. Free. Nice LiveCDs. Can do almost all of what a typical person (like me) would need or want.

CON: Almost exclusively not for old computers. MAJORLY overhyped. Too much emphasis on servers and networking. Too many weird hacks and too much anti-MS rhetoric. Too intimidating (intentionally?). Changes too often. Not a quick fix for using a non-Windows OS. Can be hard to install programs (maybe). Too many unneeded features.

> > Linux Linux Linux Linux Linux Linux Linux Linux Linux
>
> Reserve 100 GiB of space for that many Loonixes :lol3:

You can install it in a lot less than that, but the main thing that takes up a lot is swap space. I don't quite understand the ins and outs of why it uses swap when free RAM is available (or how to tweak that) yet.

Of course, I don't have to tell you, DOS386, that DexOS, FreeDOS, OctaOS, and Menuet32 are also good OSes (better, in some ways) as well as much smaller. I think multi-booting (or running in a VM) ain't such a bad thing. ;-)

sol

30.11.2007, 17:51

@ lucho

Ultimate DOS kernel file copy speed test

> I used 4DOS COPY /B, but as I've already found, XCOPY or almost any other
> application doesn't make a significant difference here.

I actually wouldn't mind seeing another benchmark, but using XCOPY or something else instead. It would make the test more even - and it could actually make a huge difference. If one DOS' copy command uses a much smaller buffer to copy data, it could actually be a lot slower.

Though, I imagine the speed has everything to do with each DOS' disk reading instead. It's probably always reading 1 sector at a time, which is quite slow. If a little bit of code were added to determine the max number of sectors that are contiguous on disk for a read, it would be *much* faster.

For example, if I call the "read from file handle" API requesting 32768 bytes, and I have a defragged FAT32 partition (4k cluster size) with the file I'm trying to read being 32k...DOS should see that the file is in order based on the FAT, and knowing I want to read 32k should read it all at once.

It adds more code & more complicated logic, but it would be much faster.

The cheap way to do it would be to read 1 cluster at a time, and it would still gain a fair performance increase. If all the DOSes are doing this, then using a different copy tool that's known to be fast, like XCOPY, should reveal different benchmark results.
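
For illustration, the "little bit of code" could look roughly like this (a minimal sketch with a hypothetical fat_next() accessor, not taken from any real DOS kernel): walk the FAT chain only as far as it stays contiguous, up to the size the caller requested, and return how many sectors fit into a single transfer.

#include <stdint.h>

#define SECTOR_SIZE 512u

/* assumed accessor: returns the FAT entry (the next cluster) for 'cluster';
   any value other than cluster+1 ends the contiguous run */
extern uint32_t fat_next(uint32_t cluster);

uint32_t contiguous_sectors(uint32_t first_cluster, uint32_t bytes_wanted,
                            uint32_t sectors_per_cluster)
{
    uint32_t cluster_bytes   = sectors_per_cluster * SECTOR_SIZE;
    uint32_t clusters_wanted = (bytes_wanted + cluster_bytes - 1) / cluster_bytes;
    uint32_t run = 1;                  /* the starting cluster always counts */
    uint32_t cur = first_cluster;

    if (bytes_wanted == 0)
        return 0;
    while (run < clusters_wanted && fat_next(cur) == cur + 1) {
        cur += 1;                      /* chain continues contiguously */
        run += 1;
    }
    return run * sectors_per_cluster;  /* issue ONE transfer of this size */
}

For the defragged 32k read on a 4k-cluster partition above, this returns 64 sectors, so the request collapses into one disk transfer instead of eight cluster-sized ones.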

Japheth

Germany (South),
03.12.2007, 12:25

@ sol

Speed differences negligible

one year ago we did a "file copy" test with FreeDOS, EDR-DOS and MS-DOS.

I used this benchmark with a 40 MB file (the benchmark limits file size to 64 MB).

MS-DOS usually was the fastest. At first EDR-DOS was the slowest DOS, but after some adjustments its speed increased and it reached MS-DOS speed. FreeDOS is now the slowest.

However, a test with a current SATA drive and without any cache program loaded shows that the differences in speed are negligible.

---
MS-DOS forever!

Rugxulo

Usono,
05.12.2007, 22:31

@ Japheth

Speed differences - be more specific

> one year ago we did a "file copy" test with FreeDOS, EDR-DOS and MS-DOS.
>
> I used this benchmark with a 40 MB file (the benchmark limits file size to
> 64 MB).
>
> MS-DOS usually was the fastest. At first EDR-DOS was the slowest DOS, but
> after some adjustments its speed increased and it reached MS-DOS speed.
> FreeDOS is now the slowest.

EDIT: What adjustments? More BUFFERS specified in CONFIG.SYS? Shorter PATH to search? Defragged drive w/ full file reorder? HIMEM only or maybe JEMMEX (w/ VME switched on) on a 586?

> However, a test with a current SATA drive and without any cache program
> loaded shows that the differences in speed are negligible.

I do not doubt FreeDOS is the slowest, but I do doubt that it's in dire need of a speedup. In particular, I personally would like to know a few things about any such tests in the future (just to be more accurate):

* what kernel version (2036? 2037?)
* compiled by what (Turbo C? Turbo C++? OpenWatcom?)
* compiled-for target cpu (8086/FAT16? 386/FAT32?)
* what cpus tested in the benchmark (286? 486? Pentium? P4? AMD64?)

For instance, I've heard that a PentiumPro will run 16-bit code slower than a plain ol' Pentium. And of course, just from experience, I know my 486 Sx/25 is dog slow compared to my P166 (which is quite slow compared to my P4 or AMD64).

So, if someone out there wants to test, be prepared to give detailed information on what you used. :-P

---
Know your limits.h

Japheth

Germany (South),
05.12.2007, 23:02

@ Rugxulo

Speed differences - be more specific

> EDIT: What adjustments? More BUFFERS specified in CONFIG.SYS? Shorter PATH
> to search? Defragged drive w/ full file reorder? HIMEM only or maybe JEMMEX
> (w/ VME switched on) on a 586?

No. The EDR-DOS kernel itself was adjusted. There's a thread about this issue in the EDR-DOS forum ("Evil DoctoR"), dated around November 2006.

---
MS-DOS forever!

sol

05.12.2007, 23:05

@ Rugxulo

Speed differences - be more specific

> I do not doubt FreeDOS is the slowest, but I do doubt that it's in dire
> need of a speedup. In particular, I personally would like to know a few
> things about any such tests in the future (just to be more accurate):
>
> * what kernel version (2036? 2037?)
> * compiled by what (Turbo C? Turbo C++? OpenWatcom?)
> * compiled-for target cpu (8086/FAT16? 386/FAT32?)
> * what cpus tested in the benchmark (286? 486? Pentium? P4? AMD64?)
>
> For instance, I've heard that a PentiumPro will run 16-bit code slower
> than a plain ol' Pentium. And of course, just from experience, I know my
> 486 Sx/25 is dog slow compared to my P166 (which is quite slow compared to
> my P4 or AMD64).
>
> So, if someone out there wants to test, be prepared to give detailed
> information on what you used. :-P

It doesn't make any difference as far as hardware goes, since they'd have used the same machine for the benchmark. The compiler/CPU/etc. don't make a difference either, since what's taking all the time is hard disk I/O.

And yes, if FreeDOS is slowest, it probably needs a better method to read files from the disk. It probably does something stupid.

For example, here's two ways of reading:

1.
a) Read directory info. Scan it for filename + grab cluster #.
b) Read data one sector at a time from cluster until we've read it.
c) Read the fat for the next cluster #
d) Read data one sector at a time from cluster until we've read it...
----
e) On next call, re-read directory info & scan fat to get to pointer

2.
a) Scan directory until we've found the filename + grab cluster #.
b) If filesize <= cluster size, then read entire cluster
c) If not, read entire fat chain and store it
d) Read largest contiguous cluster
---
e) On next call, continue using stored fat chain

Let's pretend a program calls these ^ twice. Two calls to read 32k at a time of a contiguous 64k file, on a 4k cluster partition out of a directory that has only that file.

1) Would do 16 read calls for the directory info (8 for each call). 22 read calls to the FAT (7 for the first call, 15 for the second). 128 calls for the actual file.

Total: 166 int 13h calls or direct hard disk reads.

2) Would do 1 read call for the directory info. 1 read call for the FAT. 2 calls to read the actual file.

Total: 4 int 13h calls or direct hard disk reads.



This isn't theory :) This is how corporate databases (MySQL/PostgreSQL/Oracle/etc.) work. They'll execute an extra ~1000 lines of code simply to determine the best way to read from hard drives before actually going ahead and reading the data.
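
To make the chain-caching steps (2c/2e) concrete, here is a minimal sketch (hypothetical names, not FreeDOS or MS-DOS code): load the whole FAT chain into an array once, and let every later read index into that array instead of touching the FAT again; finding the "largest contiguous cluster" run in 2d is then just a scan over the stored array.

#include <stdint.h>
#include <stdlib.h>

#define FAT32_EOC 0x0FFFFFF8ul               /* FAT32 end-of-chain marker */

extern uint32_t fat_next(uint32_t cluster);  /* assumed FAT accessor */

typedef struct {
    uint32_t *clusters;    /* clusters[i] = i-th cluster of the file */
    uint32_t  count;
} fat_chain;

/* walk the FAT once (e.g. at open time) and remember the whole chain */
int chain_load(fat_chain *ch, uint32_t first_cluster, uint32_t max_clusters)
{
    uint32_t c = first_cluster, n = 0;

    ch->clusters = malloc(max_clusters * sizeof *ch->clusters);
    if (ch->clusters == NULL)
        return -1;
    while (n < max_clusters && c < FAT32_EOC) {
        ch->clusters[n++] = c;
        c = fat_next(c);
    }
    ch->count = n;
    return 0;
}

/* later calls: which cluster holds byte offset 'pos'?  No FAT access needed. */
uint32_t chain_cluster_at(const fat_chain *ch, uint32_t pos,
                          uint32_t cluster_bytes)
{
    uint32_t idx = pos / cluster_bytes;
    return (idx < ch->count) ? ch->clusters[idx] : 0;
}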

sol

06.12.2007, 01:14

@ Japheth

Speed differences negligible

> However, a test with a current SATA drive and without any cache program
> loaded shows that the differences in speed are negligible.

With SATA drives that have large caches + decent caching logic, a stupid DOS with a bad algorithm to read data would seem okay, since the drive would've read ahead, and also retained data that would be re-read. So many of those 166 calls above would merely be fetching cached data, which wouldn't be too slow.

tom

Germany (West),
06.12.2007, 13:01

@ sol

Speed differences - be more specific

> And yes, if FreeDOS is slowest, it probably needs a better method to read
> files from the disk. It probably does something stupid.

Bullshit.

FreeDOS *reads* files fast (because it implements a scheme similar to what you described), probably at least as fast as other DOS's.

If in doubt, *measure* (don't speculate) a pure read operation,

c:>copy bigfile.bin NUL

will do.

So it's *writing* the file (and allocating new clusters, writing modified FAT, etc.) that is slow.

sol

06.12.2007, 17:50

@ tom

Speed differences - be more specific

> Bullshit.
>
> FreeDOS *reads* files fast (because it implements a scheme similar to what
> you described), probably at least as fast as other DOS's.
>
> If in doubt, *measure* (don't speculate) a pure read operation,
>
> c:>copy bigfile.bin NUL
>
> will do.
>
> So it's *writing* the file (and allocating new clusters, writing modified
> FAT, etc.) that is slow.

Reading is always significantly faster than writing. I'm quite certain that FreeDOS is a bit stupid with regards to how it handles file IO. Even if it appears fast, internal hard drive caches and the like can make it appear faster than it is.

Besides, why would FreeDOS have intelligent algorithms to read files, but a really stupid method to write them?

The concept behind writing intelligently is similar. It should write as many clusters as possible at once, and choose a contiguous free area. It should have an intelligent method for searching for free space, etc. Any stupidity here is more evident since the hard disk is much less likely to cache writes as much, since it leaves the HD in an unknown state if there's a power outage, etc (though some HDs have been known to...an OS programmer's bane).
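
The free-space search is the simpler half; roughly (a sketch with a hypothetical fat_entry() accessor, ignoring real-world details such as the FAT32 FSInfo free-cluster hint): look for a run of free clusters long enough for the whole write, and fall back to the first free cluster if no such run exists.

#include <stdint.h>

#define CLUSTER_FREE 0u

extern uint32_t fat_entry(uint32_t cluster);   /* assumed accessor, 0 == free */

/* return the first cluster of a run of at least 'want' free clusters, or the
   first free cluster if no run is long enough, or 0 if the volume is full
   (valid data clusters start at 2, so 0 is safe as "none") */
uint32_t find_free_run(uint32_t first_cluster, uint32_t last_cluster,
                       uint32_t want)
{
    uint32_t start = 0, len = 0, first_free = 0, c;

    for (c = first_cluster; c <= last_cluster; ++c) {
        if (fat_entry(c) == CLUSTER_FREE) {
            if (len == 0)
                start = c;
            if (first_free == 0)
                first_free = c;
            if (++len >= want)
                return start;          /* contiguous area big enough */
        } else {
            len = 0;
        }
    }
    return first_free;
}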

tom

Germany (West),
06.12.2007, 17:59
(edited by tom, 06.12.2007, 18:17)

@ sol

Speed differences - be more specific

> Reading is always significantly faster than writing. I'm quite certain
> that FreeDOS is a bit stupid with regards to how it handles file IO.

Dear enlightened Sol, would you care to share your deep insights into FreeDOS
internals that make you say so?

> Even if it appears fast, internal hard drive caches and the like can make it
> appear faster than it is.

BLAH BLAH BLAH.

> Besides, why would FreeDOS have intelligent algorithms to read files,
> but a really stupid method to write them?

Because I optimized *read* operations, and don't care much about write operations.

> The concept behind writing intelligently is similar. It should write as
> many clusters as possible at once, and choose a contiguous free area. It
> should have an intelligent method for searching for free space, etc. Any
> stupidity here is more evident since the hard disk is much less likely to
> cache writes as much, since it leaves the HD in an unknown state if
> there's a power outage, etc (though some HDs have been known to...an OS
> programmer's bane).

Wow. This guy has *really* good knowledge of a filesystem.
Now sit down, grab your copy of the FreeDOS kernel source, and show the
rest of the world how it should be done. Talk is cheap.

sol

06.12.2007, 18:32

@ tom

Speed differences - be more specific

> > Even if it appears fast, internal hard drive caches and the like can make
> > it appear faster than it is.
>
> BLAH BLAH BLAH.

Exactly. If you knew anything about this sort of thing, you'd actually understand what I'm saying and code something better :)

> > Besides, why would FreeDOS have intelligent algorithms to read files,
> > but a really stupid method to write them?
>
> Because I optimized *read* operations, and don't care much about write
> operations.

I don't see anyone with "tom" in their name in the commit list, but I'll assume maybe you're "Bart Oldeman", "PerditionC" or "Pasquale J. Villani".

Besides, why on earth would you optimize reads but not writes? That's downright stupid.

> Wow. This guy has *really* good knowledge of a filesystem.
> Now sit down, grab your copy of the FreeDOS kernel source, and show the
> rest of the world how it should be done. Talk is cheap.

I do actually. The crappy FAT FSes and many more :)

Anyway, FreeDOS seems to use 'getblk', which in turn uses 'dskxfer'

Every call to getblk uses:
if (!overwrite && dskxfer(dsk, blkno, bp->b_buffer, 1, DSKREAD))

And every other call I've seen to dskxfer uses that "1" as well, which is the # of blocks to read/write.

What's this mean? FreeDOS does exactly what I said - it's reading either 1 sector at a time, or 1 cluster at a time. I don't really care to dig deeper into this piss to determine which - especially since I've already proven my point.

Now why don't you try to back up your cheap talk?

sol

06.12.2007, 18:35

@ sol

Speed differences - be more specific

> And every other call I've seen to dskxfer uses that "1" as well, which is
> the # of blocks to read/write.

This is with the exception of "rwblock" - which someone more intelligent seems to have implemented. But I can't find any calls to it.

tom

Germany (West),
06.12.2007, 18:49

@ sol

Speed differences - be more specific

> Exactly. If you knew anything about this sort of thing, you'd actually
> understand what I'm saying and code something better :)

been there. done that.

> > Because I optimized *read* operations, and don't care much about write
> > operations.

repeating: 'Because I optimized *read* operations'

> I don't see anyone with "tom" in their name in the commit list, but I'll
> assume maybe you're "Bart Oldeman", "PerditionC" or "Pasquale J. Villani".

I'm tom, working at this time with Bart. Bart checked the stuff in.
Take the time to search history.txt for 'tom'.

> Besides, why on earth would you optimize reads but not writes? That's
> downright stupid.

maybe downright lazy, but http://www.drivesnapshot.de/en/ needs fast reads, but never writes (through the file system)

> Anyway, FreeDOS seems to use 'getblk', which in turn uses 'dskxfer'
>
> Every call to getblk uses:
> if (!overwrite && dskxfer(dsk, blkno, bp->b_buffer, 1, DSKREAD))
>
> And every other call I've seen to dskxfer uses that "1" as well, which is
> the # of blocks to read/write.
>
> What's this mean?

that you are too dull to operate 'search in text files' (see your own next post)

BTW: real men hook Int 13h and look at what comes by.
Everything else is for kids.

> FreeDOS does exactly what I said - it's reading either
> 1 sector at a time, or 1 cluster at a time. I don't really care to dig
> deeper into this piss to determine which - especially since I've already
> proven my point.
>
> Now why don't you try to back up your cheap talk?

Not necessary. You did it yourself. ;)

sol

06.12.2007, 18:55

@ tom

Speed differences - be more specific

> > Now why don't you try to back up your cheap talk?
>
> Not necessary. You did it yourself. ;)

It is necessary. I don't see any calls to rwblock anywhere, especially not with regards to what I'm referring to (read_dir() etc).

Everything is read 1 sector or 1 cluster at a time. It's unoptimized crap, even for reading.

tom

Germany (West),
06.12.2007, 19:08

@ sol

Speed differences - be more specific

> It is necessary. I don't see any calls to rwblock anywhere, especially
> not with regards to what I'm referring to (read_dir() etc).

just because you don't see the call doesn't mean it doesn't exist. (so far you have shown little to prove your cleverness)

> Everything is read 1 sector or 1 cluster at a time. It's unoptimized
> crap, even for reading.

download RAWREAD and measure read speed through the file system,

a:> RAWREAD C:\PAGEFILE.SYS

and compare to read speed from raw disk

a:> RAWREAD 0

should be close.
As said before: talk is cheap, but we have politicians for that.

sol

06.12.2007, 21:30

@ tom

Speed differences - be more specific

> just because you don't see the call doesn't mean it doesn't exist. (so far
> you have shown little to prove your cleverness)

I don't see a call in any of the places responsible for reading directories and files in the kernel. Is this not sufficient? This is what I was referring to in my posts, which you argued with.

You're welcome to show me how I'm wrong. So far I made a point *and* defended it with evidence, whereas you've failed to show *any* evidence to the contrary other than to claim you optimized it. If you optimized it, you should very easily be able to quote some functions that you optimized.

I call bullshit.

> a:> RAWREAD C:\PAGEFILE.SYS
>
> and compare to read speed from raw disk
>
> a:> RAWREAD 0
>
> should be close.
> As said before: talk is cheap, but we have politicians for that.

"Should be close" - of course they'd be close. Reads are much quicker than writes, and hard disks have internal caching and read-ahead functionality. The hard drive is making up for the fact that FreeDOS is stupid. This doesn't mean FreeDOS has decent code.

Go ahead, quote some code. I did.

rr

Berlin, Germany,
07.12.2007, 09:43

@ sol

Speed differences - be more specific

Please not another war! ;-) Why not combine your efforts and make FreeDOS the fastest DOS ever? :-P

---
Forum admin

tom

Germany (West),
07.12.2007, 11:11

@ sol

Speed differences - be more specific

> Go ahead, quote some code. I did.

AFAIR you said 'I can't find'

FATFS.C

/* Read/write block from disk */
/* checking for valid access was already done by the functions in
   dosfns.c */
long rwblock(COUNT fd, VOID FAR * buffer, UCOUNT count, int mode)
{
.....
      if (dskxfer(fnp->f_dpb->dpb_unit,
                  currentblock,
                  (VOID FAR *) buffer, sectors_to_xfer,
                  mode == XFR_READ ? DSKREAD : DSKWRITE))

sol

07.12.2007, 17:09

@ tom

Speed differences - be more specific

Congratulations, you've found the rwblock function that I already mentioned! Now show me where it's actually used for searching directories & reading/writing files.

tom

Germany (West),
07.12.2007, 20:22

@ sol

Speed differences - be more specific

> Congratulations, you've found the rwblock function that I already
> mentioned! Now show me where it's actually used for searching directories
> & reading/writing files.

hint: search for

long DosRWSft(int sft_idx, size_t n, void FAR * bp, int mode)
{
...
long XferCount = rwblock(s->sft_status, bp, n, mode);

sol

07.12.2007, 20:51

@ tom

Speed differences - be more specific

> hint: search for
>
> long DosRWSft(int sft_idx, size_t n, void FAR * bp, int mode)
> {
> ...
> long XferCount = rwblock(s->sft_status, bp, n, mode);

Congratulations on locating yet another function call that has absolutely nothing to do with this thread.

So I take it you're not going to admit when you're wrong?

The fact is, file IO is not optimized. Directory searching, fat searching and file IO are not using these calls. They are using single sector (or potentially cluster) reads/writes.

If you had actually "optimized for reading" you'd know this.

Japheth

Germany (South),
07.12.2007, 21:27

@ sol

Speed differences - be more specific

> The fact is, file IO is not optimized. Directory searching, fat searching
> and file IO are not using these calls. They are using single sector (or
> potentially cluster) reads/writes.

"single sector" is impossible, but "single cluster" might be. The benchmark program mentioned previously hooks Int 13h and counts the read/writes. For FreeDOS, there were indeed significantly more Int 13h read/writes than for MS-DOS, but IIRC it was about 2000 reads (MS-DOS about 1000) for a 40 MB file on a 32 GB partition.

---
MS-DOS forever!

sol

07.12.2007, 22:40

@ Japheth

Speed differences - be more specific

> "single sector" is impossible, but "single cluster" might be. The
> benchmark program mentioned previously hooks Int 13h and counts the
> read/writes. For FreeDOS, there were indeed significantly more Int 13h
> read/writes than for MS-DOS, but IIRC it was about 2000 reads (MS-DOS
> about 1000) for a 40 MB file on a 32 GB partition.

It's possible to create a DOS that reads a single sector at a time. I'm glad FreeDOS wasn't *that* stupid.

This only reinforces my point, though - that FreeDOS should be reading more data (such as multiple clusters) with each call and caching more.

Japheth

Germany (South),
07.12.2007, 23:20

@ sol

Speed differences - be more specific

> It's possible to create a DOS that reads a single sector at a time.

It's possible of course, but what I meant was that it's impossible if one takes into account the results displayed by the benchmark: 40 MB with single-sector i/o would need about 80,000 read calls.

> This only reinforces my point, though - that FreeDOS should be reading
> more data (such as multiple clusters) with each call and caching more.

But reading a file in 4 or 8 kB chunks is not that bad. I did exactly this - split DOS file i/o into 8 kB parts - in the HX Win32 emulation, to make multithreading smoother. It didn't affect file i/o speed significantly.
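
The splitting itself is only a few lines. Here is a rough sketch of the same idea applied to a plain copy loop (assuming Open Watcom-style _dos_read/_dos_write wrappers around the INT 21h handle calls; this is not the actual HX code):

#include <dos.h>

#define CHUNK 8192u                      /* 8 kB per DOS call */

static unsigned char buf[CHUNK];

/* copy an open source handle to an open destination handle in CHUNK-sized
   pieces; returns 0 on success, -1 on error or disk full */
int chunked_copy(int src, int dst)
{
    unsigned got, put;

    for (;;) {
        if (_dos_read(src, buf, CHUNK, &got) != 0)
            return -1;
        if (got == 0)                    /* end of file */
            return 0;
        if (_dos_write(dst, buf, got, &put) != 0 || put != got)
            return -1;
    }
}

As long as the chunk covers at least a cluster or two, the extra per-call overhead mostly disappears behind the actual transfer time, which matches the observation above.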

---
MS-DOS forever!

sol

08.12.2007, 01:00

@ Japheth

Speed differences - be more specific

> But reading a file in 4 or 8 kB chunks is not that bad. I did exactly this
> - split DOS file i/o into 8 kB parts - in the HX Win32 emulation, to make
> multithreading smoother. It didn't affect file i/o speed significantly.

The larger the reads, the better :)

Should also be coupled with some good management of FAT caching.

I should make a little benchmark that hooks int 13h but also categorizes where the reads/writes occur, so that we can see where it's being inefficient.
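
Something like this would do for the counting part. A rough sketch, assuming Open Watcom's 16-bit DOS conventions (union INTPACK register access, _dos_getvect/_dos_setvect, _chain_intr); categorizing by on-disk area (directory vs. FAT vs. data) would additionally mean decoding the CHS/LBA parameters against the partition layout:

#include <stdio.h>
#include <stdlib.h>
#include <dos.h>
#include <i86.h>

static void (__interrupt __far *prev13)();
static volatile unsigned long reads, writes, others;

static void __interrupt __far hook13(union INTPACK r)
{
    switch (r.h.ah) {
    case 0x02: ++reads;  break;     /* CHS read  */
    case 0x03: ++writes; break;     /* CHS write */
    default:   ++others; break;     /* incl. INT 13h extensions (AH=42h/43h) */
    }
    _chain_intr(prev13);            /* pass the call on to the BIOS/driver */
}

int main(void)
{
    prev13 = _dos_getvect(0x13);
    _dos_setvect(0x13, hook13);

    system("copy bigfile.bin nul");  /* the operation being measured; the
                                        counters also catch COMMAND.COM's
                                        own disk accesses */

    _dos_setvect(0x13, prev13);      /* unhook before exiting */
    printf("INT 13h: %lu reads, %lu writes, %lu other calls\n",
           reads, writes, others);
    return 0;
}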
