DOS ain't dead

Rugxulo

Usono,
13.02.2009, 23:38
(edited by Rugxulo, 14.02.2009, 22:15)
 

Compatibility woes / deprecation (Miscellaneous)

(First, let me say that this is more of a philosophical discussion than anything.)

Many programs (and even OSes) are being deprecated these days. Sometimes it's due to code cleanup or broken ports, but mostly it just seems to be due to chauvinism or laziness on the part of maintainers. I know time is precious and money doesn't grow on trees, but it annoys me when something that works fine (e.g. Cygwin) claims to drop Win9x support in its next major release. Or that Pelles C already dropped it a while back. For some reason, everything new is hailed as brilliant and everything old sucks "by default" (even when they lauded the old every bit as much when it was new).

For comparison (or laughs ... even though it's sadly true), read this. It's basically a rant from early 2002 against XP (which is nowadays considered by most Windows users to be one of the best versions of Windows, or even one of the best OSes, ever).

Summary:

+ very stable
+ good hardware compatibility
- older commercial games don't work
- wastes half a gig of HD space
- pre-XP apps run worse under compatibility mode
- shuts down much more slowly than Win9x

"This just occured to me: I had problems with XP. But I hadn't made up my
mind about it yet, because I felt I should give them change to fix it.
("Give it a year, and try it again".) However I just realized by the time
it works well for me, a new version of windows would be released, and I'd be
in the same boat again. So I suppose my preliminary instinct is also my
finally conclusion. "XP sucks". (and to reiterate -- I'm speaking for the
home user.)"

"Essentially, "if it's not broken, don't fix it". (specifically referring to the OS)."

"Take inventory of all your software packages... CD's, floppy's,
download's, etc. How many say "Windows XP: on them? That answer is exactly
how many you can be confident will run properly on XP. How much did all of
that cost? Can you affort to throw away 50% of that money (and time and
detication -- and memories)?"

In short, I would normally halfway agree with this guy but ignore it and move on. And yet, in hindsight, he's actually mostly right. (Vista is worse than XP in compatibility and footprint.) Sad but true. In 2006, Win98SE was only seven years old, and WinME was actually (barely?) newer than Win2k. Well, these days XP is seven years old, so it seems it will eventually be deprecated too (although maybe more slowly, since the installed base is so huge). It's just hard to imagine much missing in Win9x that you'd absolutely need and be unable to work around. It just feels like a cop-out (to me).

Example: Mozilla Firefox

In 2006 or so, MS decided to drop all support for Win9x/ME completely. Almost immediately, Mozilla decided to drop support in the same way (e.g. MS wouldn't fix some silly bug). It was decided that 3.x wouldn't run on Win9x, so you'd be stuck with 2.x (which was to be updated for security fixes only until Dec. 2008).

Now, I see only a few solutions to this problem when somebody drops support for your OS:

- keep running old, classic 2.0.0.20 on your Win9x machine
- run KernelEx and "fake" kernel version to let the latest run
- get somebody to port 3.x themselves (as was done for eCS, aka OS/2)
- EDIT: run something completely different (e.g. Opera)
- upgrade your OS (e.g. to Win2k, which is the lightest modern NT-based OS)
- run a Linux liveCD (or similar) with latest Firefox
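The KernelEx option above works in part because many programs gate purely on the Windows version number reported by GetVersionEx() -- Win98SE reports 4.10, Win2k reports 5.0. A minimal sketch of that kind of gate (the `version_ok` helper and the version numbers are illustrative, not taken from any real installer):

```c
/* Hypothetical sketch: an app refuses to run unless the reported
 * (major, minor) Windows version meets its minimum requirement.
 * KernelEx can make Win9x report a faked, newer version, which is
 * often enough to get past a check shaped like this one. */
int version_ok(int maj, int min, int need_maj, int need_min)
{
    /* lexicographic comparison on the (major, minor) pair */
    if (maj != need_maj)
        return maj > need_maj;
    return min >= need_min;
}
```

So an app that rejects a real Win98SE (4.10) may run once the kernel merely claims 5.0 -- assuming, of course, that the app doesn't then call an NT-only API that KernelEx can't provide.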

The arguments against upgrading are fairly sane, I think:

- Win9x is faster and smaller, uses less resources
- Win9x has lots better DOS compatibility (although DOSBox exists)
- KernelEx helps some apps run (e.g. Doom 3)
- not interested in wasting more money on what will soon be obsolete
(Vista isn't even as compatible as XP, and Win7 is coming very soon, apparently)

The main reasons mentioned to upgrade from Win9x are as follows:

- SATA support
- HD with > 137 GB
- memory > 512 MB (although this can be worked around)
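The ">512 MB" limitation above is classically worked around by capping how much RAM Win9x will map, via SYSTEM.INI. The fragment below shows the commonly circulated settings; the exact values are illustrative (the MaxPhysPage math assumes the usual 4 KiB page size), not official guidance:

```ini
[vcache]
; cap the disk cache so it cannot balloon into the address space
MaxFileCache=131072

[386Enh]
; 0x40000 pages x 4 KiB = 1 GiB of physical RAM visible to Win9x
MaxPhysPage=40000
```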

N.B. I do not run Win9x currently (although I have a Win98SE box lying around here somewhere that I eventually intend to use). Just thinking about OS compatibility in general. What really annoys me is when DOS support is dropped and the older, still-working ports are deleted or hidden so you can't even use that!

Feel free to comment, but this was not meant as a flame towards any one group (Mozilla or MS), just a series of observations. I'm not trying to "hold back progress" or anything, just wondering why some devs claim you need 128 MB of RAM just to run a modern OS (but we managed with much less before).

Rugxulo

Usono,
13.02.2009, 23:51
(edited by Rugxulo, 14.02.2009, 00:02)

@ Rugxulo
 

Compatibility woes / deprecation ubiquitous

Just for example:

- NetHack real-mode version: no longer supported and old versions not available on main site
- DoomRL: no DOS support any more and old version not available on main site
- Crafty: no DOS support any more and no srcs for older version easily available (at least, I couldn't find it, but I didn't look too too hard)

+ Dungeon Crawl: Stone Soup developers at one time were considering dropping DOS support
+ OpenWatcom now wondering whether they should drop support for Win NT 3.51 and/or Win16 (but DOS is safe)
+ GNU Emacs had a huge list of ports they wanted to deprecate three months ago, and guess which was on top of their list? Yup, MSDOS (which in all fairness they broke since 22.3). Luckily, Eli Z. jumped back in to update it to work again, but I still get that sinking feeling they want to kill it. :-(

* (rr has mentioned) VirtualBox no longer supports Win2k (so even modern NT-based OSes aren't safe!) :angry:
* QEMU 0.8.2 runs on Win9x but later versions don't ??
* Win32s heavily underutilized by developers (and only supported until MSVC 4.0) ... most .EXEs have their relocations stripped, so they won't even load there
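The relocation point above is mechanically checkable: a PE .EXE records whether its base relocations were stripped in a flag of the COFF file header (IMAGE_FILE_RELOCS_STRIPPED). Since Win32s loads programs at a different base address than Win32 proper, a stripped .EXE cannot be rebased and won't start there. A sketch of that check (offsets per the published PE/COFF format; `relocs_stripped` is my own illustrative helper):

```c
#include <string.h>

#define RELOCS_STRIPPED 0x0001  /* IMAGE_FILE_RELOCS_STRIPPED */

/* Returns 1 = relocations stripped, 0 = present, -1 = not a PE image. */
int relocs_stripped(const unsigned char *img, unsigned long len)
{
    unsigned long pe_off;
    unsigned chars;

    if (len < 0x40 || img[0] != 'M' || img[1] != 'Z')
        return -1;                      /* no MZ stub */
    /* e_lfanew: little-endian dword at offset 0x3C of the MZ header */
    pe_off = img[0x3C] | ((unsigned long)img[0x3D] << 8) |
             ((unsigned long)img[0x3E] << 16) |
             ((unsigned long)img[0x3F] << 24);
    if (pe_off + 24 > len || memcmp(img + pe_off, "PE\0\0", 4) != 0)
        return -1;                      /* no PE signature */
    /* COFF Characteristics: little-endian word at signature + 22 */
    chars = img[pe_off + 22] | (img[pe_off + 23] << 8);
    return (chars & RELOCS_STRIPPED) != 0;
}
```

Linkers strip relocations from .EXEs by default because a normal Win32 process always gets its preferred base address, which is exactly why so few binaries could ever have run under Win32s.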

? Some apps exclusively need GCC 2/3/4 just to compile! (so much for standards reliance)

P.S. I forgot to mention that DOSBox needs a > 1 GHz machine just to run at 486 speed (and seems to need 100+ MB of RAM, which is kinda surprising). So it's not exactly great for old machines.

EDIT: Just so you know, I don't even use Mpxplay every day, but after a few weeks of not using it (Win32 version on Vista), it suddenly didn't work anymore, crashing with reg dump. I'm not sure if that's due to some OS or driver upgrade, but I find that really really strange and annoying. So you can't even keep stuff from breaking, apparently (e.g. remember XP SP2?).

rr

Berlin, Germany,
14.02.2009, 18:54

@ Rugxulo
 

Compatibility woes / deprecation ubiquitous

> * (rr has mentioned) VirtualBox no longer supports Win2k (so even
> modern NT-based OSes aren't safe!) :angry:
> * QEMU 0.8.2 runs on Win9x but later versions don't ??

QEMU SVN versions also don't run on Win2k w/o modification: [Qemu-devel] QEMU SVN on Windows 2000 :-(

---
Forum admin

marcov

14.02.2009, 14:02

@ Rugxulo
 

Compatibility woes / deprecation

> Many programs (and even OSes) are being deprecated these days. Sometimes
> it's due to code cleanup or broken ports, but mostly it just seems to be
> due to chauvinism or laziness on the part of maintainers.

Correct. The Dos people quit, left, and/or no new ones participated. If you are a user, and not a developer, that includes you too.

I've brought it up several times, but the FPC situation is probably applicable in other projects too:

- Before Giulio, FPC had no Dos maintainers for a long, long time (say 5-7 years). Nobody contributed a single line of Dos-related code, fixed bugs, etc. Luckily it was kept somewhat alive, since nearly all FPC devels had used Dos earlier. (Some came straight from Amiga or Mac.) But just before Giulio emerged, for a lot of them their last real Dos use was more than 5-7 years in the past.
- However if you had put an illegal "Borland Pascal 7" copy on your website, it would have proven that there were enough Dos Pascal users out there.

Note also that the general state of development in BP7 hasn't significantly progressed since the late nineties. Since Swag died, no new community stepped up.

Summary: the problem is the lack of participation, and a general "clientism" attitude among Dos users. That's something you can get away with on a majority platform, but not on a minority platform. Being a non-programmer is not an excuse; founding a documentation and archival site/group is a good first step.

> I know time is precious and money doesn't grow on trees

So why don't you simply fix them? Ports are typically terminated because not enough people work on them, and they have become a burden to the people not interested in them.

They have to keep wrestling with 8.3 support, memory limitations, thread support limitations, unicode deficiencies of that one platform _every_ day.

> For some
> reason, everything new is hailed as brilliant and everything old sucks "by
> default" (even when they lauded the old every bit as much when it was
> new).

It is not new versus old per se, but simply "people willing to work" vs. "people not willing to work". And apparently the few last Dos users don't care enough to start an initiative to fix this. Most of them had a streak of clientism anyway.

(XP rant skipped, but I don't agree:
- The two comparisons are not equal: Pre sp1 XP had several performance problems and a bad driver situation.
- Most of the so-called advantages only mattered for people coming from age-old win98. For the win2000 crowd, XP felt pretty much like Vista would later feel to the crowd that started with XP: a minor point upgrade that sucked resources.
)

> "Essentially, "if it's not broken, don't fix it". (specifically referring
> to the OS)."

Well, that's what happens. You can still use the old versions. New versions need support (hey, "dos-maintainer", I want to implement feature "yyy", does Dos have some way to support that? Hey, "dos-maintainer", can you check if the latest changes broke 8.3 support? We are going to build a new release next weekend, will you build the Dos release?). And that is only passive maintenance, not even actively developing or working around new problems.

> "Take inventory of all your software packages... CD's, floppy's,
> download's, etc. How many say "Windows XP: on them? How much did all
> of that cost?

(Just FYI: all software that I'm interested in either runs on plain Dos or on 2k/XP. I have no Win9x-specific software at all; the only Win9x-specific software I ever had was Win9x system software, which became redundant after migrating away from w9x. When I moved to NT (w2k), I mostly cleared out old Dos utility programs, partially because I gave up my resistance to LFN.)

> Can you afford to throw away 50% of that money (and time and
> dedication -- and memories)?"

Is it worth enough to fund broad support for this platform? If so, why don't you do that? OS/2 has been kept somewhat alive for 10 years, same for dos, but no such initiative for w9x has ever sprung up. Most people are glad it is dead. (and in the case of Win ME, they are dancing on the grave). Windows NT/2k has been duplicated by ReactOS, Beos by Haiku, Amiga OS by Morphos, dos by freedos, Unix by linux/bsd/osx whatever. But nobody seems to care a bit about the hybrid platforms win9x and classic macos. Maybe also because they were the most geared to users that only consumed, and not produced.

Somebody has to pick up the bill for the support, either in money or invested time. If the dos/win9x users aren't, who is?

marcov

14.02.2009, 14:03

@ marcov
 

Compatibility woes / deprecation

(part II)

> The main reasons mentioned to upgrade from Win9x are as follows:

- stable network features, and no reboots for hw changes (like e.g. plug and play). This was the reason for me to move to w2k

> What really annoys me is when DOS
> support is dropped and the older, still-working ports are deleted
> or hidden so you can't even use that!

Then start an archival group! Stop being the self-pitying beggar, and do something yourself! Submit patches, archive what is there, and work on driver support. (3rd party people can legally make win9x drivers afaik, and you can copy some of the hw knowledge from the other open platforms)

Moreover, make up your mind what you actually want to preserve, Dos or win9x. It sounds like win9x is a stopgap solution to get to Dos for you.

Rugxulo

Usono,
14.02.2009, 23:07

@ marcov
 

Compatibility woes / deprecation

> - Before Giulio, FPC had no dos maintainers for a long, long time (say 5-7
> years). Nobody contributed a single line of Dos related code, fixed bugs
> etc.

How long has FreeDOS been stable? (I can only guess beta8 in 2003 or so; before that, I'm not sure it was good enough for everyday use. But that's just a guess, since I never tried earlier versions.) How long have QEMU and Bochs been stable? DOSBox? All of that makes a difference, esp. when your OS isn't DOS-friendly any more (NT). I've heard many people say, "I don't have a DOS setup anymore." And modern installs of Windows using NTFS and hogging the whole drive don't help.

> They have to keep wrestling with 8.3 support, memory limitations, thread
> support limitations, unicode deficiencies of that one platform _every_
> day.

8.3 can be easily worked around (ROM-DOS, DOSLFN, StarLFN, Win9x, Win2k). Memory limitations? Not in flat model. Thread support? No standard method, too many hacks. Unicode? Even Win9x barely supported that, so you can't complain there (since nobody cared back then anyway). Let's face it, even GNU proposes that all comments in code be in English, which shows the English bias in the world. Not saying that's ideal, but seriously, calling Unicode a deal breaker is a bit exaggerated. (Besides, Win32s didn't have threads or Unicode either, except the latter via wimpy codepage conversion.)
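The 8.3 workarounds mentioned above all revolve around the same aliasing idea. A deliberately simplified sketch of the classic LFN-to-8.3 mangling ("longfilename.txt" becomes "LONGFI~1.TXT"); the real VFAT algorithm also numbers collisions (~2, ~3, ...), replaces illegal characters, and hashes long tails, so treat this as an illustration only:

```c
#include <ctype.h>
#include <string.h>

/* Simplified 8.3 alias generator.  out must have room for 13 bytes
 * ("XXXXXX~1.EXT" plus NUL).  Names that already fit in 8 chars are
 * just uppercased, without the ~1 tail. */
void short_name(const char *lfn, char *out)
{
    const char *dot = strrchr(lfn, '.');
    unsigned long blen = dot ? (unsigned long)(dot - lfn) : strlen(lfn);
    int truncated = blen > 8;            /* only then add the ~1 tail */
    unsigned long keep = truncated ? 6 : blen, i, n = 0;

    for (i = 0; i < keep; i++)
        out[n++] = (char)toupper((unsigned char)lfn[i]);
    if (truncated) { out[n++] = '~'; out[n++] = '1'; }
    if (dot && dot[1]) {                 /* keep up to 3 extension chars */
        out[n++] = '.';
        for (i = 1; dot[i] && i <= 3; i++)
            out[n++] = (char)toupper((unsigned char)dot[i]);
    }
    out[n] = '\0';
}
```

The point is that the mapping is mechanical and deterministic, which is exactly why tools like DOSLFN can bolt LFN support onto plain DOS after the fact.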

> (XP rant skipped, but I don't agree:
> - The two comparisons are not equal: Pre sp1 XP had several performance
> problems and a bad driver situation.
> - Most of the so-called advantages only mattered for people coming from
> age-old win98. For the win2000 crowd, XP felt pretty much like Vista
> would later feel to the crowd that started with XP: a minor point
> upgrade that sucked resources.
> )

Not true. Win2k was pretty light on resources (comparatively), unlike Vista. Heck, even XP is so much lighter that it's the newest OS MS could cram onto a netbook (until Win7 is finalized). And XP -> Vista broke some things ... unlike 2k -> XP, where they actually improved some stuff (e.g. added SB emulation for DOS apps). I can't think of anything fixed in Vista that was broken in XP, esp. not for DOS (which at one time was MS' bread and butter; there is a huge legacy there, whether you acknowledge it or not). Quake (DOS) will not run on XP or Vista. This is directly due to bugs that MS just plain refused to fix. I'm just incredulous that they would let things fall apart. It's so frustrating.

Rugxulo

Usono,
14.02.2009, 23:08

@ Rugxulo
 

Compatibility woes / deprecation

> > "Take inventory of all your software packages... CD's, floppy's,
> > download's, etc. How many say "Windows XP: on them? How much did all
> > of that cost?
>
> (just fyi: All software that I'm interested in either runs on plain dos or
> 2k/XP. I've no win9x specific software at all, and never had any that was
> not win9x system software which was redundant after migrating from w9x.
> When I moved to NT (w2k), I mostly cleared out old dos utility programs,
> partially also because I gave up resistance against LFN)

First of all, many DOS apps support LFNs (e.g. some FreeDOS utils: find, more, FreeCom, etc.), as do all DJGPP v2 apps by default. Secondly, you are indeed naive if you think XP can run all your old software. It was a big deal when people found out they couldn't run their old games and other software. The only real saving grace was the third-party DOSBox, which is nice if (and only if) you have a fast enough machine; it's severely slower than real hardware. (I mean, if my P4 can barely emulate a 486, that's not so awesome. I'll be honest, a 486 is probably too slow for me.)

> > Can you affort to throw away 50% of that money (and time and
> > detication -- and memories)?"
>
> Is it worth enough to fund broad support for this platform? If so, why
> don't you do that? OS/2 has been kept somewhat alive for 10 years, same
> for dos, but no such initiative for w9x has ever sprung up.

OS/2 hasn't really been kept alive. It's just that IBM finally leased it out to somebody to sell (and without giving srcs, too). And they're too expensive and have a tough time getting compatible hardware drivers. Still, I feel for them because there's no major technical deficiency in their OS. At least, I wish them luck.

ReactOS was originally FreeWin95 (or whatever) but eventually upgraded to target the now-ubiquitous XP, which is even more complex. No wonder they still aren't finished. There are indeed a lot of Win9x users, but XP was forced down our throats, for better or worse, and when you break compatibility, people move to what works, not what's best or what they're used to. The main advantage of XP was better stability, but it used more resources and broke compatibility. Other extras (Unicode) were just icing on the cake as most developers don't use them.

> Most people are glad it is dead. (and in the case of Win ME, they
> are dancing on the grave).

Win9x (including ME) was discontinued after six years or so. XP has now been around longer than that, and I never see any new computers with XP installed anymore. So if you think XP is so stable, you are in for a surprise. It will be dropped just like perfectly acceptable OSes before it. Then you're screwed. And what irritates me is that there's no justifiable reason to drop everything.

> Windows NT/2k has been duplicated by ReactOS,

Alpha (i.e. crashes, quite buggy).

> Beos by Haiku,

Pre-alpha!

> Amiga OS by Morphos,

AROS is probably a better example (isn't MorphOS $$$?).

> dos by freedos,

At least it's GPL. But it's practically frozen in suspended animation (like Dave Lister.) ;-)

> Unix by linux/bsd/osx whatever.

Linux has been around since 1991, and the *BSD family since 1993 or so. So they've had a lot longer time to build (than FreeDOS, for example). Plus they have commercial support from various companies. (I don't know why money is such a motivator to some people.)

Rugxulo

Usono,
14.02.2009, 23:09

@ Rugxulo
 

Compatibility woes / deprecation

> But nobody seems to care a bit about the hybrid platforms win9x and
> classic macos.

People still use 'em, they just don't code for 'em. Probably because they either don't know how or lack the appropriate SDK. In case you haven't noticed, MS etc. don't make it easy to code for older OSes. They are only interested in their main platform of the day. (Things like Pelles C and Cygwin dropping Win9x support don't help in the least.)

> Somebody has to pick up the bill for the support, either in money or
> invested time. If the dos/win9x users aren't, who is?

Obviously not you. :-P

> [XP] stable network features, and no reboots for hw changes
> (like e.g. plug and play). This was the reason for me to move to w2k

A lot of good a "stable" OS is if you don't have any software that runs on it. Might as well use Whitix.

> Then start an archival group! Stop being the self-pitying beggar,
> and do something yourself!

I have to actually find older DOS versions before I can archive them!

> Submit patches, archive what is there,
> and work on driver support. (3rd party people can legally make
> win9x drivers afaik, and you can copy some of the hw knowledge
> from the other open platforms)

You know as well as anybody that drivers aren't easy to write.

> Moreover, make up your mind what you actually want to preserve, Dos
> or win9x. It sounds like win9x is a stopgap solution to get to Dos for you.

Win9x was just an example proving a point: compatibility is a lost art, and it is shunned for no good reason. If even Win2k is abandoned, what chance does anyone have? It really is a moving target, and it makes everything seem pointless. Why bother fixing what will just break again in the next release? In other words, once you find what works, keep it as long as possible! Or maybe you think that what was good once before (Win98SE or FPC 1.0.10) is truly crap in hindsight? (Doubt it.) "But it's not x86-64 with ten threads and UTF-16! It doesn't use my Blu-Ray drive and ZFS!" So? If it works it works.

DOS has the full DJGPP suite of (mostly GNU) tools including GCC and GNU Emacs, nice POSIX compatibility, LFN support, good NTVDM compatibility, and yet it's still considered "not good enough".

> For the next 5-10 years I think I can escape buying new hardware and
> new Windows. XP seems now to be "the os", "the standard" and it seems
> it can not be dropped.

You can't. If apps like QEMU and VirtualBox won't even support Win2k, you can't rely on XP for even three years, much less ten. Especially if your new hardware has no drivers. The point is that you can't rely on MS, Mozilla, Cygwin, or anybody else to support even what they used to support! Argh. In short, it's a sinkhole with no way out besides "upgrade upgrade upgrade".

"Doctor, it hurts when I do this." "Well, then don't do that." Great solution, doc, except that's no solution, it's a workaround. "XP works great as long as you don't use DOS. Vista works even better but even less with DOS." And yet DOS is ridiculed for all the hacks and workarounds it uses, but "newer" OSes aren't really any better, just different.

You can't get mad at DOS over 8.3 (which can be fixed!) while refusing to see flaws in other OSes, esp. things that used to work but broke for no good reason. The advantages are supposed to outweigh the disadvantages, but when even the "New Technology" gets deprecated, why bother at all? :confused:

EDIT: By the way, upgrading your OS or cpu only helps you, not everybody else. In other words, it's a hack, a workaround, not a true universal solution. If you're cheap (in effort), that will work, but otherwise it's not recommended (unless you like upgrading every six months or so: "hooray, DDR3 with Phenom II", big whoop). This was never meant to be an argument, only discussing a genuine problem and some possible solutions (Firefox example).

marcov

15.02.2009, 13:55

@ Rugxulo
 

Compatibility woes / deprecation

> > But nobody seems to care a bit about the hybrid platforms win9x and
> > classic macos.
>
> People still use 'em, they just don't code for 'em.

Never was a truer word spoken. And there is something of a Darwinian problem there. A problem which will make them extinct in time.

> In case you haven't noticed, MS etc. don't make it easy to code for older OSes.

Neither did AT&T make it easy for the Unixers in the early nineties. But they built their own independent codebases, several even, and they made commercial Unix (except maybe Solaris, which was opened under pressure) nearly irrelevant.

> A lot of good a "stable" OS is if you don't have any software that runs on
> it. Might as well use Whitix.

Well, maybe the software people use is not entirely static either :-)

> I have to actually find older DOS versions before I can archive
> them!

No need. Work on the clones to make them better. If they can run all software, who cares about the original ones? :-)

> You know as well as anybody that drivers aren't easy to write.

I know, but I didn't say it was going to be easy. It is easiest to sit on your bum. But it will get you nowhere.

> and it is shunned for no good reason.

People don't want to invest in long-term compatibility. They want to get something off the shelf cheaply, and then run it forever.

Which is in itself not that bad, but they want support too: both fixes (e.g. to keep running on newer hardware) and functional enhancements to keep up with the times. Somebody has got to pay. The consumers won't, so the developers move on. Like it or not, it is a fact.

Only at the very high end (mostly IBM, but e.g. Compaq's Alpha, Itanium and HP's UX lines were also continued mostly compatible for a long time) is this different.

> If even Win2k is abandoned, what chance does anyone have?

(w2k was actually easy to abandon. The performance difference that hurt at 800 MHz/512 MB hurt less at 2 GHz/2 GB. If the backlash against Vista hadn't been that bad, maybe that would have happened with Vista too. OTOH, currently computers aren't getting that much faster anymore, only gaining more cores.)

> Why bother fixing what will just break again in the next release?
> In other words, once you find what works, keep it as long
> as possible!

That can be counterproductive, because migrating then gets very, very expensive (both in time and money). Taking a good hard look every year at what you "support" and what you need usually pays off.

> Or maybe you think that what was good once before (Win98SE or
> FPC 1.0.10) is truly crap in hindsight? (Doubt it.)

I never had any illusions about either one of them. 1.0.10 was a fine release, btw; it was just that all the releases after it (even the 1.9-series betas, except for the very first 1.9 one) were simply a lot better.

>"But it's not x86-64
> with ten threads and UTF-16! It doesn't use my Blu-Ray drive and ZFS!" So?
> If it works it works.

It might work, but is that really all that I need? Or do I end up supporting an old Dos install for a few progs, and a shiny new machine next to it to run the new stuff?

> The point is that you can't rely on MS,
> Mozilla, Cygwin, or anybody else to support even what they used to
> support!

Of course not. I wonder why you had that idea in the first place. I don't think you can get Model T parts from Ford either.

> "Doctor, it hurts when I do this." "Well, then don't do that." Great
> solution, doc, except that's no solution, it's a workaround. "

Pirate : I want a new wooden peg leg.
Doctor : peg legs are no longer made of wood. The new prosthetic limbs are way better now though.
Pirate : But you used to support wooden peg legs!

> XP works
> great as long as you don't use DOS. Vista works even better but even less
> with DOS." And yet DOS is ridiculed for all the hacks and workarounds it
> uses, but "newer" OSes aren't really any better, just different.

First, IMHO they are better. But the bigger point is that this DOS obsession is the real problem. I don't have a DOS obsession, so anything that is even slightly or gradually better is a plus.

> EDIT: By the way, upgrading your OS or cpu only helps you, not everybody
> else.
> In other words, it's a hack, a workaround, not a true universal
> solution.

The point is that keeping the old OS is not a golden rule. People's main motivation is to do work with computers, not conserve an old OS.

For you, somehow conserving Dos, and putting everything else in stasis, is an obsession. For most others it isn't. Worse, the people who obsess about Dos seem to mostly obsess over why the entire world abandoned Dos, instead of working on/with Dos.

To keep Dos alive, working on it is the only way. Just like I work on the TextMode IDE, just because I like it.

Rugxulo

Usono,
15.02.2009, 20:55

@ marcov
 

Compatibility woes / deprecation

> > People still use 'em, they just don't code for 'em.
>
> Never was a truer word spoken. And there is something of a Darwinian
> problem there. A problem which will make them extinct in time.

No pun intended? ;-)

> > I have to actually find older DOS versions before I can archive
> > them!
>
> No need. Work on the clones to make them better. If they can run all
> software, who cares about the original ones? :-)

I meant apps, not OSes themselves. It's hard to find some things. (Simtel falling apart and Jumbo dying didn't help. Even Hobbes is pretty lacking.)

> > You know as well as anybody that drivers aren't easy to write.
>
> I know, but I didn't say it was going to be easy. It is easiest to sit on
> your bum. But it will get you nowhere.

No, it's easiest to complain. ;-) However, I was trying to be pragmatic as well as look at the flaws in the situation (e.g. my list of Firefox solutions).

> Only in the very high end (mostly IBM, but also e.g. Compaq's Alpha,
> Itanium and HP's UX lines were continued mostly compatible for a long
> time) this is different.

Itanium 2 can only run x86 in software emulation now.

> > If even Win2k is abandoned, what chance does anyone have?
>
> (w2k was actually easy to abandon. The performance difference that hurt at
> 800MHz/512MB hurt less with 2GHz/2GB. If the backlash against Vista hadn't
> been that bad, that maybe would have happened with Vista too. OTOH,
> currently computers aren't gettting that much faster anymore (only more
> cores)

Wait for SSE5. Heck, even Windows 7 claims to be even more multi-core friendly.

> > Or maybe you think that what was good once before (Win98SE or
> > FPC 1.0.10) is truly crap in hindsight? (Doubt it.)
>
> I never had any illusions about either one of them. 1.0.10 was a fine
> release btw, it was just that all the ones behind it (even the 1.9 series
> betas, except for the very first 1.9 one) were simply a lot better.

FPC 1.0.10 had an EMX port (DOS + OS/2 in one), which seems cool. Do any newer versions support that? It seems to me that such a thing would simplify porting. (BTW, another thing that's hard to find is the latest EMX and RSX binaries. Rainer's page is dead, so it was hard to track down. I'm still not sure I found it all, e.g. the latest RSX.EXE 5.24 ASM srcs.)

> It might work, but is that really all that I need? Or do I end up
> supporting an old Dos install for a few progs, and a spiky new machine
> next to it to run the new stuff?

Ideally, something like Windows (and MS with its knowhow) would keep DOS working, but they didn't. And x86-64 didn't help matters either.

> > The point is that you can't rely on MS,
> > Mozilla, Cygwin, or anybody else to support even what they used to
> > support!
>
> Of course not. I wonder why you had the idea in the first place. I don't
> think you can get T-Ford parts from Ford either.

I find it very very hard to believe that Firefox 3 is so much harder to get working on Win9x than 2.x was. Same with Cygwin. What, did they forget their expertise? No, they just randomly lost interest.

> > "Doctor, it hurts when I do this." "Well, then don't do that." Great
> > solution, doc, except that's no solution, it's a workaround. "
>
> Pirate : I want a new wooden peg leg.
> Doctor : peg legs are no longer made of wood. The new prosthetic limbs
> are way better now though.
> Pirate : But you used to support wooden peg legs!

You know, they even recall artificial knees sometimes due to errors. Not convenient! Even the Segway had several stability bugs that had to be fixed. In other words, things always break, whether new or old. And sometimes older might be 10% inferior but 200% easier to maintain, and cheaper too.

> First IMHO they are better. But the big difference is that this DOS
> obsession is the real problem. I don't have a DOS obsession, and anything
> that is even slightly or gradually better is a plus then.

Even MSVC 2k5 supposedly doesn't work on Vista. Lots of XP drivers don't work on Vista (e.g. printers, scanners, my digital camera). Lousy DOS support. Various other bugs and gotchas. Vista isn't that bad, but it's not that good either (at least, not good enough to kill XP and force everyone to upgrade). At least I hope all their work on Win7 won't detract from SP2/SP3 for Vista (which does indeed need it). Even UAC is annoying due to hardcoded filenames (e.g. try running anything, even a simple DOS/DJGPP util like UPDATE or PATCH, without triggering UAC; you can't!).

> The point is that keeping the old OS is not a golden rule. Peoples main
> motivation is to do work with computers, not conserve an old OS.

I am not going to buy an XBox 360 running a PowerPC if it won't run my original XBox (x86) games. Sure, they added some software emulation, but it only runs like three games of mine, so that's a bust. Why pay more money for a machine that won't run old stuff? I don't have the money or interest to buy all new games. And even the XBox 360 is three years old, probably obsolete within the next year (since XBox 1 was dropped like a stone sometime in 2006). Sometimes you don't need "better" or "faster / flashier", just something that works.

> For you, somehow conserving Dos, and putting everything else in stasis is
> an obsession.

Who said put everything in stasis? The world keeps moving; it's just that some things aren't good ideas. DPMI 0.9 is hundreds of times more popular than 1.0. Python 3 ain't the same as 2 (or Perl 6 vs. 5). Change is good, but change that breaks compatibility for no good reason (and without a good workaround) is bad.

> Worse, the people that seem to
> obsess about Dos, seems to be mostly obsessing is why the entire world
> abandoned Dos instead of working on/with Dos.

I don't care if they work on other stuff, just don't break what already worked!

On one hand, rr using Win2k can't use latest VirtualBox (I feel bad for him), but heck, me on Vista, I can't even run DOS full-screen (only slow-ass DOSBox) or compile latest GNU Emacs. So both our OSes suck?? Or is it just that no one fixed 'em?? And since Windows is closed src, we can only let MS do it (and they won't, of course) or else find workarounds. It gets tiring always working around bugs when it's someone else's responsibility. If not for the hard work of the DJGPP guys, 2K/XP NTVDM bugs would've killed DJGPP a long time ago.

marcov

18.02.2009, 12:17

@ Rugxulo
 

Compatibility woes / deprecation

> > > People still use 'em, they just don't code for 'em.
> >
> > Never was a truer word spoken. And there is something of a Darwinian
> > problem there. A problem which will make them extinct in time.
>
> No pun intended? ;-)

Of course. But also the hard truth I'm afraid, even if wrapped in a pun.

> > No need. Work on the clones to make them better. If they can run all
> > software, who cares about the original ones? :-)
>
> I meant apps, not OSes themselves. It's hard to find some things. (Simtel
> falling apart and Jumbo dying didn't help. Even Hobbes is pretty
> lacking.)

Then support them, get active in them etc. People burn out, it happens, they need people to follow in their footsteps.

> > Only in the very high end (mostly IBM, but also e.g. Compaq's Alpha,
> > Itanium and HP's UX lines were continued mostly compatible for a long
> > time) this is different.
>
> Itanium 2 can only run x86 in software emulation now.

Compatible to their own lines, even though the lines were in heavy decline.
Not everything is about x86.

> > 800MHz/512MB hurt less with 2GHz/2GB. If the backlash against Vista
> hadn't
> > been that bad, that maybe would have happened with Vista too. OTOH,
> > currently computers aren't getting that much faster anymore (only more
> > cores)
>
> Wait for SSE5. Heck, even Windows 7 claims to be even more
> multi-core friendly.

The problem with multicore is the apps, not the OS. Afaik SSE5 only has some minor checksumming/compression and encryption primitives.

While I actually like the idea, I don't expect it to be a massive change in performance, except on some heavily utilized SSL servers or so.

> > > Or maybe you think that what was good once before (Win98SE or
> > > FPC 1.0.10) is truly crap in hindsight? (Doubt it.)
> >
> > I never had any illusions about either one of them. 1.0.10 was a fine
> > release btw, it was just that all the ones behind it (even the 1.9
> series
> > betas, except for the very first 1.9 one) were simply a lot better.
>
> FPC 1.0.10 had an EMX port (DOS + OS/2 in one), which seems cool. Do any
> newer versions support that?

Don't know. Afaik the devels who mostly did that partially moved to the native OS/2 port. And while it's still alive (as in, devels are on the list), the mutation rate there is not that high either :-)

> > It might work, but is that really all that I need? Or do I end up
> > supporting an old Dos install for a few progs, and a spiky new machine
> > next to it to run the new stuff?
>
> Ideally, something like Windows (and MS with its knowhow) would keep DOS
> working, but they didn't. And x86-64 didn't help matters either.

Which proves that ideals are often just handles for self-delusion :)

> > Of course not. I wonder why you had the idea in the first place. I
> don't
> > think you can get T-Ford parts from Ford either.
>
> I find it very very hard to believe that Firefox 3 is so much harder to
> get working on Win9x than 2.x was.

Well, I assume with 2.x there were still maintainers, and now there are not. Big difference.

> Same with Cygwin. What, did they forget
> their expertise? No, they just randomly lost interest.

But that is _NORMAL_; any organisation of any kind has a certain people throughput. If key people lose interest, who are YOU to force them to do anything?

> > > "Doctor, it hurts when I do this." "Well, then don't do that." Great
> > > solution, doc, except that's no solution, it's a workaround. "
> >
> > Pirate : I want a new wooden peg leg.
> > Doctor : peg legs are no longer made of wood. The new prosthetic limbs
> > are way better now though.
> > Pirate : But you used to support wooden peg legs!
>
> In other words, things always break whether new or old.

And old things lie on the ground, rot away and become fossils. Like Dos and Dinosaurs.

> And sometimes older might be 10% inferior but 200% easier to maintain, and
> cheaper too.

Maintaining multiple targets is nearly always more expensive. And if you don't get anything back from the people who have the actual advantage (either invested time or paying money), you are stuck with the expenses and see nothing in return. Often not even some form of gratifying feeling that somebody is actually using it.

> > First IMHO they are better. But the big difference is that this DOS
> > obsession is the real problem. I don't have a DOS obsession, and
> anything that is even slightly or gradually better is a plus then.
>
> Even MSVC 2k5 supposedly doesn't work on Vista. Lots of XP drivers don't
> work on Vista (e.g. printers, scanners, my digital camera).

Yes. Shit happens. Natural process of deprecation.

> Lousy DOS support.

Feature not regression.

> Various other bugs and gotchas. Vista isn't that bad, but it's
> not that good either (at least, not good enough to kill XP and force
> everyone to upgrade). At least, I hope all their work on Win7 won't
> detract from SP2/SP3 for Vista (which does indeed need it).

As far as I heard, Win7 is a dolled up Vista.

> Even UAC is
> annoying due to hardcoded filenames (e.g. try running anything, even a
> simple DOS/DJGPP util like UPDATE or PATCH without triggering UAC, you
> can't!).

Yes I know. Install in my case.

> > For you, somehow conserving Dos, and putting everything else in stasis
> is
> > an obsession.
>
> Who said put everything in stasis?

Well, I got that as the general tenor of all your messages in this thread.

> The world keeps moving, just that some
> things aren't good ideas.

Problem is that "good" is very subjective. People have lots of different opinions. If you view a complex situation from an oversimplified single person perspective, everything is simple, and you are always right.

> Change is good, but
> change that breaks compatibility for no good reason (without good
> workaround) is bad.

First, there can be good reasons that don't have a workaround. Nobody is obliged to actually keep compatibility.

Second, the "no good reason" part is very subjective.

> I don't care if they work on other stuff, just don't break what already
> worked!

For the hundredth time, keeping something unbroken in a live environment actually takes knowledge, skill and time. So if nobody is actively validating it during times of heavy changes, it will be broken within weeks.
(and that is actual experience from FPC)

> On one hand, rr using Win2k can't use latest VirtualBox (I feel bad
> for him), but heck, me on Vista, I can't even run DOS full-screen (only
> slow-ass DOSBox) or compile latest GNU Emacs. So both our OSes suck??

Yes and no.

Yes: if you consider that your OS doesn't live up to your demands.
No : if you consider your demands (to run new software on old OSes) to be the problem, rather than the OS.

Take your pick.


> Or is it just that no one fixed 'em??

It is just that.

> And since Windows is closed src, we can only let MS do it (and they won't, of course) or else find workarounds.

VirtualBox is the thing that isn't running on that Windows. So the problem is IMHO first and foremost in VirtualBox, not in 2k.

> gets tiring always working around bugs when it's someone else's
> responsibility. If not for the hard work of the DJGPP guys, 2K/XP NTVDM
> bugs would've killed DJGPP a long time ago.

I agree with that, and FPC/dos too.

Rugxulo

Homepage

Usono,
18.02.2009, 21:26

@ marcov
 

Compatibility woes / deprecation

> > Itanium 2 can only run x86 in software emulation now.
>
> Compatible to their own lines, even though the lines were in heavy
> decline.
> Not everything is about x86.

Yes it is (almost), for good or bad. The only reason for Itanium 2's lacking support is that x86 was faster in software emulation anyway, so they decided to scrap the hardware compatibility aspect.

> > Wait for SSE5. Heck, even Windows 7 claims to be even more
> > multi-core friendly.
>
> The problem with multicore is the apps, not the OS.

I know, but the OS does do some things itself, so speeding itself up doesn't hurt.

> SSE5 has some minor checksumming/compression and encryption primitives
> only afaik.

AMD does claim much improved (30%?) single thread speedups.

> While I actually like the idea, I don't expect them to be a massive chance
> in performance, except in some heavy utilized SSL servers or so.

Can't know, just have to wait and see.

> > FPC 1.0.10 had an EMX port (DOS + OS/2 in one), which seems cool. Do
> any
> > newer versions support that?
>
> Don't know. Afaik the devels that mostly did that partially went to the
> native OS/2 port. And while still alive (as in devels are on the list) the
> mutation rate there is not that high either :-)

I think the README.TXT still incorrectly mentions the EMX port as if it were still available, but the FTP /snapshots/ shows an empty i386-emx/ dir for both v22 and v23. I'm not surprised, esp. since EMX isn't actively maintained, AFAICT.

> > Same with Cygwin. What, did they forget
> > their expertise? No, they just randomly lost interest.
>
> But that is _NORMAL_, any organisation of any kind has a certain people
> throughput. If key people lose interest, who are YOU to force them to do
> anything?

I can't force and am not trying to, obviously. I just don't understand the rationale for dropping everything that was acceptable before.

> > In other words, things always break whether new or old.
>
> And old things lie on the ground, rot away and become fossils. Like Dos
> and Dinosaurs.

It's just frustrating: invest all your time and energy into this ... oh wait, we've moved onto something else. Try again!

DPMI is a standard, and MS was very important to its creation. It was published by a 12-member committee. Ignoring all the real-world reasons why they can't, I wonder: why wouldn't they want to be more compatible by running more apps?? "Yikes, 2K3 broke something, let's fix it before the next version. Oh what, we didn't fix it? Bah, we're rich, what do we care?" And yet the majority of the world is no better. All their anti-MS b.s., and yet they do the same damn thing: spit on the little guy. It's all politics instead of "hey, how do I get my app to run best?" Medicine that nobody can afford is a big crock, not very useful to anybody.

> > Lousy DOS support.
>
> Feature not regression.

BTW, DOSEMU claims to work (non-gfx) on NetBSD and "maybe FreeBSD", ever tried? Or would that make *BSD less appealing?

> > Various other bugs and gotchas. Vista isn't that bad, but it's
> > not that good either (at least, not good enough to kill XP and force
> > everyone to upgrade). At least, I hope all their work on Win7 won't
> > detract from SP2/SP3 for Vista (which does indeed need it).
>
> As far as I heard, Win7 is a dolled up Vista.

2k3 kernel = Vista pre-SP1 (NT 6.0)
2k7 kernel = Vista SP1 (NT 6.0.0001)
? = Windows 7 (NT 6.1)

> > > For you, somehow conserving Dos, and putting everything else in
> stasis is an obsession.
> >
> > Who said put everything in stasis?
>
> Well, got that as the general tenure of all your messages in this thread.

Progress doesn't have to kill everything that came before it. We are not praying mantises.

> > Change is good, but
> > change that breaks compatibility for no good reason (without good
> > workaround) is bad.
>
> First, there can be good reasons that don't have workaround. Nobody is
> obliged to actually keep compatibility.

Just as nobody is obliged to keep diplomatic relations with foreign countries?

ecm

Homepage E-mail

Düsseldorf, Germany,
18.02.2009, 22:21

@ Rugxulo
 

Compatibility woes / deprecation

> 2k7 kernel = Vista SP1 (NT 6.0.0001)
> ? = Windows 7 (NT 6.1)

I thought Windows 7 would increase the NT version to 7... (Or will they still do that but after current 6.1 beta releases?) I wonder whether Windows 8 will then be NT 6.11 or 7.0 ;-)

---
l

Rugxulo

Homepage

Usono,
19.02.2009, 01:08

@ ecm
 

Compatibility woes / deprecation

> > 2k7 kernel = Vista SP1 (NT 6.0.0001)
> > ? = Windows 7 (NT 6.1)
>
> I thought Windows 7 would increase the NT version to 7... (Or will they
> still do that but after current 6.1 beta releases?) I wonder whether
> Windows 8 will then be NT 6.11 or 7.0 ;-)

No, they've explicitly said they will stick to 6.1 internally (esp. since the kernel isn't much changed, if at all).

marcov

18.02.2009, 23:08

@ Rugxulo
 

Compatibility woes / deprecation

> > > Itanium 2 can only run x86 in software emulation now.
> >
> > Compatible to their own lines, even though the lines were in heavy
> > decline.
> > Not everything is about x86.
>
> Yes it is (almost), for good or bad.

Afaik, ARMs and MIPS sell in bigger numbers than x86s :-) (*)

(*) though the revenue might be /slightly/ larger on the x86s :-)

> The only reason for Itanium 2's
> lacking support is that it was faster in software anyways, so they decided
> to scrap the hardware compatibility aspect.

Afaik the reason was that they had given up hope of attracting the x86 hordes, and focused on some high-end businesses that came mostly from HPUX (HP being the major Itanium vendor) and some high-end computing (FPU throughput was quite nice for pre-Opteron times).

> > The problem with multicore is the apps, not the OS.
>
> I know, but the OS does do some things itself, so speeding itself up
> doesn't hurt.

Well, I assume it's the same as what the *nixes are doing: improving the granularity of kernel locking to avoid all-core stalls, as well as better supporting NUMA architectures. But that is only for scalability of server apps. Doubt it matters much for home use.

> > SSE5 has some minor checksumming/compression and encryption primitives
> > only afaik.
>
> AMD does claim much improved (30%?) single thread speedups.

If I added up all the claims for all the new technologies over the years, I would have a machine 20 times as fast by now. I can imagine this for maybe an ideal compression/checksumming/encryption scenario. But not for avg singlethread performance. I think 5% (from SSE5 alone) will be optimistic.

> > Don't know. Afaik the devels that mostly did that partially went to the
> > native OS/2 port. And while still alive (as in devels are on the list)
> > the mutation rate there is not that high either :-)
>
> I think the README.TXT still incorrectly mentions the EMX port as if still
> available, but the FTP /snapshots/ shows an empty i386-emx/ dir for both
> v22 and v23. I'm not surprised, esp. since EMX isn't actively maintained,
> AFAICT.

Yes, formally removing them from the list is a big step, and therefore always done late. (Typical scenario: devel 1 says "we should delist this port". Devel 2: "but I just spent some time on it to get it releasable again" ... deadline passes. Devel 1: "where is the release for xx". Devel 2: "Sorry, something came up, didn't have time".)

Note that this is not an accusation. It is simply how it works, and I myself am also guilty of this, many times (for instance, freebsd 64-bit support).

> I can't force and am not trying to, obviously. I just don't understand the
> rationale for dropping everything that was acceptable before.

Nobody was there to maintain it anymore. They just put out the light in the factory, but the factory was already empty. The light going out is what is visible from a distance, not the factory slowly going bankrupt and getting less busy over the years.

> > And old things lie on the ground, rot away and become fossils. Like Dos
> > and Dinosaurs.
>
> It's just frustrating: invest all your time and energy into this ... oh
> wait, we've moved onto something else. Try again!

IMHO if you are fair, subtract all your time invested after say 95-96. Because by then you should have been able to see it coming. If you invested the bulk of that time after that, you must have known it was a dying platform.

> DPMI is a standard, MS was very important to its creation.

Yes. And a standard is a document. Not software.

> It was published by a 12-member committee. Ignoring all the real-world reasons
> why they can't, I'm wondering, why wouldn't they want to be
> more compatible by running more apps??

Oldest rule of marketplaces:
Costs money/effort etc, and not enough demand.

> All their anti-MS b.s. and yet they do the same damn
> thing, spit on the little guy.

Well, IMHO the problem IS the little guy. Most of them run around like headless chickens. Dos got killed off so quickly because in those times the number of computer users became 20-40 times as big, and the people who never knew it never understood it.

The Office franchise has been built on the fact that employees would install an illegal copy of the latest and greatest at home. And half a year later, some manager would solve the "problems" by "unifying" the company on the latest and greatest.

It may be strange to hear me say this in a thread where I argue against stagnation, but I'm a realist, and while IMHO Dos was a goner in say 1997, the software deprecation tempo is too fast.

> It's all politics instead of "hey, how do I
> get my app to run best?"

An element of fashion has entered the computing scene, which is worse than politics because there is no responsibility feedback loop at all.

> > > Lousy DOS support.
> >
> > Feature not regression.
>
> BTW, DOSEMU claims to work (non-gfx) on NetBSD and "maybe FreeBSD", ever
> tried? Or would that make *BSD less appealing?

I haven't used dosemu in ages. Actually I have run w9x more recently than that.

> > > detract from SP2/SP3 for Vista (which does indeed need it).
> >
> > As far as I heard, Win7 is a dolled up Vista.
>
> 2k3 kernel = Vista pre-SP1 (NT 6.0)
> 2k7 kernel = Vista SP1 (NT 6.0.0001)
> ? = Windows 7 (NT 6.1)

Well, versioning can be manipulated out of marketing concerns. But all there seems to be is some minor makeup, the reintroduction of old bugs (the @$*@($*&( slow USB disk problem that plagued pre-SP1 Vista too) and some damage control on UAC, which they are already partially reverting because it created gigantic security holes.

> Progress doesn't have to kill everything that came before it. We are not
> praying mantises.

Good point there. Progress doesn't actually kill. It just competes for resources. Like modern humans slowly made the Neanderthaler extinct by competing more adeptly for resources.

See Dos as a Neanderthaler (pun intended)

> > > Change is good, but
> > > change that breaks compatibility for no good reason (without good
> > > workaround) is bad.
> >
> > First, there can be good reasons that don't have workaround. Nobody is
> > obliged to actually keep compatibility.
>
> Just as nobody is obliged to keep diplomatic relations with foreign
> countries?

Bad analogy. Diplomatic relations only work efficiently if there is mutual benefit. Which is exactly the problem. There is not enough incentive for devels to support targets that users have abandoned in droves.

Rugxulo

Homepage

Usono,
19.02.2009, 02:19

@ marcov
 

Compatibility woes / deprecation

> Afaik, there are more ARMs and MIPS sell bigger numbers than x86s :-) (*)
>
> (*) though the revenue might be /slightly/ larger on the x86s :-)

No MIPS port of FreeBSD? (from quick check) Then MIPS sucks! ;-)

> > The only reason for Itanium 2's
> > lacking support is that it was faster in software anyways, so they
> decided
> > to scrap the hardware compatibility aspect.
>
> Afaik the reason was that they had given up hope attracting the x86
> hordes, and focussed on some high-end businesses and some highend
> computing

Rows and rows of Opterons are (semi-)used in high-end supercomputers (e.g. IBM Roadrunner). But you never hear of any Itaniums. And according to Wikipedia, IA64 debuted in 2001. Pardon my skepticism, but I don't think Intel ever intended it to be mainstream for home users, esp. since they sold lame-o P4s from 2000 until 2006 (Core), although Xeons supposedly had AMD's x86-64 in 2004 or so. And they've hyped the Core 2 to death. So I find it hard to believe that they couldn't have promoted IA64 if they'd wanted to (or that they ever even tried). BTW, CWS says it was good for "number crunching" and "made x86 look like a toy" but "had little driver support". So it's actually surprising that any version of Windows (or anything) supports it. (And to be honest, it feels like SSE1/2/3/4/5 are just things for math nerds and/or multimedia freaks. Or maybe just another way for Intel to promote their compiler. In other words, people wonder why Intel didn't extend the normal GPR size to 64 bits long before AMD did.)

P.S. If AMD64 was introduced in 2003, doesn't that mean it's about time to be obsolete?? ;-)

P.P.S. Here's what I've read, correct me if wrong:

PAE: PPro or newer (64 GB max.)
PSE-36: PIII (64 GB max., "simpler alternative"??)
AMD64: 40-bit address space (1 TB max.)
AMD64 / Barcelona: 48-bit (256 TB max.)
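Just to sanity-check the arithmetic on those limits (a trivial sketch; the CPU-to-width mapping above is only what I've read, so treat that part as my assumption, not gospel):

```python
# Size of the physical address space implied by each address-bus width.
def max_memory_bytes(address_bits):
    """Return how many bytes an address of the given bit width can reach."""
    return 2 ** address_bits

GIB = 2 ** 30  # one gigabyte (binary)
TIB = 2 ** 40  # one terabyte (binary)

print(max_memory_bytes(36) // GIB, "GB")  # 36-bit (PAE / PSE-36): 64 GB
print(max_memory_bytes(40) // TIB, "TB")  # 40-bit (early AMD64):  1 TB
print(max_memory_bytes(48) // TIB, "TB")  # 48-bit (Barcelona):    256 TB
```

So at least the numbers in the list are internally consistent with the bit widths.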

> > > The problem with multicore is the apps, not the OS.
> >
> > I know, but the OS does do some things itself, so speeding itself up
> > doesn't hurt.
>
> Well, I assume it the same what the *nixes are doing, improving the
> granularities of kernel locking to avoid all-core stalls, as well as
> better supporting NUMA architectures. But that is only for scalability of
> server apps. Doubt it matters much for home use.

Encoding multimedia, compression, compiling, etc. Not much else really comes to mind. Gaming?? (Well, okay, running antivirus/anti-spyware in background is much much nicer on a dual core than single core. But that's Windows-specific, i.e. I doubt your FreeBSD 64-bits has that issue, heh.)

> > AMD does claim much improved (30%?) single thread speedups.
>
> I can imagine this for maybe an
> ideal compression/checksumming/encryption scenario. But not for avg
> singlethread performance. I think 5% (from SSE5 alone) will be
> optimistic.

Well, are we comparing SSE5 to non-SIMD or to SSE2 or what? They already widened the SSE datapath in newer chips (Core 2, Barcelona); e.g. even the PIII was fairly limited due to the FPU and SSE1 sharing the same execution units (or whatever). So a Core 2 should be lots faster at SSE2 than a P4.

N.B. Most of this is just what I've read. I'm really not that well-informed or intelligent or useful. Just a friendly caveat so you don't think I'm being pretentious. :-|

> > I think the README.TXT still incorrectly mentions the EMX port as if
> > still available. I'm not surprised, esp. since EMX isn't actively
> > maintained, AFAICT.
>
> Yes, formally removing them from the list is a big step, and therefore
> always done late. (typical scenario: devel 1 says "we should delist this
> port". Devel 2: "but I just spent some time on it to get it releasalbe
> again" ... deadline passes. Devel 1: "where is the release for xx". Devel
> 2: "Sorry, something came up, didn't have time".
>
> It is simply how it works, and I myself am also guilty of this, many
> times (for instance, freebsd 64-bit support)

I completely understand that. Things happen, lots of support is by hobbyists, so setbacks occur too. (Why do you think FreeDOS is so slow to update?)

Anyways, I've barely ever used EMX, but it was quite hard to find updated binaries of everything (e.g. NT09D, RSX 5.24) because Hobbes lacks them, Rainer's web site is gone, and Eberhard is nowhere to be found. So it seems EMX (esp. the DOS + OS/2 aspect) is mostly ignored by the OS/2 community. Lots of EMX binaries (e.g. VILE, NVI) won't even work on DOS. And DOS just prefers DJGPP, while Win32 users prefer MinGW, with a very select few preferring Cygwin (more restrictive license, big .DLL). Even RSXNT seemed to have potential, but that fizzled out quite a while back.

(Also, it seems FPC prefers to compile its own stuff, e.g. WMemu is bigger than the default old compile. If so, I assume UPX-UCL is more useful than the stock semi-closed UPX-NRV. I made a .BAT to compile that if anybody cares. And just FYI, UPX has three undocumented commands: --file-info, --all-filters, --small)

> > It's just frustrating: invest all your time and energy into this ... oh
> > wait, we've moved onto something else. Try again!
>
> IMHO if you are fair, subtract all your time invested after say 95-96.
> Because by then you should have been able to see it coming.

Uh, not exactly. I mean, as long as FreeDOS and OS/2 and (older) Windows still use it, it's useful. As long as my computers and brain still work, it's useful, at least to me. As mentioned, I consider DOS more stable than Linux (2.2? 2.4? 2.6?) or *BSD (have they ever finished Linux 2.6 emulation or still stuck at 2.4?). Although I like the idea of *BSD, at least (seems more minimal than Linux).

> > DPMI is a standard, MS was very important to its creation.
>
> Yes. And a standard is a document. Not software.

DPMI 1.0. Barely ever implemented. 'Nuff said.

> > It was published by a 12-member committee. Ignoring all the real-world
> reasons
> > why they can't, I'm wondering, why wouldn't they want to
> be
> > more compatible by running more apps??
>
> Oldest rule of marketplaces:
> Costs money/effort etc, and not enough demand.

Then why even bother? Why draw up a standard and not use it? Why like something one day and hate it the next? That's just weird. "Good" doesn't really depreciate over time. It stays good. Now, tastes can change, other things improve, but it's hard to say, "SNES sucks" or "XBox 1 sucks" etc. without sounding a little ... erm, spoiled. ("You kids today! Get off my lawn!") Of course, I won't be using a VIC-20 anytime soon, but I'm sure even that has its uses (if you enjoy it). ;-)

> > All their anti-MS b.s. and yet they do the same damn
> > thing, spit on the little guy.
>
> Well, IMHO the problem IS the little guy. Dos got killed off so
> quickly because in those times the number of computer users became
> 20-40 times as big, and the people that never knew it, never
> understood it.

Is it really better to remake the world (or maybe let the pre-existing world work with you)?

> > > > Lousy DOS support.
> > >
> > > Feature not regression.
> >
> > BTW, DOSEMU claims to work (non-gfx) on NetBSD and "maybe FreeBSD",
> ever
> > tried? Or would that make *BSD less appealing?
>
> I haven't used dosemu in ages. Actually I have run w9x more recently than
> that.

Give it a try, then. Works on x86-64 too, which you use, right? It can't be harder than building cross BinUtils. ;-)

> > ? = Windows 7 (NT 6.1)
>
> Well, versioning can be manipulated out of marketing concerns. But all
> there seems to be is some minor makeup, introduction of old bugs (the
> @$*@($*&( slow USB disk problem that plagued preSP1 Vista too) and some
> damage control on UAC which they are already partially reverting because
> it created gigantic security holes.

Vista is already two years old. Only three more to go before obsoletion! :-P
And I think 6.1 is used so apps don't break. (Besides, it's truly Vista's kernel even if the UI is majorly overhauled.)

> > Progress doesn't have to kill everything that came before it. We
> > are not praying mantises.
>
> Good point there. Progress doesn't actually kill. It just competes for
> resources. Like that modern humans slowly made the Neanderthaler extinct
> by more adept competing for resources.

We can have multitasking but not multi-boot or multi-development?

> See Dos as a Neanderthaler (pun intended)

"Nederlander"? :-D j/k

> > Just as nobody is obliged to keep diplomatic relations with foreign
> > countries?
>
> Bad analogy. Diplomatic relations only work efficient if there is mutual
> benefit. Which is exactly the problem. There is not enough incentive for
> devels to support those targets that users have abandoned in droves.

They don't even try, even when it's easy, even when it IS beneficial. C'mon, you're basically saying that nothing useful ever has or ever could come out of DOS. Besides, not everything is about net profit / gain, sometimes it's just about making something useful for someone else. (I mean, NetBSD running on a toaster isn't majorly practical or useful to me. But it's still cool.)

So you really only see DOS as 16-bits non-*nix single-tasking real-mode, and nothing else?? Or more realistically as a 16-/32-bit hacked real/pmode hybrid that can be used in single-tasking or multitasking with a fairly nice library of apps and APIs?

Japheth

Homepage

Germany (South),
19.02.2009, 08:10

@ Rugxulo
 

Compatibility woes / deprecation

> Lots of EMX binaries (e.g. VILE, NVI) won't even work on DOS.

Did you try the EMX switch of JEMM? It's supposed to make emx binaries run on machines with > 256 MB of memory.

---
MS-DOS forever!

Rugxulo

Homepage

Usono,
19.02.2009, 08:57

@ Japheth
 

Compatibility woes / deprecation

> > Lots of EMX binaries (e.g. VILE, NVI) won't even work on DOS.
>
> Did you try the EMX switch of JEMM? It's supposed to make emx binaries run
> on machines with > 256 MB of memory.

It's not JEMM's fault: either EMX itself doesn't have the proper support (some very basic file functionality needed for NVI), or the developers specifically compiled it in such a way (-Zsys -Zomf) that it won't work except on OS/2. (And VILE has a native DOS port anyway.)

P.S. Is that > 256 MB EMS or just in general? I thought EMX 0.9d fix 4 supported up to 1 GB? (untested) The problems mentioned aren't RAM-related at all, AFAICT.

ftp://ftp.uni-heidelberg.de/pub/os2/gnu/emx%2Bgcc/emxfix04.doc


emxfix04.doc      emx 0.9d     FIX 04                               20-Mar-2001

[emxfix04]
...
[emxfix03]
...
emx.exe now supports up to 1 GB of memory (was 64 MB)
...


Now, JEMM does have a problem with Inner Worlds, dunno why. And HXRT has a problem with RSXNT apps (I compiled NASM 0.98.39, but something file-related, opening or writing or closing, didn't work; I only got truncated 0-byte output). I only vaguely mentioned the former (and didn't mention the latter at all) in case you were too busy or disinterested. Feel free to ignore it anyway; I'm just mentioning it now since I have your attention. :-D

Japheth

Homepage

Germany (South),
23.02.2009, 10:12
(edited by Japheth, 23.02.2009, 13:08)

@ Rugxulo
 

Compatibility woes / deprecation

> P.S. Is that > 256 MB EMS or just in general?

it's unrelated to EMS.

> I thought EMX 0.9d fix 4 supported up to 1 GB?

Might be true, I didn't test.

> Now, JEMM does have a problem with Inner Worlds, dunno why.

That's true, but it's also true for MS Emm386. With both I get a page error exception on a P4. However, on my old "DOS-era" PC (80486 100 MHz, 32 MB, SB16 PnP, ET4000/W32), there's no problem with Jemm or MS Emm386.

> HXRT has a problem with RSXNT apps (I compiled NASM 0.98.39 but something
> file-related, opening or writing or closing, didn't work, only got
> truncated 0-byte output).

Please tell more details, even better provide a test case!

---
MS-DOS forever!

marcov

19.02.2009, 10:47

@ Rugxulo
 

Compatibility woes / deprecation

> I don't think Intel ever intended it to be
> mainstream for home users, esp. since they sold lame-o P4s from 2000 until
> 2006 (Core) although Xeons supposedly had AMD's x86-64 in 2004 or so.

Yes they did. Why do you think they lamed on with the P4 so long? And why do you think AMD originally came up with x86_64 and not Intel? Correct: because Intel had an IA64-happy future. They even rebranded x86 as IA32 to provide marketing continuity.

The trouble with Itanium was manifold, though. It required huge investments in compilers (and the commercial tool vendors were already near bankruptcy), and the logic the VLIW design saved was dwarfed by the emergence of multi-megabyte caches, so that advantage evaporated. Moreover, Intel was plagued with problems and couldn't ramp clock speed up enough (partially because they had had to ramp up the P4 much faster than planned due to sudden AMD competition).

I worked at a university computational center from 2000-2003. Intel is a large beast with competing departments, but HP was pushing Itanium pretty hard.

> And they've hyped the Core 2 to death.

Core 1 was their rescue (out of the fairly recently founded Israeli labs, targeted at the notebook market as a direct successor of the Pentium-M) after two failed CPU designs: the P4, which was originally meant to hit the 5 GHz mark but whose heat dissipation got too bad, and Itanic, oops, pardon me, Itanium.

Core 2 was the first design after they decided they needed something new. Nehalem is the third. (If I understood correctly, Nehalem was designed by the former P4 team, which is, among other reasons, why they brought hyperthreading back.)

> BTW, CWS says it was good for "number crunching" and "made x86 look like a toy" but "had little driver support".

It did have a decent FPU (till the Athlon 64 came along), but it was relatively underclocked. Part of the promise was the glorious future, which never came, when it would be clocked equally.

Note that honest comparisons were pretty hard, due to the fact that they gave the high-end Itanium multi-MB caches, while the Xeon (P4-based) competition had to make do with a fourth of that.

> (And to be honest, it feels like SSE1/2/3/4/5 are just things for math nerds and/or multimedia freaks.

Correct. E.g. the main Nehalem highlights are the on-board memory controller and better scalability across more than 2 cores. People overfocus on instructions, the same way they overfocus on the language when talking about development systems.

> people wonder why Intel didn't extend the normal
> GPR size to 64-bits long
> before AMD.)

> PAE: PPro or newer (64 GB max.)
> PSE-36: PIII (64 GB max., "simpler alternative"??)
> AMD64: 40-bit address space (1 TB max.)

Physical, but they had 48-bit virtual, which was sign extended to 64-bit or so.

> AMD64 / Barcelona: 48-bit (256 TB max.)

For the rest it is more or less correct, but note that, of the 32-bit ones, often only the server variants (Xeon) actually had PAE enabled. I don't think you could buy a consumer PAE CPU till x86_64 (which has it as part of the standard).

I don't know much about PSE, never used it, and can't remember it being mentioned when I was at the uni computational center either.

> Encoding multimedia, compression, compiling, etc.

Hardly. The CPU-intensive part of the first two is too coarse-grained, and the I/O too simple, to benefit from kernel work.

Compiling is a different story. Yes, it is faster with -j 4, but keep in mind that this is partially just earning back what the build model cost in the first place (by making the entire process controllable with make, and the endless header reparsing that that entails).

FPC wins like 20-40% with -j 2. Florian is looking into bringing threading INTO the compiler, but I doubt that will gain that much.

> Not much else really
> comes to mind. Gaming?? (Well, okay, running antivirus/anti-spyware in
> background is much much nicer on a dual core than single core. But that's
> Windows-specific, i.e. I doubt your FreeBSD 64-bits has that issue, heh.)

Well, on *nix it is the same. Everything but the main app goes on the 2nd core. Usually you don't have a guzzler like an antivirus, but you still get some speedup (and at least as important: responsiveness) from the 2nd core. But that is 50% utilization at best, so adding more cores is useless.

Gaming could benefit, but while they make a big production out of it to sell quad cores to gamers, it is only just starting. Big chance that by the time games really utilize more than 2 cores significantly, the 4-core rig you buy now will already be outdated for the then most demanding games.

> Well, are we comparing SSE5 to non-SIMD or to SSE2 or what? They already
> extended the SSE bandwidth in newer chips

The overall improvement of SSE5 vs same app without SSE5 instructions on the same machine.

> So Core 2 should be lots faster at SSE2 than a P4.

I happen to know that, because I actually have an SSE2 routine in production at work :-) Er, what was it again... a Core 2 6600 (2.4 GHz) outran a 3.4 GHz Pentium D by slightly more than 100% (which is even more on a per-clock basis).

> N.B. Most of this is just what I've read. I'm really not that
> well-informed or intelligent or useful. Just a friendly caveat so you
> don't think I'm being pretentious. :-|

Reading is the best way. I get most of the good info from the German c't magazine, occasionally throwing in a Dr. Dobb's. I also like Ars Technica and The Register (though the latter is as much entertainment as info).

I started following CPU use because of FPC, but if you do it ten years you learn a lot. I'm not that big with the practical applications either.

> And DOS just prefers
> DJGPP while Win32 users prefer MinGW with a very select few preferring
> Cygwin (more restrictive license, big .DLL).

I'm in between. I use Cygwin the most, but for the actual programming I/we use MinGW binaries.

> (Also, seems FPC prefers to compile its own stuff, e.g. WMemu is bigger
> than default old compile.

AFAIK, FPC just picks stuff somewhat randomly, and is a bit conservative: it sticks with the stable stuff if it works. There is no real policy there. We would need more release builds (testing releases) to pull that off, and it's currently hard enough to get to a one-release-every-6-months schedule.

> If so, I assume UPX-UCL is more useful than
> stock semi-closed UPX-NRV.

We discourage UPX. I don't see the point.

> Uh, not exactly. I mean, as long as FreeDOS and OS/2 and (older) Windows
> still use it, it's useful.

True, and there is nothing wrong with using it. But you can't blame other people when you lose investments in something you knew all along was a goner long term.

> I consider DOS more stable than
> Linux (2.2? 2.4? 2.6?) or *BSD (have they ever finished Linux 2.6 emulation
> or still stuck at 2.4?).

I've only used the Linux emulation for _an_ Adobe Acrobat Reader (don't care that much which).

And your DOS stability argument reminds me a bit of the urban legend (?) about how MS got the initial Windows NT4 (SP0) release C2 security certified: by changing the test circumstances so that the NT machine was built into a brick wall with only the power cord going in. Then it was 100% secure.

IOW, you can't judge stability by comparing unequal amounts of services.

> > Oldest rule of marketplaces:
> > Costs money/effort etc, and not enough demand.
>
> Then why even bother? Why draw up a standard and not use it?

Not everything always goes as planned. I'm sure they planned to use it.

> That's just weird. "Good" doesn't really depreciate over time.

Actually, I think it does change, yes.

> Is it really better to remake the world (or maybe let the pre-existing
> world work with you)?

IMHO the whole static world concept is artificial. There never was that stable world that needs to be remade in the first place. It is part retrospective idealization, part over-simplification.

> Vista is already two years old. Only three more to go before obsoletion!
> :-P

Vista will be obsolete sooner than XP Pro (which IIRC is security-supported till 2014).

> We can have multitasking but not multi-boot or multi-development?

Sure we can. But apparently Dos can't sustain itself. That is the problem, not the others forbidding it.

> They don't even try, even when it's easy, even when it IS beneficial.
> C'mon, you're basically saying that nothing useful ever has or ever could
> come out of DOS.

If you say:

... could not come out of Dos anymore ...

then yes. I was teasing you about setting up archiving communities etc., but honestly, I think it is already too late for that, since such communities take quite some time to develop.

OTOH, DJGPP seems to live again somewhat after a slump. If DJGPP really stops, Dos is dead.

> Besides, not everything is about net profit / gain,
> sometimes it's just about making something useful for someone else. (I
> mean, NetBSD running on a toaster isn't majorly practical or useful to me.
> But it's still cool.)

True, but the NetBSD people do that themselves, which is exactly the problem with Dos: no hardened community that bands together and actively works. Just a few forums full of people with delusions about Dos' old grandeur, fragmented between people using old commercial stuff and people working on an OSS platform.

> So you really only see DOS as 16-bits non-*nix single-tasking real-mode,
> and nothing else??

No. I wouldn't have worked on 32-bit FPC Dos (till 2000) if I thought that.

> Or more realistically as a 16-/32-bit hacked real/pmode
> hybrid

At the moment I don't really have any use case. I'm here mostly because I like fullscreen textmode apps, and Dos is the platform where those were dominant.

Rugxulo

Homepage

Usono,
20.02.2009, 00:24

@ marcov
 

Compatibility woes / deprecation

> > I don't think Intel ever intended it to be
> > mainstream for home users, esp. since they sold lame-o P4s from 2000
> until
> > 2006 (Core) although Xeons supposedly had AMD's x86-64 in 2004 or so.
>
> Yes they did. Why do you think they lamed on with P4 so long?

They must've had high expectations for it.

> And why do
> you think AMD originally came up with x86_64 and not Intel?

I dunno, market pressure? Ingenuity? Boredom? ;-)

> Correct, because Intel had a IA64 happy future. They even rebranded
> x86 as IA32 to provide marketing continuity.

I still highly doubt it. Do any home users of "normal" cpus have any IA64s? If not, then Intel never pushed it to the home market. Now, maybe x86-64 was AMD's competition to the supercomputer / server market, who knows.

> The trouble with Itanium was manyfold though. It required huge investments
> in compilers (and commercial tool vendors were already near bankrupt), the
> logic the VLIW design saved was dwarfed by the emergence of multi-megabyte
> cache and evaporated. Moreover, Intel was plagued with problems, and
> couldn't ramp up speed enough. (partially also because they had had to
> ramp up P4 much faster than planned due to sudden AMD competition).

A PIII was indeed faster per clock than a P4, so only fast P4s really got better, and SSE2 made a difference (although it required a code redesign due to quirks, ahem, stupid 16-byte alignment). This was due to the super-long pipelines, right? (And no "barrel shifter.") And yes, I think the Athlon was a better bang for the buck at the time, possibly even the first to reach 1 GHz, I forget. Only with the Athlon XP did they get full SSE1, and only AMD64 got SSE2 (although I'm still unsure how many people use it, esp. since non-Intel compilers don't really target it).

> Intel is a large beast with departments competing, but HP was
> pushing itanium pretty badly.

But not for home use. The P4 was marketed to us home users, dunno why.

> > And they've hyped the Core 2 to death.
>
> Core-one was their rescue (out of fairly recently founded Israel labs
> targeted at the notebook market as a direct successor of P-M) after two
> failed CPU designs (P4, which was originally meant to hit the 5GHz border,
> but they heat disipation got to bad), and Itanic, oops, pardon me Itanium.

Haifa, Israel ... yes, I think what surprised even them was when Intel went on to use the Pentium-M as the new base (basically a PIII w/ SSE2). That's because Prescott used a LOT more energy than even Northwood (which I have, c. 2002) for only a small increase in speed. They found out that their original plans to ramp up to 10 GHz weren't going to happen (since Prescott at 3.8 GHz was ridiculously hot). So they went back to the more power-friendly Pentium-M. (I don't think Dell ever sold any IA64s to anybody, did they? That's where my P4 came from, and surely by 2002 IA64 was out there, so they could have ... if they wanted.)

> Core 2 was the first design after they decided they needed something new.

Core 1 never got utilized much, from what I could tell. I think it was just a stepping stone. And Core 2 was the first to support x86-64 (not counting previous Xeons).

> Nehalem is the third (if I understood correctly, Nehalem is designed by
> the former P4 team, which is, among others, why they brought hyper
> threading back)

Nehalem is supposed to be 30% faster than previous incarnations, even. Something about Intel's new "tick tock" strategy, new microarchitecture every other year. Of course, they're also including SSE4.2 now (instead of only SSSE3 or SSE4.1). My AMD64x2 laptop "only" (heh) supports SSE3, and yet I can't help but feel that all the SIMD crap is (almost) madness.

AMD really had a good foothold until Core 2 came about. Then all the whiners about Intel's monopoly happily jumped ship due to faster cpus. Gotta have the latest, greatest, I guess. Oh well. Not that I really blame them, just I find it hard to really consider AMD a "generation behind".

> > BTW, CWS says it was good for "number crunching" and "made x86 look like
> > a toy" but "had little driver support".
>
> It did have decent FPU (till Athlon64 came along), but it was relatively
> underclocked. Part of it was the glorious future that never came when it
> would be equally clocked.

Here's what I don't understand: AMD64 is good at number crunching? How so? Due to the 16 64-bit GPRs or improved FPU? The FPU proper (and MMX etc.) is often considered "deprecated", which surprises me. I understand that SSE is all the rage, but ... I dunno, people are weird. :-| And since all x86-64 supports SSE2, it should be used more and more. (But is it?) I'm not sure I believe x86-64 inherently speeds up everything as much as people claim.

> Note that honest comparisons were pretty hard, due to the fact that they
> gave the high-end Itanium multi MB caches, while the Xeon (P4 based)
> competition had to do with a fourth of that.

Same with AMD, they've always had smaller caches and (slightly) older fabs. Hence why Intel is moving to 32nm while AMD is "stuck" at 45nm with Phenom II. (I know, boo hoo.)

> > (And to be honest, it feels like SSE1/2/3/4/5 are just things for math
> > nerds and/or multimedia freaks.
>
> Correct. e.g. the main Nehalem highlights are the on-board memory manager
> and better scalabity over more than 2 cores. People over focus on
> instructions

In other words, I'm not sure MMX, much less SSE2 etc., has been really utilized correctly so far. I don't blame the compilers (it's tough), but I kinda sorta almost agree with some: why did Intel make it so kludgy?

> > Encoding multimedia, compression, compiling, etc.
>
> Hardly. The CPU intensive part of the first two is too coarse grained, and
> the I/O too simple to benefit from kernel work.

Well, 7-Zip (and 4x4, etc.) are multithreaded, at least in part, and it does make a difference. But it's not horribly slow anyway, so it only matters for huge files.

> Compiling is a different story. Yes it is faster with -j 4, but keep in
> mind that it partially also just earning back the scalability that was
> introduced (by having the entire process controlable with make, and the
> infinite header reparsing that that entitles) in the first place.

I'm not a huge, huge fan of make. Mostly because there are so many variants, and also because it's so kludgy and hard to make do exactly what you want. And it's very hard to read, too.

> FPC wins like 20-40% with -j 2. Florian is studying on bringing threading
> INTO the compiler, but I doubt that will bring that much.

It can't hurt to try. But most likely the only win will be separating file I/O from the CPU-intensive part (like 7-Zip does).

> Well on *nix it is the same. Everything but the main app goes on the 2nd
> core. Usually you don't have a guzzler like an antivirus, but you still
> get some speedup

Tell that to Intel: they said at one point to expect "thousands" of cores eventually.

> Gaming could benefit, but while they make a big production out of it to
> sell quad cores to the gamers, it is only just starting.

Well, I never understood computer gaming, esp. needing such a high-powered rig just for the "pleasure" of configing / installing / uninstalling / reinstalling / patching. I'd prefer a console (not that I game much anymore). Anyways, DirectX version, total RAM, and GPU probably affect a game more than cores these days.

> > So Core 2 should be lots faster at SSE2 than a P4.
>
> a core2 6600 (2.4GHz) outran a 3.4GHz P-D by slightly more
> than 100% (which is even more on a per clock basis)

256-bit bandwidth (vs. 128?), I think.

> > And DOS just prefers
> > DJGPP while Win32 users prefer MinGW with a very select few preferring
> > Cygwin (more restrictive license, big .DLL).
>
> I'm inbetween. I use cygwin the most, but for the programming I/we
> actually use mingw bins.

Cygwin seems to have a better app library. But it's a double-edged sword: no MSVCRT.DLL bugs but you have to lug around a huge CYGWIN1.DLL as well as adhere to the license (which is only annoying in that it's an extra hassle, more grunt work to do for no benefit). Besides, DJGPP can do all that and still run on (gasp) non-Windows! (And you're not stuck to 3.4.4 or 3.4.5 ... and I found the 4.x snapshots VERY buggy, at least for Vista.)

> > (Also, seems FPC prefers to compile its own stuff, e.g. WMemu is bigger
> > If so, I assume UPX-UCL is more useful than
> > stock semi-closed UPX-NRV.
>
> We discourage UPX. I don't see the point.

But FPC 2.2.2 includes UPX 3.03 in MAKEDOS.ZIP.

N.B. My .BAT is vaguely flawed anyways, as I just found out yesterday. I don't know why (makefile quirks?) but optimizations for the main compressor proper weren't being turned on. So I have to modify it (done) and upload it (not done). I think I'll make a UPX-UCL package for FreeDOS (esp. since they never did anything with my previous unofficial build). But yeah, feel free to ignore.

> And your DOS stable argument reminds me a bit about the Urban Legend (?)
> how MS got Windows NT4 (sp0) initial release C2 security certified

I meant stable API as in both old and new programs still run fine. It's not as much of a moving target.

> OTOH, DJGPP seems to live again somewhat after a slump. If DJGPP really
> stops, Dos is dead.

Well, maybe not. I mean, Turbo C++ 1.01 is "dead" but still used by FreeDOS many many years after-the-fact. So, somebody could still use it, it just wouldn't be updated. And actually you could always fork DJGPP (but call it something else) or use EMX/RSX or MOSS instead. (And there's always OpenWatcom.)

marcov

22.02.2009, 22:11

@ Rugxulo
 

Compatibility woes / deprecation

> > Yes they did. Why do you think they lamed on with P4 so long?
>
> They must've had high expectations for it.

Sure, but by the time the 3 GHz parts came out, they knew they weren't going to make the planned 5 GHz unless a miracle happened. Yet it was still years from then till the Core 2.

> > And why do
> > you think AMD originally came up with x86_64 and not Intel?
>
> I dunno, market pressure? Ingenuity? Boredom? ;-)

Intel already wanted to use the 64-bit argument as an "advantage" for Itanium (as a driving force for the x86 (IA32) -> IA64 migration). The last thing they needed was anything that detracted from that.

> > Correct, because Intel had a IA64 happy future. They even rebranded
> > x86 as IA32 to provide marketing continuity.
>
> I still highly doubt it. Do any home users of "normal" cpus have any
> IA64s? If not, then Intel never pushed it to the home market. Now, maybe
> x86-64 was AMD's competition to the supercomputer / server market, who
> knows.

The trick was that the IA64 scheme had already failed in the server market before the workstation market even came into sight. And that market (say, the AutoCAD and SolidWorks users) is traditionally the bridge between server and desktop use.

> > couldn't ramp up speed enough. (partially also because they had had to
> > ramp up P4 much faster than planned due to sudden AMD competition).
>
> A PIII was indeed faster per clock than P4, so only fast P4s really got
> better, and SSE2 made a difference (although required a code redesign due
> to quirks, ahem, stupid 16-byte alignment). This was due to the super long
> pipelines, right?

Yes. The first P4s were slower than the last (1.3 or 1.4 GHz?) P-III. However, sometimes P4s could peak with specially crafted code that used SSE, usually in the Photoshop/AutoCAD range.

And in general the P4 depended more on "P4-optimized code" than the P-III did. The Athlon was even less picky about how the code was optimized.

> Only with Athlon XP did they get full SSE1, and
> only AMD64 got SSE2 (although I'm still unsure how many people use such,
> esp. since non-Intel compilers don't really target it).

There are three uses:
- Real vector code, mostly in big commercial image-processing programs: Photoshop, AutoCAD, etc.
- Basic primitives of the runtime (like moving a block of memory) can benefit from the wide registers of SSE(2). A very small number of very heavily used routines, but, contrary to most optimizations, it can even be noticeable to the user. Think libgcc here, or similar routines in other compiler runtimes.
- (SSE2 only) SSE2 can be used as a replacement for the floating-point engine (or, if you really want the last bit, as an addition). However, I don't know exactly how this compares to the FPU with regard to IEEE compliance of exceptions and precision. It might only be usable in cases where lower precision is acceptable.

> Haifa, Israel ... yes, surprising even them I think was when Intel went to
> use the Pentium-M as the new base (basically a PIII w/ SSE2). That's
> because Prescott used a LOT more energy than even Northwood (which I have,
> c. 2002) for only a small increase in speed. The found out that their
> original plans to ramp up to 10 Ghz weren't going to happen (since
> Prescott at 3.8 Ghz was ridiculously hot).

Correct. And the Pentium-M had been a tremendous success in the notebook market.

> (I don't think Dell ever sold any IA64s to
> anybody, did they?

AFAIK yes, but PowerEdge (and higher) only. Not desktops, and not to consumers.

> Core 1 never got utilized much, from what I could tell. I think it was
> just a stepping stone. And Core 2 was the first to support x86-64 (not
> counting previous Xeons).

Core 1 was the last of the Pentium-M's effectively. Used a lot in laptops and SFFs.

> > Nehalem is the third (if I understood correctly, Nehalem is designed by
> > the former P4 team, which is, among others, why they brought hyper
> > threading back)
>
> Nehalem is supposed to be 30% faster than previous incarnations, even.

The reports are varied. It depends on whether your tests scale up to 4 cores and whether they need memory bandwidth. Also, affordable Nehalems are fairly low-clocked (the lowest Nehalem, 2.6 GHz or so, being in the price range of the recently released 3.5 GHz Core 2 Duo).

> Something about Intel's new "tick tock" strategy, new microarchitecture
> every other year. Of course, they're also including SSE4.2 now (instead of
> only SSSE3 or SSE4.1). My AMD64x2 laptop "only" (heh) supports SSE3, and
> yet I can't help but feel that all the SIMD crap is (almost) madness.

It isn't, but it shouldn't be overrated. Rule of thumb, it is useless unless you specifically code for it. Except the above "libgcc" point, but that only works after a compiler/runtime is adapted for it.

> AMD really had a good foothold until Core 2 came about. Then all the
> whiners about Intel's monopoly happily jumped ship due to faster cpus.

(Or they were, like me, basically honest and went with performance per dollar. This Core 2 is my first Intel since my i386SX-20; the rest were AMD or Cyrix.)

> > underclocked. Part of it was the glorious future that never came when
> it
> > would be equally clocked.
>
> Here's what I don't understand: AMD64 is good at number crunching?

The architecture? Well, yes and no. SSE2 is now guaranteed (contrary to x86/i386), which helps if you are not really picky about precision. That made a lot more compilers enable it by _default_.

However _Opteron_ was particularly good at it.

> so? Due to the 16 64-bit GPRs or improved FPU? The FPU proper (and MMX
> etc.) is often considered "deprecated", which surprises me. I understand
> that SSE is all the rage, but ... I dunno, people are weird. :-| And
> since all x86-64 supports SSE2, it should be used more and more. (But is
> it?)

I'm not certain about this either, especially whether SSE2 can be used instead of the FPU in all cases (exceptions, precision).

>I'm not sure I believe x86-64 inherently speeds up everything as much
> as people claim.

It isn't. On average it is about equal, or very slightly slower.

> > Correct. e.g. the main Nehalem highlights are the on-board memory
> manager
> > and better scalabity over more than 2 cores. People over focus on
> > instructions
>
> In other words, I'm not sure MMX, much less SSE2 etc., has been really
> utilized correctly so far. I don't blame the compilers (it's tough), but I
> kinda sorta almost agree with some: why did Intel make it so kludgy?

Because it was always only useful for a certain kind of app. It would be counterproductive to dedicate a large part of the die to it.

Later, when its use was somewhat established, they went further with SSE2 and spent a separate execution unit on it, later more.

> > > Encoding multimedia, compression, compiling, etc.
> > > Hardly. The CPU intensive part of the first two is too coarse grained,
> and the I/O too simple to benefit from kernel work.
>
> Well, 7-Zip (and 4x4, etc.) are multithreaded, at least in part, and it
> does make a difference. But it's not horribly slow anyways, so it only
> matters in huge files anyways.

I don't spend my days zipping/unzipping, and huge compression jobs (like backups) run unattended anyway.

> I'm not a huge, huge fan of make. Mostly because there are so many
> variants, and also because it's so kludgy and hard to make do exactly what
> you want. And it's very hard to read, too.

Well, my remark is not so much that, but more the fact that the parallelism of make is so simplistic. It simply starts a train of parallel compiler instances, and the problem is that on today's fast computers, starting the compiler is actually the most expensive part, not the actual compiling.

> > INTO the compiler, but I doubt that will bring that much.

> It can't hurt to try. But most likely the only win will be separating file
> I/O with the cpu-intensive part (like 7-Zip does).

I don't know what 7-Zip does, but I have some doubts that, in the current build model, we can keep a multithreaded compiler fed with stuff to compile before it shuts down and goes on to the next directory.

> > core. Usually you don't have a guzzler like an antivirus, but you still
> > get some speedup
>
> Tell that to Intel: they said at one point to expect "thousands" of cores
> eventually.

Yes, but they want to go into GPUs and general-purpose GPU computing.

> > a core2 6600 (2.4GHz) outran a 3.4GHz P-D by slightly more
> > than 100% (which is even more on a per clock basis)
>
> 256-bit bandwidth (vs. 128?), I think.

Where do you get that? One still installs DDR (1/2/3) in pairs for optimal performance, and each module is 64-bit. So AFAIK memory bandwidth is still 128-bit (unless you use several QPI/HT links).

> hassle, more grunt work to do for no benefit). Besides, DJGPP can do all
> that and still run on (gasp) non-Windows! (And you're not stuck to 3.4.4
> or 3.4.5 ... and I found the 4.x snapshots VERY buggy, at least for
> Vista.)

Well, for programming I only use a few tools like GDB and make. Compiler+binutils are provided by FPC on win32/64.

> > We discourage UPX. I don't see the point.
>
> But FPC 2.2.2 includes UPX 3.03 in MAKEDOS.ZIP.

True, but that doesn't mean you should routinely use it.

> > OTOH, DJGPP seems to live again somewhat after a slump. If DJGPP really
> > stops, Dos is dead.
>
> Well, maybe not. I mean, Turbo C++ 1.01 is "dead" but still used by
> FreeDOS many many years after-the-fact.

Is one user left really enough to declare the target non-dead? Is the C=64 not dead because there are still people working with it? IMHO it is the same as with cars: at a certain point they become classics, and while still cherished, people don't use them every day anymore.

Rugxulo

Homepage

Usono,
23.02.2009, 02:24

@ marcov
 

Compatibility woes / deprecation

> > > And why do
> > > you think AMD originally came up with x86_64 and not Intel?
> >
> > I dunno, market pressure? Ingenuity? Boredom? ;-)
>
> Intel already wanted to use the 64-bit argument as "advantage" for
> Itanium. (as driving force for x86(IA32)->IA64 migration). The last thing
> they needed was anything that detracted from that.

The Pentium had been advertised as having a 64-bit bus (or whatever), not to mention MMX on later models technically being 64-bit. I know that's not the same thing, but that's how it was spun in some cases. (I also have an Atari Jag + CD, which is a hybrid collection of chips, some of which are 64-bit. It was often called "not 64-bit" by critics because of this, despite being marketed as the first 64-bit console. "Do the math." Heh.)

> And in general the P4 depended more on "P4 optimized code" than P-III.
> Athlon was even less picky about how the code was optimized.

Well, they broke some common optimizations used in the past due to architectural differences. Of course I think all cpus are like that, even AMDs (from my limited reading of the optimization hints in tech docs).

> > (I don't think Dell ever sold any IA64s to
> > anybody, did they?
>
> Afaik yes, but Poweredge (and higher) only. Not desktops, and not to
> consumers.

That's what I meant, home use. Of course, even the first 64-bit Opterons were server only, right?

> Core 1 was the last of the Pentium-M's effectively. Used a lot in laptops
> and SFFs.

I remember it, but it didn't get nearly as much push or use as Core 2 has. In fact, GCC 4.3.2 has no core1 optimization (yet??), only "core2" and a bunch of other things ("k8-sse3" must be new, I'll have to try that someday).

> > yet I can't help but feel that all the SIMD crap is (almost) madness.
>
> It isn't, but it shouldn't be overrated. Rule of thumb, it is useless
> unless you specifically code for it.

I mean, it just feels somewhat useless since it's weird and involves rewriting code manually (although this is a perfect example that "compiler outperforms humans" isn't always true since most compilers can't vectorize worth a damn).

> > AMD really had a good foothold until Core 2 came about. Then all the
> > whiners about Intel's monopoly happily jumped ship due to faster cpus.
>
> (or they were, like me basically honest, and went with performance/$$$.
> This core2 is the first intel since my i386 SX 20, the rest was AMD or
> Cyrix)

I don't blame them (much, heh), it's just silly to yell and scream about something and then turn around and not care anymore. I guess developers don't want to waste their own precious time.

> I'm not certain about this either, specially if SSE2 can be used instead
> of FPU in all cases (exception, precision)

GCC has "-mfpmath=", which allows 387, sse, or both. But I'm not sure it helps (yet?), highly experimental I think.

> > > a core2 6600 (2.4GHz) outran a 3.4GHz P-D by slightly more
> > > than 100% (which is even more on a per clock basis)
> >
> > 256-bit bandwidth (vs. 128?), I think.
>
> Where do you get that? One still installs DDR (1,2,3) in pairs for optimal
> performance, and each is 64-bit. So afaik mem bandwidth is still 128bit.
> (unless you use several QPI/HT)

I mean SSE bandwidth (or so I heard).

Anyways, I also read somewhere that Core 2 can (optimally) do four instructions per clock unlike AMD at max. three. That alone is pretty good, so raw clock speed doesn't matter as much as previously (e.g. 486 or 586).

> > > We discourage UPX. I don't see the point.
> >
> > But FPC 2.2.2 includes UPX 3.03 in MAKEDOS.ZIP.
>
> True, but that doesn't mean you should routinely use it.

Maybe not on Windows for make, bash, gcc, etc, but DOS has no issues.

> > > OTOH, DJGPP seems to live again somewhat after a slump. If DJGPP
> > > really stops, Dos is dead.
> >
> > Well, maybe not. I mean, Turbo C++ 1.01 is "dead" but still used by
> > FreeDOS many many years after-the-fact.
>
> Is one user left really enough to declare the target non-dead? Is C=64 not
> dead because there are still people working with it? IMHO it is the same
> thing as with cars. At a certain point they are oldtimers, and while still
> cherished, people don't use them every day anymore.

The only reason to not use something old is if the new improves in every way, which is hardly typical.

Pros:
+ small
+ fast (even can use EMS or XMS for even faster speeds)
+ 16-bit code (which GCC still lacks)
+ supports ANSI C and C++ AT&T 2.0
+ runs on 16-bit cpus
+ all models: tiny through huge
+ nice IDE
+ nice help / function reference

Cons:
- no sources
- DOS only (no cross compiling supported)
- no newer C++ features (generics, templates, etc.)
- 186/286 optimizations at most (useless for 99% of the world)
- OpenWatcom is better in most ways (but needs 386+ to host)

Besides, there's even a DOS extender that works with it (Swallow ... IIRC, untested by me). :-)

DOS386

24.02.2009, 05:17

@ Rugxulo
 

Compatibility woes / deprecation TURBO stuff

> If DJGPP really stops, Dos is dead.

They stopped DOS support 10 years ago. Are you able to use the BUGzilla now ? Maybe someone should port GCC to DOS ASAP :hungry:

> Turbo C++ 1.01 is "dead" but still used by FreeDOS many years after

> + small
> + fast (even can use EMS or XMS for even faster speeds)
> + 16-bit code (which GCC still lacks)
> + supports ANSI C and C++ AT&T 2.0

no C99

> + runs on 16-bit cpus

heh, unique ... :-|

> + all models: tiny through huge
> + nice IDE
> + nice help / function reference

> - DOS only (no cross compiling supported)

This is not a con :-P

> - no newer C++ features (generics, templates, etc.)

nor C99

> - 186/286 optimizations at most (useless for 99% of the world)

What did you expect from a 16-bit compiler ? 8086 is enough :-)

> - OpenWatcom is better in most ways (but needs 386+ to host)

+ the bloat :-(

> Besides, there even a DOS extender that works with it

pulled one more coffin from its grave ? :lol3: What's unique on it ?

---
This is a LOGITECH mouse driver, but some software expect here
the following string:*** This is Copyright 1983 Microsoft ***

Rugxulo

Homepage

Usono,
24.02.2009, 07:06

@ DOS386
 

FPC: 7-Zip or UPX ; TC++ pros and cons

> Funny. All executables inside FPC package are UPX'ed and I shouldn't
> it then :clap:
>
> What about banning UPX and reducing package size in next release,
> see msg=5374 ? :-)

First of all, switching to 7-Zip has been vaguely discussed here before (by Steve and marcov), but it wasn't considered realistic due to potential platform issues, portability concerns, lack of testing, as well as no Pascal srcs for such (unlike Zip).

Secondly, they could stop using UPX, esp. if they all hate UPX so much, but it would increase download size ... although UPX's best mode uses LZMA anyways, so switching to 7-Zip would offset that.

(As for a UPX license conflict, I don't know of any, but using UPX-UCL can easily fix that: 100% the same compression for LZMA, and only a slightly worse ratio with UCL instead of the closed-src NRV.)

Here's what I recently packaged for FreeDOS (if anybody here cares, highly doubt it):

http://rugxulo.googlepages.com/upx-uclx.zip
http://rugxulo.googlepages.com/upx-ucls.zip

> > If DJGPP really stops, Dos is dead.
>
> They stopped DOS support 10 years ago. Are you able to use the BUGzilla
> now ? Maybe someone should port GCC to DOS ASAP :hungry:

Eh? No, I think they only stopped bothering to SFN-ize the srcs. Otherwise, everything still works. (Besides, 2.04 beta is from 2003, so that's "only" five years, heh.)

> > Turbo C++ 1.01 is "dead" but still used by FreeDOS many years after
>
> no C99

Nor C++0x, boo hoo.

> > + runs on 16-bit cpus
>
> heh, unique ... :-|

One of the only full ANSI C freeware ones I know of. All the others are only subsets, thus not as good.

> > - DOS only (no cross compiling supported)
>
> This is not a con :-P

It is if you want to compile from x86-64 without DOSEMU. Or if you wanted to target other OSes, which are becoming more common every day.

> > - no newer C++ features (generics, templates, etc.)
>
> nor C99

To be honest, even GCC has imperfect C99 support. And most people don't want or use it anyways (ahem, MSVC).

> > - 186/286 optimizations at most (useless for 99% of the world)
>
> What did you expect from a 16-bit compiler ? 8086 is enough :-)

AFAICT, even 286 is not really supported, only 186. I dunno, it's still good, just less than optimal (e.g. OpenWatcom produces faster code).

> > - OpenWatcom is better in most ways (but needs 386+ to host)
>
> + the bloat :-(

You mean the compiler overall? Blame 386+ opcodes or (heh) use UPX. ;-)

> > Besides, there even a DOS extender that works with it
>
> pulled one more coffin from its grave ? :lol3: What's unique on it ?

What's unique? It works with TC++, even virtual memory. Not enough? ;-)

ecm

Homepage E-mail

Düsseldorf, Germany,
24.02.2009, 10:46

@ Rugxulo
 

186 or 286

> > > - 186/286 optimizations at most (useless for 99% of the world)
> >
> > What did you expect from a 16-bit compiler ? 8086 is enough :-)
>
> AFAICT, even 286 is not really supported, only 186.

In fact, the 286 didn't add much to the Real Mode instruction set. Most extensions usually called 286 are in fact 186 extensions.

---
l

marcov

24.02.2009, 11:11

@ ecm
 

186 or 286

> > > > - 186/286 optimizations at most (useless for 99% of the
> world)
> > >
> > > What did you expect from a 16-bit compiler ? 8086 is enough :-)
> >
> > AFAICT, even 286 is not really supported, only 186.
>
> In fact, the 286 didn't add much to the Real Mode instruction set. Most
> extensions usually called 286 are in fact 186 extensions.

Support doesn't really have to be about adding instructions. It can also be instruction choice. (and with later CPUs: scheduling)

DOS386

25.02.2009, 03:18

@ Rugxulo
 

FPC: 7-Zip or UPX ; TC++ pros and cons bloated WATCOM

> switching to 7-Zip has been vaguely discussed here before
> (by Steve and marcov), but it wasn't considered realistic due to potential
> platform issues, portability concerns, lack of testing, as well as no
> Pascal srcs for such (unlike Zip).

I indeed understand this type of purism, but the "argument" is invalid nevertheless: there is no Pascal source of UPX either.

> Secondly, they could stop using UPX, esp. if they all hate UPX so
> much, but it would increase download size

Did you test before boasting ? In my test the size got reduced by factor 2 !!! :surprised:

(WATCOM)

> You mean the compiler overall?

YES.

> Blame 386+ opcodes or (heh)

NO.

> use UPX.

NO, see above.

> What's unique? It works with TC++, even virtual memory. Not enough?

Does it work on 8086 ?

---
This is a LOGITECH mouse driver, but some software expect here
the following string:*** This is Copyright 1983 Microsoft ***

Rugxulo

Homepage

Usono,
25.02.2009, 09:29

@ DOS386
 

FPC: 7-Zip or UPX ; TC++ pros and cons bloated WATCOM

> I indeed understand this type of purism, but "argument" is invalid
> nevertheless, there is no PASCAL source of UPX either.

True.

> > Secondly, they could stop using UPX, esp. if they all hate UPX
> so much, but it would increase download size
>
> Did you test before boasting ? In my test the size got reduced by factor 2
> !!! :surprised:

With 7-Zip, yes. By reducing UPX alone and keeping .ZIP, not so much. (No, I didn't test, obviously.)

> > What's unique? It works with TC++, even virtual memory. Not enough?
>
> Does it work on 8086 ?

No. You cannot use anything besides real EMS on an 8086. As such, neither XMS, VCPI, nor DPMI are available.


----------------------------------
SWALLOW v1.0 - a DOS-Extender to
write own Protected Mode programs
with TP 6/7, TC++ 1.0 or BC++ 3.1.
It provides virtual memory and
supports 32 bit assembler code.
Cardware, except the sources
of the extender itself.
386 required.
----------------------------------


EDIT: Found a minor update (1.01) here.

marcov

25.02.2009, 12:58

@ DOS386
 

FPC: 7-Zip or UPX ; TC++ pros and cons bloated WATCOM

> > switching to 7-Zip has been vaguely discussed here before
> > (by Steve and marcov), but it wasn't considered realistic due to
> potential
> > platform issues, portability concerns, lack of testing, as well as no
> > Pascal srcs for such (unlike Zip).
>
> I indeed understand this type of purism, but "argument" is invalid
> nevertheless, there is no PASCAL source of UPX either.

UPX doesn't run during install.

marcov

24.02.2009, 11:32

@ Rugxulo
 

Compatibility woes / deprecation

> The Pentium had been advertised as having a 64-bit bus (or whatever), not
> to mention MMX on later models technically being 64-bit. I know that's not
> the same thing, but that's how it was spun in some cases.

Yes, but they fooled nobody serious with that :-)

> > And in general the P4 depended more on "P4 optimized code" than P-III.
> > Athlon was even less picky about how the code was optimized.
>
> Well, they broke some common optimizations used in the past due to
> architectural differences. Of course I think all cpus are like that, even
> AMDs (from my limited reading of the optimization hints in tech docs).

Yes, but the problem was that the penalties were so steep because the P4 with its deep pipelines was so different.

IOW, the difference is that you noticed such things in general code from normal compilers, not just in last-cycle work with the Intel compiler.

> That's what I meant, home use. Of course, even the first 64-bit Opterons
> were server only, right?

And workstation. But the workstation counterpart, Athlon64, predates Opteron, so it is not the same.

(SSE/MMX)
> > It isn't, but it shouldn't be overrated. Rule of thumb, it is useless
> > unless you specifically code for it.
>
> I mean, it just feels somewhat useless since it's weird and involves
> rewriting code manually (although this is a perfect example that "compiler
> outperforms humans" isn't always true since most compilers can't vectorize
> worth a damn).

Note that we currently have exactly the same discussion again about cores. The same advocates who said "better compilers will solve this" are at it again.

Of course it is not entirely the same, but the pure compiler part is, IMHO. The line is blurred in that frameworks can often do something (hiding the parallelism from the user).

> GCC has "-mfpmath=", which allows 387, sse, or both. But I'm not sure it
> helps (yet?), highly experimental I think.

FPC has {$FPUTYPE SSE2}

> > Where do you get that? One still installs DDR (1,2,3) in pairs for
> optimal
> > performance, and each is 64-bit. So afaik mem bandwidth is still
> 128bit.
> > (unless you use several QPI/HT)
>
> I mean SSE bandwidth (or so I heard).

Ah ok.

> Anyways, I also read somewhere that Core 2 can (optimally) do four
> instructions per clock unlike AMD at max. three. That alone is pretty
> good, so raw clock speed doesn't matter as much as previously (e.g. 486 or
> 586).

Yes, but only if they are parallelizable, and 3-4 loads/stores per clock aren't.
Note that both Phenom II and i7 now have 1-cycle L1 caches, which might improve this a bit (when not read linearly).

> Maybe not on Windows for make, bash, gcc, etc, but DOS has no issues.

It's more important on Windows, true. And DOS has maximal partition limits? How much storage can you use on a single HD on pure DOS? 24 partitions of 2GB or so?

> The only reason to not use something old is if the new improves in
> every way, which is hardly typical.

That is not correct. If it works better/easier across the whole line, that is enough. Otherwise I would always remain stuck with something old because it is "better" on one not terribly important point.

> Pros:
> + small

Not an advantage per se.

> + fast (even can use EMS or XMS for even faster speeds)

For me 32-bit binaries were always faster.

> + 16-bit code (which GCC still lacks)
> + runs on 16-bit cpus
> + all models: tiny through huge
> - 186/286 optimizations at most (useless for 99% of the world)

Not a requirement. Don't have anything in use below XP2000+

> + supports ANSI C and C++ AT&T 2.0

I'm not a C programmer.

> + nice IDE
> + nice help / function reference

I use FPC's IDE. Works fine. Am improving the help, but it is huge.

> Cons:
> - no sources

Less important if it works right. But for runtime parts, sources are a non-negotiable requirement.

> - DOS only (no cross compiling supported)

Useless :-)

> - no newer C++ features (generics, templates, etc.)

Those would be the only reason to use C++ in the first place.

> - OpenWatcom is better in most ways (but needs 386+ to host)
>
> Besides, there even a DOS extender that works with it
> (Swallow

Pmode works with FPC too. Likewise untested in recent years.

ecm

Homepage E-mail

Düsseldorf, Germany,
24.02.2009, 11:56

@ marcov
 

MS-DOS partition limits

> > Maybe not on Windows for make, bash, gcc, etc, but DOS has no issues.
>
> It's more important on Windows true. And Dos has maximal partition limits?
> How much storage can you use on a single HD on pure dos? 24 partitions of
> 2GB or so?

If "pure DOS" implies "retail MS-DOS", yes. Note that you can't store all these partitions on a single disk because retail MS-DOS never supported LBA, so it accesses only the first 8 GiB of all disks. Most modern DOS versions (even later MS-DOS) however support LBA and FAT32 (some even support up to 32 drive letters instead of 26 ;-)).

---
l

marcov

24.02.2009, 18:38

@ ecm
 

MS-DOS partition limits

> > How much storage can you use on a single HD on pure dos? 24 partitions
> of
> > 2GB or so?
>
> If "pure DOS" implies "retail MS-DOS", yes.

Yes and no. I also include the W9x DOSes booted into plain DOS (so without W9x running).

> Note that you can't store all
> these partitions on a single disk because retail MS-DOS never supported
> LBA, so it accesses only the first 8 GiB of all disks. Most modern DOS
> versions (even later MS-DOS) however support LBA and FAT32 (some even
> support up to 32 drive letters instead of 26 ;-)).

Is there a canonical list with most important properties somewhere?

Rugxulo

Homepage

Usono,
24.02.2009, 19:29

@ marcov
 

MS-DOS partition limits

> > > How much storage can you use on a single HD on pure dos? 24 partitions
> > of
> > > 2GB or so?
> >
> > If "pure DOS" implies "retail MS-DOS", yes.
>
> Yes and no. I also include the w9x doses booted in dos. (so without w9x
> running).

Early versions of Win95 didn't support LBA or FAT32; I think that was corrected in OSR2 or so. Anyways, here's what the FreeDOS Wikipedia page says:

"FAT32 is fully supported, even booting from it. Depending on the BIOS used, as many as four LBA hard disks up to 128 GB, or even 2 TB in size are supported."

> > Note that you can't store all
> > these partitions on a single disk because retail MS-DOS never supported
> > LBA, so it accesses only the first 8 GiB of all disks. Most modern DOS
> > versions (even later MS-DOS) however support LBA and FAT32 (some even
> > support up to 32 drive letters instead of 26 ;-)).
>
> Is there a canonical list with most important properties somewhere?

Comparison of x86 DOS OSes

ecm

Homepage E-mail

Düsseldorf, Germany,
24.02.2009, 22:41

@ marcov
 

MS-DOS partition limits

> > > How much storage can you use on a single HD on pure dos? 24 partitions
> > of
> > > 2GB or so?
> >
> > If "pure DOS" implies "retail MS-DOS", yes.
>
> Yes and no. I also include the w9x doses booted in dos. (so without w9x
> running).

Then the answer is no. As far as I remember MS-DOS 7.00 (from Windows 95) has LBA support; MS-DOS 7.10 (from Windows 95 OSR2, Windows 98 and 98 SE) has both LBA and FAT32 support. (MS-DOS 8.00 from Windows Me [and on the Windows XP MS-DOS bootdisk] seems mostly a hacked 7.10 with some kernel compression.)

---
l

Rugxulo

Homepage

Usono,
24.02.2009, 19:46

@ marcov
 

Compatibility woes / deprecation

> > The only reason to not use something old is if the new improves in
> > every way, which is hardly typical.
>
> That is not correct. If it works better/easier over the whole line is
> enough. Otherwise I would always remain stuck with something old because
> it is "better" on one not terribly important point.

Well, for instance, I use HHsed a lot, and it's old and not as good as GNU sed. BUT it's much smaller, easier to build, and actually faster (no thanks to 16-bit code, though). Doesn't mean I can't still use GNU sed sometimes, but hey, if it ain't broke, why fix it?

> > Pros:
> > + small
>
> Not an advantage per se.

True, I take it back, but it's not exactly a disadvantage either. ;-)

> > + fast (even can use EMS or XMS for even faster speeds)
>
> For me 32-bit binaires were always faster.

True again, esp. since even the 486 runs 32-bit code faster. In simple benchmarks, Blair's 16-bit C MD5SUM is lots slower than DOS386's FreeBASIC MD5 tool.

But I actually meant the compiler itself is fast ("turbo").

> > + 16-bit code (which GCC still lacks)
> > + runs on 16-bit cpus
> > + all models: tiny through huge
> > - 186/286 optimizations at most (useless for 99% of the world)
>
> Not a requirement. Don't have anything in use below XP2000+

Well, just saying, GCC doesn't properly support 16-bit code yet (although GAS mostly does, from what I've read). Rask was/is? working on something, and even DJ himself hacked 2.7.2.3 "back in the day" to semi-working 16-bit status. So you have to use something other than GCC.

It's been said that 16-bit is only needed for boot loaders, but obviously some OSes (DOS, ELKS) still use it too. Personally, I think "anything that works" is fine, but some people are offended by 16-bits (although mostly due to segments and their quirks, I think).

> > + supports ANSI C and C++ AT&T 2.0
>
> I'm not a C programmer.

Neither am I really. Note that I also don't know any Pascal, but I wouldn't mind learning some eventually. (IOW, I'm not a purist.)

> > + nice IDE
> > + nice help / function reference
>
> I use FPC's IDE. Works fine. Am improving the help, but it is huge.

I don't actually use TC's IDE much except on rare occasion to look up some function. I like TDE.

> > Cons:
> > - no sources
>
> Less important if it works right. But for runtime parts sources are a non
> negotiable requirement.

Well, if this were a deal breaker, I'd switch exclusively to OpenWatcom. But it's not.

> > - DOS only (no cross compiling supported)
>
> Useless :-)

Less useless with DOSBox, DOSEMU + FreeDOS, 32-bit Windows, etc.

> > - no newer C++ features (generics, templates, etc.)
>
> Those would be the only reason to use C++ in the first place.

Not really. Some (few) people still use C++ as a glorified "C with classes". A C++ subset is realistically better than nothing.

> > Besides, there even a DOS extender that works with it
> >
> (Swallow
>
> Pmode works with FPC too. Likewise untested in recent years.

Too busy porting to Nintendo DS? :-D

DOS386

20.02.2009, 06:08

@ marcov
 

Compatibility woes / deprecation of UPX

marcov wrote:

> We discourage UPX.

Since yesterday ??? :confused:

> I don't see the point.

Me neither. Why are all FreePASCAL releases up to 2.2.2 that heavily infiltrated with UPX and that absurdly bloated because of this? :crying:

---
This is a LOGITECH mouse driver, but some software expect here
the following string:*** This is Copyright 1983 Microsoft ***

marcov

22.02.2009, 22:12

@ DOS386
 

Compatibility woes / deprecation of UPX

> marcov wrote:
>
> > We discourage UPX.
>
> Since yesterday ??? :confused:

Since ever.

> > I don't see the point.
>
> Me neither. Why all FreePASCAL releases up to 2.2.2 are that heavily
> infiltrated with UPX and that absurdly bloated because of this. :crying:

Just because something is included doesn't mean you should routinely use it.

DOS386

24.02.2009, 05:07

@ marcov
 

Compatibility woes / deprecation of UPX

> > > I don't see the point.
> > Me neither. Why all FreePASCAL releases up to 2.2.2 are that heavily
> > infiltrated with UPX and that absurdly bloated because of this.
> Because that something is included, it doesn't mean you should routinely use it.

Funny. All executables inside the FPC package are UPX'ed and I shouldn't use it then :clap:

What about banning UPX and reducing package size in next release, see msg=5374 ? :-)

---
This is a LOGITECH mouse driver, but some software expect here
the following string:*** This is Copyright 1983 Microsoft ***

marcov

24.02.2009, 10:40

@ DOS386
 

Compatibility woes / deprecation of UPX

> > > > I don't see the point.
> > > Me neither. Why all FreePASCAL releases up to 2.2.2 are that heavily
> > > infiltrated with UPX and that absurdly bloated because of this.
> > Because that something is included, it doesn't mean you should routinely
> use it.
>
> Funny. All executables inside FPC package are UPX'ed and I shouldn't it
> then :clap:

Can you imagine how long that package script hasn't been updated....

> What about banning UPX and reducing package size in next release, see
> msg=5374 ? :-)

I'm all for it: submit a proposal for a new package system, run a few betas, make sure they are well tested.

marcov

15.02.2009, 13:01

@ Rugxulo
 

Compatibility woes / deprecation

> > not win9x system software which was redundant after migrating from w9x.
> > When I moved to NT (w2k), I mostly cleared out old dos utility
> programs,
> > partially also because I gave up resistance against LFN)
>
> First of all, many DOS apps support LFNs (e.g. some FreeDOS utils: find,
> more, FreeCom, etc.) as well as all DJGPP v2 apps by default.

Under NT?

> Secondly, you are indeed naive if you think XP can run all your old software.

Maybe. But ignorance is bliss sometimes. If I don't know it, I apparently don't use it.

> There are indeed a lot of Win9x users, but XP was
> forced down our throats,

That's life. IMHO not different when your favourite kind of crisps goes out of the supermarket because not enough people bought it.

> Other extras (Unicode) were just icing
> on the cake as most developers don't use them.

Unicode is a major demand from customers, and has been so for years. It partially makes life easier for developers because some of the burdens of charsets/codepages (though not all) disappear.

> XP has now
> been around longer than that, and I never see any new computers with XP
> installed anymore.

Vista % in the corporate world is 4%. You bet that XP installs are done there.

> So if you think XP is so stable, you are in for a surprise.

I never said XP. I was talking about NT. And XP is just a minor notch after 2k for me, an OS btw that I have used longer than XP. Probably if I didn't strictly need 32-bit XP for work, I would already run Vista 64-bit. (I actually own it)

Btw, the app I missed most initially was QEdit.

> It will be dropped just like perfectly acceptable OSes before
> it. Then you're screwed.

No, you adapt, update or clear out old junk. Life is not stasis.

(calling the other targets names skipped. It was just an illustration of multiple targets demanding attention)


> > Unix by linux/bsd/osx whatever.
>
> Linux has been around since 1991, and the *BSD family since 1993 or so.

(before the BSD fragmentation, people used just, uh, BSD. 4.4BSD or i386BSD)

> So they've had a lot longer time to build (than FreeDOS, for example).

Well, they had network running right from the start :-) Actually 4.2BSD hosted the first implementation of TCP/IP :-)

Anyway, I'm going to conclude this discussion, and here is my conclusion:

Seriously, I think the main difference is that the Unix heritage was simply more technically viable than the Dos heritage.

However, more importantly, most of the people that stayed were refuseniks, minimalists etc, complaining how a perfectly good OS was canceled (and some continue to this day :-) And when the free ride was over, most of them moved on.

While the Unix people (and Unix is not perfect IMHO) had their crises too (the BSD world was hit hard by the AT&T lawsuits, Linux had to start from scratch), they simply started working.

In earlier msgs I've already told you that one of the reasons the Unix stuff goes ahead so much faster, is because the feedback/users ratio is the highest for *nix. (less for Linux/i386, higher for the rest). It is so for all "weird" targets (including the Haiku's and the rest above, Commercial or not), except former mainstream ones like dos and w9x.

If you really want to further either one of those, do something that fixes that. Learning from OS/2 experience (a mainstream platform too, though only for about a year), an archival group and hordes of devels will help.

Rugxulo

Homepage

Usono,
15.02.2009, 20:36
(edited by Rugxulo, 15.02.2009, 21:00)

@ marcov
 

Compatibility woes / deprecation

(rr, a max. post limit of 5000 is obviously too small in these cases)

> > First of all, many DOS apps support LFNs (e.g. some FreeDOS utils:
> find,
> > more, FreeCom, etc.) as well as all DJGPP v2 apps by default.
>
> Under NT?

LFNs work in 2k (NT 5), XP (5.1), Vista (6) by default. Old NT4 needs a TSR/DLL combo (ntlfn08[bs].zip on DJGPP mirrors).

> That's life. IMHO not different when your favourite kind of crisps goes
> out of the supermarket because not enough people bought it.

Mr. T cereal, how I miss you! :-P

> Unicode is a major demand from customers. It
> partially makes life easier

I know, but it's not so critical as to kill ports to non-UTF locales.

> Vista % in the corporate world is 4%. You bet that XP installs are done
> there.

http://www.microsoft.com/windows/windows-xp/future.aspx

"We know you love Windows XP and you're in good company. Hundreds of millions of Windows XP users are fans of the operating system, and many depend on Windows XP to run legacy applications and hardware not yet compatible with Windows Vista. Even though we're retiring Windows XP, we won't leave you hanging. Our Microsoft Support Lifecycle explains it all.

You can still buy new PCs and use Windows XP. Windows Vista Business and Windows Vista Ultimate have downgrade rights that let you return your operating system to Windows XP. We plan to provide support for Windows XP until 2014."

> Btw, the app I missed most initially was QEdit.

http://www.semware.com/

> No, you adapt, update or clear out old junk. Life is not stasis.

Constantly adapting is very annoying, hence why XP in corporations is still prevalent. (ME was dropped after 5 years, so get ready for XP to be abandoned soon, in deed if not word.)

rr

Homepage E-mail

Berlin, Germany,
15.02.2009, 20:47

@ Rugxulo
 

Compatibility woes / deprecation

> (rr, a max. post limit of 5000 is obviously too small in these
> cases)

Now doubled.

---
Forum admin

marcov

16.02.2009, 12:39

@ Rugxulo
 

Compatibility woes / deprecation

> > No, you adapt, update or clear out old junk. Life is not stasis.
>
> Constantly adapting is very annoying, hence why XP in corporations is
> still prevalent.

Vista didn't offer enough advantages, came too soon. Companies don't like to be rushed.

> (ME was dropped after 5 years, so get ready for XP to be
> abandoned soon, in deed if not word.)

ME was never a corporate OS.

marcov

15.02.2009, 12:44

@ Rugxulo
 

Compatibility woes / deprecation

> > - Before Giulio, FPC had no dos maintainers for a long, long time (say
> 5-7 years). Nobody contributed a single line of Dos related code, fixed
> bugs
> > etc.
>
> How long has FreeDOS been stable? (I can only guess beta8 in 2003 or so.
> Before that, I'm not sure it was good enough for everyday use. But that's
> just a guess since I never tried earlier versions.) How long has QEMU and
> BOCHS been stable? DOSBox?

If you don't use it daily, having to keep up your knowledge of this stuff, and of DOS, just for an occasional bug report is a problem.
This makes "DOS" expensive for devels. The low feedback and patch ratios further fuel its unpopularity.

Moreover keep in mind that the main purpose (my case as example) to run it is FPC, something that takes 1-2 mins on a Core2 2.4GHZ. Working in VMs is annoying.

Keep in mind that those devels are not unsympathetic per se. But if you have to find out everything yourself for something you are actually not interested in, that is very demotivating, and something totally different from working with a dedicated dos-maintainer. Then the LFN issues etc get bearable.

> All of that makes a difference, esp. when your
> OS isn't DOS friendly any more (NT). I've heard many people say, "I don't
> have a DOS setup anymore." And modern installs of Windows using NTFS,
> hogging the whole drive, doesn't help.

Well it is less that, than that there are a lot of OSes that want a slice. And some have booting/primary partition limitations. I have two harddisks (and way more space than I actually need) due to this. Double the number of primary partitions. (and x86_64 platforms make this a lot more complicated)

On laptops (nowadays half of the computers sold), it is even worse, hidden vendor partitions make it even more difficult. Then there is the issue of how to get data on them (network, SVN), all other stuff (binutils, GDB) being typically old and odd.

> > They have to keep wrestling with 8.3 support, memory limitations,
> thread
> > support limitations, unicode deficiencies of that one platform _every_
> > day.
>
> 8.3 can be easily worked around (ROM-DOS, DOSLFN, StarLFN, Win9x, Win2k).

All of it can be worked around. But that is what the above sentence says: lots of special stuff that must be worked around. And note btw that LFN support is as much about the maximum path length as about 8.3 names.

> Memory limitations? Not in flat model.

DPMI 64MB limit in w9x iirc.

> Thread support? No standard method,
> too many hacks.

No support is actually a feature, since it is assumed all OSes have it. Same goes for network support and IPC.

> Unicode? Even Win9x didn't barely support that, so you
> can't complain there (since nobody cared back then anyways).

Even WinME is near 10 years old. The world has moved on. The push to unicode (and the pressure behind it) is massive in Open Source projects. The w9x phasing out has already started here and there.

Sure there are ways around (MSLU and the like, and there is even a GNU substitute), but that is another DIY job to figure everything out.

> Let's face
> it, even GNU proposes all comments in code be in English, so that proves
> the English bias in the world.

No, it doesn't, since their stance on user interfaces is totally different and they actually lead the way there.

> Not saying that's ideal, but seriously,
> saying Unicode is a deal breaker is a bit exaggerated. (Besides, Win32s
> didn't have threads or Unicode either except latter via wimpy codepage
> conversion.)

Yes, and PDPs didn't have it either. But they are also old and killed off.

> Not true. Win2k was pretty light on resources (comparatively) unlike
> Vista.

The ratio XP(org)/2k(sp3) was not that different from Vista/XPsp2

I ran w2k on a 128MB machine, using Outlook, Word (2000), Mozilla milestones and Adobe Acrobat. Later when I got 192MB I could even run them concurrently.

> Heck, even XP is so much lighter that it's the newest OS that MS
> could cram on a netbook (until Win7 is finalized).

Yes, and 2k (and one more back, NT4 which could run happily with 32MB before they brought in IE) were such magnitudes further down. IMHO you are idolizing XP. And unfairly so. For most people XP is mainly so great because it was their first oasis of stability after w9x. But compared to 2k, XP wasn't that great: slightly better maybe, if you could afford the resource usage increase, but not THAT much. The w95->w98 change was bigger.

> And XP -> Vista broke
> some things ... unlike 2k -> XP,

That's partially true. XP couldn't run some 2k drivers, and there were some apps broken early on when run in Themed mode (the early Teletubby mode was full of bugs), but it was relatively minor. Still that is not what the outrage about Vista (which is IMHO exaggerated) is about.

Rugxulo

Homepage

Usono,
15.02.2009, 20:26

@ marcov
 

Compatibility woes / deprecation

> the main purpose to run it is FPC, something that takes 1-2 mins on a
> Core2 2.4GHZ. VMs are annoying.

VMs are fine, just slower than normal (although VT-X helps, e.g. VirtualBox). BTW, you can't expect everyone to have a Core2, e.g. netbooks.

> On laptops it is even worse, hidden vendor partitions make it
> more difficult. Then how to get ... (binutils, GDB)
> being typically old and odd.

Hidden restore partitions or hidden NTFS pieces (Dell??)?

And actually, tons more OSes have ports of BinUtils than I originally expected. That's why the ridiculous amount of obsolete Emacs stuff scared me. Why delete what isn't broken?

> > Memory limitations? Not in flat model.
>
> DPMI 64MB limit in w9x iirc.

I haven't used Win9x enough to know definitively (maybe setting it to -1 or Auto fixes that). DOSBox 0.72 comes preconfigured with 16 MB and can be extended only up to 64 MB max. Even virtualization can typically use only half the total RAM (due to Windows limits). And Win16 was worse: 16 MB max, I think, per DOS box. Kind of strange when even DOSEMU handles this better than Windows.
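(For what it's worth, that DOSBox ceiling is just a line in its config file; a minimal sketch of the relevant dosbox.conf fragment, assuming the stock section/key names and the 64 MB figure mentioned above:)

```ini
[dosbox]
# Stock default is 16 (MB); DOSBox 0.72 reportedly tops out around 64.
memsize=64
```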

To be honest, NT was never exactly DOS-friendly, and I guess it's a more limited rewrite. I guess they never knew how to fix it (and thus didn't care as much). Even Quake 2 was developed for Windows, not DOS (although HX can supposedly run it).

Anyways, for real DOS, you don't have such DPMI memory limits. X32 can use 3 GB, WDOSX and CWSDPMI and PMODE/DJ can use 2 GB, HDPMI32 can use 2 GB (or more??). So blaming Windows' memory limits on DOS isn't fair (although I know what you're saying ... obviously people use and develop on Windows a lot).

> WinME is near 10 years old.

8.5 :-P

> The world has moved on.

No, MS moved on and the world got stuck anyways (although I'm not saying ME was the best ever). Some people here even prefer Win98SE over it.

> The push to
> unicode is massive in Open Source projects.
> The w9x phasing out already started.

I don't think that's why Win9x is less utilized. And Unicode isn't nearly as important in the U.S. (almost entirely monolingual) as in Europe, etc.

> Sure there are ways around (MSLU and the like, and there is even a GNU
> substitute), but that is another DIY job to figure everything out.

MSLU has some weird license (as usual) and the GNU substitute is only half-finished and seems abandoned. Actually, the rewrite is what Firefox used to use, I think.

> No it doesn't, since their stance on user interfaces is totally different
> and they actually lead the way there.

Try "export LANG=eo:en" sometime and see how much is still shown in English. All the interfaces in the world can't get translations. It's because it's not a priority to most people (for good or bad).

> Yes, and PDPs didn't have it either. But they are also old and killed off.

But there is no reason to kill off DOS / Win9x / Win2k support that works fine. It's not exactly rare and unknown. That's the whole point!

> > Not true. Win2k was pretty light on resources (comparatively) unlike
> > Vista.
>
> The ratio XP(org)/2k(sp3) was not that different from Vista/XPsp2

Um, I think XP can install in a GB of space, but Vista requires like 16 GB.

> I ran w2k on a 128MB machine, using Outlook, Word (2000), Mozilla milestones
> and Adobe Acrobat. Later when I got 192MB I could even run them
> concurrently.

Word, Firefox, and Acrobat are all huge, huge RAM pigs.

> Yes, and 2k (and one more back, NT4 which could run happily with 32MB
> before they brought in IE) were such magnitudes further down. IMHO you are
> idolizing XP. And unfairly so.

I in no way love XP to death. It was worse in many ways than Win9x. And Vista is even worse still in compatibility (not just DOS but Win32 programs as well). Which is bad, really annoying, and should've been 100% avoidable.

> For most people XP is mainly so great because it
> was their first oasis of stability after w9x.

Whitix is probably pretty stable too, but it doesn't run my apps.

> > And XP -> Vista broke
> > some things ... unlike 2k -> XP,
>
> That's partially true. XP couldn't run some 2k drivers, and there were
> some apps broken early on when run in Themed mode (the early Teletubby
> mode was full of bugs), but it was relatively minor. Still that is not
> what the outrage about Vista (which is IMHO exaggerated) is about.

Vista can't even run all XP drivers. I read somewhere that MS at the last minute broke driver compatibility for some unknown reason. And obviously you're aware that Vista won't run full-screen CMD prompt anymore. So what little gfx support was in Windows has vanished.

rr

Homepage E-mail

Berlin, Germany,
15.02.2009, 20:36

@ Rugxulo
 

Compatibility woes / deprecation

> I read somewhere that MS at the last minute broke driver compatibility for
> some unknown reason.

My theory: They broke it to force hardware developers to write only for the new driver model, which then forces users to buy the new OS, because there will be no XP drivers for the new hardware.

---
Forum admin

marcov

16.02.2009, 12:37

@ rr
 

Compatibility woes / deprecation

> > I read somewhere that MS at the last minute broke driver compatibility
> > for some unknown reason.
>
> My theory: They broke it to force hardware developers to write only for
> the new driver model, which then forces users to buy the new OS, because
> there will be no XP drivers for the new hardware.

I don't see how that would work. Users are usually not so easily forced, and hardware vendors have competition to reckon with.

Even if it did, it would only start to matter many years after the Vista release.

They needed a new driver model for the DRM stuff. But the DRM driver-model implementation blew up in their faces, and they were already late, so they cut compatibility bits here and there.

I do think they also wanted to kill off drivers that were effectively pre-XP but still worked under XP, so as not to get caught in similar problems again.

marcov

16.02.2009, 12:32

@ Rugxulo
 

Compatibility woes / deprecation

> > the main purpose to run it is FPC, something that takes 1-2 mins on a
> > Core2 2.4GHZ. VMs are annoying.
>
> VMs are fine, just slower than normal (although VT-X helps, e.g.
> VirtualBox).

VMs are not fine, since slower than normal.

> BTW, you can't expect everyone to have a Core2, e.g. netbooks.

netbooks are not for development but for surfing. And anything well above 1GHz is usable, it just takes longer (below 1GHz also but gets painful).

A stricter limit is memory, problem is that it is hard to give a figure, because it depends on the program size, usage/options etc. Also often the linker eats more memory than FPC itself.

> > On laptops it is even worse, hidden vendor partitions make it
> > more difficult. Then how to get ... (binutils, GDB)
> > being typically old and odd.
>
> Hidden restore partitions or hidden NTFS pieces (Dell??)?

Two: a restore partition and some media OS.

> And actually, tons more OSes have ports of BinUtils than I originally
> expected.
> That's why the ridiculous amount of obsolete Emacs stuff scared
> me. Why delete what isn't broken?

See earlier msgs. Nothing is broken, old versions keep working. It is only if you want new stuff, you will have a problem. But then, for new stuff, no dos maintainer can be found.

> To be honest, NT was never exactly DOS-friendly, and I guess it's a more
> limited rewrite.

Actually, it seems a lot of it was in the extenders, since nowadays it is way better than it used to. IIRC NT4 didn't even allow LFN extensions in a dosbox. (2k too? Can't remember).

> I guess they never knew how to fix it (and thus didn't
> care as much).

The few dos laggards remained on their "true" doses. So there was not much motivation to begin with.

> Anyways, for real DOS, you don't have such DPMI memory limits. X32 can use
> 3 GB, WDOSX and CWSDPMI and PMODE/DJ can use 2 GB, HDPMI32 can use 2 GB (or
> more??). So blaming Windows' memory limits on DOS isn't fair (although I
> know what you're saying ... obviously people use and develop on Windows a
> lot).

No, but it does make the dos port less usable, since the pure dos users are far,far in the minority.

> > WinME is near 10 years old.
> 8.5 :-P

Close enough.

> > The world has moved on.
>
> No, MS moved on and the world got stuck anyways (although I'm not saying
> ME was the best ever). Some people here even prefer Win98SE over it.

The world moved on, and some people preferred to stay behind. Now they are complaining that they really got left behind.

> > The push to unicode is massive in Open Source projects.
> > The w9x phasing out already started.
>
> I don't think that's why Win9x is less utilized.

No, it isn't. It is just being killed off since usage has waned.

> And Unicode isn't nearly as important in the U.S. (almost entirely
> monolingual) as in Europe, etc.

Hmm. I thought the US mostly spoke Spanish now? Moreover, they sell abroad.

> > Sure there are ways around (MSLU and the like, and there is even a GNU
> > substitute), but that is another DIY job to figure everything out.
>
> MSLU has some weird license (as usual) and the GNU substitute is only
> half-finished and seems abandoned. Actually, the rewrite is what Firefox
> used to use, I think.

AFAIK the GNU substitute now works reasonably. But I didn't test it myself.

> > No it doesn't, since their stance on user interfaces is totally
> > different and they actually lead the way there.
>
> Try "export LANG=eo:en" sometime and see how much is still shown in
> English. All the interfaces in the world can't get translations. It's
> because it's not a priority to most people (for good or bad).

So because some bits are not translated, we should just chuck it entirely? Strange reasoning.

> But there is no reason to kill off DOS / Win9x / Win2k support
> that works fine.

There are no developers for new releases. That is as good as a death sentence.
So no more new releases. After a while, also the old releases will be kicked indeed. (If I don't support it, I don't want it on my site.) Hence my suggestion to start an archival group if you really care.

> It's not exactly rare and unknown. That's the whole point!

The only win98 that I have seen in 4 years was the one I installed myself. It _IS_ rare. And most that I knew of before were not in use as general-purpose computers (needing upgrades), but just doing some specific task (some old administration program or hardware-related thing).

> > The ratio XP(org)/2k(sp3) was not that different from Vista/XPsp2
>
> Um, I think XP can install in a GB of space, but Vista requires like 16
> GB.

Odd. I wonder how I got my _64-bit_ (bigger) Vista in 8GB then.

> > I ran w2k on a 128MB machine, using Outlook, Word (2000), Mozilla
> > milestones and Adobe Acrobat. Later when I got 192MB I could even run them
> > concurrently.
>
> Word, Firefox, and Acrobat are all huge, huge RAM pigs.

All versions were older. So it worked. The 2k system can make do with 64MB if you don't upgrade IE.

> I in no way love XP to death. It was worse in many ways than Win9x.

Only in the dos support. I don't know any other way.

> And Vista is even worse still in compatibility (not just DOS but Win32
> programs as well). Which is bad, really annoying, should've been 100%
> avoidable.

It was deliberate, enforcing coding guidelines that already were specified for XP. I don't like it either (because IMHO the use is limited), but let's not exaggerate it.

> > That's partially true. XP couldn't run some 2k drivers, and there were
> > some apps broken early on when run in Themed mode (the early Teletubby
> > mode was full of bugs), but it was relatively minor. Still that is not
> > what the outrage about Vista (which is IMHO exaggerated) is about.
>
> Vista can't even run all XP drivers.

See above: XP couldn't run all 2k and NT4 drivers either (2k supported some NT4 ones).

> I read somewhere that MS at the last minute broke driver compatibility for some unknown reason.

Reference? The new DRM guidelines (that Hollywood btw forced upon MS) already broke a lot.

> So what little gfx support was in Windows has vanished.

Don't care. I can't remember the last time I ran a graphical dos app (well, it was probably DV/X).

Rugxulo

Homepage

Usono,
17.02.2009, 01:29

@ marcov
 

Compatibility woes / deprecation

> VMs are not fine, since slower than normal.

Better than nothing.

> netbooks are not for development but for surfing. And anything well above
> 1GHz is usable, it just takes longer (below 1GHz also but gets painful).

Depends on the person. I guess for building FPC, maybe. For my needs, a P166 is plenty fast. (Okay, so I don't rebuild GCC or Firefox or p7zip there, but still, I know what it's good for.) ;-)

> A stricter limit is memory, problem is that it is hard to give a figure,
> because it depends on the program size, usage/options etc. Also often the
> linker eats more memory than FPC itself.

Try passing LD --reduce-memory-overheads (since by default it uses more memory in an attempt to speed itself up).

Have you tried the ELF "Gold" linker with FPC on *nix?

> See earlier msgs. Nothing is broken, old versions keep working. It is only
> if you want new stuff, you will have a problem. But then, for new stuff, no
> dos maintainer can be found.

Or Win9x devs too? Seriously, I find that hard to believe.

> Actually, it seems a lot of it was in the extenders, since nowadays it is
> way better than it used to. IIRC NT4 didn't even allow LFN extensions in a
> dosbox. (2k too? Can't remember).

2K works I think (ask rr), but that's 'cause they made it better in order to "merge" WinME and Win2K into one big product: WinXP. They wanted to make it an attractive sell. Vista? Apparently, it's all "advertising, advertising, advertising, fix Vista, advertising, advertising, ...".

> The few dos laggards remained on their "true" doses. So there was not much
> motivation to begin with.

Not true, a lot of people use WinXP etc. for their DJGPP development. I mean, Windows (besides offering a GUI) was initially meant as a DOS multitasker. Peoples' needs change, I understand, but I wish things wouldn't get deprecated or bitrot without a valid reason. Dropping DOS is one thing (which I disagree with, obviously), but dropping Win9x just seems self-destructive.

> No, but it does make the dos port less usable, since the pure dos users
> are far,far in the minority.

Which is a chicken and egg problem. If no tools support Win9x, nobody will develop for or run it. And then they'll whine, "Well, nobody's asked for it", but that's because they don't support it!!

> The world moved on, and some people preferred to stay behind. Now they are
> complaining that they really got left behind.

No, they are complaining that things which should still work no longer will due to willful negligence of the vendors. And the whole point is that even hallowed Win32 won't remain stable enough, and that will be deprecated, new will come, that will be deprecated, ad infinitum ....

> > And Unicode isn't nearly as important in the U.S. (almost entirely
> > monolingual) as in Europe, etc.
>
> Hmm. I thought the US mostly spoke Spanish now? Moreover they sell
> abroad.

Not even close. Sure, there are some (small) Spanish-speaking minorities, but it's far far far from being widespread. (Besides, Spanish is well-covered in Latin-1, which alone isn't enough of a need to move to more-complex Unicode.)

> AFAIK the GNU substitute now works reasonably. But I didn't test it myself.

It was only half-finished. Even MS released MSLU way, way too late in the game.

> So because some bits are not translated, we should just chuck it entirely?
> Strange reasoning.

Um, your whole argument is "If no one uses it, why should we bother?" So you're effectively struggling with UTF-8 when nobody cares much beyond even Latin-1 (if even)!

> There are no developers for new releases. That is as good as a death
> sentence.

But how did Win9x suddenly move from "good enough" to "bad"? And did they forget how to maintain it?? No!

> So no more new releases. After a while, also the old releases will be
> kicked indeed. (If I don't support it, I don't want it on my site), hence
> my suggestion to start an archival group if you really care.

Very reckless attitude to kill good, working software. Ridiculous.

> The only win98 that I have seen in 4 years was the one I installed myself.
> It _IS_ rare.

I've only seen very very very few Europeans in 4 years, but that doesn't make them rare, either. :-P

> Odd. I wonder how I got my _64-bit_ (bigger) Vista in 8GB then.

Which is still 8x (or more) the size of XP.

> All versions were older. So it worked. The 2k system can make do with 64MB
> if you don't upgrade IE.

But ironically, Firefox 2.x was a bloated pig and only 3.x corrected some of that. And yet the machines that would most benefit (Win9x) aren't supported. Go figure.

> > I in no way love XP to death. It was worse in many ways than Win9x.
>
> Only in the dos support. I don't know any other way.

Apparently you don't understand. Windows NT used to have POSIX, OS/2, and DOS subsystems. How many of those still work? Don't you see a trend here? And don't give me the marketshare crap. Why does it break? Is a new OS worth more somehow by actually doing less???

> It was deliberate, enforcing coding guidelines that already were
> specified for XP. I don't like it either (because IMHO the use is
> limited), but let's not exaggerate it.

Coding guidelines? As if those will stand the test of time either. MS will eventually probably move to managed code (and/or with hypervisor ... for Business and Ultimate only, I'm sure, ugh).

> See above: XP couldn't run all 2k and NT4 drivers either (2k supported
> some NT4 ones).

And ME broke driver compatibility, just as Win95 did, just as Win16 did. So basically, every five years you have to upgrade to newer hardware, whether you want to or not. And that requires the latest Windows, too. And that requires more RAM, so you can't use older machines. But since newer drivers won't work anyways, oh well. (My digital camera is from 2005, not exactly old. Does Vista 2007 work with it? No. Not a huge deal, but when your printer, scanner, camera, software, etc. don't work, then what? Might as well buy a freakin' Mac!)

> > I read somewhere that MS at the last minute broke driver compatibility
> > for some unknown reason.
>
> Reference? The new DRM guidelines (that Hollywood btw forced upon MS)
> already broke a lot.

Then netbooks really are good, esp. because there is no optical drive at all. And yet will Windows 7 on the netbook be crippled similarly? Probably.

> > So what little gfx support was in Windows has vanished.
>
> Don't care. I can't remember the last time I ran a graphical dos app (well,
> it was probably DV/X).

<sarcasm> Good for you. </sarcasm>

The whole point of things like ANSI C and POSIX is that it'll be portable. And yet what good is having standards like that if compliant systems are ignored for no reason?

ecm

Homepage E-mail

Düsseldorf, Germany,
17.02.2009, 12:25

@ Rugxulo
 

Compatibility woes / deprecation

> > > I in no way love XP to death. It was worse in many ways than Win9x.
> >
> > Only in the dos support. I don't know any other way.
>
> Apparently you don't understand. Windows NT used to have POSIX, OS/2, and
> DOS subsystems. How many of those still work? Don't you see a trend here?
> And don't give me the marketshare crap. Why does it break? Is a new OS
> worth more somehow by actually doing less???

According to "Undocumented DOS" older NT versions (I never used anything older than 5.0) even had an x86 emulator so the DOS and Win16 subsystems did work on non-x86 architectures. If that was the case Microsoft could've maintained the emulator in current versions and AMD64 Windows versions could still run DOS software. (Without such crap as the DOSBox's OS.)


Rugxulo

Homepage

Usono,
18.02.2009, 00:56

@ ecm
 

Compatibility woes / deprecation

> According to "Undocumented DOS" older NT versions (I never used anything
> older than 5.0) even had an x86 emulator so the DOS and Win16 subsystems
> did work on non-x86 architectures. If that was the case Microsoft could've
> maintained the emulator in current versions and AMD64 Windows versions
> could still run DOS software. (Without such crap as the DOSBox's OS.)

That sounds like something I've read, too.

Anyways, here's VX32, which sounds useful:

http://pdos.csail.mit.edu/~baford/vm/

> Vx32 is a user-mode library that can be linked into arbitrary
> applications that wish to create secure, isolated execution environments
> in which to run untrusted extensions or plug-ins implemented as native
> x86 code. Vx32 is similar in purpose to the Java or .NET virtual
> machines, but it runs native x86 code, so plug-ins can be written in
> ANY language, not just Java or C#.
>
> Vx32 runs on unmodified x86 FreeBSD, Linux, and Mac OS X systems
> without special permissions, privileges, or kernel modules. It also runs
> on x86-64 Linux systems. Ports to x86-64 FreeBSD and Mac OS X should not
> be difficult. A port to Windows XP should also be possible.

Cool, eh? ;-)

---
Know your limits.h

marcov

18.02.2009, 11:12

@ Rugxulo
 

Compatibility woes / deprecation

> Have you tried the ELF "Gold" linker with FPC on *nix?

No, why?

> > See earlier msgs. Nothing is broken, old versions keep working. It is
> > only if you want new stuff, you will have a problem. But then, for new stuff,
> > no dos maintainer can be found.
> Or Win9x devs too? Seriously, I find that hard to believe.

Win9x is effectively also unmaintained, though some loose patches for it come in once in a while. That it still works is because, until now, hardly any NT-only functionality was needed in the base system, so fairly rarely did something break; it is not that w9x is actively debugged and maintained.

However, with the Unicode modifications this is going to change. It will depend on how well those Unicode layers can soften the blow. No telling yet.

> 2K works I think (ask rr), but that's 'cause they made it
> better in order to "merge" WinME and Win2K into one big product:
> WinXP.

I doubt it. I think it was more a request from business users after NT4, and XP was not even on the horizon when this was decided.

> > The few dos laggards remained on their "true" doses. So there was not
> > much motivation to begin with.
>
> Not true, a lot of people use WinXP etc. for their DJGPP development.

/me doubts if there is "a lot of people" doing "DJGPP development" at all.

> > No, but it does make the dos port less usable, since the pure dos users
> > are far,far in the minority.
>
> Which is a chicken and egg problem. If no tools support Win9x, nobody
> will develop for or run it. And then they'll whine, "Well, nobody's asked
> for it", but that's because they don't support it!!

I've told you a dozen times that the people who use a target should carry the burden for it. That is the real chicken-and-egg.

If nobody uses it, or the people who still use it mainly work with a few apps in legacy mode, or stick to a few outdated commercial tools because it is easier, then yes, open-source support will wane.

> > The world moved on, and some people preferred to stay behind. Now they
> > are complaining that they really got left behind.
>
> No, they are complaining that things which should still work no
> longer will due to willful negligence of the vendors.

IMHO that is slander.

> And the whole point
> is that even hallowed Win32 won't remain stable enough, and that will be
> deprecated, new will come, that will be deprecated, ad infinitum ....

Win32 was never stable to begin with and constantly expanded. Forward compatibility was never guaranteed.

It's just a gradual shift that is accelerated by the need for unicode, and the waning numbers of win9x users.

> > Hmm. I thought the US mostly spoke Spanish now? Moreover they sell
> > abroad.
>
> Not even close. Sure, there are some (small) Spanish-speaking minorities,
> but it's far far far from being widespread.

http://en.wikipedia.org/wiki/Spanish_in_the_United_States

> (Besides, Spanish is
> well-covered in Latin-1, which alone isn't enough of a need to move to
> more-complex Unicode.)

Well, that is not cp437, so there you have your first conflict.
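(The conflict is easy to show concretely; a quick illustrative Python sketch, nothing more:)

```python
# The same letter "ñ" gets a different byte value under each scheme:
# CP437 (the classic DOS codepage), Latin-1, and UTF-8 all disagree.
ch = "\u00f1"  # ñ

print(ch.encode("cp437"))    # b'\xa4'  (DOS codepage 437)
print(ch.encode("latin-1"))  # b'\xf1'  (ISO 8859-1)
print(ch.encode("utf-8"))    # b'\xc3\xb1'

# Latin-1 bytes read as CP437 silently become the wrong character:
print(b"\xf1".decode("cp437"))  # '±', not 'ñ'
```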

> Um, your whole argument is "If no one uses it, why should we bother?"

More like "if even the people that use it don't bother, why should the ones that don't care?"

> So you're effectively struggling with UTF-8 when nobody cares much beyond
> even Latin-1 (if even)!

Well, that is simply not true. Note that UTF-8 actually cleans up a lot of the old mess with multiple codepages.

> > There are no developers for new releases. That is as good as a death
> > sentence.
> But how did Win9x suddenly move from "good enough" to "bad"?

Same as with dos mostly:
nobody has looked into specific win9x issues for years already. Userbases are/were eroding away, and then you suddenly find bugs in existing releases that would have been blocking for serious users. Nobody noticed...

> And did they forget how to maintain it?? No!

It requires active work to investigate such claims. Your whole argument rests on the assumption that keeping something working in a live codebase requires hardly any effort, which is simply false.

> > So no more new releases. After a while, also the old releases will be
> > kicked indeed. (If I don't support it, I don't want it on my site),
> > hence my suggestion to start an archival group if you really care.
>
> Very reckless attitude to kill good, working software. Ridiculous.

Very reckless to not actively try to preserve what you use. Suicidal.

> > The only win98 that I have seen in 4 years was the one I installed
> > myself. It _IS_ rare.
>
> I've only seen very very very few Europeans in 4 years, but that doesn't
> make them rare, either. :-P

It would have made them rare if you had been seeing heaps of them every day in the same region a few years back.

> > Odd. I wonder how I got my _64-bit_ (bigger) Vista in 8GB then.
> Which is still 8x (or more) the size of XP.

But relatively they are getting smaller :-)

> But ironically, Firefox 2.x was a bloated pig and only 3.x corrected some
> of that. And yet the machines that would most benefit (Win9x) aren't
> supported. Go figure.

Apparently nobody is using them anymore. Otherwise the people who had the most to gain would have fixed it, or created an alternate release.

> > > I in no way love XP to death. It was worse in many ways than Win9x.
> >
> > Only in the dos support. I don't know any other way.
>
> Apparently you don't understand. Windows NT used to have POSIX, OS/2, and
> DOS subsystems. How many of those still work? Don't you see a trend here?

(AFAIK the OS/2 subsystem was never really released), and the DOS subsystem actually got better over time.

But for the rest, yes there is a trend. But I
(1) don't understand/consider it a problem. Mountains rise and fall too, nothing is forever.
(2) don't understand why you expect it NOT to break.

> And don't give me the marketshare crap. Why does it break?

Who is going to pick up the bill to keep it running?

> Is a new OS worth more somehow by actually doing less???

Nonsense, they do a lot more; they just clean out some legacy cruft.

> > See above XP couldn't run all 2k and nt4 (2k supported some nt4 ones)
> > too.
>
> And ME broke driver compatibility, just as Win95 did, just as Win16 did.

ME doesn't exist as far as I'm concerned. I skipped it.

> So basically, every five years you have to upgrade to newer
> hardware, whether you want to or not.

No. You can use your old hardware indefinitely. You just don't get new software for it, unless you (both you personally and the community on said platforms) either do it yourselves or fund it.

> And that requires the latest
> Windows, too. And that requires more RAM, so you can't use older machines.
> But since newer drivers won't work anyways, oh well. (My digital camera is
> from 2005, not exactly old. Does Vista 2007 work with it? No. Not a huge
> deal, but when your printer, scanner, camera, software, etc. don't work,
> then what? Might as well buy a freakin' Mac!)

All my hardware worked with Vista 64-bit, except my Creative sound card. I didn't like that, but you are exaggerating grossly.

> > Reference? The new DRM guidelines (that Hollywood btw forced upon MS)
> > broke already a lot.
>
> Then netbooks really are good, esp. because there is no optical drive at
> all. And yet will Windows 7 on the netbook be crippled similarly?
> Probably.

I don't like netbooks except for their intended use: limited, very mobile surfing and mailing, which I don't do enough of to warrant the expense. (I had an Eee on loan for a while.) Main gripes: bad keyboard and too low-res a monitor. Fixing those issues turns them into an underpowered but otherwise ordinary laptop (even price-wise).

> The whole point of things like ANSI C and POSIX is that it'll be portable.

HAHAHHAHA, <chokes and laughs>. You still believe that?

> And yet what good is having standards like that if compliant systems are
> ignored for no reason?

You can make that statement about anything if you ignore other people's reasons. It is a bit self-centered.

Japheth

Homepage

Germany (South),
18.02.2009, 12:27

@ marcov
 

Compatibility woes / deprecation

> > The whole point of things like ANSI C and POSIX is that it'll be
> > portable.
>
> HAHAHHAHA, <chokes and laughs>. You still believe that?

What is so laughable about Rugxulo's remark? I also "believe" that an important goal of ANSI C and POSIX was to increase the level of portability. Do you know some conspiracy theory which claims that the true intentions were totally different? Then please tell us more!

---
MS-DOS forever!

marcov

18.02.2009, 13:12

@ Japheth
 

Compatibility woes / deprecation

> > > The whole point of things like ANSI C and POSIX is that it'll be
> > > portable.
> >
> > HAHAHHAHA, <chokes and laughs>. You still believe that?
>
> What is so laughable about Rugxulo's remark? I also "believe" that an
> important goal of ANSI C and POSIX was to increase the level of
> portability.

Sure, that was the intention of the committee (not necessarily the same as that of the vendors that initiated it), but it was achieved mostly by allowing multiple practices (e.g. both SysV and BSD, take your pick) and limiting functionality to an ancient base set (read: the lowest common denominator that was already nearly portable).

That is a bit of a grim view, since yes, there have been developments in later standards. In general their effect was mixed: they did improve portability, but the compromise also avoided a shakeout of minor vendors, which hampered development of a wider portable subset.

However, IMHO they are a minor footnote in computing history, and not the glorious revolution they are often pretended to be.

> Do you know some conspiracy theory which claims that the true
> intentions were totally different ones? So please tell us more!

I think the primary intention was not to go out of business. The bigger vendors thought more portability would make them stronger against emerging PCs on one side, and IBM with its mainframes on the other. The smaller ones saw a way to stay in business just a bit longer.

Also detaching Unix a bit from AT&T was a main objective. AT&T didn't see Unix as core business.

Rugxulo

Homepage

Usono,
18.02.2009, 21:36

@ marcov
 

Compatibility woes / deprecation

> However IMHO they are a minor footnote in computing history, and not the
> glorious revolution it is often pretended to be.

I agree that it's not the holy grail of the programming world, but it's better than nothing. "Hey, why not develop your own standard then?" you would probably say to me if I disliked it. :-P

> Also detaching Unix a bit from AT&T was a main objective. AT&T didn't see
> Unix as core business.

AT&T was legally prohibited from selling the OS outright due to their huge commercial influence (which is now strangely being resurrected after various mergers and exclusivity deals). They were too big for their britches and had to be split up "back in the day". Or so I'm told.

marcov

18.02.2009, 22:32
(edited by marcov, 18.02.2009, 22:43)

@ Rugxulo
 

Compatibility woes / deprecation

> > However IMHO they are a minor footnote in computing history, and not the
> > glorious revolution it is often pretended to be.
>
> I agree that it's not the holy grail of the programming world, but it's
> better than nothing.

I'm saying that you don't know that. Maybe the standards helped create an artificial Unix world that, in exchange for an initial improvement, ended up more fragmented in the long run than it would have been without them.

> "Hey, why not develop your own standard then?" you
> would probably say to me if I disliked it. :-P

They tried to plug the gap with additional standards and initiatives, but worse, the mentality of minimalistic lowest-common-denominator compromises stuck. See the Gnome vs KDE wars, and their half-hearted attempts to get lowest-common-denominator standards and interoperability. (like Freedesktop, but also the strange way both have a "super" media streaming system, connected to each other via all kinds of half-hearted plugins)

Well, at least we have come close to my opinion on the matter here, by example of extremes: a totally frozen, multi-decennium DOS on one hand, and a totally hacky, patchy, infinitely complexly versioned Unix without any decent definition of the API (except a few unparsable C headers) on the other. :-)

> > Also detaching Unix a bit from AT&T was a main objective. AT&T didn't
> see
> > Unix as core business.
>
> AT&T
> was legally prohibited from selling the OS outright due to their huge
> commercial influence (which is now strangely being resurrected after
> various mergers and exclusivity deals). They were too big for their
> britches and had to be split up "back in the day". Or so I'm told.

Yes and no. That was the reason for the original proliferation, but it cleared up in the early nineties when the split was complete and the limitations were lifted (the BSD vs AT&T lawsuits and the following settlement).

However, AT&T had two problems enforcing its claims: first, it had sold way too broad licenses during the breakup times (which made it hard to go after licensees), and when it went after BSD (the academic branch of Unix), it turned out it had absorbed heaps of BSD code without proper attribution.

Officially, the settlement ('93 IIRC) was a compromise, but in practice BSDi (UC) had won. Unfortunately it was a Pyrrhic victory, because the years-long stranglehold of the case and the minor but laborious cleanup slowed BSD.

Though to be honest, I think the fragmentation into the flavours we know today, as well as (especially) a better focus on "home" hardware, helped Linux more. IIRC, even though I had years of experience with BSD at university, I installed Linux (which I only knew by reputation) because I didn't have SCSI hardware at home, and Linux supported IDE. (95-96 timeframe)

Rugxulo

Homepage

Usono,
18.02.2009, 20:57

@ marcov
 

Compatibility woes / deprecation

> > Have you tried the ELF "Gold" linker with FPC on *nix?
>
> No why?

'Cause it's supposed to be better (x86 and x86-64 ELF only). And you seem in a better position to test it than me.

> > 2K works, but that's 'cause they made it
> > better in order to "merge" WinME and Win2K into WinXP.
>
> I doubt it. I think it was more a request from business users after NT4
> And XP was not even on the horizon.

XP is just a lightly-modified 2k, so they are very similar. But XP was finally offered to "home" users unlike 2k.

> /me doubts if there is "a lot of people" doing "DJGPP development"

Probably true, but there are random users out there who don't "check in" too often.

> > > The world moved on, and some people prefered to stay behind. Now they
> > are complaining they really got left behind.
> >
> > No, they are complaining that things which should still work no
> > longer will due to willful negligence of the vendors.
>
> IMHO that is slander.

First of all, this is a written forum, so that'd technically be libel. But it's not that either. I'm not trying to disrespect anyone. BUT, it's true that some things don't work anymore and it's due to bogus (or even political) reasons instead of technical ones. As mentioned, Windows no longer supports certain subsystems and no longer runs on anything but x86, x86-64, and IA64 (unless you count XBox 360, which I don't). So much for "portable".

> win32 never was stable to begin with and constantly expanded. Forward
> compatibility was never guaranteed.

Actually, at one time MS was intent on having a "stable" Windows API (back in 16-bit days).

> > Not even close. It's far far far from being widespread.
>
> http://en.wikipedia.org/wiki/Spanish_in_the_United_States

I'm not saying people don't speak Spanish (e.g. San Antonio, Texas or Puerto Rico which is actually only a territory), but it's still not the official language or anything close to that. And really only those closer to Mexico itself have that influence. Other places don't at all. Very very very isolated / monolingual country (34 million != 300 million).

> > (Besides, Spanish is well-covered in Latin-1
>
> Well, that is not cp437, so there you have your first conflict.

cp850 is what was generally used, but cp819 is the "true" Latin-1 (although not found in most DOSes; a third-party .CPI is needed, e.g. Kosta Kostis' ISOLATIN.CPI).
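The conflict here is easy to demonstrate: the same byte value means a different character depending on whether you assume cp437, cp850, or Latin-1 (cp819). A minimal sketch, using Python's standard codec names for those codepages:

```python
# One byte, three meanings: single-byte codepages assign 0x80-0xFF differently.
raw = b"\xe9"

print(raw.decode("cp437"))    # Greek capital theta (original IBM PC codepage)
print(raw.decode("cp850"))    # U with acute accent (DOS "multilingual" codepage)
print(raw.decode("latin-1"))  # e with acute accent (ISO 8859-1, i.e. cp819)

# Conversely, the same Spanish character encodes to different bytes:
print("ñ".encode("cp850"))    # b'\xa4'
print("ñ".encode("latin-1"))  # b'\xf1'
```

So text written under one DOS codepage displays as mojibake under another, which is exactly the kind of mess UTF-8 was meant to end.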

> > So you're effectively struggling with UTF-8 when nobody cares much
> beyond even Latin-1 (if even)!
>
> Well, that is simply not true. Note that UTF-8 actually cleans up a lot of
> old stuff with multiple codepages.

I know that, but the net gains aren't enough for all the effort. OpenWatcom has support for Japanese messages, but it hasn't been kept updated, so it's almost useless. It's not a bad thing, and someone could update it, so I'd rather they not delete it entirely. But it's obviously not first priority and not worth too much effort. (I'm not arguing against Unicode, just saying it isn't an absolute necessity.)

> > But how did Win9x suddenly move from "good enough" to "bad"?
>
> Same as with dos mostly:
> Nobody anymore looking into specific win9x issues for years already.

And the fact that the main OS pre-installed / used by 90% of the people has crappy DOS support had nothing to do with it??

> It requires active work to investigate such claims. Your whole argument is
> based on the fact that keeping something working in a live codebase
> requires hardly any effort, which is simply false.

I just don't know of any major changes that would require huge workarounds for Win9x in most normal cases.

> > > The only win98 that I have seen in 4 years was the one I installed
> > myself.
> >
> > I've only seen very few Europeans in 4 years, but that
> > doesn't make them rare, too. :-P
>
> It would have been if you saw heaps of them every day in the same region a
> few years back.

Face it, big dogs get more food than little ones. You almost have to reckon with them. And if you're not careful, they'll eat the little dogs' food too. Even though the big dog was once little, he's forgotten how it used to be, so he forgets how tough it is. Sure you may have to prepare a separate plate for each, but unless you only want one dog / species to survive, you have to work a drop (but not much) harder.

> > But ironically, Firefox 2.x was a bloated pig and only 3.x corrected
> some
> > of that. And yet the machines who would most benefit (Win9x) aren't
> > supported. Go figure.
>
> Apparantly nobody using them anymore. Otherwise those people who had most
> to gain had fixed it, or created an alternate release.

For a group (Mozilla) that rallied so hard against MS for having buggy IE, they sure jumped ship fast once MS dropped Win9x support. "How can you trust MS when they didn't update for five years?" Well, Firefox has dropped Win9x permanently, so now they are no better. So much for their bragging.

> > Apparently you don't understand. Windows NT used to have POSIX, OS/2,
> and
> > DOS subsystems. How many of those still work? Don't you see a trend
> here?
>
> (afaik OS/2 was never released), the Dos system actually got better over
> time.

No, the NTVDM has always had bugs. Even Quake wouldn't run on it due to bugs that MS refused to fix. And this was when DOS was still huge. Sandmann (CWSDPMI dude) could've worked around it, but it would've held up development for a month, so they just ignored it. (Besides, NT isn't really a gamer's OS.) Same with Win2k, more bugs that CWS (et al.) had to work around just so apps would run. (Try running anything DJGPP-ish compiled before 2000, it almost definitely won't work.)

> But for the rest, yes there is a trend. But I
> (1) don't understand/consider it a problem. Mountains rise and fall too,
> nothing is forever.
> (2) don't understand why you expect it NOT to break.

Why should it break? I guess it's a tiny bit unfair to expect MS to "do everything", but since only they can fix it (closed src), we're stuck! And I'm just saying, neither Win98SE nor WinME are even technically ten years old. Okay, so XP etc. has replaced it mostly, but only on new machines. Do all old machines break immediately once the warranty expires? Shouldn't working machines still be supported and used? Otherwise, they just rot and serve no purpose. I don't see how that's responsible behavior to ignore a perfectly working machine.

I just don't know enough about Win32 to claim that Win9x is SO hard and outdated. Surely it would be better (in theory) to support all Win32, right? Fine, it's too much work, if you say so. I'm just saying, in a perfect world ....

> > And don't give me the marketshare crap. Why does it break?
>
> Who is going to pick up the bill to keep it running?

Bill? ;-)

> > Is a new OS worth more somehow by actually doing less???
>
> Nonsense, they do a lot more, they just clean out some legacy cruft.

Legacy cruft that involves lots of apps. You're basically saying that any apps written previously "don't count" or "aren't useful anymore". Basically, MS "wasted their time" on Win9x. So why should we even bother with XP / Vista / etc.?? I'm sure it's considered a "big crap" in hindsight, too.

> > > See above XP couldn't run all 2k and nt4 (2k supported some nt4 ones)
> > > too.
> >
> > And ME broke driver compatibility, just as Win95 did, just as Win16
> did.
>
> ME doesn't exist as far as I'm concerned. I skipped it.

Zima doesn't exist anymore either. What's your point? That you can selectively choose what to recognize? Just like Palestine is / isn't a state (yet)? Or that Yugoslavia no longer exists except in separate parts?

It's just annoying when some things don't get a fair shake. I'm sure ME has bugs, but all the more reason to try to support it, to make it less buggy, to make it more useful, better, etc. Healthy people don't need doctors, only the sick do.

> All my HW worked with Vista 64-bit, except my creative soundcard. I didn't
> like that, but you are exaggerating grossly.

I think Creative didn't supply any Vista drivers. And some things don't work like they did on XP (EAX? 3D Sound, accelerated or whatever). Now, you can blame Creative for "being lazy" or blame MS for "breaking driver compatibility" or just say, "Oh well, time to upgrade". Anyway you look at it, fun fun fun. :-(

> I don't like netbooks except for their intended use: limited very mobile
> surfing and mailing, which I don't do enough to warrant the expense.
> Bad keyboard and too lowres monitor. Fixing those issues turns them
> into an underpowered, but otherwise ordinary laptop. (even pricewise)

They are smaller (e.g. 2 lbs. instead of 6.6 lbs. like my current laptop) and use a lot less battery. Sure, the keyboard sucks and the screen is pretty small, but hey, if you don't need more, why pay extra for it?

> > The whole point of things like ANSI C and POSIX is that it'll be
> portable.
>
> You still believe that?

It does its job, albeit imperfectly. I'm just saying, people whine about standards (web, coding, etc.) and yet don't even bother to support older things (breaking API support, etc.), which seems a bit backwards.

Matjaz

Homepage E-mail

Maribor, Slovenia,
16.02.2009, 17:12

@ Rugxulo
 

Compatibility woes / deprecation

> Um, I think XP can install in a GB of space, but Vista requires like 16
> GB.

16 is a bit too much ;-) My Windows folder is "only" 12 GB (Vista SP1).
On a side note:
It's true that a lot of apps from the Windows 9x era don't work any more on Vista, but for some strange reason some programs from Windows 1.x and 2.x do work on it :-D

mr

14.02.2009, 14:03

@ Rugxulo
 

Compatibility woes / deprecation

> It's basically a rant from early 2002 against XP (which is nowadays
> considered one of the best Windows, or even OSes, ever by most Windows
> users).
>
> Summary:
>
> + very stable
> + good hardware compatibility
> - older commercial games don't work
> - wastes half a gig of HD space
> - pre-XP apps run worse under compatibility mode
> - shuts down much more slowly than Win9x

I agree. XP is still the most popular operating system; most people are using and liking it. Basically it would be possible to create a better operating system, but people know this one, and people don't like changes.

For me, XP runs more stably (in the sense of not bluescreening) than Win9x.

Dropping support for any pre-XP operating system is an understandable process; most developers are using XP. If I were to join an open source project such as Watcom, Firefox or whatever, I wouldn't want to care what's up with Win 3.11, NT 3.5, Win9x or whatever; it would bug me to have to take care not to break compatibility with systems I am not using. Providing multi-OS support for Linux is already a laudable task.

Like any new Windows version, Vista and probably also Windows 7 are slower than their predecessors on the same hardware. Because I am a thrifty person, I don't like to pay money for things I can do perfectly well with what I already own.

All the new innovations on post-XP desktops (from MS or from the Linux world) are _really_ boring and neither simplifying, nor more efficient, nor convincing to me. To this day I (and most others) still cannot, out of the box, talk or think at the computer and have it understand me efficiently and do what I want.

MS did a good job with XP, and many new netbooks (currently a hype) are delivered with XP, because post-XP is too slow and because people who already work with XP generally like it more than Vista. That's why I think MS will have a very hard time trying to kill XP. For the next 5-10 years I think I can escape buying new hardware and a new Windows. XP now seems to be "the OS", "the standard", and it seems it cannot be dropped.

RayeR

Homepage

CZ,
16.02.2009, 13:52
(edited by RayeR, 16.02.2009, 15:54)

@ Rugxulo
 

Compatibility woes / deprecation

just a quick note:

> - KernelEx helps some apps run (e.g. Doom 3)

You don't even need to install KernelEx. There is a very tiny Russian patch that makes a slight modification to doom3.exe and makes it run under Win98. That's what I find annoying: a program that doesn't run under Win9x just because of one unimportant 2K/XP call. Of course, if the program is entirely based on new libs and uses all the new features of 2K/XP, it's obvious it will not run under Win9x...

> The main reasons mentioned to upgrade from Win9x are as follows:
> - SATA support

If your BIOS has an IDE compatibility mode then you're safe (as is usual on Intel 9xx/P31 chipset based mobos).

> - HD with > 137 GB

No problem. There's a patched ESDI_506.PDR that fully supports LBA48. I use Win98SE on a 500 GB SATA drive in IDE mode. I also tried AHCI mode under XP, but I didn't notice a significant speed-up compared to IDE mode. The limiting factor is the <4 GB file size on FAT32.
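The numbers behind those two limits are simple arithmetic: the "137 GB barrier" comes from 28-bit LBA sector addressing with 512-byte sectors, LBA48 widens the address to 48 bits, and FAT32 stores file sizes in a 32-bit field. A quick sketch (Python used here just as a calculator):

```python
SECTOR = 512  # bytes per sector on classic ATA drives

lba28_limit = (2**28) * SECTOR   # 28-bit sector addresses (pre-LBA48 ATA)
lba48_limit = (2**48) * SECTOR   # 48-bit sector addresses (ATA/ATAPI-6)

print(lba28_limit / 10**9)       # ~137.4 GB -> the classic "137 GB barrier"
print(lba48_limit / 2**50)       # 128.0 PiB, effectively unlimited for the era

# FAT32 keeps each file's size in a 32-bit field, so a single file
# tops out just below 4 GiB regardless of how big the disk is.
fat32_max_file = 2**32 - 1
print(fat32_max_file)            # 4294967295 bytes
```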

> - memory > 512 MB (although this can be worked around)

Up to 1 GB there are no serious problems. You have to limit VCACHE so it doesn't overflow, and then you can safely use 1 GB. Over 1 GB, problems may arise with some drivers, causing BSODs, etc. But it is possible to limit Windows' memory if you have more installed. I ended up at ~1.15 GB of 2 GB (I utilize the full 2 GB under other OSes).
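Both of those limits live in SYSTEM.INI. A sketch of the commonly cited workaround (the exact values here are examples, not gospel; pick them to suit your RAM):

```ini
; SYSTEM.INI fragment (Win9x) -- commonly cited workaround for >512 MB RAM.
; MaxFileCache is in KiB; MaxPhysPage is a hexadecimal count of 4 KiB pages.

[vcache]
; Cap the disk cache so VCACHE can't exhaust kernel address space.
MaxFileCache=393216

[386Enh]
; Optionally cap the physical memory Windows sees (0x40000 pages = 1 GiB).
MaxPhysPage=40000
```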

ps.
I use Win98 mostly as a DOS multitasking environment (when coding in DJGPP and wanting to read PDF documentation, search the web, listen to music, etc., but still with direct access to HW, not like in NTVDM) and for some older games. Win98 runs like a flash on new HW, but it's significantly less stable than XP. Many new programs excessively eat GDI resources, which are very limited on Win9x, and once they are all consumed you will probably have to reboot very soon :P

Some years ago I didn't like XP when it came after W2K, but over the years came many fixes and improvements that turned XP into a very stable and powerful OS. I use it at work, and I don't remember a BSOD occurring in recent months; when one happens, it is usually due to my experiments. Our company is buying Dell HW, and they still deliver XP when we request it :) I don't know anybody who would like to get Vista on his new machine.

---
DOS gives me freedom to unlimited HW access.

rr

Homepage E-mail

Berlin, Germany,
20.02.2009, 22:06

@ Rugxulo
 

Compatibility woes / deprecation

> - KernelEx helps some apps run (e.g. Doom 3)

For Windows 2000 there exist:
1. OldCigarettes Windows 2000 XP API Wrapper Pack
2. Known Dlls Wrapper (KDW)

---
Forum admin

Zyzzle

21.02.2009, 02:04

@ Rugxulo
 

Compatibility woes / deprecation

A Few Thoughts, after having spent almost one hour reading the entire thread...

The main point seems to be: if the DOS and/or Win 98 versions are working well, why needlessly eliminate them? Keep the old version up for distro, even if newer ones are released. I am also extremely upset that the older DOS or Win 98 downloads just "disappear" for no good reason; the maintainers seem to go out of their way to erase them, and it's as if these legacy versions never existed at all!

Case in point-- the DOS version of MAME. Where can I even download the last DOS version produced? Can't even find a link anymore. And, if I wanted to, how could I even compile one of the newer versions for DOS? Has anyone a link to a newer DOS binary of MAME, or succeeded in compiling one?

Rugxulo

Homepage

Usono,
21.02.2009, 04:21
(edited by Rugxulo, 21.02.2009, 05:36)

@ Zyzzle
 

Compatibility woes / deprecation

> A Few Thoughts, after having spent almost one hour reading the entire
> thread...
>
> The main point seems to be: If the DOS and/or Win 98 versions are working
> well, why needlessly eliminate them? Keep the old version up for distro,
> even if the newer ones are released. I am also extremely upset that the
> older DOS or Win 98 downloads just "disappear" for no good reasons, the
> maintaniers seem to go out of their way to erase them, and it's as if
> these legacy versions never existed at all!

The point was that even Win9x seems deprecated, not only DOS or OS/2 or whatever. And there seems to be no concrete reason besides "MS dropped it, and we don't use it". And yet it was never a big deal to support it before, so why abandon it now?? (Firefox is just the main example I saw, not the only instance and obviously not DOS-related. But I figured it was a fair analogy.)

> Case in point-- the DOS version of MAME. Where can I even download the
> last DOS version produced? Can't even find a link anymore. And, if I
> wanted to, how could I even compile one of the newer versions for DOS? Has
> anyone a link to a newer DOS binary of MAME or suceeded in compiling one?

I agree, I noticed this too, and it's very annoying. They used to support DJGPP fine, but eventually Win32 won out and somebody killed off all remnants of DOS support, despite various requests on where to find DOS ports.

Anyways, I don't really use MAME anymore (StarROMS disappeared and I never ordered Hanaho's HotRod joystick w/ CD), but here are the ones I know of off-hand (although not necessarily the latest):

* AdvanceMAME 0.106.1
* AMAME 0.59 (website gone; this is a mirror of the only binary, on Zophar.net: "Optimized for speed increases with AMD, Pentium II, and i486 processors")
* various other MAMEs listed on Zophar.net: MAME(K6), fastMAME (for 586,K5,Cyrix), MAMEAthlon, PMAME (for 586), MAME32ASM (unstable / faster w/ asm), etc. etc.

P.S. I found a site that says this (quoted from "M.A.M.E. Frequently Asked Questions V0.27 (6th of September, 1997)"):

> I personally use a P90/16MB/WIN95 and the DOS version runs like a
> dream really on nearly all the games. Nicola developed MAME on a
> 486/DX100 so my guess is that it runs well enough on that as some
> sort of a minimum configuration.

There is NO WAY IN HELL that this is still true, assuming it ever was. MAME has a reputation for being dog slow. Last I heard, MAME officially recommended at least 256 MB of RAM (maybe more now?). But I guess really really old-fashioned games (0.1 only supported five!) might? barely work on such "lowly" hardware. (And you must have a license, please don't pirate, not worth it! There are plenty of decent and free DOS games. See here.)

EDIT: MAME 0.1 - 0.10 history

EDIT: July 2002 specs for running MAME (for comparison)

EDIT: Why does MAME become slower all the time?

> The simpler hardware will work out in the end anyway due to
> ever-faster PCs (Pac-Man is very sub-optimal now compared with
> MAME 0.29 for instance, but almost any average modern system runs
> it with 100% speed).


EDIT #2: Apparently, Sep. 17 2005 was the last official DOS release, MAME 0.100, but I did (luckily!) find a download site:

http://mame.jp/0.100/

dm100s.zip              18-Sep-2005 08:29   136k  DOS source code changes for MAME 0.100. Requires the main source code archive too.
m0990100.zip            15-Sep-2005 00:16   1.2M  DIFF file from MAME 0.99 to 0.100. Use the command patch -p1 <m0990100.dif in a 0.99 source directory to update it to 0.100.
mame0100b.zip           15-Sep-2005 00:50   7.8M  Windows command line version of MAME 0.100.
mame0100b_dos.zip       18-Sep-2005 05:51   8.4M  DOS version of MAME 0.100.
mame0100b_dos_i686.zip  18-Sep-2005 05:59   8.6M  DOS version (Pentium Pro optimized) of MAME 0.100.
mame0100b_i686.zip      15-Sep-2005 00:54   7.9M  Windows command line version (Pentium Pro optimized) of MAME 0.100.
mame0100s.zip           15-Sep-2005 00:23  10.7M  Source code archive of MAME 0.100.


There's some (older and newer, ironically) OS/2 binaries here. Yes, Steve, I'm thinking of you. :-)

P.S. Hanaho's HotRod joystick emulates a PC keyboard with true "arcade feel" and is IMHO reasonably priced ($99 US), but they seem to include fewer Capcom games than before (what, no more Willow? damn ...). Also, their site needs Flash 9 (ugh), and it claims to not work well on Win98SE (keyboard issues), although I get the impression that the MAME included is the DOS version (go figure).

P.P.S. XBox 1 has some good emulation sets (Taito Classics, Capcom Collection 1-2, Midway Arcade Classics 1-3, Namco Museum, Tecmo Arcade) and some game remakes even include the older original games (Prince of Persia: Sands of Time, Doom 3, House of the Dead, Wolfenstein 3D, TMNT 2, Panzer Dragoon Orta, Ninja Gaiden, Pac Man World 2). Some of these have been ported to PCs, also.
