
HX question about link.exe (DOSX)

posted by marcov, 29.07.2008, 22:15

> Why are MS OSes always really really bad at cooperating with others?? It's
> beyond embarrassing since you would hope they would have fixed that by now!

While I agree in general (and even in this case), the root cause is a bit different. I suspect one of the OSes has a different view of the hard disk layout than the others, which makes it difficult to get Windows bootable again after a different OS install, and vice versa.

> Isn't Xen not in the kernel tree, unlike KVM, which is partially based on
> QEMU? (Or something.)

There is a kernel dimension to it. The idea was to speed up development for other platforms: emulators are quite slow, so I decided to go with hypervisors. It worked fine for a while, but it broke with Fedora 7 and has never been fixed since.

> > Total RAM use under Windows is higher, but not twice. This is logical,
> > since most are buffers anyway.
>
> You mean aggressive caching?

Any caching. IOW, dynamically allocated memory that is occupied but not directly wired to processes.

(64-bit kernel / 32-bit userland)
> Really? I would assume they'd recompile everything (although that may not
> be a good idea).

There is not much difference between running 16-bit programs on a 32-bit OS and running 32-bit programs on a 64-bit OS.

> > you can't really notice the difference, unless you really
> > get to the edges.
>
> Sounds almost not even worth it. :-/

Under Linux even less so, since Linux supports PAE and 32-bit Linux has access to the full 4 GB. On Windows this feature is reserved for the server SKUs.

> What I heard was that 64-bit was 10% faster, but a lot less drivers work.

Not my experience, though I can imagine it could be true for something that is really sensitive to the changes (e.g. Photoshop).

> > I read up a lot on 64-bit programming and the summary is that, as a rule
> > of thumb, general-purpose code is slightly slower. But this is only a
> > measurable difference, not a noticeable one.
>
> Maybe that's an implementation issue that will be fixed in later models??

Yes and no. It is simply the larger average data size: pointers double in size, so pointer-heavy data structures put more pressure on the caches. That won't go away unless you use 32-bit binaries or some form of a NEAR model.
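A minimal C illustration of that size effect (the struct here is hypothetical, purely for demonstration):

#include <stdio.h>

/* A node type typical of pointer-heavy code: on a 32-bit target each
   pointer is 4 bytes, on x86_64 it is 8, so the same structure costs
   more cache per element. */
struct node {
    struct node *next;
    struct node *prev;
    int          value;
};

int main(void)
{
    /* ILP32 prints 4 and 12; LP64 prints 8 and 24 (two 8-byte
       pointers plus the int, padded to 8-byte alignment). */
    printf("sizeof(void *)      = %zu\n", sizeof(void *));
    printf("sizeof(struct node) = %zu\n", sizeof(struct node));
    return 0;
}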

> > conventions, so this is not always straightforward to predict. Afaik,
> > all OSes use the same calling convention now btw. A lot less cdecl vs
> > stdcall nonsense.
>
> GCC supports x86-64 (and before that various other 64-bit chips), so I
> assume it's not too bad at handling this.

There is nothing to handle. As said, the calling conventions are pretty much universal across all x86_64-supporting OSes and compilers, and the rest is basic register allocation. x86-64 is quite RISC-y in this regard (and by that I mostly mean the orthogonality of the instruction set, and having more registers than x86).
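As a rough sketch of what that register-based convention looks like, assuming the SysV AMD64 ABI used by the Unix-like systems (Win64 is equally register-based but picks RCX, RDX, R8, R9 for arguments):

/* Integer/pointer arguments travel in registers instead of on the stack. */
long add3(long a, long b, long c)   /* a -> RDI, b -> RSI, c -> RDX */
{
    return a + b + c;               /* result returned in RAX */
}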

Btw, in general GCC is worse at CPU-specific optimizations than other compilers.

> > The variation in CPU capabilities on x86_64 is also much lower than on
> > x86, at least for now.
>
> There doesn't seem to be much added to IA32 chips since CMOVxx besides
> SIMD stuff.

Well, SSE2 is nearly a complete floating point unit too. Faster, but less precise: plain 64-bit IEEE doubles instead of the x87's 80-bit extended precision.
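A small C check that makes the precision gap visible, assuming a target where long double maps to the 80-bit x87 format (true of most x86 Unix compilers, not of MSVC):

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* SSE2 doubles have a 53-bit mantissa; the x87 extended format
       has 64. On x86 Linux, long double is the 80-bit x87 type. */
    printf("double      mantissa bits: %d\n", DBL_MANT_DIG);  /* 53 */
    printf("long double mantissa bits: %d\n", LDBL_MANT_DIG); /* 64 on x86 */
    return 0;
}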

> So, basically, it seems you only get very very minor
> performance differences without targeting one of those instruction sets
> specifically (at least with GCC). The most popular seem to be MMX and SSE2
> (not 3dnow! or SSE1, less need for doubles than scalars??).

It's a misconception that the SIMD sets only matter for vectorizable workloads.

The main bottlenecks of the average application are the heap manager and the memory move function. And for the latter, it helps a lot if you can use SIMD instructions that move with larger granularity. x86 was always a bit awkward in this regard, compared to e.g. the 68040, which could already move 16 bytes in one instruction in the early nineties.

See e.g. the Fastcode project.
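For illustration, a minimal sketch of such a wide move using SSE2 intrinsics. copy16 is a hypothetical helper; real memcpy implementations additionally deal with misalignment, overlap and tail bytes:

#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>

/* Copies n bytes, 16 at a time. Assumes both buffers are 16-byte
   aligned and that n is a multiple of 16. */
static void copy16(void *dst, const void *src, size_t n)
{
    __m128i       *d = (__m128i *)dst;
    const __m128i *s = (const __m128i *)src;

    for (size_t i = 0; i < n / 16; i++)
        /* movdqa: one 16-byte load plus one 16-byte store */
        _mm_store_si128(d + i, _mm_load_si128(s + i));
}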

> > Registers are not saved properly during a context
> > switch. This lowers theoretical maximal performance since you can't use
> > a unit.

> Some people say that the FPU and MMX are both deprecated in favor of SSE2.

Afaik yes.

> Maybe MS feels the same way, I dunno. And don't forget that SSE4.1 and
> SSE4a are already on the market (with AVX and SSE5 in the planning
> stages). However, a lot of legacy (32-bit, too) code uses the FPU, but for
> whatever reason, computers are moving too fast these days to (commercially)
> worry too much about legacy (although I think it's more of an issue with MS
> than not).

Yes, but you can't introduce a whole new architecture every two years. Till now it has happened twice: the i386 and the x86_64. All x86_64 machines have SSE2.
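For the newer sets you still have to test at runtime. A minimal sketch using GCC's <cpuid.h> helper, with the feature bits as documented for CPUID leaf 1:

#include <stdio.h>
#include <cpuid.h>  /* GCC-specific helper */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;

    /* On any x86_64 CPU the SSE2 bit is guaranteed to be set;
       the newer sets still vary per model. */
    printf("SSE2:   %s\n", (edx & (1u << 26)) ? "yes" : "no");
    printf("SSE3:   %s\n", (ecx & (1u <<  0)) ? "yes" : "no");
    printf("SSE4.1: %s\n", (ecx & (1u << 19)) ? "yes" : "no");
    return 0;
}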

> Microsoft basically started over from scratch with the XBox360 because it
> uses a PowerPC-ish cpu (tri-core 3.2 Ghz) unlike the previous Intel PIII
> Celeron 733 Mhz.

You are always much freer in choosing an embedded CPU. While there might be some legacy concerns, they are generally smaller.

 
