VOGONS


First post, by keenmaster486

Rank: l33t

It doesn't seem to matter whether I compile with the -mh switch, use the huge modifier, or both: if I try to allocate memory for a huge array of chars, I get all sorts of weird behavior symptomatic of the memory never actually having been allocated.

That's one issue; another is that I can't seem to catch any bad_alloc exceptions. Maybe I'm doing it wrong? I'm a little fuzzy on how this works. Yes, I'm enabling exceptions with -xs.

I can't even cause an exception if I catch all exceptions with catch (...) and try to allocate some ridiculous amount of memory, like several gigabytes, to a near or far pointer. What gives?

Example of what I'm trying to do:

#include <iostream.h>
#include <new.h>

unsigned char huge *hugePointer = NULL;

void allocateMemory() {
    try {
        hugePointer = new unsigned char[80000]; // some number greater than 65535
    } catch (bad_alloc &) {
        cout << "Out of memory" << endl;
    }
}

World's foremost 486 enjoyer.

Reply 1 of 14, by Ringding

Rank: Member

I guess it's because size_t is 16-bit: you cannot even create an array with more than 64K entries. Your value gets truncated to its least significant 16 bits. Also, in h/new, it says:

NOTE: Open Watcom currently does not throw std::bad_alloc on allocation failures.
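
To make the truncation concrete: 80000 mod 65536 is 14464, so the allocation "succeeds" but is far smaller than requested, and writes past that end corrupt whatever follows. A quick demo of the effect (my sketch, assuming Open Watcom's 16-bit size_t):

#include <stdio.h>

int main(void)
{
    unsigned long requested = 80000UL;
    unsigned short seen = (unsigned short)requested; /* what a 16-bit size_t keeps */

    printf("requested %lu, allocator sees %u\n", requested, seen);
    /* prints: requested 80000, allocator sees 14464 */
    return 0;
}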

Reply 2 of 14, by keenmaster486

Rank: l33t

Ringding wrote on 2025-04-03, 16:28:

I guess it's because size_t is 16-bit: you cannot even create an array with more than 64K entries. Your value gets truncated to its least significant 16 bits. Also, in h/new, it says:

NOTE: Open Watcom currently does not throw std::bad_alloc on allocation failures.

Okay... so what I'm hearing from this is that OW basically doesn't support the huge model or bad_alloc exceptions? How is the huge model useful if I can't actually allocate more than 64K of memory with it?

World's foremost 486 enjoyer.

Reply 3 of 14, by Ringding

Rank: Member

Welcome to the wonderful world of 16-bit programming 😉

You should be able to allocate more. Try an array of 30000 longs! That should be larger than 64KB.

Reply 4 of 14, by keenmaster486

Rank: l33t

Ringding wrote on 2025-04-03, 16:43:

Welcome to the wonderful world of 16-bit programming 😉

You should be able to allocate more. Try an array of 30000 longs! That should be larger than 64KB.

Haha, well, I can do that, but then I have to extract individual bytes from it, which will slow everything down.

I guess I basically have to write my own memory manager that juggles 64K chunks or something.
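
Maybe something like this rough sketch of the chunking idea (hypothetical and untested; assumes _fmalloc/_ffree from <malloc.h>, with 32K chunks so no single size ever overflows the 16-bit size_t):

#include <malloc.h>

#define CHUNK_BITS 15                 /* 32K chunks */
#define CHUNK_SIZE (1U << CHUNK_BITS)
#define MAX_CHUNKS 16                 /* up to 512K total */

typedef struct {
    unsigned char far *chunk[MAX_CHUNKS];
    int count;
} BigBuf;

/* allocate enough 32K chunks to cover 'bytes'; returns 0 on failure */
int bigbuf_init(BigBuf *b, unsigned long bytes)
{
    int i;
    b->count = (int)((bytes + CHUNK_SIZE - 1) >> CHUNK_BITS);
    for (i = 0; i < b->count; i++) {
        b->chunk[i] = (unsigned char far *)_fmalloc(CHUNK_SIZE);
        if (b->chunk[i] == NULL)
            return 0; /* out of memory */
    }
    return 1;
}

/* a 32-bit index splits into chunk number and offset within the chunk */
unsigned char bigbuf_get(BigBuf *b, unsigned long idx)
{
    return b->chunk[idx >> CHUNK_BITS][(unsigned)(idx & (CHUNK_SIZE - 1))];
}

void bigbuf_set(BigBuf *b, unsigned long idx, unsigned char v)
{
    b->chunk[idx >> CHUNK_BITS][(unsigned)(idx & (CHUNK_SIZE - 1))] = v;
}

(A matching bigbuf_free that calls _ffree on each chunk is left out for brevity.)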

World's foremost 486 enjoyer.

Reply 5 of 14, by amadeus777999

Rank: Oldbie

Isn't a (storage) modifier also required for 'new', or is this handled transparently?

Reply 6 of 14, by BloodyCactus

Rank: Oldbie

keenmaster486 wrote on 2025-04-03, 16:37:

Okay... so what I'm hearing from this is that OW basically doesn't support the huge model or bad_alloc exceptions? How is the huge model useful if I can't actually allocate more than 64K of memory with it?

I'm not a C++ guy, but it works fine in C.

wcl -mh -oneatx -2

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main(int argc, char *argv[])
{
    unsigned char huge *hugePointer = NULL; // huge, since the block spans >64K

    hugePointer = (unsigned char huge *)halloc(128, 1024); // allocate 128kb
    if (hugePointer == NULL) {
        printf("malloc failed\n");
    } else {
        printf("malloc success\n");
        hfree(hugePointer); // release the huge block
    }
    return 0;
}

works just fine.

--/\-[ Stu : Bloody Cactus :: [ https://bloodycactus.com :: http://kråketær.com ]-/\--

Reply 7 of 14, by megatron-uk

Rank: l33t

I too have used the huge memory model in Open Watcom / 16-bit to allocate >64KB (actually enough for a 640x400x8bpp screen buffer).

It definitely works in C via standard *alloc calls as BloodyCactus states.

My collection database and technical wiki:
https://www.target-earth.net

Reply 8 of 14, by keenmaster486

Rank: l33t

Hmm. Maybe I just need to use 2D arrays?
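
One way that could look: an array of row pointers, each row its own small allocation, so nothing ever crosses the 64K limit (hypothetical, untested sketch using _fmalloc from <malloc.h>):

#include <malloc.h>

#define WIDTH  640
#define HEIGHT 400

unsigned char far *rows[HEIGHT]; /* one pointer per scanline */

int alloc_rows(void)
{
    int y;
    for (y = 0; y < HEIGHT; y++) {
        rows[y] = (unsigned char far *)_fmalloc(WIDTH);
        if (rows[y] == NULL)
            return 0; /* out of memory */
    }
    return 1;
}

Pixel access is then just rows[y][x], with plain far-pointer math and no huge arithmetic; the cost is the pointer table plus a little per-allocation heap overhead.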

World's foremost 486 enjoyer.

Reply 9 of 14, by wbahnassi

Rank: Oldbie

Try adding UL next to your 80000, like 80000UL (if the compiler supports this), or put the value in a DWORD variable and make sure it is indeed stored as 80000 and not wrapped around at 16 bits.

Turbo XT 12MHz, 8-bit VGA, Dual 360K drives
Intel 386 DX-33, TSeng ET3000, SB 1.5, 1x CD
Intel 486 DX2-66, CL5428 VLB, SBPro 2, 2x CD
Intel Pentium 90, Matrox Millenium 2, SB16, 4x CD
HP Z400, Xeon 3.46GHz, YMF-744, Voodoo3, RTX2080Ti

Reply 10 of 14, by megatron-uk

Rank: l33t

I just went back and checked the code in question, and while I do have large (>64KB) malloc calls, I noticed the other thing I was doing was defining large static regions as follows:

#define VRAM_END 256000
unsigned char __huge vram_buffer[VRAM_END];

Compiler flags used are:

CFLAGS = -0 -bc -d0 -ox -ml -zq

So it looks like large (>64KB) statics/globals are allocated and work just fine as well. Your particular compiler flags may need to vary, of course.

The code in question uses an off-screen page buffer for a 640x400x256c screen, and then copies one VGA-window sized chunk to hardware at a time, incrementing the VGA window each time until the full screen is page-flipped. VESA linear framebuffer is nicer, of course, but not available everywhere.
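
In outline it's something like this (a simplified, from-memory sketch, not the exact code; assumes a VBE banked 640x400x8 mode with 64K window granularity, window A at A000:0000, and Open Watcom's int86/MK_FP from <i86.h>):

#include <i86.h>

#define SCREEN_BYTES 256000UL /* 640 x 400 x 1 byte */

/* select which 64K of video memory shows through the A000 window */
static void set_bank(unsigned bank)
{
    union REGS r;
    r.w.ax = 0x4F05; /* VBE: set window position */
    r.w.bx = 0;      /* window A */
    r.w.dx = bank;   /* in granularity units (64K assumed here) */
    int86(0x10, &r, &r);
}

void blit_screen(unsigned char __huge *src)
{
    unsigned char far *win = (unsigned char far *)MK_FP(0xA000, 0);
    unsigned long done = 0;
    unsigned bank = 0;

    while (done < SCREEN_BYTES) {
        unsigned long left  = SCREEN_BYTES - done;
        unsigned long chunk = (left > 65536UL) ? 65536UL : left;
        unsigned long i;

        set_bank(bank++);
        /* byte loop for clarity; a real blitter would REP MOVS each
           segment-safe run instead */
        for (i = 0; i < chunk; i++)
            win[(unsigned)i] = src[done + i];
        done += chunk;
    }
}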

My collection database and technical wiki:
https://www.target-earth.net

Reply 11 of 14, by keenmaster486

Rank: l33t

Hmm yes, but doesn’t that inflate the executable size?

World's foremost 486 enjoyer.

Reply 12 of 14, by jakethompson1

Rank: Oldbie

keenmaster486 wrote on 2025-04-04, 13:37:

Hmm yes, but doesn’t that inflate the executable size?

Not if it ends up in .bss. That is, for zero-initialized memory the executable just records how big the region needs to be at runtime, rather than explicitly storing all those zeros.
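
For example (a hypothetical pair of declarations; exact segment names vary by toolchain):

/* zero-initialized: only its size is recorded, nothing stored in the EXE */
unsigned char __huge vram_buffer[256000UL];

/* explicitly initialized: the whole array becomes initialized data that
   the EXE image has to carry (absent an EXE packer) */
unsigned char __huge lookup_table[256000UL] = { 1 };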

Reply 13 of 14, by BloodyCactus

Rank: Oldbie

And if it did, an executable packer would reduce it to like 5 bytes anyway.

--/\-[ Stu : Bloody Cactus :: [ https://bloodycactus.com :: http://kråketær.com ]-/\--

Reply 14 of 14, by Ringding

Rank: Member

Ringding wrote on 2025-04-03, 16:43:

You should be able to allocate more. Try an array of 30000 longs! That should be larger than 64KB.

Not even this: operator new exists only with a size argument of type size_t, and that is 16 bits wide. This seems to indicate that for dynamic huge memory allocation, you have to use different library functions, such as halloc.
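
A minimal sketch of that route, wrapping halloc/hfree from <malloc.h> (hypothetical helper names, untested):

#include <malloc.h>

/* halloc takes (long count, size_t element_size) and returns a
   segment-aligned huge pointer, or NULL on failure */
unsigned char __huge *alloc_huge_bytes(unsigned long n)
{
    return (unsigned char __huge *)halloc((long)n, 1);
}

void free_huge_bytes(unsigned char __huge *p)
{
    if (p != NULL)
        hfree(p);
}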