VOGONS


Installing an SSD without TRIM


Reply 60 of 101, by jmarsh

Rank: Oldbie

Assuming ide-core is compiled into the kernel (and not loaded as a module), you would want to add "ide_core.nohpa=0.0" to the kernel options to make it ignore the HPA on interface 0 device 0. I have to ask though: why bother with HPA? Why not just use small partitions that leave most of the drive unallocated if you don't want to use it?
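For anyone who does want to go the HPA route, on a machine with hdparm available the clipped capacity can be inspected and set roughly like this (a sketch only; the device node and sector count are example values, not taken from this thread):

```shell
# Show the current visible max sectors vs. the native max.
# A mismatch between the two indicates an HPA is set.
# /dev/hda is an example device node - substitute your own.
hdparm -N /dev/hda

# Clip the drive to ~200GB worth of 512-byte sectors. The leading "p"
# makes the setting persist across power cycles (omit it for a
# temporary change that reverts on the next power-on).
hdparm -N p390721968 /dev/hda
```

The kernel typically caches the old capacity, so a reboot or bus re-scan is usually needed before the new size is visible.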

Reply 61 of 101, by darry

Rank: l33t++
jmarsh wrote on 2025-08-12, 04:03:

Assuming ide-core is compiled into the kernel (and not loaded as a module), you would want to add "ide_core.nohpa=0.0" to the kernel options to make it ignore the HPA on interface 0 device 0. I have to ask though: why bother with HPA? Why not just use small partitions that leave most of the drive unallocated if you don't want to use it?

OP will correct me if I am wrong here, but my understanding is as follows.

The idea was to avoid modifying the script that partitions and stages/installs the OS and applications. It might be simpler to do that after all than to putz around with the HPA. Alternatively, resizing partitions after the fact might be an option.

And the point of doing that was to avoid getting an apparently non-fatal error message that occurs at runtime when the OS and apps are installed to a large drive by the aforementioned installer script (correlation was observed, causality was not confirmed, AFAIU).

Reply 62 of 101, by tony359

Rank: Member

@Darry is correct in his understanding.

The box is a custom cinema player which comes with an install DVD. The process uses the whole available space when partitioning. YES, I could manually change the partitions or clone a smaller drive, but ideally I'd love to be able to reinstall the SW without having to take out the drives or use Clonezilla to re-clone them. Ideally, I'd like the final user to be able to stick the automatic DVD in the drive and let it reinstall if required. The installer WIPES the drive, so I cannot prepare smaller partitions in advance, I'm afraid.

The 1TB drive does work, but unfortunately it seems that the largest partition causes some delays which trigger all sorts of error messages. The software will eventually recover, but that's not an ideal situation. I think I have enough evidence now to confirm that a smaller (200GB) drive does not exhibit the same behaviour. Those machines were shipped with 120 or 160GB drives.

I have asked in the Fedora community if someone can point me in the right direction when it comes to the Anaconda install script, but nothing so far. I cannot find a human-readable "install script" on the install DVD, only the post-Linux-installation software installation script.

If this is not feasible then I guess my only option is cloning - and I don't even know whether that is possible, I haven't tested that yet. Chances are Linux will detect the anomaly and revert it! 😁

Darry,
Thanks for your help as usual.

I've managed to get ROOT access to the live box - which is interesting!

Here are the outputs you requested; I hope they're what you asked for.

The attachment IMG_6826 Large.jpeg is no longer available
The attachment IMG_6827 Large.jpeg is no longer available
The attachment IMG_6828 Large.jpeg is no longer available

I found Anaconda's log and I see a separate section being called for partitioning the drives. If I could get access to that installation routine...

The attachment IMG_6829 Large.jpeg is no longer available

The good news, for conversation's sake, is that now formatting that 1TB drive takes 1/3 of the time thanks to the UDMA100 speed 😀

My Youtube channel: https://www.youtube.com/@tony359

Reply 63 of 101, by tony359

Rank: Member

EUREKA!

I think I found the kickstart file for Anaconda. It's inside initrd.img, which is compressed, so macOS refused to open it.

I believe the --grow parameter is the one that tells the installer "if there is more space, use it".

The attachment 912B2E68-2761-4C4E-95B1-941F8464BE3A Large.jpeg is no longer available

A 160GB HDD creates a 138GB "contents" partition. I believe that if I remove "--grow" and change --size=10000 to --size=138000, that should do the trick.
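For reference, the relevant kickstart partition directive would look roughly like this (a sketch based on the thread; the mount point, filesystem type and sizes are my guesses, not the actual file contents):

```
# Before (sketch): allocate at least 10000MB, then grow into all remaining space
part /contents --fstype ext3 --size=10000 --grow

# After (sketch): fixed ~138GB partition, no growing
part /contents --fstype ext3 --size=138000
```

Kickstart also supports capping growth instead of disabling it, via --grow combined with --maxsize.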

I do hope this works because it would save so much headache.


Reply 64 of 101, by megatron-uk

Rank: l33t

Yes, that's a Redhat kickstart file, and the "--grow" flag instructs the partitioner to allocate at least 10000M, but to use the remaining space if there is any.

Alternatively, leave it at 10000 without the "--grow" flag, and then do an online resize by extending the partition to a size you want, post-install, and then run "resize2fs" to grow the filesystem of /contents. You can do this with the filesystem mounted.
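A sketch of that post-install sequence (device and partition numbers are examples, and the exact parted sub-command varies with the parted version):

```shell
# 1. Extend the partition table entry (here, growing partition 5 to 150GB).
#    On older parted releases this was a combined "resize" command instead.
parted /dev/hda resizepart 5 150GB

# 2. Grow the ext3 filesystem to fill the enlarged partition.
#    resize2fs works online, i.e. with the filesystem still mounted;
#    with no explicit size argument it grows to fill the partition.
resize2fs /dev/hda5
```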

My collection database and technical wiki:
https://www.target-earth.net

Reply 65 of 101, by tony359

Rank: Member

I do not have access to the file system (except after resetting the root password using another system) - it needs to be an automatic process, I'm afraid.

I've added a "--maxsize=140000" parameter, it should grow up to that. Now let me see if I can create an ISO that works... 😀


Reply 66 of 101, by megatron-uk

Rank: l33t

Mount the disk in another Linux machine and expand it there, if it doesn't work via the KS method.


Reply 67 of 101, by tony359

Rank: Member

It's compressed, so I was able to expand it, mod the file and - probably - re-compress it keeping all the permissions. But that is an IMG file which belongs inside a larger, bootable ISO file.

I've now moved to Windows to try to replace the file inside an existing ISO - simply because I am not superb at Linux and I was hoping to find a simpler tool in Windows. I'm going to try WinISO.
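For the record, the same repack can be done on Linux. A sketch, assuming a gzip-compressed cpio initrd and an isolinux-booted disc (all paths are example names; installers of this vintage sometimes ship the initrd as a compressed ext2 image instead, which would need loop-mounting rather than cpio):

```shell
# Unpack the initrd, preserving permissions and ownership.
mkdir initrd-root && cd initrd-root
zcat ../initrd.img | cpio -idm

# ... edit the kickstart file here ...

# Repack it in the same (newc) cpio format.
find . | cpio -o -H newc | gzip -9 > ../initrd-new.img
cd ..

# Rebuild a bootable ISO from a copy of the disc tree containing the
# new initrd; the boot image paths are typical for isolinux discs.
mkisofs -o new.iso -b isolinux/isolinux.bin -c isolinux/boot.cat \
    -no-emul-boot -boot-load-size 4 -boot-info-table iso-root/
```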


Reply 68 of 101, by tony359

Rank: Member

SUCCESS! (I think)

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hda2              1984044   1594536    287096  85% /
/dev/hda5            143822264    193944 136204688   1% /contents
tmpfs                   123920         0    123920   0% /dev/shm
/dev/hda6              2972236    861224   1957596  31% /home
/dev/hda10              303344     10369    277314   4% /rwm
/dev/sda1              3899392       140   3899252   1% /mnt/usb

The above is the disk after a full reinstall. hda5 is the one with the "grow" command. It seems to be only ~143GB; previously it was this:

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hda2              1984044   1600332    281300  86% /
/dev/hda5            935400264   5507808 881610308   1% /contents
tmpfs                    62632         0     62632   0% /dev/shm
/dev/hda6              2972236    861232   1957588  31% /home
/dev/hda10              303344     11336    276347   4% /rwm
/dev/sda1             15729664   1217128  14512536   8% /mnt/usb

Now let's hope it works!!!

Thank all SO MUCH for your help on this!

=========================================

Update: and I still have those errors... How incredibly frustrating that is.
If I look at the drive activity LED which the new adaptor features, I see that immediately after reaching the "ready" state the activity light stays on constantly for a few seconds, and that is when the error might appear.

I am not questioning that the size of the drive may play a role in this, but I never noticed this behaviour with the factory PATA drives.

Unfortunately logs don't seem to show much.

Sigh!


Reply 69 of 101, by megatron-uk

Rank: l33t

Yes, that looks to have grown to the maximum extent allowable.

Shame you don't have any means of accessing the terminal while the machine is running, it would be so much easier that way!


Reply 70 of 101, by tony359

Rank: Member

I do if I reset the root password. Not as standard.

I am now evaluating options - I have ordered some 500GB WD drives (didn't realise they were still selling them) but I think it might be wise to go back to exploring the SSD solution. I have tried a random SSD I have here and it just works. When an HDD gets stuck with the green light fully on, the SSD flashes briefly and that's it.

TRIM is the big question mark, 2.6.15 doesn't support it.

Did someone say there was an SSD available which would do "garbage collection" internally, without the need of OS support?

I found this on Crucial website

While Trim is generally good for helping to manage SSD performance and wear in most desktop and notebook environments, it is important to note that Trim is not critical and the improvement may only be marginal. The internal garbage collection algorithms on Crucial SSDs manage deleted data quite effectively. The question of enabling Trim really has to be answered by the user. If you are a casual user who uses your system for Internet, email, and other light tasks, garbage collection built into the firmware of Crucial SSDs will probably be plenty to keep your SSD running fast and healthily. If you are more of a power user who does picture and video editing or other tasks that require a lot of writes, enabling Trim might be more useful to you, because constantly writing workloads do not always allow for regular maintenance from garbage collection. We hope this information helps you decide what is best for you.

I mean, is this correct?


Reply 71 of 101, by megatron-uk

Rank: l33t

Lots of stuff you can do on Linux to mitigate (lack of) TRIM and superfluous writes.

You can mount the /var and /tmp directories on tmpfs so that temporary files and logs are written to RAM instead of disk.
Turn off file access timestamps ('noatime' in /etc/fstab) on all mounted partitions, and even mount some of them read-only if you can get away with it.

Those alone should stop almost all write operations, leaving only your /contents partition for r/w ops.
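An /etc/fstab sketch of those tweaks (device names, filesystem types and tmpfs sizes here are illustrative examples, not taken from the box):

```
# Write-heavy, throwaway locations go to RAM
tmpfs      /tmp       tmpfs  defaults,noatime,size=64m  0 0
tmpfs      /var/log   tmpfs  defaults,noatime,size=32m  0 0

# No access-time updates on the real partitions
/dev/hda2  /          ext3   noatime                    1 1
/dev/hda5  /contents  ext3   noatime                    1 2
```

Note that anything mounted on tmpfs is lost on every reboot, so this only suits logs and temp files the software does not need to persist.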

With more modern kernels you've got more options as well as the ability to manually run discard, so you're a little limited with a kernel that old, but those techniques should still work and buy a lot more life.


Reply 72 of 101, by tony359

Rank: Member

I appreciate the input, of course those are options.

However, the SW for those boxes is quite... crude, and I don't feel comfortable changing things, as who knows what could happen. Changing a partition size is fine - it would change anyway with the disk size. Anything else I feel uncomfortable with, to be honest. Those were very "niche" boxes with very few units sold and very little budget 😀 Not designed for the masses. If this was a box I'd use myself in my home cinema, 100%, I'd play with it, change things and see what happens.

Just as an example, if I plug an HDD via the MB SATA port, the logs stop showing the drive health as they have HDA hardcoded!

I'm going to explore the SSD route - the test SSD I have here seems to be perfectly fine and I see both Crucial and Samsung advertise their own "active garbage collection" on board which should be plenty for a system which writes 1.5GB of data every 6 months - plus logs. 😁

During critical operation, the box will READ only at an incredible bitrate of 800Kbps - plus logs every now and then. 😁

Any feedback is welcome of course.


Reply 73 of 101, by megatron-uk

Rank: l33t

The "noatime" option is a no brainer and should have no ill effects.

The standard on modern systems with SSDs is to use 'relatime'. I can't remember if that was introduced as far back as 2.6.x, but it's worth a try.

Outside of writing to /var, if you have swap disabled then the majority of the system writes will come from updating file access timestamps.


Reply 74 of 101, by megatron-uk

Rank: l33t

Honestly though, I suspect a modern, new SSD will work just fine.


Reply 75 of 101, by tony359

Rank: Member

That seems to be the case - as soon as I plugged in a 1TB HDD, I had those "timeout" errors (which sometimes were so severe that they led to SW corruption - or to something believing there was corruption) pretty often at boot. Now that I am running an SSD, I have rebooted that box many times and also loaded content, and I haven't seen anything.

If modern, good quality, SSDs have some sort of garbage collection in the FW, I am fine with that. Those boxes will sit idle for hours so plenty of time to make some space.

This whole exercise has absorbed most of my time for the past... month? And now I feel a fool, as it was all to avoid the SSD route, but now that seems to be the only route that works? 😁

I still wonder what's wrong with a large HDD - with a small partition - that can upset the SW so much. All that happens at that time is X Window is starting. I can only imagine that X Window is somehow probing the physical geometry of the HDD. I really don't know. I'm sure it would be an easy fix for the developer - if they were still developing that machine!

That said, all this has been fun 😀

I've got a Samsung 870 EVO coming - they still make the 250GB, which is the perfect size.


Reply 76 of 101, by Archer57

Rank: Member
tony359 wrote on 2025-08-12, 19:21:

I am now evaluating options - I have ordered some 500GB WD drives (didn't realise they were still selling them) but I think it might be wise to go back to exploring the SSD solution. I have tried a random SSD I have here and it just works. When an HDD gets stuck with the green light fully on, the SSD flashes briefly and that's it.

TRIM is the big question mark, 2.6.15 doesn't support it.

Did someone say there was an SSD available which would do "garbage collection" internally, without the need of OS support?

I found this on Crucial website

While Trim is generally good for helping to manage SSD performance and wear in most desktop and notebook environments, it is important to note that Trim is not critical and the improvement may only be marginal. The internal garbage collection algorithms on Crucial SSDs manage deleted data quite effectively. The question of enabling Trim really has to be answered by the user. If you are a casual user who uses your system for Internet, email, and other light tasks, garbage collection built into the firmware of Crucial SSDs will probably be plenty to keep your SSD running fast and healthily. If you are more of a power user who does picture and video editing or other tasks that require a lot of writes, enabling Trim might be more useful to you, because constantly writing workloads do not always allow for regular maintenance from garbage collection. We hope this information helps you decide what is best for you.

I mean, is this correct?

My opinion: if an SSD works, it is the best and easiest option. Do not worry about TRIM too much. Yes, lack of TRIM has downsides - it will increase wear and reduce write performance (the "garbage collection" that firmware can do without TRIM is limited, because it does not know which data is no longer needed). But as long as it is just an OS drive with no heavy write workload, it will still last forever, outlasting any HDD and likely the system along with its owners 😁

Technically, the only thing operating without TRIM does to an SSD is make all user-accessible storage appear completely full all the time - the equivalent of just filling the SSD to the brim.

I use SSDs a lot without TRIM, and on OSes which are not aware SSDs exist, like Windows XP. I never bothered with any "optimizations" like disabling access times in the FS or even disabling defrag. I've seen no excessive wear or any negative effects so far. I mostly use dirt-cheap AliExpress SSDs for this too, not something fancy like a Samsung 870 EVO.

There are a lot of myths and fearmongering about SSDs online, mostly because people are psychologically afraid of something limited, like the number of P/E cycles. Most of this is complete nonsense and should be ignored, including all the "optimization" advice. Write endurance was never a factor for home use outside of specialized tasks, TRIM or not...

tony359 wrote on 2025-08-12, 22:17:

I still wonder what's wrong with a large HDD - with a small partition - that can upset the SW so much. All that happens at that time is X Window is starting. I can only imagine that X Window is somehow probing the physical geometry of the HDD. I really don't know. I'm sure it would be an easy fix for the developer - if they were still developing that machine!

That is, indeed, curious. Not sure what it could be. My only guess would be that it is asking the HDD to do something that takes longer on a modern hard drive than on an old one. It may not even be related to size...

Reply 77 of 101, by megatron-uk

Rank: l33t

It is indeed odd. Ext3 had been around a while by kernel 2.6.x and supports fairly huge (for the time) partitions, so it's unlikely to be the filesystem, and no applications that I'm aware of on Linux directly try to work out disk geometry, at least not at that level - they all talk to the filesystem driver.

Still, if an SSD solves it, then I'd just use it and not be overly concerned about the lifespan. You can always look at the mount tweaks in the future if you want to.


Reply 78 of 101, by tony359

Rank: Member

The "heavy write workload" is loading a soundtrack every now and then. With FILM (35/70mm) having already disappeared from the cinema scene, this happens VERY rarely. When it happens, it's at most 2 CDs, so 1.4GB of data. So I guess that filling those 250GB will take a while! 😁

(Though I see the player writes the data to a temp folder and re-indexes it in a new format, so say 3GB per movie.)

I just thought a spinning HDD was the best solution but...

Logs unfortunately don't seem to show what that process running at boot is. I can only speculate it's something to do with X Window, as when a heavy delay happens, X Window fails to start. But then, in rare cases, the whole DTS software crashes, is restarted after about 30 seconds, and sometimes flags an MD5 error which triggers a reinstall.

When I check the version of the software, I see there is a mention of a "Large drive" installed. I wonder whether that means some software tweaks are being applied exactly for my problem? It wouldn't make much sense to write code just to say "yes, there is a large drive inside", unless it was a marketing feature - that is, maybe the player was sold with "small" and "large" drives, so that needed to be reflected in the SW as well.

Out of curiosity, would there be a way to log drive access to see what's going on at boot?


Reply 79 of 101, by megatron-uk

Rank: l33t

If the binaries are on the disk (and there's no guarantee they are), tools like 'lsof' (monitors which files a process has open) or 'strace' (logs the system calls a binary makes) may help to track down what is causing the timeout/crash... but they generate a heavy load and mountains of logs - exacerbating the very issue you've got at hand - so they may actually make finding the problem even harder.
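As a sketch of how those would be used (the process name and PID here are placeholders, not real values from the box):

```shell
# Log every file-related system call the player makes, following child
# processes, into a log on a different filesystem (e.g. the USB stick)
# so the tracing itself doesn't add writes to the drive under test.
strace -f -e trace=file -o /mnt/usb/player-trace.log ./player

# Or attach to an already-running process by PID.
strace -f -e trace=file -p 1234

# Snapshot of which files a running process currently has open.
lsof -p 1234
```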
