Can Hyperfiles get corrupted?

Arie
Can Hyperfiles get corrupted?
December 04, 2008 07:53PM
Hi,
After a power loss I got the message "index corrupted". Well, that's not so weird, of course. A simple re-index will solve this "problem", and that can even be automated.

I can even reproduce this situation in particular cases by causing a power loss. Not that weird either.

Now I'm wondering if my DATA files can get corrupted too. Has anyone noticed that?

And why is the index corrupt and not the data file? Let's rephrase that: if the data is NOT corrupted, why can't the index stay "ok" as well?

Can some more be said on this subject? I'm talking about HFCS.

Of course my customers are responsible for proper backups and so on. I'm aware of that.

Arie
Jimbo
Re: Can Hyperfiles get corrupted?
December 04, 2008 10:52PM
Hi,

the biggest problem behind corruption of any data files or databases is the Windows cache. Your program doesn't write directly to disk; Windows caches your data (which makes disk operations appear very fast) and only writes it to disk whenever that seems feasible to Windows. Read operations work the other way around: if data is still in the cache, it is read from there. Any power loss makes the system lose the entire cache contents and stops any further writing to disk. Bad enough, hard disk drives have their own cache too, up to 8 MB of it!

Effectively, this makes any data loss unpredictable. What gets lost may be data records or key (index) records. Worse, the losses happen in units of hard disk blocks, which differ in size from data file records! So corruption by power loss WILL make you lose some records and will destroy others. You cannot be sure about what is on disk and what is not; even 'old' records could be lost. I have even seen situations where power failed in the middle of a write operation and destroyed part of the hard disk contents.

What to do? UPS, UPS. Use the good ones, avoid the cheap ones. Set HSecurity() and use HFlush() to force data to disk. Remember that a GPF in the middle of your program can do just as much damage as a power failure. Building really secure systems is non-trivial: the Space Shuttle uses a main computer that is essentially a miniaturized System/360, a design that has been out of service for about 25 years now, but it works. For down-to-earth solutions, your data file record sizes should be a multiple of 256 bytes; if you have got 496 bytes, fill it up with a string to get 512 bytes. You can even switch off the Windows cache, and on some drives you can jumper off the hard disk cache. Test your program for at least 10 times as long as it took to write it.
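Something like this, just an untested sketch (MyCustomer and its items are made-up names, and you should check the exact HFlush() parameters in your help file), shows the idea:

// Sketch only: switch on write security once, pad the record up to a
// 256-byte multiple, and push the buffers out after a critical write.
HSecurity(True)                              // slower writes, but safer

MyCustomer.Name    = "Smith"
MyCustomer.Padding = RepeatString(" ", 16)   // made-up filler item, brings the record up to 512 bytes
HAdd(MyCustomer)
HFlush(MyCustomer)                           // force the buffered data out to the file right away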

Kind regards,
Guenter





Arie
Re: Can Hyperfiles get corrupted?
December 05, 2008 10:13AM
Thanks Guenter, that makes sense.

As with proper backups, I'm not responsible for hardware issues like a UPS and so on. I always tell my clients to look at these things.
Indeed, 100% security hurts performance and raises costs. We all want "the best" for "nothing", but that's not the real world.

I already use HFlush() at some points in my software.
Do you know the difference between HSecurity(1) and HSecurity(2)? The help file is not that helpful on this one.

Arie
Frans
Re: Can Hyperfiles get corrupted?
December 05, 2008 10:41AM
Hello to you all,

Recently I got a data file in which the auto-identifier had duplicate numbers.
The customer discovered the problems after one week.
Reindexing messed up the whole file, without a word about an error!

Question: Is there an easy and fast way to check for such a failure?
What do others do?

Regards and thanks in advance,
Frans
Frans
Re: Can Hyperfiles get corrupted?
December 05, 2008 10:45AM
Hello Arie,

The help-file says:
0 or False (by default): Security mechanism disabled. The speed of the write-to-file operations is maximum.

1 or True: Security mechanism enabled: The speed of the write operations is slower than when using the HSecurity(0) option, but security is ensured when writing into data files.

2: Maximum security mechanism: The speed of the data file write operations is slower than with the HSecurity(1) option.

I don't know what the technical difference is. Testing the speed should be simple:
set it to 1 or 2 and work with your application to 'feel' the difference.
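If you want numbers instead of a feeling, a quick and dirty timing loop could look like this (untested sketch: MyFile and its Label item are made up, and ChronoEnd() may return a duration type rather than milliseconds depending on your version):

// Sketch only: time the same batch of writes under each HSecurity level.
nLevel is int
i is int
nMs is int
FOR nLevel = 0 TO 2
    HSecurity(nLevel)
    ChronoStart()
    FOR i = 1 TO 1000
        MyFile.Label = "test " + NumToString(i)
        HAdd(MyFile)
    END
    nMs = ChronoEnd()                        // elapsed time since ChronoStart()
    Trace("HSecurity(" + NumToString(nLevel) + "): " + NumToString(nMs) + " ms")
END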

Regards,
Frans
Arie
Re: Can Hyperfiles get corrupted?
December 05, 2008 11:01AM
Frans,
that's just what I'm trying from now on, for some days/weeks. I started by turning off the Windows cache.
I will keep you all informed...

Arie
Jimbo
Re: Can Hyperfiles get corrupted?
December 05, 2008 11:05AM
Hi Arie,

they say that HFlush() closes and re-opens the file and thereby forces a more or less immediate write to disk. Whenever a file is closed, the Windows cache has to clear all buffers for that file. However, HFlush() doesn't seem to do anything beyond that, like circumventing the Windows OS and writing to disk directly.

I'm not sure what the exact difference between the three HSecurity() levels is, and the docs are rather vague on this.

The WinDev 5.5 documentation says that 0 gives the fastest write access, that 1 is a compromise between speed and safety, and that 2 gives maximum data protection: data is written physically to the file(!) immediately, but no close/reopen happens. Level 2 slows down write operations considerably. I just don't believe the wording 'physically to the file'... maybe it tells Windows to do the write operation immediately.
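So in practice I would only pay that price around the really critical writes, roughly like this (untested sketch: Invoice and its Total item are made-up names):

// Sketch only: raise the security level just for the critical write,
// then drop back to the normal speed/safety compromise.
HSecurity(2)             // maximum security for the next write operations
Invoice.Total = 1234.50
HModify(Invoice)
HFlush(Invoice)          // plus the close/re-open style flush described above
HSecurity(1)             // back to the compromise level afterwards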

However, Windows is Windows is Windows. It's a consumer operating system with some features for business customers, but it is no RTOS (Real Time Operating System), which would guarantee that data gets where it should go.

Guidelines: use a good UPS, use a good power supply for the computer, take care of proper grounding of the mains plug, get a multimeter and check for voltage differences between the cases of the computers in the network, use computers with trustworthy hardware parts, look into the BIOS settings and set them to safe operation, use RAM with parity checking and a motherboard that handles the parity checking, use RAID 1 disks (mirroring), and make your software as safe and as trustworthy as possible.

Kind regards,
Guenter
Jimbo
Re: Can Hyperfiles get corrupted?
December 05, 2008 11:14AM
Quote
Frans
Hello to you all,

Recently I got a data file in which the auto-identifier had duplicate numbers.
The customer discovered the problems after one week.
Reindexing messed up the whole file, without a word about an error!

Question: Is there an easy and fast way to check for such a failure?
What do others do?

Regards and thanks in advance,
Frans


Hi Frans,

I'm using autoIDs only where they cannot do much harm. Otherwise, I build my own 'autoIDs', which make much more sense because they contain key information about the record.
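For example, something along these lines (untested sketch: Orders and its items are made-up names, and the counter is something you maintain yourself):

// Sketch only: build an 'autoID' that carries key information itself,
// e.g. customer code + date + a per-day counter.
// Assumes Orders.CustomerCode was filled in before this point.
nDailyCounter is int = 1                     // counter you maintain yourself, per customer and day
sNewID is string = Orders.CustomerCode + "-" + DateSys() + "-" + NumToString(nDailyCounter, "04d")
Orders.OrderID = sNewID
HAdd(Orders)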

I think you should search the forum; there were some solutions to autoID problems posted.

Regards,
Guenter
Malc
Re: Can Hyperfiles get corrupted?
December 05, 2008 01:35PM
Hi Arie

Are you using transactions?
Transactions should allow the database to recover to the last 'good' data save in case of system failure, so you should always use transactions, even if you are only updating one field in one record.

Note:
I haven't used HFCS, so I have no view on how it handles transactions, but as the commands are in the language, I assume it works like other C/S databases.
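In WLanguage terms I would expect it to look roughly like this (untested sketch: I'm assuming the HTransactionStart/HTransactionEnd/HTransactionCancel function names, and Orders/OrderLine are made-up files, so check your help file):

// Sketch only: wrap related writes in one transaction so a crash leaves
// either all of them or none of them in the files.
HTransactionStart()
IF HAdd(Orders) AND HAdd(OrderLine) THEN
    HTransactionEnd()      // commit: both records are kept together
ELSE
    HTransactionCancel()   // roll back: neither record is kept
END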


Cheers

Malc
Geert Debruyne
Re: Can Hyperfiles get corrupted?
December 09, 2008 09:01AM
Hi all,

I think the real issue is to write secure software... the impact of disabling the cache on hard drives can be very bad for performance... I would not suggest this!

We have about 8 years of experience in using WinDev with several types of databases, including 'simple' Hyperfiles. The only times we really had problems with file consistency were when the software was used on a network with a mix of Win'98, WinXP, Vista, ... because all of those systems use Windows caching in a different way: this is a real problem for keeping files consistent. So now we always tell our customers that they should move to ONE and only ONE OS...

After all these years, the only database we work with that never had any problem, and where not a single transaction was lost when a hardware problem occurred, is DB2/400 (on i5 systems). This DBMS is a pleasure for software developers to work with...

Greetz,
Geert
Jimbo
Re: Can Hyperfiles get corrupted?
December 09, 2008 09:14AM
Hi Geert,

thank you for mentioning this! YES, you must NOT use a Win9x 'server' with WinXP clients. The built-in locking mechanisms of 9x Windows and NT-type Windows are incompatible and will inevitably lead to corrupted files!

The other way around is just fine and supported: a WinNT/XP/Vista computer holding the database (aka the server) with mixed WinXP and Win9x clients is OK.

Kind regards,
Guenter
Steven Sitas.pcs.crosspost
Re: Can Hyperfiles get corrupted?
December 11, 2008 12:51PM
Being a Clarion developer as well, I have heard many times about "Windows cache issues" and TPS files.
For me, if a file system has problems with the "cache" or anything else triggered by the OS, don't use it.
I have been using Btrieve (ISAM and C/S) for over 20 years and I have never, ever seen any data corruption. My users don't care about "cache settings" and many of them don't even have a UPS.
How Btrieve does it, I don't know, but it is real...

