
Code Snippets / Fast text file IO

Grismald
Joined: 4th Mar 2003
Location: France
Posted: 13th Aug 2006 00:22 Edited at: 13th Aug 2006 10:20
This snippet is for anyone who wants fast, easy-to-use functions for writing text to a file.
I was inspired to make it after using the file functions posted by Underworld 1020; they do the job well, but the writing part is fairly slow.

You need IanM's Matrix1Array and Matrix1Utils (no. 5) DLLs to make this code work.

FUNCTION LIST
AddStringToBuffer(ArrayPtr as dword,Write$,Pos) `adds a line of text to the buffer array
WriteTxtFile(ArrayPtr as dword,File$,overwrite as boolean) `writes a text file using the buffer data
CountLinesInFile(File$) `returns the number of lines of a text file
ReadStringAtPos(File$,Pos) `returns one line of text from a file as a string

NB: you need to use get array ptr() to get the buffer array address in order to pass it to the first two functions; for more information about this command, see the readme of the Matrix1Array DLL.
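
For reference, a rough usage sketch of the four functions above (illustration only - the file path, array name, and exact get array ptr() syntax here are assumptions, not part of the snippet):

` illustration only - assumes the four buffer functions above plus Matrix1Array's get array ptr()
dim Buffer$(0)                               ` buffer array the functions write into
BufPtr = get array ptr(Buffer$(0))           ` check the Matrix1Array readme for the exact form
AddStringToBuffer(BufPtr, "first line", 1)
AddStringToBuffer(BufPtr, "second line", 2)
WriteTxtFile(BufPtr, "C:\test.txt", 1)       ` 1 = overwrite any existing file
print CountLinesInFile("C:\test.txt")        ` expected: 2
print ReadStringAtPos("C:\test.txt", 2)      ` expected: "second line"
wait key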



Just change the dir$= line to wherever you put the file.
I recommend you run the code at least twice, to see what the overwrite parameter does when set to 0.

If you have any suggestions on how to improve or expand this code, I'd be glad to hear them.

Grismald
Joined: 4th Mar 2003
Location: France
Posted: 26th Aug 2006 13:33
I'm in the process of rewriting this code using memory banks from IanM's DLL.
It works when writing a new file (though it isn't much faster), but it fails when trying to add text to an already existing file.
I get an unhandled error:

It's probably just my fault, but I haven't found what causes this yet.

Here's the code:


On a side note, I did a few speed tests and found that writing 2*n lines of text takes about three times as long as writing n lines; for example, on my computer I get these results:
500 lines -> 7 ms
1000 lines -> 21 ms
2000 lines -> 60 ms
4000 lines -> 178 ms
I have no idea why, though.
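
For a concrete picture, a minimal timing harness in plain DBPro looks something like this (hypothetical path; it times a straight write loop rather than the buffer functions above):

` minimal timing sketch with standard DBPro file commands - not the original test
n = 2000
file$ = "C:\speedtest.txt"
if file exist(file$) = 1 then delete file file$
t = timer()
open to write 1, file$
for i = 1 to n
   write string 1, "line number " + str$(i)
next i
close file 1
print "writing " + str$(n) + " lines took " + str$(timer() - t) + " ms"
wait key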

Here's the speed test code:


IanM
Retired Moderator
Joined: 11th Sep 2002
Location: In my moon base
Posted: 26th Aug 2006 22:00
I don't think that you are going to get great speed gains by dealing with the data on a byte-by-byte basis.

Take a look at how I dealt with strings in the help and keyword files generator in my plugins thread - I worked on the strings in memory until I was ready to write them, then wrote them in one shot all together. Reading was done in the same way - read the whole file as a string, then split by the line-end characters.

Also, I see you are using MAKE BANK FROM FILE. You might want to try using MAP FILE TO BANK instead and see if that helps.
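
As a rough illustration of that one-shot idea using stock DBPro commands (a memblock standing in for the plugin banks, hypothetical file name; this only counts the line ends, but the same scan can pull the lines out):

` read the whole file in one pass, then scan it in memory for line ends
make memblock from file 1, "C:\test.txt"
size = get memblock size(1)
lines = 0
for pos = 0 to size - 1
   ` each line feed (ASCII 10) marks the end of one line
   if memblock byte(1, pos) = 10 then inc lines
next pos
delete memblock 1
print str$(lines) + " lines in the file"
wait key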

Grismald
Joined: 4th Mar 2003
Location: France
Posted: 29th Aug 2006 01:28 Edited at: 29th Aug 2006 01:37
Thanks for the help.

Quote: "I worked on the strings in memory until I was ready to write them, then wrote them in one shot all together"

I'm kind of doing that too; the write bank word stuff is there just to separate lines of text. I tried working with a single string that would eventually contain all the text and then writing the whole thing directly to a file, but that's actually slower.

Quote: "Reading was done in the same way - read the whole file as a string, then split by the line-end characters"

Yeah, I hadn't realised the split string and get split word$ functions existed before I saw your code; they make mine much cleaner now.
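
For reference, the pattern looks roughly like this (the exact split string / get split word$ syntax and the 1-based index are my assumptions from the command names - check the Matrix1Utils readme):

` sketch of the split-and-read pattern; the plugin syntax below is assumed, not verified
text$ = "alpha,beta,gamma"
split string text$, ","       ` split on the comma delimiter
print get split word$(2)      ` expected: "beta" (assuming 1-based word indices)
wait key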

By the way, get split word$ crashes when used with an index that's out of range;
try running this code:


Quote: "You might want to try using MAP FILE TO BANK instead and see if that helps
"

it doesn't help with my current code, but i could have some
use for it later on

Grismald
Joined: 4th Mar 2003
Location: France
Posted: 29th Aug 2006 16:06 Edited at: 30th Aug 2006 01:41
Here's the new and (hopefully) last version of this code:


It's slightly faster, especially for reading.
It'd be nice if some people could try it before I post it to the codebase.

IanM
Retired Moderator
Joined: 11th Sep 2002
Location: In my moon base
Posted: 30th Aug 2006 01:07
Thanks for spotting the bug - I'll deal with it. While I'm looking at that bug, I think I might add some bank search functions too - it seems like they would be useful.

I still think you are missing out by not using MAP FILE TO BANK - it's very flexible, and will do everything that you need it to, including expanding an existing file.

Grismald
Joined: 4th Mar 2003
Location: France
Posted: 1st Sep 2006 14:59 Edited at: 1st Sep 2006 15:00
Quote: "I still think you are missing out by not using MAP FILE TO BANK - it's very flexible, and will do everything that you need it to, including expanding an existing file.
"

I'd like to use it, but it's not the best solution for me at the moment; what I'd need is something like this:
MAP FILE TO BANK Filename, Bank Number, Size Increment
It would create a bank with a size equal to the file size plus the size increment, in bytes.
You could also make a second MAKE FILE FROM BANK function like this:
MAKE FILE FROM BANK Filename, Bank Number, Start Position, Number of Bytes
That could be useful, in my opinion.
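
Written out as calls, the two suggested forms would look something like this (hypothetical - these variants don't exist in the plugin, this is just the proposal above spelled out):

` hypothetical calls for the suggested command forms - not current plugin syntax
MAP FILE TO BANK "data.txt", 1, 1024         ` bank size = file size + 1024 spare bytes
MAKE FILE FROM BANK "out.txt", 1, 0, 512     ` write only bytes 0 to 511 of the bank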

IanM
Retired Moderator
Joined: 11th Sep 2002
Location: In my moon base
Posted: 1st Sep 2006 15:38 Edited at: 2nd Sep 2006 18:53
Simple ...



You'll need to test it, as I typed this in directly. I might add these as built-in commands too, as I've finished the string search functions for banks and they work wonderfully.


EDIT - Sorry, the code won't go into a code box for some reason.
EDIT2 - Aha, fixed it.
