
DarkBASIC Professional Discussion / Poor Performance over time

BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 13th Dec 2008 17:43 Edited at: 13th Dec 2008 17:44
I have tracked the performance of my test routine over time. The chart below shows how the time for the same process grows from around 10 seconds per 100 iterations to 30+ seconds. I'm looking for any suggestions on what might be causing this, and whether it is in fact fixable.



The process is:

1. Create a set of arrays and variables to track entities.
2. Import a 640x640 image as a bitmap.
3. Chop the bitmap into a 10x10 grid of images (100 in total), then delete the bitmap.
4. Create 20 plains, and texture them with a selection of the previous 100 images.
5. Display the plains.
6. Delete the plains.
7. Delete the textures.
8. Delete the data elements, arrays etc.
9. Repeat.

Each measurement on the chart represents the creation of 2000 plains and 10,000 images.
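
For reference, a minimal DBPro sketch of one pass through that loop - the file name, ID ranges and plain sizes are illustrative guesses, not BatVink's actual code, and the array handling of steps 1 and 8 is omitted:

rem --- Illustrative sketch (not the original code): one pass of the stress loop ---
sync on : sync rate 0
t = timer()
for iteration = 1 to 100
   rem steps 2-3: import the 640x640 bitmap and chop it into 100 images
   load bitmap "tiles.bmp", 1
   img = 1
   for gy = 0 to 9
      for gx = 0 to 9
         get image img, gx*64, gy*64, gx*64+64, gy*64+64
         inc img
      next gx
   next gy
   set current bitmap 0
   delete bitmap 1
   rem steps 4-5: create 20 plains textured with a selection of the 100 images
   for p = 1 to 20
      make object plain p, 10, 10
      texture object p, 1 + rnd(99)
   next p
   sync
   rem steps 6-7: delete the plains and the textures
   for p = 1 to 20 : delete object p : next p
   for i = 1 to 100 : delete image i : next i
next iteration
print "100 iterations: " + str$(timer() - t) + " ms"
wait key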

I have monitored memory, and it remains constant. I have checked array elements etc., and I am deleting them entirely each time. So I am assuming the process of creating and deleting causes problems over time.

BlobVanDam
Joined: 26th Jul 2008
Location: one of Cybertrons moons
Posted: 13th Dec 2008 18:02
Well my first idea would have been it using more memory, but obviously that was your first thought too. Have you tried to narrow down which part of the process is slowing down? Maybe check each section of your process over time and see how long they take if possible?
It sounds like you've covered most of what you can check though, so I'm inclined to think it's not through a fault of yours, but I don't know what process would slow down over time without being memory related.
Green Gandalf
VIP Member
Joined: 3rd Jan 2005
Playing: Malevolence:Sword of Ahkranox, Skyrim, Civ6.
Posted: 13th Dec 2008 18:17
Any reason to keep creating/deleting plains and bitmaps rather than reusing them?

Only thing I can think of is that DBP is maintaining an internal list of such things and that list just keeps growing even though the objects themselves have been deleted - but I would expect that to show up in the memory report.
Green Gandalf
VIP Member
Joined: 3rd Jan 2005
Playing: Malevolence:Sword of Ahkranox, Skyrim, Civ6.
Posted: 13th Dec 2008 18:51
Quote: "Yes, memory fragmentation."


Ah! Hadn't thought of that. Is there a way to clear it? What about flush video memory or similar?
BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 13th Dec 2008 21:19
Quote: "Any reason to keep creating/deleting plains and bitmaps rather than reusing them?"

In this case, this is my stress test. I'm running the process 10,000 times just to make sure it has no adverse side effects. Obviously a wise test to perform!

This is quite a generic system - think of it as a game template. So I can't predict exactly how it may be used in every scenario and thus must account for as many possibilities as I can.

Quote: "What about flush video memory or similar?"


Funnily enough, that was the last thing I tried as I posted. It made no noticeable difference.

I think I may re-run smaller parts of the system as suggested.

BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 13th Dec 2008 22:16
Well...I can confirm that the issue occurs once I introduce the plains. Loading the image file as a bitmap, chopping it up and creating the images does not have any negative impact.

This time around, I also switched off all of my debug output to the console and file, so these are not to blame either.

Mistrel
Retired Moderator
Joined: 9th Nov 2005
Location:
Posted: 13th Dec 2008 22:23
Source code?

BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 13th Dec 2008 23:18
Quote: "Source code?"


About 6000+ lines. It's not possible to extract the part that is causing problems, as it is a series of systems that build on one another. However, I can say that the bit that causes trouble is well summarised above.

Green Gandalf
VIP Member
Joined: 3rd Jan 2005
Playing: Malevolence:Sword of Ahkranox, Skyrim, Civ6.
Posted: 13th Dec 2008 23:51
Sounds like it's a version of the old "delete object" slow-down problem. Quite why this happens, I can't recall.

I vaguely recall there was a big discussion about this issue about two years ago - might be worth a search for "delete objects" etc.
BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 13th Dec 2008 23:56 Edited at: 14th Dec 2008 00:12
This code seems to prove that Creating & Deleting plains is not the culprit in isolation. This has no issues.
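
(The snippet originally attached here hasn't survived. A minimal harness matching the description - plains created and deleted in isolation, logging the time per 100 iterations - might look like this; the IDs and the log file name are illustrative:)

rem --- Illustrative stand-in for the missing snippet: plains only, timed ---
sync on : sync rate 0
if file exist("results.txt") then delete file "results.txt"
open to write 1, "results.txt"
for block = 1 to 100
   t = timer()
   for n = 1 to 100
      for p = 1 to 20 : make object plain p, 10, 10 : next p
      sync
      for p = 1 to 20 : delete object p : next p
   next n
   rem log the milliseconds taken by this block of 100 iterations
   write string 1, str$(block) + "," + str$(timer() - t)
next block
close file 1
end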



I'll keep adding the other parts until it takes a nosedive again.

[EDIT] I'm beginning to think it's the creation / deletion of arrays and array elements that is slowing it down.

Green Gandalf
VIP Member
Joined: 3rd Jan 2005
Playing: Malevolence:Sword of Ahkranox, Skyrim, Civ6.
Posted: 14th Dec 2008 00:15 Edited at: 14th Dec 2008 00:19
The plot thickens.

Your original chart suggests you were running the program for about 5 hours. Could something else be happening on your machine during that time? In fact the chart suggests something "kicked in" after about 8000 iterations. Have you been able to reproduce that finding?

I sometimes get caught out by my machine when the background system optimization utilities decide it's time to clean out my registry etc. I find myself cursing the slowdown till it dawns on me what's happened.

AV software?
Automatic updates?
Etc, etc?

Edit: Now I think about it, the "delete objects" slowdown I was muttering about was caused by having large numbers of objects - the time per object deleted tended to increase as the number of objects increased, because it took longer to update the new internal lists of objects etc. As you have observed, that doesn't seem to be your problem.
BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 14th Dec 2008 00:40
The test was about 10 - 15 minutes, and it can be reproduced time and time again. So I think I can exclude other background jobs. As the graph shows, the slowdown is also consistent as time progresses, rather than peaking as something else kicks in.

The max number of objects is 20 at any time. They get deleted and recreated thousands of times though. I wonder if, as you say, some "list" is not being cleared.

Green Gandalf
VIP Member
Joined: 3rd Jan 2005
Playing: Malevolence:Sword of Ahkranox, Skyrim, Civ6.
Posted: 14th Dec 2008 01:31
Quote: "The test was about 10 - 15 minutes"


??

Quote: "the same process increases in time from around 10 seconds per 100 iterations, to 30+ seconds for the same process"


Your original chart shows 58000 somethings ("iterations" presumably??) along the X axis. So taking an average figure of about 20 seconds per 100 would mean a total of about 20 * 58000/100 seconds, i.e. about 3 hours. But I guess your powers of ten were wrong?

Numbers are tricky things.
jason p sage
Joined: 10th Jun 2007
Location: Ellington, CT USA
Posted: 14th Dec 2008 08:18
I recall a debate of sorts over this type of scenario and it was hard for any of us to nail it... some people denied it exists, others saw it, then didn't care... (validating something awry kinda)...the "specific" thing we were ruminating over was creating and deleting objects over time got slower.

Your "reduced test" with just plains and no ill effect does not mean there is no issue there... I'll revisit in a minute...

The trouble with this issue is that this is a closed source system... which is fine... it just makes it hard to investigate thoroughly (not to state the obvious) LOL... Seriously though - I always had some reservations about the array system in DBPro... I mean it's actually pretty awesome in functionality - but I wonder about its ability to stay lean and mean. I also think there is something to memory fragmentation possibly, as well as the object creation/deletion issue...

Now... it's possible the limited test of creating/destroying plains alone shows no ill effects because the "object" has a fixed size, so creating and releasing that memory a zillion times is OK: the next "request" for memory would be the same size as a previously released chunk, and therefore fragmentation would have no ill effects. This is why in my homegrown C++ dynamic array class "template" I tend to use sorta large "allocation units", so that when freed, they leave behind a sizable "hole" of free mem - which makes that mem chunk more readily usable again by other memory requests... as tiny little 1k blocks of free memory scattered about are less usable than, say, 8k chunks or bigger.

Nowadays some will claim that doesn't matter... but I do think that either the internal arrays get "messy" over time and/or the object "lists" get out of whack... or a combination of these.

In short I'm trying to say that the combination of two or more of your repetitive steps together might show this break down much better than each piece individually.

I really would like to hear more about this - as this is related to DBPro's core most likely... a core that to my knowledge is shared across TGC products... and problems aren't so bad if one knows how to cope.. and this thread could easily turn into a "best practice" topic as details unravel!

--Jason


BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 14th Dec 2008 10:27 Edited at: 14th Dec 2008 11:08
@GG: It's my explanation! I take a measurement every 20x100 objects - 2000 objects. Thus at the start of the test, 2000 objects are taking around 10 seconds; by the end the same number of objects are taking around 30 seconds. The original graph represents about 7.5 minutes of the test. Hope this explains the anomalies!

@Jason: Thanks for the input, some useful ideas there. I recall the last thread too, I think I got drawn in.

I have reintroduced a few more elements, but as yet no luck in reproducing the problem out of context of my larger source. Here's the latest, which includes some manipulation of the objects, and a simple array.
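
(Again the attachment is gone; an illustrative guess at the additions described - some manipulation of the objects plus a simple array created and destroyed per pass:)

rem --- Illustrative: plains plus manipulation plus a throwaway array ---
sync on : sync rate 0
for n = 1 to 100
   dim data(999)
   for p = 1 to 20
      make object plain p, 10, 10
      position object p, rnd(100) - 50, rnd(100) - 50, rnd(100)
      yrotate object p, rnd(359)
      data(p) = p
   next p
   sync
   for p = 1 to 20 : delete object p : next p
   undim data()
next n
end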



wildbill
Joined: 14th Apr 2006
Location:
Posted: 14th Dec 2008 14:40
I'm no memory expert, but you could try this just to see if the problem continues.

Since you most likely know the size of the array in advance, try making the array the correct size ahead of time and nulling the values instead of deleting the elements. This should stop any fragmentation of memory. (A sketch follows.)
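
A sketch of that idea, assuming integer slots where 0 means "empty" (the array and function names are invented):

rem --- Pre-size once; "deleting" just nulls the slot, so memory never churns ---
dim entities(999)
slot = FindFreeSlot()
rem claim the slot (the value is an arbitrary illustration)
if slot >= 0 then entities(slot) = 12345
rem "delete" = null the slot; the array itself never shrinks or moves
if slot >= 0 then entities(slot) = 0
end

function FindFreeSlot()
   for i = 0 to 999
      if entities(i) = 0 then exitfunction i
   next i
endfunction -1
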
Mistrel
Retired Moderator
Joined: 9th Nov 2005
Location:
Posted: 14th Dec 2008 22:48
We need reproducible code, BatVink. Otherwise how can we know that it's not a logic error?

BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 15th Dec 2008 10:17 Edited at: 15th Dec 2008 10:18
As you can see, I'm trying to reproduce the error in isolation. I can't actually release the code that is causing it, and if I did, it would take a week to understand it before you could look for the problem.

I am currently rewriting part of the code, where I have noticed I didn't follow my own standards. Then I'll re-test and see what happens.

I am coming around to the idea of memory fragmentation, which I'd never even thought about until this thread. The section I am rewriting could indeed cause this. WildBill's comment is a valid one, although in reality I am using a system where arrays are very rarely resized. But my little rewrite will eradicate this scenario further still.

I am also going to set a minimum array size, so that even if just 10 elements are requested for example, the minimum it will create is 500. Then I am allocating bigger chunks of memory that are reusable if released.

I will let you know the results, and what the resolution was if it works.

Van B
Moderator
Joined: 8th Oct 2002
Location: Sunnyvale
Posted: 15th Dec 2008 10:34
I'll go back to my old thinking, that it's deleting objects that causes these problems - deleting them in specific circumstances. Maybe once you texture an object with code, you can't delete it neatly - something along those lines.

I know that adding array elements can be really slow but you'd have to do a lot of that for it to impact so highly. Something I would do if I was you is add array elements in blocks of 32 or something - so when you run out of space, you create another 32 elements then fill them up as you need them. I think it is better to have a long delay than 32 short delays.
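
A sketch of that grow-in-blocks idea, with made-up names, assuming a repeat DIM resizes the array in place and keeps its contents (as IanM notes later in the thread) - one DIM per 32 additions replaces 32 single-element grows:

rem --- Grow the array 32 elements at a time instead of one per addition ---
global used as integer
dim items(31)

function AddItem(value)
   if used > array count(items())
      rem out of space: one big grow, existing data preserved
      dim items(array count(items()) + 32)
   endif
   items(used) = value
   inc used
endfunction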

Not sure if you can tell, but are these plains showing any other problems, like being fully white and untextured?


Health, Ammo, and bacon and eggs!
BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 15th Dec 2008 10:52
Quote: "I know that adding array elements can be really slow but you'd have to do a lot of that for it to impact so highly"


Well, in this stress test I'm adding and deleting blocks of 1,000 elements.

Quote: "Something I would do if I was you is add array elements in blocks of 32 or something "


I am doing that in most cases. The rewrite I am talking about above is to one section where I didn't do that. I'll let you know if it works.

Quote: "Not sure if you can tell, but are these plains showing any other problems, like being fully white and untextured?"


No, although I remember that discussion a while ago.

Just about to test on a different machine with a different processor and graphics card.

Morcilla
Joined: 1st Dec 2002
Location: Spain
Posted: 15th Dec 2008 11:10
Well, I've tested the last code snippet above and the time was more or less constant. The average value in results.txt is 1470, and the highest has been 1500.

I agree that memory management alone cannot take away that amount of time; perhaps there is something else in the original code...
BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 3rd Jan 2009 17:06 Edited at: 3rd Jan 2009 17:10
It took a while, life and other things got in the way. But I have now fixed the problem, and improved the performance by about 500%!!!!

Here are the new results against the old. The red line is the new timing - as you can see it's consistent (the main goal), but it's also outperforming the old method like a Lotus racing a snail!



How did I fix it? I'm now using arrays that are predefined rather than increased in size each time I need a new object. I'm using the "stacks" method that IanM has told me about many times, and I ignored through laziness.

So thanks again for the input, I hope this helps others in similar situations!

jason p sage
Joined: 10th Jun 2007
Location: Ellington, CT USA
Posted: 3rd Jan 2009 17:24
Good Stuff... Reminds me of how my array class and the STDLIB Vector (dynamic array) class get their speed - you allocate "CHUNKS" of items at a time... then no more are allocated until you fill it... This basically gives you a "PREDEFINED" array in memory until you need more than you have - while it allocates the next chunk of empty items, and moves all the items ya have - this part is chuggy... but these classes let you "Predefine" any size you want to avoid the "Dynamic" growth part if you like...

I guess my point is I can see how the dynamic arrays in DBPro might be a bit "chuggy"... and I'm aware of the concept of a stack system... where you iterate through a smaller subset of items, I suppose maintained in a smaller array or what have you.

I definitely think this technique would outperform iterating through the master list of (say) objects like we discuss here, or even a linked-list approach, although a stack array that is an array of pointers to the objects in the other array would be blazing fast and use fewer resources than copying all the data, naturally.

To sum up my bantering - I think users of DarkGDK or DarkGDK.Net will benefit from the Stack Approach also.

--Jason

BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 3rd Jan 2009 17:43
That's pretty much it in a nutshell. The other thing I do is "clean up" the main array whenever it needs expanding. So if the coder has failed to release the elements entirely (it's a 2 step process to allow objects to be physically deleted at a more appropriate time), it does the cleanup for them, and may result in the stack size remaining the same.

IanM
Retired Moderator
Joined: 11th Sep 2002
Location: In my moon base
Posted: 3rd Jan 2009 19:19
Is this the piece of code I gave you that showed how to use a list-array as a stack?

If that's the case, and it's definitely these changes that fixed the problem, then WK was right about the memory fragmentation being the culprit.

I thought that the Windows memory allocator was better than that - increasing the size of an existing allocation is something I would have thought happens fairly often, and for the allocator not to have a strategy to deal with that relatively efficiently is surprising.

jason p sage
Joined: 10th Jun 2007
Location: Ellington, CT USA
Posted: 3rd Jan 2009 20:19
Hmm... Especially since you are creating and destroying "things" that are the same... or are you? Point I'm making is that if you allocate and destroy thousands of items that are the same size in memory - MOST mem managers don't get frag'd. Hmm.

--Jason

BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 3rd Jan 2009 20:43
Quote: "Is this the piece of code I gave you that showed how to use a list-array as a stack?"


Not sure of the terminology, but it's a main array, and 2 others to list the "Available" and "In Use" array elements. You pick from one and place on the other. The "In Use" stack is used to reference the main array when looping through all of the active ones.
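
For anyone following along, a minimal sketch of that arrangement with invented names - one data array plus an "Available" stack and an "In Use" list of indexes. (The real system also pre-sizes generously and grows on overflow; see further down the thread.)

rem --- One data array plus two index stacks: "Available" and "In Use" ---
#constant MAXENT 999

type Entity
   obj as integer
   active as integer
endtype

dim entities(MAXENT) as Entity
dim available(MAXENT)
dim inUse(MAXENT)
global availTop as integer
global inUseCount as integer

rem start with every index on the Available stack
for i = 0 to MAXENT
   available(i) = i
next i
availTop = MAXENT
inUseCount = 0

rem pop a free index from Available and record it as In Use; -1 = stack empty
function AcquireEntity()
   if availTop < 0 then exitfunction -1
   idx = available(availTop)
   dec availTop
   inUse(inUseCount) = idx
   inc inUseCount
   entities(idx).active = 1
endfunction idx

rem push the index back onto Available and drop it from the In Use list
function ReleaseEntity(idx)
   entities(idx).active = 0
   inc availTop
   available(availTop) = idx
   for i = 0 to inUseCount - 1
      if inUse(i) = idx
         inUse(i) = inUse(inUseCount - 1)
         dec inUseCount
         exit
      endif
   next i
endfunction

Looping through the active entities then means walking inUse(0 to inUseCount-1) instead of scanning the whole data array.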

Quote: "Especially since you are creating and destroying "things" that are the same... or are you? "


Not really. Without this method I was using ADD TO QUEUE, so I would be destroying one array and replacing it with the array plus a few extra bytes each time.

Green Gandalf
VIP Member
Joined: 3rd Jan 2005
Playing: Malevolence:Sword of Ahkranox, Skyrim, Civ6.
Posted: 3rd Jan 2009 20:43
Quote: "then WK was right about the memory fragmentation being the culprit"


Yes, he knows some useful stuff. Shame he was always on such a short fuse and has recently gone off in a huff.
gamerboots
Joined: 8th Dec 2008
Location: USA
Posted: 4th Jan 2009 01:03
Quote: "arrays that are predefined rather than increased in size each time I need a new object."
OK, I was going to post somewhere else about this subject, but I have a comment and a question. My comment is that this little memory leak thing was like a recursive error, in that it kept allocating more and more memory due to the increasing size of the array. My question is:
I am familiar with the DIM command, which makes an array of a set size, but how to resize that array is another story. Correct me if I'm wrong, but will the code below work and preserve all the previously stored information?




My second question is, I am not familiar with the ADD TO QUEUE command - would you explain that one too please?

gamerboots~
jason p sage
Joined: 10th Jun 2007
Location: Ellington, CT USA
Posted: 4th Jan 2009 01:15
Tons of dynamic arrays in my DBPro IronInfantry download - your code snip should blast away all data... and I don't think that would compile... maybe - but it looks like a redeclaration - I think REDIM might be the command to use... but...

here's the IIBullet.dba code from IronInfantryDBPro below - it has adding and deleting from an array dynamically.


gamerboots
Joined: 8th Dec 2008
Location: USA
Posted: 4th Jan 2009 01:32
I can't find a REDIM in the help and it doesn't appear as a keyword in the compiler window.

gamerboots~
IanM
Retired Moderator
Joined: 11th Sep 2002
Location: In my moon base
Posted: 4th Jan 2009 01:51
Check my codebase entries for example usage of queues, stacks and lists.

There's no REDIM command in DBPro - DIM does the job. If you actually want to clear the data and resize at the same time, you have to either UNDIM or EMPTY ARRAY first, then DIM to the correct size.
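
A quick illustration of the difference (a sketch based on that description):

dim a(9)
a(5) = 42
rem resize in place - existing contents are kept
dim a(19)
print a(5)
rem prints 42; now destroy and recreate, clearing the contents
undim a()
dim a(19)
print a(5)
rem prints 0
wait key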

gamerboots
Joined: 8th Dec 2008
Location: USA
Posted: 4th Jan 2009 02:17 Edited at: 4th Jan 2009 02:37
@jason
Thanks, your examples along with IanM's are very good.
@IanM
Thanks a bunch Ian, I think that's pretty much what I was looking for.

After reading some of your codebase, correct me if I'm wrong, but as I understand it "dim a() as integer" will declare the dynamic array and "empty array a()" basically initialises it (I had to do something similar to this in C#), but I didn't see an EMPTY ARRAY command in Jason's code.

Now suppose I wanted to save that dynamic array to a file, and then load it back in again. Am I correct in assuming that I will have to get the array's index count and store that variable somewhere (like in a separate file) before using the SAVE/LOAD ARRAY commands?

gamerboots~
IanM
Retired Moderator
22
Years of Service
User Offline
Joined: 11th Sep 2002
Location: In my moon base
Posted: 4th Jan 2009 16:18 Edited at: 4th Jan 2009 16:20
You don't need to run EMPTY ARRAY on an already empty array - that's why you don't see it in Jason's code.
EMPTY ARRAY is a cleardown command, not an initialisation command.

The loading and saving of arrays ... well this is my opinion ... the existing commands are almost useless because of two major limitations. First, you can't combine the array data with another file (whether custom data or other arrays) and secondly, you can't save UDT arrays at all.

My suggestion would be to write custom array read/write functions that do all the saving/loading for you to a file using the built-in file I/O, or my file I/O plug-in. Unfortunately, you'll need to write specific save/load functions for each array - maybe later I'll add some generic commands in my array plug-in to do the job more cleanly.
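
By way of example, a sketch of such a pair of functions for a plain integer array, writing the element count as a header so the array can be re-DIMmed on load (the file layout and names are invented; a UDT array would need one write per field). Storing the count in the same file also avoids the separate file gamerboots asked about:

dim scores(99)

function SaveScores(f$)
   if file exist(f$) then delete file f$
   open to write 1, f$
   rem header: the array's top index
   write long 1, array count(scores())
   for i = 0 to array count(scores())
      write long 1, scores(i)
   next i
   close file 1
endfunction

function LoadScores(f$)
   open to read 1, f$
   read long 1, top
   rem size the array from the header before reading the elements
   dim scores(top)
   for i = 0 to top
      read long 1, v
      scores(i) = v
   next i
   close file 1
endfunction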

gamerboots
Joined: 8th Dec 2008
Location: USA
Posted: 4th Jan 2009 18:38
@IanM
You and Jason have been really awesome about explaining things to me, and it's very much appreciated. I can now use dynamic arrays. It does rather bite that you can't save UDTs; however, I have already made some code to save a dynamic array. It wouldn't have been possible at all without you guys' help. Again, thanks a bundle.

gamerboots~
jason p sage
Joined: 10th Jun 2007
Location: Ellington, CT USA
Posted: 4th Jan 2009 20:15
You're welcome - speaking for myself - I enjoy when I can help someone get a concept and they are all excited and moving forward full speed ahead in their projects LOL... People help me - just trying to give back here and there

@BatVink - I AM sorry about the Hijack...

--Jason

BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 4th Jan 2009 21:51
This is no Hijack - it's the way threads should evolve

BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 16th Feb 2009 12:43
Another small update. Using Stacks, I have grabbed another 25% performance hike over the dynamic array method. I no longer need to translate entity IDs to array indexes, because the entities never move position in the array.

In my test scenario, I was getting up to 200 items in the array - so in some cases I could be searching 200 array elements for my ID!

IanM
Retired Moderator
Joined: 11th Sep 2002
Location: In my moon base
Posted: 16th Feb 2009 13:58
By 'Using stacks' do you mean you use a stack to track unused indexes in the arrays that you use to hold the data?

Hopefully, you are also pre-growing your stack and tracking the top yourself, and not using the ADD TO STACK command...

BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 16th Feb 2009 14:09 Edited at: 16th Feb 2009 14:10
Quote: "Hopefully, you are also pre-growing your stack and tracking the top yourself, and not using the ADD TO STACK command.."


I realise this is an old resurrected thread - this is the one you helped me with some time ago. It's 2 extra 1-D arrays to hold "Available" and "In Use" elements in the main array. It only ever grows if it blows the limit, and then adds 10% more elements.

What I had forgotten when I changed it was that I no longer needed to track "moving" elements as items were removed from the array - every position is now static for the lifetime of its contents. I get an extra 25% boost by removing this code.

Van B
Moderator
Joined: 8th Oct 2002
Location: Sunnyvale
Posted: 16th Feb 2009 16:02
Speaking of stacks, I adopted a similar technique for particle handling. Before, when creating a new particle I'd just use an incrementing number, which obviously won't account for a particle's lifespan - so particles still alive on screen, for instance, would be wiped out.

So I added a stack system, but I think my implementation was skewed and it caused problems, like the stack being filled up before all of the particles got used. While fixing this I noticed just how many leaks there were in what I was trying to do; I needed something far more robust, controllable and predictable. For example, when using stacks, how do you prevent more indices being taken from the stack than are populating it? I would often run into that problem.

So instead of adding to a stack when a particle dies, I just disable the particle and leave it as is. When the particle update function is called, the list is reset then any particle that is dead gets added to the list. As soon as it's done that there's a list of available particle numbers that are used when spawning a particle.

I'm not sure of speed benefits, because I'm not sure there are any measurable speed benefits in this; the benefit in this case is more optimized particle usage - particles that are alive will never be reused, whereas before I would consider some particle effects a waste due to the effect they have on newly spawned particles. Particles that live for 1 second don't mix well with particles that have a long life, unless you use a stack. Now though I can spawn a particle anywhere without worrying, and the particle overhead never comes close to its limit, even when new particles are being spawned every loop. One nice benefit as well is that you can display how many particles are in play - great for testing out a new particle effect and making sure that it doesn't eat your stack up too much.

I think this is worth considering for similar scenarios, as I believe it takes some of the burden of coding stacks away - you don't have the same worries, as you can guarantee that the list is always up to date. You always know what is in your list as well; for instance, if a particle is very quick it will be used up, displayed, then hidden and added back onto the stack to be used again. The problem I find with this is that the same particles may take over your stack, and some particles may never be used if the stack never lists them - or rather, the stack recycles particles enough to not need all of them. This is especially a problem when you have lots of fast particles and then things slow down, and more long-living particles are needed. I mean, you could run out of stack before all of your new particles are created.
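
A compressed sketch of that scheme (names invented): the free list is rebuilt once per update rather than maintained as particles die:

rem --- Rebuild the free list each update; dead particles rejoin it automatically ---
#constant MAXPART 499
dim partLife(MAXPART)
dim freeList(MAXPART)
global freeCount as integer

function UpdateParticles()
   freeCount = 0
   for i = 0 to MAXPART
      if partLife(i) > 0
         rem alive: age it, and move/animate particle i here
         dec partLife(i)
      else
         rem dead (life 0): back onto the free list
         freeList(freeCount) = i
         inc freeCount
      endif
   next i
endfunction

rem returns -1 if no particle is free this update
function SpawnParticle(life)
   if freeCount = 0 then exitfunction -1
   dec freeCount
   i = freeList(freeCount)
   partLife(i) = life
endfunction i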


Health, Ammo, and bacon and eggs!
BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 16th Feb 2009 18:03
Quote: "I mean you could run out of stack before all of your new particles are created."


I run a check, and if the stack overflows, I add 10% more entities. So in a worst-case scenario, you get a small hit by expanding the arrays. As I tend to use 1,000 elements in a stack, it's unlikely to happen, and when it does, I add an extra 100 in one sweep.
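
Building on the stack sketch earlier in the thread, the overflow check might look like this (assuming a repeat DIM of a typed array keeps its existing data, as IanM described above; the 10% figure is BatVink's):

rem --- If the Available stack is empty, grow everything 10% in one sweep ---
function AcquireEntityChecked()
   if availTop < 0
      oldTop = array count(entities())
      newTop = oldTop + ((oldTop + 1) / 10)
      dim entities(newTop) as Entity
      dim available(newTop)
      dim inUse(newTop)
      rem push the new indexes onto the Available stack
      for i = oldTop + 1 to newTop
         inc availTop
         available(availTop) = i
      next i
   endif
   idx = AcquireEntity()
endfunction idx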

jason p sage
18
Years of Service
User Offline
Joined: 10th Jun 2007
Location: Ellington, CT USA
Posted: 16th Feb 2009 19:25
That's the smart way to do it I think BatVink. I have coded the same strategy in the JGC Classes for Dark GDK.

I have a dynamic array class that also grabs a "Chunk" of new elements in a single pass when it is full and a new item is requested. Because I wrote it generically, the number of items in new chunks are configurable.

This technique, though I'm not using stacks specifically ATM, definitely allows the arrays to perform top notch, with the overhead work only being done on demand.

This is much better than implementations that continuously add/remove from the array via insert/delete and arrays are SO MUCH faster than dynamic linked lists.

Dynamic linked lists are awesome - but if you use them everywhere, they start to hurt you in a game. I ran into this myself, and when I switched from a dynamic linked list approach for the speed-crucial stuff to this kind of smart array handling, it made performance SCREAM!

--Jason

Kevin Picone
Joined: 27th Aug 2002
Location: Australia
Posted: 16th Feb 2009 20:02 Edited at: 17th Feb 2009 03:49
While on the subject, here's some array allocation examples. (PB code, but the method is easily picked up)
Array Allocation Management Examples

BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 16th Feb 2009 20:50
The only issue I have to deal with around this now is what I believe you call linked lists.

For example, I have a bank of 20 textures. In order to get to the ID of texture 20, I have to read 1 to get the index of 2, 2 to get the index of 3, 3 to get to 4, etc. I'm not sure how to deal with this in DBP, where arrays within arrays are not possible.

IanM
Retired Moderator
22
Years of Service
User Offline
Joined: 11th Sep 2002
Location: In my moon base
Posted: 16th Feb 2009 21:30
Want an example?

This code was an example for a tile-based game where you could have an unlimited number of fighting units on each tile:


This code creates a small grid (0-4,0-4), then creates 4 units and assigns them all to the same grid. It then carries out various other actions such as clearing all units from a tile, putting them all back, then removing just a single unit from the tile.

There's a lot of printing in it, so you can see what it's doing at each point - run it in an 800x600 window to see it all.
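
(The attached example hasn't survived; a compressed stand-in for the idea - units chained per tile through a next() array, with invented names and unit 0 reserved as the end-of-chain marker - might look like this:)

rem --- Each tile stores its first unit; each unit stores the next in the chain ---
dim tileFirst(4, 4)
dim unitNext(99)

function AddUnitToTile(unit, tx, ty)
   unitNext(unit) = tileFirst(tx, ty)
   tileFirst(tx, ty) = unit
endfunction

function RemoveUnitFromTile(unit, tx, ty)
   rem special case: the unit is first in the chain
   if tileFirst(tx, ty) = unit
      tileFirst(tx, ty) = unitNext(unit)
      exitfunction
   endif
   rem otherwise walk the chain and unlink it
   u = tileFirst(tx, ty)
   while unitNext(u) > 0
      if unitNext(u) = unit
         unitNext(u) = unitNext(unit)
         exitfunction
      endif
      u = unitNext(u)
   endwhile
endfunction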

BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 16th Feb 2009 21:45
I think I have all that wrapped up, I'm now looking for a quicker way to traverse a linked list.

For example, if I load a bitmap font of 100 images, each one gets an array element from the stack. But if I want to find the location of lower case "z", I have to read 95 elements.

The only solution I can think of now will help for repeat searches of the same list within the main array. The first search would load the positions into a 100-element array. Subsequent searches would use that array rather than iterate through the linked list again. Each time I utilised a different set of images, this additional array would need to be reloaded.

This is a generic solution, so there is no rule to the number of lists within the array, or number of items within each list.

IanM
Retired Moderator
Joined: 11th Sep 2002
Location: In my moon base
Posted: 16th Feb 2009 21:57
Sounds to me like you are overcomplicating things ... but then again, I don't have a full picture of what you are trying to accomplish.

If I had to manage several character sets, I'd keep it simple - decide the max number of characters available for all character sets, then just maintain a single array with that many items multiplied by the number of character sets. Then I'd just calculate the offset of the first character in the character set I want, and add a further offset for the actual character I want the image number for, then use that as the index into the array to get the image number.

Or, even simpler, drop the array altogether and allocate ranges of images for each character set (eg, Images in the range 5001 to 5255 are for char set 1, 5256 to 5511 are for char set 2 etc).
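
Either way the lookup reduces to simple offset arithmetic rather than a list walk; a sketch, assuming 256 slots per character set (the constants and names are invented):

rem --- Fixed slots per set: the index is pure arithmetic, no searching ---
#constant CHARS_PER_SET 256
dim charImage(2047)

function CharImageNumber(cset, c$)
   index = cset * CHARS_PER_SET + asc(c$)
endfunction charImage(index)

rem Or drop the array entirely and allocate image number ranges directly:
rem imageNumber = 5001 + (cset * CHARS_PER_SET) + asc(c$)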

BatVink
Moderator
Joined: 4th Apr 2003
Location: Gods own County, UK
Posted: 16th Feb 2009 22:04 Edited at: 16th Feb 2009 22:05
Unfortunately it's not that simple, I just used fonts as an example because it's easy to explain and visualise.

The array contains references to a vast array of image groups. It could be 6 images for traffic lights, 52 for a pack of cards or 100 for a font set. You could have all three, and a few more bitmap fonts to bulk it up.

The system works, I'm just a performance junkie and hate to see lost microseconds that can be put to good use elsewhere.

IanM
Retired Moderator
22
Years of Service
User Offline
Joined: 11th Sep 2002
Location: In my moon base
Posted: 16th Feb 2009 23:07 Edited at: 16th Feb 2009 23:14
You have image groups, which contain image id's which cross-reference to DBPro image numbers?



From your description, it sounds like something you would load up front, and wouldn't change too much, if at all, in your main loop. If that's what you have, or if the number of insertions is relatively low, you should consider using a sorted array instead. Your linear searches will then become binary searches.
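
A minimal binary search over an already-sorted integer array (IanM's plug-in provides SORT ARRAY; the array contents here are illustrative):

rem --- Binary search a sorted array: about log2(n) reads instead of n ---
dim sorted(99)
for i = 0 to 99 : sorted(i) = i * 7 : next i
print BinarySearch(161)
rem prints 23
wait key

rem returns the index of target, or -1 if absent
function BinarySearch(target)
   lo = 0
   hi = array count(sorted())
   while lo <= hi
      m = (lo + hi) / 2
      if sorted(m) = target then exitfunction m
      if sorted(m) < target
         lo = m + 1
      else
         hi = m - 1
      endif
   endwhile
endfunction -1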

If you load up front, populate the array, then sort it all (use my plug-in SORT ARRAY command).

If you insert single items in your main loop, then put the new item at the end of the array, binary search for the insertion point, then shuffle the item into place (you could use the ROTATE ARRAY command in my array plug-in for this).

There'll be variations depending on specific circumstances.

For example
- loading a whole new group from scratch.
Don't reuse image group id's, generate a new one higher than the previously used id's. Append them all to your existing array, then sort just those if you need to.

- removing a whole group
Binary search for an item in the group, linear search left and right to find the start of the group and its size, use the ROTATE ARRAY command to shift it to the end of the array, then shorten the array by the size of the group.

It's going to be a lot of new code, but if you want the performance, code it up and benchmark it against what you have right now.
