Dark GDK / dbSyncRate = Useless?

Jason C
16
Years of Service
User Offline
Joined: 19th Jun 2008
Location:
Posted: 29th Jun 2008 00:56
I realized quite quickly that dbSync() doesn't return until (1/syncRate) seconds have gone by. This means that your entire code will only be run (at a sync rate of 60) 60 times a second. That is incredibly slow if you need to do CPU-intensive things other than rendering, like sending/receiving network packets, large calculations, etc. Doing those kinds of routines inside a drawing routine where you only get 60 cycles a second is nowhere near sufficient.

Look at this code:
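Something along these lines - a minimal sketch, with the counter names and text positions purely illustrative:

#include "DarkGDK.h"
#include <stdio.h>

void DarkGDK( void )
{
    dbSyncOn();
    dbSyncRate( 60 );                 // ask the GDK for 60 frames per second

    int frames = 0;                   // frames rendered in the current second
    int cycles = 0;                   // loop iterations in the current second
    int fps = 0, cps = 0;
    int lastTick = dbTimer();         // dbTimer() counts milliseconds

    while ( LoopGDK() )
    {
        cycles++;                     // one pass through the main loop

        if ( dbTimer() - lastTick >= 1000 )
        {
            fps = frames;  cps = cycles;
            frames = 0;    cycles = 0;
            lastTick = dbTimer();
        }

        char text[ 64 ];
        sprintf( text, "Custom FPS: %d", fps );            dbText( 0, 0,  text );
        sprintf( text, "Custom CPS: %d", cps );            dbText( 0, 20, text );
        sprintf( text, "dbScreenFPS: %d", dbScreenFPS() ); dbText( 0, 40, text );

        frames++;
        dbSync();                     // blocks until 1/60th of a second has gone by
    }
}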

The output should be:
Custom FPS: 61
Custom CPS: 61
dbScreenFPS: 60

So, I decided to check if a certain amount of time has passed before doing any drawing. Something like "If 1/60th of a second has passed, draw some stuff." That right there makes dbSyncRate() useless.

Now check this code out:
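Again just a sketch - MAX_FPS and the counters are illustrative, but the idea is that dbSync() is only called once our own timer says a frame is due:

#include "DarkGDK.h"
#include <stdio.h>

const int MAX_FPS = 60;               // try 30 as well

void DarkGDK( void )
{
    dbSyncOn();
    dbSyncRate( 0 );                  // we regulate drawing ourselves

    int frames = 0, cycles = 0, fps = 0, cps = 0;
    int lastSecond = dbTimer();
    int lastFrame  = dbTimer();

    while ( LoopGDK() )
    {
        cycles++;                     // this part runs as fast as the CPU allows

        if ( dbTimer() - lastSecond >= 1000 )
        {
            fps = frames;  cps = cycles;
            frames = 0;    cycles = 0;
            lastSecond = dbTimer();
        }

        // only draw when 1/MAX_FPS of a second has passed
        if ( dbTimer() - lastFrame >= 1000 / MAX_FPS )
        {
            lastFrame = dbTimer();

            char text[ 64 ];
            sprintf( text, "Custom FPS: %d", fps );            dbText( 0, 0,  text );
            sprintf( text, "Custom CPS: %d", cps );            dbText( 0, 20, text );
            sprintf( text, "dbScreenFPS: %d", dbScreenFPS() ); dbText( 0, 40, text );

            frames++;
            dbSync();
        }
    }
}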

with MAX_FPS at 60:
Custom FPS: 59
Custom CPS: 150000
dbScreenFPS: 60

with MAX_FPS at 30:
Custom FPS: 29
Custom CPS: 300000
dbScreenFPS: 30


So, that proves that dbSyncRate is useless, and it also shows newbies that they should put their drawing routines behind their own timer.
Mahoney
16
Years of Service
User Offline
Joined: 14th Apr 2008
Location: The Interwebs
Posted: 29th Jun 2008 02:27
What could you honestly need to happen more than 60 times a second? If there even is something, you can make a loop that gives the same effect.
Jason C
16
Years of Service
User Offline
Joined: 19th Jun 2008
Location:
Posted: 29th Jun 2008 02:51
Well, instead of waiting for dbSync to return, your program could be doing something useful.

My idea of dbSync:
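Pseudocode only - RenderEverything() and the globals are made up to show the behaviour I mean, not the actual GDK source:

// how dbSync appears to behave from the outside - NOT the real implementation
void dbSync( void )
{
    RenderEverything();                            // draw the frame

    // then sit and do nothing until the frame time is up
    while ( dbTimer() - g_lastSyncTime < 1000 / g_syncRate )
    {
        // wasted time - your program can't do anything useful here
    }
    g_lastSyncTime = dbTimer();
}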


Either way, dbSync wastes resources when you're not using your own timer routine.
KISTech
16
Years of Service
User Offline
Joined: 8th Feb 2008
Location: Aloha, Oregon
Posted: 29th Jun 2008 04:43
To be honest I have to agree with Jason. There is no reason for GDK to be sitting there twiddling its thumbs. When DBPro can hit FPS rates in the hundreds, and GDK is essentially the same thing only using C++, you would think it would be MUCH faster than DBPro and certainly faster than 60 FPS.

Although, from a marketing standpoint, I can see why GDK being FREE might cause TGC to put a performance limiter in there so that its speed is comparable to DBPro.

The question then is, would TGC be willing to pull that limitation out of the GDK for those who pay for the commercial license?

Mahoney
16
Years of Service
User Offline
Joined: 14th Apr 2008
Location: The Interwebs
Posted: 29th Jun 2008 04:48
There is no reason to be above 60-75 FPS. That's as fast as it needs to be.
programing maniac
16
Years of Service
User Offline
Joined: 19th Apr 2008
Location: Bawk, Bawkity
Posted: 29th Jun 2008 04:58
Quote: "There is no reason to be above 60-75 FPS. That's as fast as it needs to be. "


I completely agree. I don't think anyone notices whether something runs at 60 FPS or 75 FPS. At least I can't.

Also, if you need it to go faster, just use dbSetFrameRate ( 0 ); then it will go as fast as possible.

elantzb
16
Years of Service
User Offline
Joined: 10th May 2008
Location: Classified
Posted: 29th Jun 2008 08:23
what exactly does dbSync() do, then?

~you can call me lantz~
Jason C
16
Years of Service
User Offline
Joined: 19th Jun 2008
Location:
Posted: 29th Jun 2008 09:45
It's not about how many frames are rendered a second; 60 is sufficient. It's the fact that the program could be doing things while the GDK can't render a frame, instead of dbSync sitting there waiting for the time to pass. I'd rather be sending and receiving packets, handling keyboard and mouse events as well as GUI events. In a complete game, there are a ton of things that go on other than just drawing a picture. Doing everything a full-featured game requires in 1/60th of a second is asking quite a bit when a function just sits and waits until the proper time to do what it should have done when it was called.

Instead of dbSync handling the timing, it should be something like dbCanRender().

So, like this:
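(dbCanRender() is not a real GDK function - this fragment is just how I picture using it inside the usual DarkGDK() loop:)

while ( LoopGDK() )
{
    HandleNetwork();                  // placeholders - these run every iteration, uncapped
    HandleInput();
    UpdateGame();

    if ( dbCanRender() )              // hypothetical: true only when 1/syncRate has elapsed
    {
        DrawEverything();             // placeholder for the drawing code
        dbSync();                     // render now, without sitting and waiting
    }
}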


Of course, that function is an easy one to code yourself, but that's what the GDK is for.

One more Example:
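Another sketch along the same lines, for inside the main loop - the helper functions are placeholders:

int lastRender = dbTimer();

while ( LoopGDK() )
{
    ReceivePackets();                 // placeholder - the important work runs every iteration
    SendPackets();                    // placeholder
    HandleGUIEvents();                // placeholder

    // render at most 60 times a second, using our own timer
    if ( dbTimer() - lastRender >= 1000 / 60 )
    {
        lastRender = dbTimer();
        dbSync();
    }
}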


Basically, if it needs to get done, do it. Don't wait until a frame can be rendered.

I say dbSyncRate is useless because you need to define your own timer, which ends up regulating the FPS anyway, so why use dbSyncRate at all?
dark coder
22
Years of Service
User Offline
Joined: 6th Oct 2002
Location: Japan
Posted: 29th Jun 2008 09:52
If you think dbSync() taking a minimum of 16.6ms to complete is a good thing then you must not care about scoping code for performance bottlenecks. dbSyncRate( 0 ); should work and allow you to get more than 60 FPS - how else do I see how long rendering takes? Sure, for a final build it may not be required to have more than ~75 FPS, i.e. something VSync would handle, but forcing 60 FPS causes too many problems.

Jonas
19
Years of Service
User Offline
Joined: 10th Aug 2005
Location: What day is it?
Posted: 29th Jun 2008 10:29
Actually, even with dbSyncRate ( 0 ) I'm still getting ~60fps... and at dbSyncRate ( 120 ). ??

P4 3.4ghz/2gb RAM (+PF=24gb)/GeForce 6200 A-LE 256meg AGP8x (Altered for 16pipelines, 768mb forced TurboCache-ish)/WinXP Pro/74gb WDRaptor(SATA,OS)/3x 200gb WDCaviar(IDE)/250gb WDCaviar SE16(SATA)
dark coder
22
Years of Service
User Offline
Joined: 6th Oct 2002
Location: Japan
Posted: 29th Jun 2008 10:42
Yes, because it's capped at 60, hence why I said it should work. This will probably never get fixed.

bjadams
AGK Backer
16
Years of Service
User Offline
Joined: 29th Mar 2008
Location:
Posted: 29th Jun 2008 13:09
It's not capped at 60, it's capped at the monitor's refresh rate. If you have your monitor at 75 or 80 you get the same vsync!
Codger
21
Years of Service
User Offline
Joined: 23rd Nov 2002
Location:
Posted: 29th Jun 2008 18:05
Frame rates faster than your monitor's sync rate are quite simply wrong.
In reality, if the CPU can do its job 1000 times per second, the GPU 10000 times per second, and the monitor can only manage 60 frames per second, the lowest common denominator rules: you only get 60 FPS.

If the CPU can only manage 40 FPS and the monitor is at 60 FPS you get 40 FPS.

The Sync command allows you to choose a slower consistent rate, i.e. 30 FPS, rather than speeding up and slowing down as the load on the CPU changes. This also means that you get to choose the slowest machine that your project can run on at a playable level.

Codger

System
MacBook Pro
Windows XP Home on Boot Camp
MACRO
21
Years of Service
User Offline
Joined: 10th Jun 2003
Location:
Posted: 29th Jun 2008 18:32
For rendering, using a sensible limit on FPS is basically a good idea.

That said, any function that sits there waiting for an arbitrary time doing nothing because of a fixed frequency limit is bad. Your CPU could, and arguably should, be doing any number of operations in the time wasted (AI, physics, network, collision...).

If dbSync() is to be capped at the monitor refresh (which I have no issue with because it makes good sense) then dbSync() should either render if an appropriate amount of time has passed to meet its limit or return allowing your program to get on with non-rendering useful operations.

In my perfect world (and I can code this myself so it's not a biggie) something like dbSync would take a Boolean flag to indicate whether you wish it to block or not, and return a Boolean which is true if a new frame was rendered and false otherwise. That would allow coders to choose how they want to program. Those that want to code basing everything on a limited frame rate can do so, and those that want to separate rendering from the other logic can do it that way.
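Something you can wrap up yourself today, roughly like this (MySync and the 60 FPS target are just an example):

// a wrapper in the spirit of what I mean - the name and timing are made up
bool MySync( bool bBlock )
{
    static int lastFrame = dbTimer();
    const int frameTime = 1000 / 60;              // target 60 FPS

    if ( !bBlock && dbTimer() - lastFrame < frameTime )
        return false;                             // not time yet - caller gets on with other work

    while ( bBlock && dbTimer() - lastFrame < frameTime )
        ;                                         // blocking behaviour only if asked for

    lastFrame = dbTimer();
    dbSync();                                     // dbSync itself may still wait on vsync
    return true;                                  // a new frame was rendered
}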

MACRO
MACRO
21
Years of Service
User Offline
Joined: 10th Jun 2003
Location:
Posted: 29th Jun 2008 19:05
Not for me it doesn't; it still caps me to the refresh of the display, which in my case is 60 FPS.
MACRO
21
Years of Service
User Offline
Joined: 10th Jun 2003
Location:
Posted: 29th Jun 2008 20:58
If I make a call to dbSyncOff() and then don't call dbSync() anywhere I get a black screen with nothing on it, which is reasonable because the program is not being told to draw the screen.

From a previous discussion on this I got the impression that the locking to vsync is a preference with the DX SDK.

It is reasonable to lock the FPS in draw terms to vsync but I would seriously question the implementation using a blocking call to do it.
KISTech
16
Years of Service
User Offline
Joined: 8th Feb 2008
Location: Aloha, Oregon
Posted: 29th Jun 2008 21:09
Quote: "If dbSync() is to be capped at the monitor refresh (which I have no issue with because it makes good sense) then dbSync() should either render if an appropriate amount of time has passed to meet its limit or return allowing your program to get on with non-rendering useful operations."


This is how it should be. If dbSync() can't draw then it should return control so that other game code can be executed. It just makes sense to not waste those CPU cycles.

Jason C
16
Years of Service
User Offline
Joined: 19th Jun 2008
Location:
Posted: 29th Jun 2008 21:12
Quote: "
You are talking about the refresh rate, aren't you? Turn sync off, make sure that you don't call dbSync() anywhere and that v-sync is off, and then test how often the main loop gets executed. I'm pretty sure it will be more than 60 times a second. Otherwise there's a bug in Dark GDK.
"


Have you tried not using dbSync()? It causes nothing to render, and if you hit escape while the program is running it won't exit. So dbSync is an essential function in the GDK.

In fact, dbSyncOn() and dbSyncOff() seem to have no effect, and dbSyncRate() is capped at 60 even when set to 0.

Like I said, dbSyncRate is useless; just put your render routine behind your own timer so whatever code you need to execute ASAP will do so.
sydbod
16
Years of Service
User Offline
Joined: 14th Jun 2008
Location: Just look at the picture
Posted: 29th Jun 2008 21:34
I always thought that one would optimize and create their game so that all that has to be computed would be computed in one game loop, on the slowest target computer that the game was meant to be run on (let's not talk about threading for this argument's sake).

If all computations are already being done, then where is the need to do more computations?

I would imagine that the idle time created by the dbSyncRate(int) function returns control back to the operating system to use as it sees fit for multitasking or background tasks.

Am I missing something?
Mahoney
16
Years of Service
User Offline
Joined: 14th Apr 2008
Location: The Interwebs
Posted: 29th Jun 2008 21:47
I don't see how you could honestly use that little bit of time. What needs to be calculated more than 60 times a second? Really?
Benjamin
21
Years of Service
User Offline
Joined: 24th Nov 2002
Location: France
Posted: 29th Jun 2008 21:52 Edited at: 29th Jun 2008 21:55
Quote: "It just makes sense to not waste those CPU cycles."

Doing operations at an unnecessary rate is a waste of CPU cycles. If the game relinquishes CPU time while it is waiting to render the next frame, then this is a good thing as it saves CPU time and lowers power usage. There's no reason to do anything extra when there is spare time.

It is my belief that applications - and games - should not use more resources than necessary. There's no point using 100% CPU time unless the game actually needs to do this to function at full speed.

MACRO
21
Years of Service
User Offline
Joined: 10th Jun 2003
Location:
Posted: 29th Jun 2008 21:55 Edited at: 29th Jun 2008 21:58
While it can be argued that you should aim for your code to be managed within the frame time there are a number of cases where you want raw speed and iterations become important.

In reality you should write your code so that it isn't relying on someone's system being able to render at any particular frame rate, so your code is speed independent and degrades gracefully on slower systems.

While it is true that you shouldn't tie your movement steps etc. to frame rate, it is equally true that you shouldn't constrain your game's underlying processing ability to it either.

What I want is my rendering happening using vsync but my physics, network and AI code etc wanging it around as fast as possible so that I can do more in the time I have.

Consider network messaging for game updates...

In my world the network socket is serviced every loop iteration (I don't use threads for this at the moment) to pick up messages, process them and send a response if required.

At present I am limited (well, I am not really, because I work around this limitation) to a minimum time between servicing the socket of 1/60th of a second, where I could be checking it a hell of a lot more frequently.

For one or two messages this makes little difference but when I start flooding the network with UDP packets full of data the quicker I can read and deal with them the better.

A lot of messages could accumulate in that 1/60th of a second which could mean a lot of processing that could have been done up front having to wait for the next render cycle.

Assuming a 1 Megabit connection that could be up to a theoretical 2083 bytes which could be a hell of a lot of messages to work through in a single frame:

1 Megabit = 125000 Bytes (wikipedia)
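To make that concrete, this is the sort of non-blocking poll I mean - a minimal Winsock sketch, assuming the UDP socket has already been created and set non-blocking with ioctlsocket(); ProcessMessage() is a placeholder for the game's own handler:

#include <winsock2.h>

// called once per loop iteration - drains whatever has arrived, never blocks
void ServiceSocket( SOCKET udpSocket )
{
    char buffer[ 2048 ];
    sockaddr_in from;
    int fromLen = sizeof( from );

    for ( ;; )
    {
        int bytes = recvfrom( udpSocket, buffer, sizeof( buffer ), 0,
                              (sockaddr*)&from, &fromLen );
        if ( bytes == SOCKET_ERROR )
            break;                    // WSAEWOULDBLOCK just means nothing left to read; other errors handled elsewhere
        // ProcessMessage( buffer, bytes );   // placeholder for the game's message handler
    }
}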

Macro

Edit...

I should add that I agree with not using more power than is necessary; however, I believe it should be up to the programmer of the application/game to decide when it is appropriate to yield, not one component of the SDKs being used.
dark coder
22
Years of Service
User Offline
Joined: 6th Oct 2002
Location: Japan
Posted: 29th Jun 2008 21:58
Uhh, physics, AI, you name it - all of these things take up CPU time. If the dbSync() function takes time doing nothing but waiting for an empty frame then you're wasting time that could be spent doing other things. It has nothing to do with how many times a frame something has to happen; remember that certain calculations, such as lightmapping or networking, usually cannot be done in a single frame and require calculations over multiple frames. If your frames are being impeded by dbSync() then you're wasting time, clear now?

Benjamin
21
Years of Service
User Offline
Joined: 24th Nov 2002
Location: France
Posted: 29th Jun 2008 22:02 Edited at: 29th Jun 2008 22:04
Quote: "What I want is my rendering happening using vsync but my physics, network and AI code etc wanging it around as fast as possible so that I can do more in the time I have."

Why would you need to calculate the physics more than once per frame?

Either way, if you want asynchronous behaviour like this then you should look into multithreading. After all, why waste extra CPUs that might be available to use?

Jason C
16
Years of Service
User Offline
Joined: 19th Jun 2008
Location:
Posted: 29th Jun 2008 22:05
Quote: "
I don't see how you could honestly use that little bit of time. What needs to be calculated more than 60 times a second? Really?
"


Client presses a key. Program waits 16.6~ms to handle the event and send data to server. Server receives data and does what it needs to do, Server sends new data to Client, Client waits 16.6~ms to do anything with that data.

Problem: 33.3~ms of extra latency.
MACRO
21
Years of Service
User Offline
Joined: 10th Jun 2003
Location:
Posted: 29th Jun 2008 22:10 Edited at: 29th Jun 2008 22:12
Some physics calculations are in my experience more accurate when iterated over a set of small adjustments than they are when thrown larger adjustments less frequently. The same can be said for accurate collision detection.

Don't get me wrong here, I am not saying that you should hammer the living hell out of your CPU for the sake of it, but I think that the decision of when to yield your program's control should be down to the programmer and not an arbitrary function in an SDK.

Macro

Edit - And threading is very much on the cards
Benjamin
21
Years of Service
User Offline
Joined: 24th Nov 2002
Location: France
Posted: 29th Jun 2008 22:13
Quote: "but I think that the decision of when to yield your programs control should be down to the programmer and not an arbitrary function in an SDK"

Agreed.

sydbod
16
Years of Service
User Offline
Joined: 14th Jun 2008
Location: Just look at the picture
Posted: 29th Jun 2008 22:14
Hang on.... one can not have it both ways.

If all the network processing or lightmapping cannot be done in the one frame, then having a more intelligent dbSyncRate(int) function will be of no use, because it will not be able to return any time to other code for processing. There will be no leftover time.



Quote: "Megabit = 125000 Bytes (wikipedia)"

That is in the ballpark... there will be a difference depending on TCP or UDP traffic and on packet overheads with regard to the MTU used.
The thing is, a buffer of 125000 bytes is a very small data buffer.
For that sort of potential connection, you would assign a data buffer of at least 3 times the expected data.
If the expected data cannot be processed in one game loop, then there will not be any time that the dbSyncRate(int) function will be in a waiting state.
MACRO
21
Years of Service
User Offline
Joined: 10th Jun 2003
Location:
Posted: 29th Jun 2008 22:25 Edited at: 29th Jun 2008 22:28
My main issue here is that while I can understand the reason for the limit (and resulting sleep), I still feel that forcing it on the developer is a bit harsh. If I don't need the cycles to do stuff then I will yield; if I do then I won't, but that, as the programmer, should be my choice.

From a practical standpoint I don't expect it to be a technical issue anyway; if I do need more iterations for some reason I can easily code around it.

I am thinking of this from an MVC point of view where the rendering simply provides a view onto my model. I don't want the view placing constraints on the rest of the system if I can avoid it.

Macro
sydbod
16
Years of Service
User Offline
Joined: 14th Jun 2008
Location: Just look at the picture
Posted: 29th Jun 2008 22:36
Quote: ".....I still feel that forcing it on the developer is a bit harsh."

I would have to agree on that point.

There are many areas where that has been done.
Any of the functions that load graphics or models or sounds do not provide a return value to see if the operation was successful, and will cause the game to crash without letting the programmer include a graceful exit and notification code fragment.
Codger
21
Years of Service
User Offline
Joined: 23rd Nov 2002
Location:
Posted: 29th Jun 2008 23:07
If your program had more to do then it would be running slower than the sync rate of your monitor....

But if you really need to see a higher sync rate, this is the easy way to accomplish it:
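A sketch of the idea - a fragment for inside the usual DarkGDK() entry point, with sprintf from <stdio.h> and the variable names my own:

int loops = 0, loopsPerSecond = 0;
int lastSecond = dbTimer();
int lastRender = dbTimer();

while ( LoopGDK() )
{
    loops++;                          // count raw loop iterations

    if ( dbTimer() - lastSecond >= 1000 )
    {
        loopsPerSecond = loops;       // the uncapped rate your logic is actually running at
        loops = 0;
        lastSecond = dbTimer();
    }

    // ... game code ...

    // let dbSync (and its vsync wait) happen only 60 times a second
    if ( dbTimer() - lastRender >= 1000 / 60 )
    {
        lastRender = dbTimer();
        char text[ 64 ];
        sprintf( text, "Loops per second: %d", loopsPerSecond );
        dbText( 0, 0, text );
        dbSync();
    }
}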




Codger

System
MacBook Pro
Windows XP Home on Boot Camp
Sephnroth
22
Years of Service
User Offline
Joined: 10th Oct 2002
Location: United Kingdom
Posted: 29th Jun 2008 23:28
I preferred it when I was able to see how fast my code truly ran, which, if the highest you can bench is 60 FPS, you no longer can. It's nothing to do with what your end user will perceive or what you really "need", it's to do with benchmarking your code.

If my game was running at 400 FPS and I wrote some radar code and suddenly it had dropped to 200 FPS, I might consider looking over my radar code and seeing if I could improve it to run a little more efficiently. However, when my program tells me it's running at 60 FPS before the radar code was added and it says 60 FPS afterwards, I will perceive no change in performance.

This may seem like a "so what?" situation to some, but when you're going around adding "polish" to a game and throwing in effects and you find your game not running at the speed you expected, it really helps to have known whilst developing which times you saw the biggest FPS drop - now our only option is manual profiling.

Finally, there are times when you want that frame rate uncapped in a real world scenario. "What needs to run more than 60 times a second, really?" says Mahoney. Well, what doesn't? With properly written code, more time for code to execute can only be a good thing. There are many examples of things you don't want capped at all - the most obvious being when a program is loading things!

When loading media I could make sure I don't call dbSync at all - this will allow things to load the fastest (and loading times are IMPORTANT to someone sat there waiting to play the game). But I might want a loading screen. So I could just draw the loading screen once and then call no more syncs until all the loading is finished.

But if I desire to display the name of the file being loaded at that moment then I need a sync after every file it loads, and right now this is a foolish thing to do because being capped to 60 FPS really REALLY slows down the loading of lots of files. So I have to compromise, and I write to the screen what type of data is being loaded (world, shaders, images, etc.) so I only sync when I move on to another type of data. It's fine and all that, but that's not something I should be forced to do.
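Roughly what I mean - the Load...() calls are placeholders for batches of media loads with no syncs inside them:

// draw a line of text, show it, then load a whole category with no further syncs
dbText( 0, 0, "Loading world..." );
dbSync();
LoadWorld();                          // placeholder - many files, zero syncs

dbText( 0, 20, "Loading shaders..." );
dbSync();
LoadShaders();                        // placeholder

dbText( 0, 40, "Loading images..." );
dbSync();
LoadImages();                         // placeholder - a per-file sync here would cap
                                      // loading at 60 files a second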

Mahoney
16
Years of Service
User Offline
Joined: 14th Apr 2008
Location: The Interwebs
Posted: 30th Jun 2008 22:16
I understand the points about not being forced to use it and about performance analysis. I see that now.

What I still don't get, though, is why someone would be calculating things more than 60 times a second ( aside from networking ).
dbGamerX
16
Years of Service
User Offline
Joined: 23rd Nov 2007
Location:
Posted: 30th Jun 2008 22:33
You hit the nail on the head with that one. I don't see what the need is for 60+ calculations unless you are planning to do a lot of background processing and threading.

Mahoney
16
Years of Service
User Offline
Joined: 14th Apr 2008
Location: The Interwebs
Posted: 30th Jun 2008 22:36
I do understand networking, though. Hadn't thought of that. Never really liked networking.

But, unless you are making a very high performance physics/game engine ( Havok, PhysX, CryENGINE 2 ) you shouldn't need that level of control. If you do, I think you're more than ready for DirectX programming.
jason p sage
17
Years of Service
User Offline
Joined: 10th Jun 2007
Location: Ellington, CT USA
Posted: 30th Jun 2008 23:01
I don't totally follow your reasoning, but I agree with your last sentence, Mahoney.

If you simply have too much going on to be able to render at a speed that is desirable - due to complex code (versus sloppy doggy code) - then you are probably ready for the big league.

Mahoney
16
Years of Service
User Offline
Joined: 14th Apr 2008
Location: The Interwebs
Posted: 30th Jun 2008 23:09
I simply meant that, unless you are making something very complex/spectacular, you don't need to calculate anything more than 60 times a second. If you do need to, you're ready for DX.
jason p sage
17
Years of Service
User Offline
Joined: 10th Jun 2007
Location: Ellington, CT USA
Mahoney
16
Years of Service
User Offline
Joined: 14th Apr 2008
Location: The Interwebs
Posted: 30th Jun 2008 23:13
I do understand the issue of performance analysis, though. That is a little bit of a problem. Otherwise, it should be fine.
KISTech
16
Years of Service
User Offline
Joined: 8th Feb 2008
Location: Aloha, Oregon
Posted: 30th Jun 2008 23:24
..and then there's the fact that a game running at 130 FPS just looks more "crisp" than one running at 60 FPS. Sure the human brain only needs ~30 FPS, but games at those speeds run at a snail's pace, and if 60 were good enough there wouldn't have been a need to build monitors that have refresh rates of 85 Hertz or more.

The fact is I really don't know anything about the internal workings of the GDK, or DBPro for that matter. I know that like most game development kits within my price range, they all have performance issues. Once you load it up with Objects, Models, Animations, and Effects the framerate goes to hell.

For many various reasons, whether you understand or agree with them or not, wouldn't you just like to see the true frame rate, and not an artificially capped one?

jason p sage
17
Years of Service
User Offline
Joined: 10th Jun 2007
Location: Ellington, CT USA
Posted: 30th Jun 2008 23:29
There are two types of performance analysis I care about - and mind you, performance analysis is A LOT OF WORK to set up correctly, and in a way that produces results that are usable for analysis - so I don't do it unless there is a problem or I'm REALLY REALLY motivated to speed something up. I TRY to use good programming practices and go outside the box for performance considerations as necessary.

1: One method I find useful is the "how much time is each routine taking" approach, which you can do just for routines you suspect are a problem, or just major "chunks" of your code - whatever... definitely the most granular way to find out where your FPS are going!

2: A different approach - not as focused, but useful. Instead of recording how much time each routine is taking (or in addition to that), you can record how many times each function is called - and then run the app for a bit... and look at the biggest counts (like web page counters, kinda). The ones with the most hits? Go in there and try to devise ways to speed up the code - whatever you can think of... optimize the living poo out of the most worked functions!!!! This seems silly - but when you shave a few clock cycles off here and there, and then multiply that by how frequently said function(s) are called, the gains are huge.
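A bare-bones version of both ideas - UpdateRadar() is just a stand-in for whatever routine you're suspicious of:

#include <stdio.h>

int g_radarCalls = 0;                 // method 2: how often is it called?
int g_radarMs    = 0;                 // method 1: how many milliseconds does it spend there?

void UpdateRadar( void )              // stand-in for the routine being profiled
{
    g_radarCalls++;
    int start = dbTimer();

    // ... the actual radar code ...

    g_radarMs += dbTimer() - start;
}

// then once a second, dump or display the numbers:
// printf( "radar: %d calls, %d ms\n", g_radarCalls, g_radarMs );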

Mahoney
16
Years of Service
User Offline
Joined: 14th Apr 2008
Location: The Interwebs
Posted: 30th Jun 2008 23:36
Quote: "I do understand the issue of performance analysis"


Quoting myself, here. And,

Quote: "..and then there's the fact that a game running at 130 FPS just looks more "crisp" than one running at 60 FPS."


Not really. You can't notice, except for the possible VSync mouse lag. That's it.

Quote: "Sure the human brain only needs ~30 FPS, but games at those speeds run at a snail's pace"


Then you must have never played Crysis on a machine that costs under $2000. It seems pretty smooth even at 30 FPS. I imagine it feels flawless at 60.

Quote: "and if 60 were good enough there wouldn't have been a need to build monitors that have refresh rates of 85 Hertz or more."


If you do your research, you'll realize that CRT's have refresh rates above 60 Hz. TFT's have only 60. CRT's are bad for your eyes at 60 Hz. That's the main reason. They are much less annoying/straining at 75 Hz.

@Jason

Yeah. I simply try to write the code in an efficient manner from the start. Best way to do it. I do worry, though, that most machines don't match up to what most of us have. At least, what a lot of game devs, if you will, have.
jason p sage
17
Years of Service
User Offline
Joined: 10th Jun 2007
Location: Ellington, CT USA
Posted: 30th Jun 2008 23:46
I actually have an old but reliable single-core 1.8GHz, with a decent Nvidia... a GeForce 7600 or 7800 or whatever - I think it has a half gig of RAM - decent card - not so decent PC.

So... in some ways I think it's better to code on a slow box - then when some die-hard gamer runs your code, it's fine for them...

I can't tell you the joy of giving someone a project you've been working on - and bumming out because it's only 20 FPS... even though you tried everything you could - to then have the person trying it out report back: DUDE this thing ROCKS!!! I'm peaked at 60 FPS steady and this is smooth!

LOL - If you have a slower box - I think it makes you code tighter out of pure necessity.

If you write garbage or good code and it runs the same - what's the point of writing good code? This is folly. This is what I believe .Net is. Too slow? Buy better hardware... Yeah... Sure... OK... for what? To run your Office Ribbon? LOL - wasted resources IMNSHO

LOL - I digress - yeah - there is another reason 60Hz is actually a CRAPPY rate to run at (not talking frame rate - talking monitor refresh vsync) - 60Hz is audible! You can pick up 60Hz interference (try AM radio for proof of concept... or anything that catches it and turns it audible, like music equipment, guitar amps, etc.) - if you use a much, much higher Hz it becomes less audible as it gets beyond the frequency our ears can hear.

So Mahoney - I think it's great you're trying to write clean, tight code from the start - makes for easier maintenance and usually faster-running code. Though like I said - some "good rules" need to be bent time and again to lean out clock cycles sometimes - but that's the art I call creative coding

--Jason

Lilith
16
Years of Service
User Offline
Joined: 12th Feb 2008
Location: Dallas, TX
Posted: 30th Jun 2008 23:53
Quote: "it becomes less audible as it gets beyond the frequency our ears can hear."


So we can't hear it once it gets beyond 20,000 cycles. But your dog would complain. That's some serious speed. Better to get it below 16 cycles so you don't even hear a hum.

Lilith, Night Butterfly
I'm not a programmer but I play one in the office
Mahoney
16
Years of Service
User Offline
Joined: 14th Apr 2008
Location: The Interwebs
Posted: 30th Jun 2008 23:54
I'm running on a dual-core and an 8600 GT, so I really am not sure if I'm writing my code well enough or not. Though, if I really need to, I can test it on my brother's computer ( 1.5 GHz P4 and a 6200 ).

I forgot about the audible 60 Hz thing. I hate that about it! My TV does that, and it is so annoying.

Yeah, I try. I do make good use of OOP, though, so I guess trying to write fast code levels it out. I love making it really fast. I don't know why, I just love the feeling of optimizing it.
jason p sage
17
Years of Service
User Offline
Joined: 10th Jun 2007
Location: Ellington, CT USA
Posted: 1st Jul 2008 00:58
That's the NEED FOR SPEED - CODING STYLE! LOL

Lilith - I'm not sure it's the Hz of the monitor vsync so much as the Hz of your framerate affecting the much higher (audible) interference associated with said monitor frequencies.

Like 60Hz vsync... makes a pretty high-pitched annoying sound if caught as interference in other devices - yet kicking it up to 75 or 85 usually helps squash it. It could just be differing the freq from the 60Hz AC freq that's our USA common AC cycle - by breaking the oscillations etc., dunno -

Mahoney
16
Years of Service
User Offline
Joined: 14th Apr 2008
Location: The Interwebs
Posted: 1st Jul 2008 01:02
Need for Speed coding style. . . I like it.

I've been like that ever since I started C++: I want to write extremely fast code ( Yet, I use OOP. Go figure. ). Don't know why. Just something I find fun.
jason p sage
17
Years of Service
User Offline
Joined: 10th Jun 2007
Location: Ellington, CT USA
Posted: 1st Jul 2008 02:01
OOP isn't slow, bro - but inheriting too much is. Classes? Compiler handles it. Type casting? Compiler handles it (but can add wasted cycles if done too much... there is a trick to counter this). Polymorphism? Compiler handles it.

I used to do OOP in Assembler! YES... IT CAN BE DONE! It's actually why I understand OOP as well as I do... because you come to understand WHAT OOP is (at the class level - forget the inheritance part - though that's VERY doable also... actually better in assembly than how C++ does it).

In short - MOST OOP "stuff" is fast as hell - and efficient as hell too - it's just the inheritance that can be a CPU gobbler - actually overrides and things CAN be a nuisance depending on the compiler (C++ or whatever) implementation.

Think about it... what is a class? It's a structure + code. In assembly, how do you make objects?

You make a structure - and code that deals with THAT structure - via indirect addressing (pointers we'll say to KISS it).

Then you can allocate as many instances of the "structure" in memory as you want... wait - that's what a constructor does behind the scenes!!! So... write it - your constructor now has to allocate the structure space on the heap, and return the allocated address for use later - and of course do all the other things a class might do in the constructor, like init the data structure etc. The destructor would do the opposite.

Note - one trick for speed's sake, used very often in DirectX samples etc., is to allocate the memory for the structure and fill it with zeroes - a decent, easy-to-debug "starting point" - but I digress...

Anyhoo - it makes sense no matter the language - the basic principles remain the same - modular coding style and code reuse to the nth degree, unless to do so would be a performance detriment (implementation specific of course, task at hand, etc.)
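To picture it, here is the same structure-plus-code idea written out by hand (the names are invented for the example):

#include <stdlib.h>
#include <string.h>

// the "class" is just a structure...
struct Ship
{
    float x, y;
    int   hitPoints;
};

// ...plus code that works on that structure through a pointer
Ship* Ship_Create( void )             // what a constructor does behind the scenes
{
    Ship* p = (Ship*)malloc( sizeof( Ship ) );
    memset( p, 0, sizeof( Ship ) );   // the zero-fill trick mentioned above
    p->hitPoints = 100;
    return p;                         // hand back the allocated address for later use
}

void Ship_Destroy( Ship* p )          // and what a destructor does
{
    free( p );
}

void Ship_Move( Ship* p, float dx, float dy )   // a "method" is just a function taking the pointer
{
    p->x += dx;
    p->y += dy;
}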

But generally speaking - I contest - OOP is not slow unless you're careless - and note that careless is the accepted mainstream "way". (Hence - everyone wants .Net... software gets much slower each year - and the gains are minimal.)

I had a discussion at work today and a colleague swore he could take some old software and do more in less time than one could with the new stuff "everyone is buying", and I couldn't agree more - but hey - who are we?

Mahoney
16
Years of Service
User Offline
Joined: 14th Apr 2008
Location: The Interwebs
Posted: 1st Jul 2008 02:13
I know that OOP isn't slower. It's just a way of keeping code neat. It's all translated to almost the same assembly in the end. But inheritance is what I really meant. I should have said that.
Mahoney
16
Years of Service
User Offline
Joined: 14th Apr 2008
Location: The Interwebs
Posted: 1st Jul 2008 02:14
Very good explanation, though. Well done.
