ATI likely has that control panel available. As for this whole dbSyncRate issue, flawed or not, I have trouble seeing frame rates as a real problem.
At one point I saw this as a major issue, but it's not. Now, just to make sure I cover the bases - if it's the flashing you see using ATI, well... I don't know about that...
However, I am familiar with frame rates and screen tearing. I think if some control is "missing" for the sync rate, we can at least be thankful that it is "stuck at 60" versus not being capped at all.
For performance concerns, you set up your own timer and only call sync when you get the 16.6 millisecond "tick" from your home-made timer. Why? Because your game logic can keep running faster than the refresh rate/sync rate using this technique.
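Since dbSyncRate came up, here's a minimal sketch of that idea in DarkGDK C++. I'm using dbTimer() for the millisecond count; the variable names and the bare loop are my own, so treat this as the shape of the technique rather than a drop-in solution:

#include "DarkGDK.h"

void DarkGDK(void)
{
    dbSyncOn();
    dbSyncRate(0);              // uncap the engine's own limiter; we throttle ourselves

    int lastSync = dbTimer();   // dbTimer() returns elapsed milliseconds

    while (LoopGDK())
    {
        // game logic runs every pass, as fast as the CPU allows
        // ... update input, AI, physics here ...

        // only present a frame when roughly one 60 Hz tick has passed
        if (dbTimer() - lastSync >= 16)   // 16 ms is about 60 FPS (16.6 strictly)
        {
            dbSync();
            lastSync = dbTimer();
        }
    }
}

Note that if the driver still forces vsync, dbSync() will wait on the monitor anyway - the point is simply that your logic loop is no longer chained to it.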
Now, I'm working on a DirectX application that needs the absolute fastest performance one can get from DirectX, and tearing is not an issue because I'm dealing mostly with points and wireframes. Frankly, it's not a video game, so tearing, even with textures, is acceptable, because the only time the screen changes is when the user whips the mouse around to change the view.
I've read a few articles that say basically the same thing about this issue with regard to 3D in general - which has helped change my reasoning about tearing and screen rates.
I used to think the faster the better... but I'm now convinced that this is only true during benchmark testing and while optimizing your code to run as smoothly and quickly as possible, because the FPS can be a great gauge.
Torn images in your final product just look messed up... and 60 FPS is actually a pretty darn fast render speed.
I'm now convinced this is why TGC doesn't comment all that much about it. Now, for testing... OK, you might want that unthrottled FPS for optimization and benchmarking, like I said... and OK, I'll agree this is an issue for many of you. But I also know that each video card has different programmable parameters, and as such some may exclude this "monitor refresh rate" vsync lock flag and not make it easy or possible to change. I'm also aware that DBPro/DarkGDK etc. might not honor your preferences for some video cards, forcing the lock for their own reasons, to make things just work right for that brand of video card and/or chipset.
I have an nVidia 9600 GT, and that runs even my DBPro Iron Infantry at 65-70 FPS, which is much faster than my previous system could do and is obviously not capped at 60. I also have my nVidia set to software controlled - meaning the software "is allowed" to change this setting if it wants to.
Now, I'm not yelling or meaning to sound authoritative - this post is just my views on what I take this discussion to mostly be about.
Now I want to know how this issue is causing major problems developing your game/editor.
I have read many people mention game timing and how their game doesn't run right if the FPS/sync rate can't be controlled. I took the advice of one of the newsletters and just wrote all my code to use my own flavor of a home-grown timer system. I already knew the importance of writing code so my program performs as well as possible regardless of whether it's on an old clunker or a turbo-jet computer system... and after reading that newsletter I decided not to use the refresh rate as a "cheap shortcut" to get around this extra labor.

Once I forced myself to write my own timing routines to handle object movement etc. based on this home variety of timer code - I was quite pleased! Furthermore, I have early programs from Iron Infantry's development without it, and newer stuff you guys may have seen on Google that does have it... and you know - the code that does have it runs identically regardless of the video card or computer specs... unless of course the PC is so slow the software is chugging along very slowly.
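For what it's worth, here's roughly what that kind of timer-based movement looks like as a bare-bones sketch, again in DarkGDK C++. The cube, the speed constant, and the names are made up for the example - the point is scaling movement by elapsed time instead of by frame:

#include "DarkGDK.h"

void DarkGDK(void)
{
    dbSyncOn();
    dbSyncRate(0);
    dbMakeObjectCube(1, 10.0f);

    const float unitsPerSecond = 50.0f;   // desired speed, independent of FPS
    int lastTime = dbTimer();             // dbTimer() returns milliseconds

    while (LoopGDK())
    {
        int now = dbTimer();
        float dt = (now - lastTime) / 1000.0f;  // seconds since last frame
        lastTime = now;

        // move by speed * elapsed time, so the cube covers the same
        // distance per second whether the machine runs at 30 or 300 FPS
        dbMoveObject(1, unitsPerSecond * dt);

        dbSync();
    }
}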
--Jason