Sounds like you're talking about binary search trees. They're useful for implementing sets. You could use a linked list, but searching it for a value means walking it node by node. You could use a sorted array and do a binary search (the same basic principle), but arrays are awkward to grow. Binary search trees give you both fast lookup and easy insertion.
Now, for trees in general, let's have an example. Imagine a 3D game with a complex scene containing lots of 3D objects. You can put them all in an array and just loop through it, updating each object as needed. If, for example, an object is too far away, you don't draw it. But if many objects are clustered together in the same faraway place, you still have to loop through every one of them just to decide not to draw them.
Now say you arrange this in tree form (called a scene graph). You have the root, and its children are, say, vehicles and buildings. They might have children of their own, for instance the objects inside them (players riding vehicles, for example). Now imagine a building with lots of objects in it, and you move so far away from it that there's no need to draw it. Using the array model, you have to loop through every object in the building to know not to update it. Using the tree, you simply say, "Nope, the building is too far away, so I won't even bother looking at it or any of its children." Not only does this save you n loops (where n is the number of objects in the building), it also saves you a distance check per loop. That's a lot of efficiency right there!
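Here's a minimal sketch of that idea in Python. The Node class and collect_visible function are made up for illustration (they're not from any real engine), and I'm treating each node's position as standing in for its whole subtree; a real engine would use bounding volumes instead.

```python
import math

class Node:
    """A hypothetical scene-graph node: a name, a position, and children."""
    def __init__(self, name, x, y, z, children=None):
        self.name = name
        self.pos = (x, y, z)
        self.children = children or []

def collect_visible(node, camera, max_dist, visible):
    # If this node is too far away, skip it AND its entire subtree:
    # one distance check culls everything inside it.
    if math.dist(node.pos, camera) > max_dist:
        return
    visible.append(node.name)
    for child in node.children:
        collect_visible(child, camera, max_dist, visible)

# Usage: a building full of crates, far away from the camera.
building = Node("building", 500, 0, 0,
                [Node(f"crate{i}", 500 + i, 0, 0) for i in range(100)])
car = Node("car", 10, 0, 0, [Node("driver", 10, 0, 1)])
root = Node("root", 0, 0, 0, [building, car])

visible = []
collect_visible(root, camera=(0, 0, 0), max_dist=100, visible=visible)
# The building and all 100 crates are culled with a single distance check;
# only root, car, and driver remain.
```

With the flat array you'd have done 102 distance checks for the building's contents; here it's one.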
Yep, I guess you could say I liked trees from the start. [/Shawshank Redemption reference]
[edit]
Perhaps I didn't explain why binary search trees are useful. OK, let's say you want to store a bunch of numbers for quick lookup. If you use a linked list, you can just add them as they come. Now, if you want to see whether a number exists in the list, you have to check the first node, then the second, the third, and so on until you find it. Worst case, it takes n loops (where n is the size of the list). This is called linear complexity, O(n).
Now if you use a tree, you can do it a lot faster. Once you check a node (the root, for instance), you compare it with the number you're looking for: if yours is smaller, you search the left subtree, otherwise the right one. Every step essentially halves the amount of data left to check (unlike a linked list, where each step only rules out one element). So in a linked list with the numbers 1 to 10, you'd do 9 loops to find the number 9, but in a well-balanced binary tree it only takes 3 or 4. Halving the data at every step gives you logarithmic complexity, O(log n).
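The search above can be sketched like this. It's a bare-bones binary search tree (names like TreeNode, insert, and contains are just my own for the example), with a step counter so you can see how few comparisons the 1-to-10 lookup actually takes.

```python
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.left = None   # smaller values go here
        self.right = None  # greater values go here

def insert(root, value):
    """Insert a value, keeping the BST ordering property."""
    if root is None:
        return TreeNode(value)
    if value < root.value:
        root.left = insert(root.left, value)
    elif value > root.value:
        root.right = insert(root.right, value)
    return root

def contains(root, value, steps=0):
    """Return (found, comparisons). Each comparison discards about
    half of the remaining tree."""
    if root is None:
        return False, steps
    if value == root.value:
        return True, steps + 1
    if value < root.value:
        return contains(root.left, value, steps + 1)
    return contains(root.right, value, steps + 1)

# Insert 1..10 in an order that keeps the tree reasonably balanced.
root = None
for v in [5, 3, 8, 2, 4, 7, 9, 1, 6, 10]:
    root = insert(root, v)

found, steps = contains(root, 9)
# → found is True after only 3 comparisons (5 → 8 → 9),
# versus 9 hops through a linked list of 1..10.
```

Note the balance matters: if you inserted 1 through 10 in order, every node would hang off the right, and the "tree" would degrade back into a linked list.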
If you think about the graphs of n (essentially y = x) and log n (y = log x), you'll see that as x grows, y = log x rises more and more slowly: y = x is a straight line, whereas y = log x curves and flattens off. This matters because you eventually reach a point where you can double your data and a lookup takes barely any longer than it did with half as much.
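You can see just how flat that curve gets with a few lines of Python:

```python
import math

# Compare linear and logarithmic growth: each time n grows 10x,
# log2(n) only creeps up by a few steps.
for n in [10, 100, 1_000, 1_000_000]:
    print(f"n = {n:>9}   log2(n) ~ {math.log2(n):.1f}")

# log2 of a million is only about 20: a balanced tree over a
# million items needs roughly 20 comparisons per lookup, while a
# linked list could need up to a million.
```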
"everyone forgets a semi-colon sometimes." - Phaelax