Gammon Forum

(mapper drawing) force-directed layout for graphs with default layouts


Posted by Fiendish   USA  (2,534 posts)  Bio   Global Moderator
Date Reply #15 on Tue 18 May 2010 04:26 AM (UTC)
Message
Tsunami said:

Even older computers these days should not have a problem handling a mere million floating point operations.
As easy as it is to play the "modern computers are fast" game, it just doesn't replace making your code more efficient. It is so easy to demonstrate scripts that just want to do more computing than even a nearly new computer can handle in between frames. A mere million floating point operations 10 times per second in an interpreted language with significant loop and data access overhead can become difficult to accomplish.
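
To make that concrete, here is a rough Lua sketch of the kind of per-frame budget check I mean; the 0.1 s budget and the sin() workload are just assumptions for illustration:

-- rough sketch: time one batch of work and compare it to the time
-- available between frames (the 0.1 s budget here is an assumption)
local budget = 0.1                      -- 10 updates per second

local start = os.clock()
local total = 0
for i = 1, 1000000 do                   -- "a mere million" operations
    total = total + math.sin(i * 0.001)
end
local elapsed = os.clock() - start

print(string.format("batch took %.3f s of a %.3f s budget", elapsed, budget))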

https://github.com/fiendish/aardwolfclientpackage

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #16 on Tue 18 May 2010 05:20 AM (UTC)
Message
Tsunami said:
Even older computers these days should not have a problem handling a mere million floating point operations. See how long it takes your comp to run 2 or 3 million trigonometric functions. Further, big O notation should be used as an estimate of an algorithm's growth rate and efficiency, not running time.

This statement doesn't make a lot of sense, I'm afraid. If it's a measure of growth rate and efficiency, it is by extension a measure of running time, or at the very least a measure of how running time changes with input size.

Regarding the mere million floating point operations, I actually measured it on an older computer I have lying around, a Celeron 2.4GHz (bought around 2003-04).


$ cat test.lua 

require 'math'
local sin = math.sin

local total = 0
for x = 1, 3000000 do
    total = total + sin(0.2)
end
print (total)

$ time lua test.lua
596007.99239525
lua test.lua  0.86s user 0.00s system 99% cpu 0.865 total
$ 

As you can see, you wouldn't be able to do this more than once per second. And chances are rather high that you're doing more than just trivial floating point operations. Changing it from a trig function to pow(1.1, 2.7) (where pow = math.pow) changes the runtime from 0.86s to 1.64s.

Quote:
O(.000000000000000001 * N^3 + 100000000000) is still O(N^3) despite the fact that for non-large N's it is dominated by a linear term.

This statement, while technically true, is rather irrelevant to the point at hand.



Anyhow, the point here is that your algorithm might not be N^3 on average in the first place due to the constraints placed on the input size, but if it truly is N^3 average case (as opposed to worst case) that's not something to sneeze at.

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Tsunami   USA  (204 posts)  Bio
Date Reply #17 on Tue 18 May 2010 05:39 AM (UTC)

Amended on Tue 18 May 2010 05:40 AM (UTC) by Tsunami

Message
David Haley said:

Tsunami said:
Even older computers these days should not have a problem handling a mere million floating point operations. See how long it takes your comp to run 2 or 3 million trigonometric functions. Further, big O notation should be used as an estimate of an algorithm's growth rate and efficiency, not running time.

This statement doesn't make a lot of sense, I'm afraid. If it's a measure of growth rate and efficiency, it is by extension a measure of running time, or at the very least a measure of how running time changes with input size.

Regarding the mere million floating point operations, I actually measured it on an older computer I have lying around, a Celeron 2.4GHz (bought around 2003-04).


$ cat test.lua 

require 'math'
local sin = math.sin

local total = 0
for x = 1, 3000000 do
    total = total + sin(0.2)
end
print (total)

$ time lua test.lua
596007.99239525
lua test.lua  0.86s user 0.00s system 99% cpu 0.865 total
$ 

As you can see, you wouldn't be able to do this more than once per second. And chances are rather high that you're doing more than just trivial floating point operations. Changing it from a trig function to pow(1.1, 2.7) (where pow = math.pow) changes the runtime from 0.86s to 1.64s.

Quote:
O(.000000000000000001 * N^3 + 100000000000) is still O(N^3) despite the fact that for non-large N's it is dominated by a linear term.

This statement, while technically true, is rather irrelevant to the point at hand.



Anyhow, the point here is that your algorithm might not be N^3 on average in the first place due to the constraints placed on the input size, but if it truly is N^3 average case (as opposed to worst case) that's not something to sneeze at.


Frankly, I'm curious to see how much of that is Lua overhead. If you have time, could you run the same thing in pure C, and perhaps under LuaJIT? In unoptimized C# the same program ran in 90ms and 290ms for Math.Pow(...). Admittedly, I have a more modern processor which prolly doesn't compare with your test machine.

While it is of course technically correct that the Big-O complexity of a program affects running time, personally I regard thinking of speed optimization in terms of Big-O as a sin along the lines of premature optimization.

Simply picking an algorithm with a better Big-O has nothing to do with actual optimization, and in many cases can actually result in worse running times when one hasn't taken the effort to understand the problem domain.

A common example is quicksort vs. mergesort. Most beginning CS students know that quicksort is better. Why would anyone ever use merge sort? Although it has a worse average case, merge sort has a better worst case, making it ideal for presorted data. Further, quicksort is great for in-memory data, but terrible for large data sets which cannot fit in memory and must be read from disk, which is why most database programs use variants on mergesort.
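
To illustrate the disk point (a sketch in the same Lua used elsewhere in this thread, not anything from a real database): the merge step only ever reads each sorted run front to back, so the runs could just as well be files being streamed as in-memory tables.

-- sketch: the merge step that makes mergesort friendly to disk-resident data
local function merge(a, b)
    local out, i, j = {}, 1, 1
    while i <= #a and j <= #b do
        if a[i] <= b[j] then
            out[#out + 1] = a[i]; i = i + 1
        else
            out[#out + 1] = b[j]; j = j + 1
        end
    end
    for k = i, #a do out[#out + 1] = a[k] end   -- drain whichever run is left
    for k = j, #b do out[#out + 1] = b[k] end
    return out
end

print(table.concat(merge({1, 4, 9}, {2, 3, 10}), ", "))   --> 1, 2, 3, 4, 9, 10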

I agree that Big-O is quite useful and definitely something to be always aware of, but it is far from the first thing one should consider when it comes to optimization.

Edit: Sidenote @ Fiendish, this isn't the sort of thing I would plan to run in Lua in any case :)

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #18 on Tue 18 May 2010 03:07 PM (UTC)
Message
Quote:
Frankly, I'm curious to see how much of that is Lua overhead. If you have time, could you run the same thing in pure C, and perhaps under LuaJIT? In unoptimized C# the same program ran in 90ms and 290ms for Math.Pow(...). Admittedly, I have a more modern processor which prolly doesn't compare with your test machine.

It will be faster in C and LuaJIT but I haven't tried it. And yes, a lot of it is probably Lua overhead. But then again, MUSHclient plugins are usually in Lua...

Running on a faster processor didn't really make a huge difference, for what it's worth.

Quote:
While it is of course technically correct that the Big-O complexity of a program affects running time, personally I regard thinking of speed optimization in terms of Big-O as a sin along the lines of premature optimization.

Simply picking an algorithm with a better Big-O has nothing to do with actual optimization, and in many cases can actually result in worse running times when one hasn't taken the effort to understand the problem domain.

I'm sorry but I read statements like this and my mind kind of boggles. Obviously big-O analysis doesn't take into account the constant-time factors of the particular details of a function's implementation. But going from that to saying that big-O shouldn't be considered an important factor in choosing an algorithm seems kind of, well, preposterous. If your input is large enough -- where "large enough" isn't necessarily that large -- the constant factors become noise, because in the real world you don't have algorithms bounded by 0.0000001*N^3.

In my experience as a programmer, professionally, academically and otherwise, many considerable speed improvements are gained precisely by realizing that big-O performance is poor and using a better algorithm. In fact, this can make the difference between being able to run a program and not being able to run it at all!

I don't know why you say that picking a better big-O algorithm will "in many cases" result in worse running time -- do you have more examples of this? Of course there are special cases, like when you have particular constraints on the problem domain, but in general your statement seems quite wrong.

Quote:
A common example is quicksort vs. mergesort. Most beginning CS students know that quicksort is better. Why would anyone ever use merge sort? Although it has a worse average case, merge sort has a better worst case, making it ideal for presorted data. Further, quicksort is great for in-memory data, but terrible for large data sets which cannot fit in memory and must be read from disk, which is why most database programs use variants on mergesort.

Sure -- and in this case, the algorithm choice is made because the problem domain is very particular. In general, it is still usually better to recommend quicksort over straight mergesort.

I'm not sure why it's sinful premature optimization to look at big-O performance, but fine to look at things like where a data set is stored (RAM vs. disk), and so forth. :-) It's all just part of the algorithm selection process once you start thinking about it.

If you're looking at an algorithm whose average case is O(N^3), for example, and you know that your input size is large, then you really should consider if you have no choice but to use an O(N^3) algorithm because there necessarily will come a point where no amount of optimization trickery will help you speed up the algorithm.

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Tsunami   USA  (204 posts)  Bio
Date Reply #19 on Tue 18 May 2010 08:49 PM (UTC)
Message
Quote:

In my experience as a programmer, professionally, academically and otherwise, many considerable speed improvements are gained precisely by realizing that big-O performance is poor and using a better algorithm. In fact, this can make the difference between being able to run a program and not being able to run it at all!

I don't know why you say that picking a better big-O algorithm will "in many cases" result in worse running time -- do you have more examples of this? Of course there are special cases, like when you have particular constraints on the problem domain, but in general your statement seems quite wrong.

Quote:
A common example is quicksort vs. mergesort. Most beginning CS students know that quicksort is better. Why would anyone ever use merge sort? Although it has a worse average case, merge sort has a better worst case, making it ideal for presorted data. Further, quicksort is great for in-memory data, but terrible for large data sets which cannot fit in memory and must be read from disk, which is why most database programs use variants on mergesort.

Sure -- and in this case, the algorithm choice is made because the problem domain is very particular. In general, it is still usually better to recommend quicksort over straight mergesort.

I'm not sure why it's sinful premature optimization to look at big-O performance, but fine to look at things like where a data set is stored (RAM vs. disk), and so forth. :-) It's all just part of the algorithm selection process once you start thinking about it.


I think I should differentiate; when it comes to picking an algorithm, I'm all for knowing its complexity and understanding why that is so. When it comes to optimization, I think you should still know the relevant big-O factors, but I contend that it's largely useless for optimization.

Why? Because perf optimization is about profiling, testing, understanding the bottlenecks, and knowing your hardware. Big-O tells you nothing about any of that. You ask why it's sinful premature optimization to look at big-O performance, but fine to look at things like where a data set is stored (RAM vs. disk), and so forth; I answer: because looking at the big-O complexity tells you nothing about what the algorithm actually does or how it operates, whereas looking at things such as where a data set is stored assumes a detailed knowledge of exactly what is happening and how hardware is affecting it.
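
To be concrete about what I mean by profiling (just a sketch; the section names and the commented-out layout functions are made up):

-- minimal sketch: time named sections with os.clock and see where it goes
local totals = {}

local function timed(name, fn, ...)
    local start = os.clock()
    local result = fn(...)
    totals[name] = (totals[name] or 0) + (os.clock() - start)
    return result
end

-- e.g. inside the layout loop (these layout functions are hypothetical):
-- timed("repulsion", compute_repulsive_forces, nodes)
-- timed("springs",   compute_edge_springs,     edges)

for name, t in pairs(totals) do
    print(string.format("%-12s %.3f s", name, t))
end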

This is why I object to statements which, for instance, dismiss Fruchterman/Reingold because it has a base O(N^3) complexity. Its complexity alone tells us -nothing- about how useful it may be. It may take a million seconds or a millisecond for one node, it may be disk bound or CPU bound; we don't know.

The fact is, our problem domain is dealing with <100 nodes visible, and <10000 node maps (usually). This is a strongly CPU-bound application, and depends heavily on the current CPU/FPU and very little on hardware bandwidth or seek times. Now that we know this, we can profile the algorithm, check which loops take longest, apply our knowledge of big-O, and modify the data structures to allow O(n log(n)). We can cut down on unnecessary power and trigonometric usage, etc...
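
As a trivial example of that last kind of trimming (a sketch only; the cutoff value is invented):

local sqrt = math.sqrt              -- hoisted: a local beats a table lookup per call

local CUTOFF = 50                   -- made-up cutoff distance for applying repulsion
local CUTOFF2 = CUTOFF * CUTOFF     -- precomputed square

local function close_enough(dx, dy)
    -- compare squared distances: no square root needed just to test a cutoff
    return dx * dx + dy * dy < CUTOFF2
end

local function distance(dx, dy)
    -- only pay for sqrt when the actual length is needed
    return sqrt(dx * dx + dy * dy)
end

print(close_enough(30, 30), distance(30, 30))   --> true    42.426...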

Quote:

If you're looking at an algorithm whose average case is O(N^3), for example, and you know that your input size is large, then you really should consider if you have no choice but to use an O(N^3) algorithm because there necessarily will come a point where no amount of optimization trickery will help you speed up the algorithm.


I think this is exactly what I'm talking about. Before you look at the complexity, first you need to profile. Maybe the best case is O(1), maybe the average data size is <10, maybe there is an O(log(n)) algorithm elsewhere in your code with an average data size of 1000000. Maybe this is an algorithm that lends itself to caching extremely well, or maybe your algorithm is O(N^3) with a large data set, but it's disk bound, and decreasing its time complexity won't help.

Assuming automatically that big-O is anything other than a vague guide to an algorithm's performance in a specific real-world case is a mistake to me. I don't want to sound like I don't care about categorizing algorithms' complexity, because I think it is an irreplaceable and extremely important tool in a very broad range of areas. Frankly, I'm usually a lot happier working in those areas than caring about perf :P But it's not a good metric when it comes down to the nuts and bolts in the specific area of perf optimization.

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #20 on Wed 19 May 2010 03:10 PM (UTC)
Message
Quote:
You ask why it's sinful premature optimization to look at big-O performance, but fine to look at things like where a data set is stored (RAM vs. disk), and so forth; I answer: because looking at the big-O complexity tells you nothing about what the algorithm actually does or how it operates, whereas looking at things such as where a data set is stored assumes a detailed knowledge of exactly what is happening and how hardware is affecting it.

Big-O necessarily tells you how something operates with respect to the input; that's the whole point of the big-O analysis. You don't just make it up because you feel like it; you derive it from what the algorithm is actually doing.

For example, big-O analysis can change depending on the data structures used to store intermediate results. The very same algorithm might be O(n log n) with a binary tree and become O(n) with a hash table.
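
As a trivial Lua illustration (a sketch; Lua has no built-in binary tree, so a plain list stands in for the slower structure): the same "have we seen this room before?" check done two ways.

local seen_list = {}
local seen_set  = {}

local function seen_before_list(id)            -- linear scan: O(n) per lookup
    for _, v in ipairs(seen_list) do
        if v == id then return true end
    end
    seen_list[#seen_list + 1] = id
    return false
end

local function seen_before_set(id)             -- hash lookup: O(1) average per lookup
    if seen_set[id] then return true end
    seen_set[id] = true
    return false
end

print(seen_before_set(1001), seen_before_set(1001))   --> false   true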

I agree that it doesn't tell you what is going on with the hardware or whether you will be running into disk I/O bottlenecks, but I feel that you are considerably exaggerating when you say that asymptotic complexity analysis tells you "nothing" about what an algorithm is doing or how it will perform...

Quote:
This is why I object to statements which, for instance, dismiss Fruchterman/Reingold because it has a base O(N^3) complexity. Its complexity alone tells us -nothing- about how useful it may be. It may take a million seconds or a millisecond for one node, it may be disk bound or CPU bound; we don't know.

I don't think that anybody made such a statement; you might have misunderstood... nobody said it was useless or should be dismissed. Nick just raised a flag about performance, but that's it. Besides, as I said earlier, sometimes you have no choice but to use a "poor" algorithm w.r.t. asymptotic performance, because there simply are no alternatives. (I'm not convinced that this is such a case, but that's beside the point.)

Quote:
Maybe the best case is O(1), maybe the average data size is <10, maybe there is an O(log(n)) algorithm elsewhere in your code with an average data size of 1000000. Maybe this is an algorithm that lends itself to caching extremely well, or maybe your algorithm is O(N^3) with a large data set, but it's disk bound, and decreasing its time complexity won't help.

All of these "maybes" sound like somewhat contrived scenarios designed to prove a point, not describe real-world situations. For example, if an algorithm lends itself well to caching, that can be taken into account in an average-case (or even worst-case) complexity analysis. If the I/O time complexity is worse than CPU time complexity, that can be taken into account. Even something that is disk-bound will be dominated by the time complexity for large enough inputs: disk reads are (basically) linear and eventually time will catch up (of course, that might be quite a while away).

Quote:
Assuming automatically that big-O is anything other than a vague guide to an algorithm's performance in a specific real-world case is a mistake to me. I don't want to sound like I don't care about categorizing algorithms' complexity, because I think it is an irreplaceable and extremely important tool in a very broad range of areas. Frankly, I'm usually a lot happier working in those areas than caring about perf :P But it's not a good metric when it comes down to the nuts and bolts in the specific area of perf optimization.

I don't think that anybody said it was a nuts-and-bolts type of metric. Perhaps this last paragraph clarifies your position best. It sounded an awful lot like you were saying (even in the first sentence of this paragraph) that big-O analysis is basically useless and an afterthought; now you're saying it's irreplaceable and extremely important. Perhaps you're simply being too emphatic elsewhere in saying how useless it is, and painting with too wide a brush?

If the question is simply: is big-O a nuts-and-bolts metric for optimization, I think that the answer is pretty obviously that it's not. But if the question is whether it should be on your mind as you design/choose an algorithm, well, I think it should and I'm not sure if you do or not (you seem to have said both things), but if not I suppose we should just agree to disagree and move on here...

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Tsunami   USA  (204 posts)  Bio
Date Reply #21 on Wed 19 May 2010 05:48 PM (UTC)
Message
Perhaps I was too emphatic in my earlier statements :) I didn't mean to imply I think Big-O is not useful or anything, only that I think it can often be used too naively, without a full understanding of what it implies.

Anyways, some more screenshots of the layout in action. Again, Shallam, this time expanded to 400 rooms.

Initial Layout:
http://i.imgur.com/BlBn8.png

Final Layout:
http://i.imgur.com/8zwVJ.png

The full layout stopped after about 2 minutes, but frankly, the most useful part was done at 5 seconds; everything after that was expansion. I think I will need to come up with a new heuristic for when to end the layout which is perhaps more aggressive with respect to the number of nodes, so that the simulation is able to end sooner. I could also increase the node/node repulsive force to see if I could get expansion to happen quicker. Right now I am using a bad approximation of the total acceleration within the system (or at least whatever you call acceleration without a directional component...?) to check when to finish the layout.
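
Something like the following is the shape of the stopping test I mean (sketch only; the dx/dy displacement fields and the threshold are assumptions, not my plugin code):

-- stop when the average movement per node this iteration drops below a threshold
local function should_stop(nodes, threshold)
    local total = 0
    for _, node in ipairs(nodes) do
        -- node.dx / node.dy: displacement applied this iteration (assumed fields)
        total = total + math.sqrt(node.dx * node.dx + node.dy * node.dy)
    end
    return (total / #nodes) < threshold
end

-- e.g. in the main loop:  if should_stop(nodes, 0.5) then break end
-- scaling the threshold up with #nodes would be one way to end sooner on big maps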

As you can see, the layout has about reached the limits of what it can do without human intervention. With some human aid, such as setting various edge lengths to an appropriate default, even better layouts could be obtained.

I can also see there might be some value in a post-processing step, which adjusts edge angles closer to exact verticals/horizontals/diagonals. This would make it look prettier in the final rendering.

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #22 on Wed 19 May 2010 06:15 PM (UTC)
Message
I think it would look better if edges were allowed to stretch in addition to or instead of changing the angles of exits. I find that the end result actually doesn't look as nice, even if it separates rooms more clearly. It just looks very messy with all the irregular angles.

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Tsunami   USA  (204 posts)  Bio
Date Reply #23 on Wed 19 May 2010 08:44 PM (UTC)

Amended on Wed 19 May 2010 08:50 PM (UTC) by Tsunami

Message
David Haley said:

I think it would look better if edges were allowed to stretch in addition to or instead of changing the angles of exits. I find that the end result actually doesn't look as nice, even if it separates rooms more clearly. It just looks very messy with all the irregular angles.


If you look at it, you will notice that edges are already allowed to stretch or compact like springs. I could decrease the spring force constant along edges, but that might lead to increased simulation run time; it's still something I am playing with. At the moment I have no good rules for adjusting the constants in the various force equations; I just play around with them until something works.

It is easy enough to balance edge and node/node forces, since both operate in the direction of their local energy minima. It is more difficult to balance edge torque forces, since these operate on an angle to their local minima. I have given some thought to trying to point them towards their local minima, but this seems far too error-prone and difficult at the moment.
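
Very roughly, the per-edge balance looks like this (sketch only: the constants are invented, angle wrapping and the vertical axis are ignored, and this is not my actual plugin code):

local SPRING_K, TORQUE_K, REST_LEN = 0.05, 0.02, 1.0   -- invented constants

-- returns the force on node b from the edge a->b; negate it for node a
local function edge_forces(ax, ay, bx, by, preferred_angle)
    local dx, dy = bx - ax, by - ay
    local len = math.sqrt(dx * dx + dy * dy)
    if len < 1e-6 then return 0, 0 end

    -- spring: pull b back toward the rest length along the edge
    local stretch = len - REST_LEN
    local fx = -SPRING_K * stretch * dx / len
    local fy = -SPRING_K * stretch * dy / len

    -- "torque": push b sideways, back toward the preferred compass angle
    local err = math.atan2(dy, dx) - preferred_angle   -- wrapping to [-pi, pi] omitted
    fx = fx + TORQUE_K * err * ( dy / len)
    fy = fy + TORQUE_K * err * (-dx / len)
    return fx, fy
end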

As for your suggestion of not changing angles, if you think that through you will realize that the only way of doing that is to effectively increase angle torque to infinity. That isn't possible, so we have to make do with raising the torque constant quite high, and post-processing edge angles. I'm already considering both of these :)

I've also found some articles on parallel layout algorithms, which may be promising :)

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #24 on Wed 19 May 2010 08:57 PM (UTC)
Message
Quote:
As for your suggestion of not changing angles, if you think that through you will realize that the only way of doing that is to effectively increase angle torque to infinity. That isn't possible, so we have to make do with raising the torque constant quite high, and post-processing edge angles. I'm already considering both of these :)

Well, perhaps this is a failing of the force-directed approach for this kind of input, which isn't an abstract graph but a strongly spatially oriented and directed map. If post-processing edge angles and increasing the torque constant high enough works, that would be fine. (It's ok to change by a degree or two, but some of the changes in that final result image are much higher.)

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Tsunami   USA  (204 posts)  Bio
Date Reply #25 on Wed 19 May 2010 09:41 PM (UTC)

Amended on Wed 19 May 2010 09:52 PM (UTC) by Tsunami

Message
David Haley said:

Quote:
As for your suggestion of not changing angles, if you think that through you will realize that the only way of doing that is to effectively increase angle torque to infinity. That isn't possible, so we have to make do with raising the torque constant quite high, and post-processing edge angles. I'm already considering both of these :)

Well, perhaps this is a failing of the force-directed approach for this kind of input, which isn't an abstract graph but a strongly spatially oriented and directed map. If post-processing edge angles and increasing the torque constant high enough works, that would be fine. (It's ok to change by a degree or two, but some of the changes in that final result image are much higher.)


The only other approach I can imagine would involve mapping groups of nodes into metanodes, where a metanode could be moved only along a certain vector common to all the contained nodes (and edges). This means vectors for all the cardinal directions, plus two more for expansion and shrinking. Forces would then be decomposed and applied to any applicable metanodes containing the node in question. It seems that you could easily run into metanode conflicts which would prevent any movement short of an advanced analysis however.

Perhaps a more fruitful approach would be to use metanodes to disallow movement rather than allow it, so that a metanode would track what directions contained nodes were not allowed to move.

Interesting thoughts, but I don't think I have the time at the moment to write a research paper on MUD-specific layout techniques :P At least, I'm having a hard time thinking of any other domain this could apply to!

Edit: Actually, something I just thought of: perhaps I can reduce edge forces to zero if there is any other force acting along the edge axis, and raise them otherwise. This might have the effect of letting edge angles be directed without regard to edge length, and avoid creating awkward angles solely due to the edge spring force. It would still allow odd angles due to node/node repulsion.
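
In code form, that half-formed idea would be something like this (sketch only; the force names and the 1e-3 cutoff are made up):

-- gate the spring force on node b by projecting the other forces onto the edge axis
local function gated_spring(other_fx, other_fy, dx, dy, spring_fx, spring_fy)
    local len = math.sqrt(dx * dx + dy * dy)
    if len < 1e-6 then return spring_fx, spring_fy end
    -- component of the other forces along the edge direction
    local along = (other_fx * dx + other_fy * dy) / len
    if math.abs(along) > 1e-3 then     -- something else is already acting along this axis
        return 0, 0                    -- drop the spring force for this step
    end
    return spring_fx, spring_fy
end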

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #26 on Thu 20 May 2010 04:42 AM (UTC)
Message
Out of curiosity, why is it necessary to use a force-directed layout in the first place? It's fine if the answer is "because I feel like it and learning about it", but I'm curious why you feel that this is the best kind of algorithm. For example, I could easily imagine that an algorithm that simply stretches edges to minimize intersection could work quite well in many cases, although clearly any geometry where exit directions are "funky" (I won't define that precisely for now) would confuse this algorithm.

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Tsunami   USA  (204 posts)  Bio
Date Reply #27 on Thu 20 May 2010 04:18 PM (UTC)
Message
David Haley said:

Out of curiosity, why is it necessary to use a force-directed layout in the first place? It's fine if the answer is "because I feel like it and learning about it", but I'm curious why you feel that this is the best kind of algorithm.


MUD maps generally differ from theoretical graphs only in that they associate an angle with an edge. Whatever layout algorithm we choose must have some way to respect this, then.

This immediately eliminates:

  • Spectral Layouts
  • Symmetric Layouts
  • Tree Layouts
  • Hierarchical Layouts


This leaves only orthogonal and force-directed. We can eliminate orthogonal, since modifying it to support another axis would be quite difficult, and it probably still wouldn't give the type of layout we want. This leaves force-directed layouts, which are extremely flexible and customizable, not to mention simple to implement (I also don't have to retake some of my college math classes to understand it :P ).

There are several different starting algorithms for force-directed layout, along with several optional heuristic methods of searching for local energy minima (simulated annealing, genetic algorithms, maybe neural networks). Fruchterman-Reingold and Kamada-Kawai are the most popular algorithms; I chose Fruchterman-Reingold because I felt it applied itself better to the types of final layouts I want to see. I have not gotten to the stage of thinking about additional heuristics to find better energy minima, but I suspect they wouldn't do much in this specific domain, since default layouts tend to give us the local minima we want anyway for a layout closest to the ideal map.
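
For anyone following along at home, the core of Fruchterman-Reingold is quite small. A bare-bones sketch (not my plugin code; the node/edge field names are assumed, and cooling is left out apart from a step cap):

-- one iteration: repulsion k^2/d between all pairs, attraction d^2/k along edges
local sqrt = math.sqrt

local function fr_step(nodes, edges, width, height, temperature)
    local k = sqrt((width * height) / #nodes)
    for _, v in ipairs(nodes) do v.fx, v.fy = 0, 0 end

    -- repulsion between every pair of nodes (the O(N^2)-per-iteration part)
    for i = 1, #nodes do
        for j = i + 1, #nodes do
            local a, b = nodes[i], nodes[j]
            local dx, dy = a.x - b.x, a.y - b.y
            local d = math.max(sqrt(dx * dx + dy * dy), 0.01)
            local f = (k * k) / d
            a.fx, a.fy = a.fx + f * dx / d, a.fy + f * dy / d
            b.fx, b.fy = b.fx - f * dx / d, b.fy - f * dy / d
        end
    end

    -- attraction along edges
    for _, e in ipairs(edges) do
        local a, b = e.from, e.to
        local dx, dy = a.x - b.x, a.y - b.y
        local d = math.max(sqrt(dx * dx + dy * dy), 0.01)
        local f = (d * d) / k
        a.fx, a.fy = a.fx - f * dx / d, a.fy - f * dy / d
        b.fx, b.fy = b.fx + f * dx / d, b.fy + f * dy / d
    end

    -- move each node, capped by the current temperature
    for _, v in ipairs(nodes) do
        local d = math.max(sqrt(v.fx * v.fx + v.fy * v.fy), 0.01)
        local step = math.min(d, temperature)
        v.x, v.y = v.x + v.fx / d * step, v.y + v.fy / d * step
    end
end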

David Haley said:

For example, I could easily imagine that an algorithm that simply stretches edges to minimize intersection could work quite well in many cases, although clearly any geometry where exit directions are "funky" (I won't define that precisely for now) would confuse this algorithm.


Approaching this from a non-force directed layout direction sounds incredibly complicated (unless you pick edges to stretch at random, compare edge crossing before and after, and wait a couple days for your results), though I'm open to ideas if you have any. The simplest way of doing this, it seems to me, is to assign force fields to edges which interact with the fields produced by other edges so as to push them away at orthogonal angles. This lets the rest of the graph correct itself automatically to accommodate the newly uncrossed edges. In fact, this is an approach some FDL algorithms have already taken. I'm not particularly interested in it because it's complicated (how would you pick which orthogonal direction to exert forces in? If you pick the wrong direction even once you would end up with terrible results), it would have a terrible perf impact, and it wouldn't improve layouts all that much.

Posted by David Haley   USA  (3,881 posts)  Bio
Date Reply #28 on Thu 20 May 2010 04:43 PM (UTC)
Message
Quote:
MUD maps generally differ from theoretical graphs only in that they associate an angle with an edge.

I'm sorry, I don't agree with this at all, really. MUD maps tend to have all kinds of extra properties about the spatial relationship between rooms. You don't often find areas such that going west then north then west lands you in the original room. You don't often find egregious overlap, which is entirely possible in theoretical maps. So no, I can't agree that MUD maps are really so similar to the abstract theoretical graph.

Quote:
We can eliminate orthogonal, since modifying it to support another axis would be quite difficult,

Then don't; just stack each Z-axis plane on top of the others.

Quote:
Approaching this from a non-force directed layout direction sounds incredibly complicated (unless you pick edges to stretch at random, compare edge crossing before and after, and wait a couple days for your results)

How amusing that you use an algorithmic complexity argument... ;)

But no, you wouldn't pick edges at random. Off the top of my head, you could pick the edges that are "worst", as in have the most overlap, and work around those.
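
Finding the "worst" edges is at least cheap to prototype; something like this sketch (coordinates assumed as {x1, y1, x2, y2} per edge, nothing more) would rank them by crossings:

-- standard orientation test for proper segment intersection
local function orient(ax, ay, bx, by, cx, cy)
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
end

local function segments_cross(p, q)
    local o1 = orient(p[1], p[2], p[3], p[4], q[1], q[2])
    local o2 = orient(p[1], p[2], p[3], p[4], q[3], q[4])
    local o3 = orient(q[1], q[2], q[3], q[4], p[1], p[2])
    local o4 = orient(q[1], q[2], q[3], q[4], p[3], p[4])
    return o1 * o2 < 0 and o3 * o4 < 0   -- proper crossings only; shared endpoints ignored
end

local function crossing_counts(edges)
    local counts = {}
    for i = 1, #edges do counts[i] = 0 end
    for i = 1, #edges do
        for j = i + 1, #edges do
            if segments_cross(edges[i], edges[j]) then
                counts[i], counts[j] = counts[i] + 1, counts[j] + 1
            end
        end
    end
    return counts   -- the edges with the highest counts are the "worst" ones
end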

The main problem is what to do when stretching simply won't work anymore, such as when you have "irrational" layouts where you go west and to go back you need to go not east but north, or where going north, east and south lands you in the same spot.

Well, if somebody can give me the data set we're working with, I can play with it and see what happens...

David Haley aka Ksilyan
Head Programmer,
Legends of the Darkstone

http://david.the-haleys.org

Posted by Tsunami   USA  (204 posts)  Bio
Date Reply #29 on Thu 20 May 2010 09:39 PM (UTC)

Amended on Thu 20 May 2010 09:41 PM (UTC) by Tsunami

Message
David Haley said:

Quote:
MUD maps generally differ from theoretical graphs only in that they associate an angle with an edge.

I'm sorry, I don't agree with this at all, really. MUD maps tend to have all kinds of extra properties about the spatial relationship between rooms. You don't often find areas such that going west then north then west lands you in the original room. You don't often find egregious overlap, which is entirely possible in theoretical maps. So no, I can't agree that MUD maps are really so similar to the abstract theoretical graph.


But the important point is that while this may hold true 90% of the time, there are still points where it doesn't (whether the map designer was having a bad hair day, or it's just magic). This means that we can't draw any conclusions from this unless we are sure that the entire graph doesn't have any of what I'm calling (perhaps somewhat inaccurately) non-Euclidean geometry.

David Haley said:

Quote:
We can eliminate orthogonal, since modifying it to support another axis would be quite difficult,

Then don't; just stack each Z-axis plane on top of the others.


I meant that in addition to horizontal/vertical, you'd need diagonal. Orthogonal layouts produce very simplistic graphs; I don't think they are appropriate for this use.

David Haley said:

Quote:
Approaching this from a non-force directed layout direction sounds incredibly complicated (unless you pick edges to stretch at random, compare edge crossing before and after, and wait a couple days for your results)

How amusing that you use an algorithmic complexity argument... ;)

But no, you wouldn't pick edges at random. Off the top of my head, you could pick the edges that are "worst", as in have the most overlap, and work around those.


I was complaining more about the efficiency of random guessing than about any unknown complexity :P But in any case, consider the following scenario: Two edges cross, and you have identified these edges. Now you need to manipulate the length of X other edges in such a fashion that the edges no longer cross. How do you identify the edges you need to manipulate and how you need to manipulate them? Going on just intuition, this seems like an NP complete-ish problem.

David Haley said:

Well, if somebody can give me the data set we're working with, I can play with it and see what happens...


I'm just using some Achaean map Nick sent me. I've put it up at http://rapidshare.com/files/389728716/achaea.com_23.db (only 10 downloads allowed, so get it while it's hot...)

If you don't mind coding in C# I can send you my stuff so you don't have to write everything from the ground up, and can concentrate on the layout bit :)