Quote: Now just how much time are you saving when all that a non-blocking socket read does is copy memory from kernel space? None. Yes, but so what? The point is that the MUD is not processing input, and not emitting output. The point is not how much CPU time is saved, but how long I/O takes from the player's perspective. As long as we don't exceed, in real-time, the time until the next pulse, then we don't care how much extra time we spend.
Quote: No I haven't at all. In fact I pointed out that you will be executing your loop much more frequently than Diku does. You haven't addressed the scenario I mentioned. Which one? That I would run the loop more often? Yes I have: it's not a problem, because you're not processing that much I/O to begin with, and since you recheck time till the next tick after each processing loop, you won't get behind schedule.
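To make the scheduling point concrete, here's a minimal sketch of the timing calculation I'm describing: compute the time left until the next pulse and hand it to select() as the timeout, so the server services I/O as often as it likes without ever slipping past a pulse deadline. All names and the pulse rate are illustrative, not taken from Diku or Smaug.

```c
/* Sketch: time remaining until the next pulse, for use as a select()
 * timeout.  After each I/O pass the loop recomputes this, so extra
 * passes can't push the game update off schedule. */
#include <sys/time.h>

#define PULSES_PER_SEC 4   /* e.g. a 250ms Diku-style pulse (assumption) */

/* Time remaining from 'now' until 'next_pulse', clamped at zero. */
static struct timeval time_until(struct timeval next_pulse, struct timeval now)
{
    struct timeval left = {0, 0};
    long usec = (next_pulse.tv_sec - now.tv_sec) * 1000000L
              + (next_pulse.tv_usec - now.tv_usec);
    if (usec > 0) {
        left.tv_sec = usec / 1000000L;
        left.tv_usec = usec % 1000000L;
    }
    return left;
}
```

In the loop, this value becomes select()'s timeout: select() returns as soon as a descriptor is ready or the pulse deadline arrives, whichever comes first, so you neither oversleep a pulse nor busy-wait between them.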
Quote: In Smaug's case which commands would send more than 4K in output? It looks to me like certain OLC commands that list objects, possibly long help files, command lists, social list, area list, etc. Looks like Derek took the opposite tack from you and essentially nerfed the output rate of commands that produce large output. Those that you've called out-of-game commands. Since Derek was actually running a game that had 200+ players online at its peak, I'm certainly not going to gainsay that experience and what he's done. I'd suggest to you that he was doing performance tuning. I don't consider tweaking buffer sizes to be a fundamental design problem. I'd want to see actual performance measurements before believing that it's that much more efficient to chop data into small chunks. If we're talking about network efficiency, the OS is supposed to take care of that. If we're talking about the time it takes to copy data, well, ok, I agree that tweaking buffer sizes isn't a fundamental design problem. (Then again, I didn't say it was.) What I *did* say was a fundamental design problem was to not bother even sending out output when you are otherwise doing nothing at all.
Quote: I never said any such thing. I said "The change would require a rewrite of everything between the selects, separating out what is strictly network I/O from command and update processing." And that is true. Yes, I apologize, you did not use those exact words. I got the impression due to your suggesting that it would take a separation of I/O from command/update processing, which to me, if it were to be done completely properly, could indeed entail a fairly substantial rewrite. Unless I misunderstood what you meant, of course.
Quote: So you've gained nothing from processing I/O faster than the commands are processed, except searching the descriptor list dozens of times more every pulse. Oh wait, no that's not it, you've actually optimized the handling of commands without a wait state. So the player typing "HELP STORY" 20 times a second will have optimal servicing. Brilliant. ;-) I believe you misunderstood. I meant that players cannot get an advantage over other players in e.g. combat by typing in lots of commands in the hope of getting multiple commands in per tick.
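For clarity, here's a minimal sketch of the wait-state idea: each descriptor holds a small FIFO of typed commands, and the game pops at most one command per pulse, and only when the character's wait counter has run out. Typing "kill" twenty times a second just fills the queue; it never buys extra actions per tick. All names and constants here are illustrative, not from any particular codebase.

```c
/* Sketch of a per-descriptor command queue with a wait state. */
#include <string.h>

#define CMD_QUEUE_LEN 16
#define CMD_MAX 64

struct descriptor {
    char queue[CMD_QUEUE_LEN][CMD_MAX];
    int head, tail, count;
    int wait;               /* pulses to wait before the next command */
};

/* Queue a typed command; returns 0 if the queue is full. */
static int enqueue_command(struct descriptor *d, const char *cmd)
{
    if (d->count == CMD_QUEUE_LEN)
        return 0;           /* full: drop or disconnect, per policy */
    strncpy(d->queue[d->tail], cmd, CMD_MAX - 1);
    d->queue[d->tail][CMD_MAX - 1] = '\0';
    d->tail = (d->tail + 1) % CMD_QUEUE_LEN;
    d->count++;
    return 1;
}

/* Called once per pulse: yields a command only when the wait is over. */
static const char *next_command(struct descriptor *d)
{
    const char *cmd;
    if (d->wait > 0) {
        d->wait--;
        return NULL;
    }
    if (d->count == 0)
        return NULL;
    cmd = d->queue[d->head];
    d->head = (d->head + 1) % CMD_QUEUE_LEN;
    d->count--;
    d->wait = 2;            /* e.g. a 2-pulse wait state (assumption) */
    return cmd;
}
```

The wait counter is what removes the advantage: no matter how fast input arrives, the second command cannot execute until the wait from the first has elapsed.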
As for the output servicing, I think we've established over and over again that you don't give a hoot either way, but it really bothers me to get choppy output. Unless you can demonstrate that I am wrong to care about choppy output, I suggest we declare that issue pretty much settled as an apparent matter of opinion.
Quote: You've actually increased processing per the scenario you ignored in my last post. We are not talking about the same kind of processing, I think. I am saying that you do not seem to care about the decreased output throughput. Or, put another way, you do not seem to care about the potential to increase output throughput.
Quote: If your argument is now that all things are relative, then it hardly makes any sense to post crap like, "The remaining time is somewhere close to a quarter second, which, while not that much, is still an awful lot of wasted time." and "Having two select statements is a fundamental design flaw." I am not sure what point you are making here. And I don't refer to your claims as crap. :-)
Besides, I still don't understand why we should not be sending output when we'd be doing nothing whatsoever instead anyhow. Again, if it has to do with network usage, let the OS's implementation of the network protocol worry about that. And yes, I do consider it a design flaw to be sitting around idle when there is stuff you could be doing (at what is basically no cost as far as the rest of the system is concerned) that you are not doing.
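One way to avoid sitting idle while output is queued, along the lines I'm arguing for: put a descriptor in select()'s write set whenever its buffer is non-empty, and flush as much as the socket will take each time it reports writable. This is a sketch only; error handling is trimmed and the names are illustrative.

```c
/* Sketch: flush pending output whenever the kernel will accept it,
 * instead of waiting for the next game pulse. */
#include <errno.h>
#include <unistd.h>

struct conn {
    int fd;
    const char *outbuf;     /* queued output */
    size_t outlen, outsent; /* total queued vs. already sent */
};

/* Add fd to select()'s write set only if there is unsent output. */
static int want_write(const struct conn *c)
{
    return c->outsent < c->outlen;
}

/* Send as much pending output as the socket will accept right now. */
static void flush_output(struct conn *c)
{
    while (want_write(c)) {
        ssize_t n = write(c->fd, c->outbuf + c->outsent,
                          c->outlen - c->outsent);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return;     /* socket full: retry after next select() */
            return;         /* real error: caller should close (omitted) */
        }
        c->outsent += (size_t)n;
    }
}
```

The cost to the rest of the system is essentially nil: when there is no pending output, the descriptor simply isn't in the write set, and select() behaves exactly as before.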
Quote: One of the interesting things about the Diku loop is that because of the order of execution (input/update/output) all output will be sent every pulse, leaving only the portion that exceeds 4K for the next pass. As noted, the choppy output appears to be something deliberate in Smaug. Well, sure, of course it's deliberate. What I'd like to see are the reasons for the decision, and if possible, some data points to back them up.
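For reference, the behavior described above reduces to this: each pass sends at most one 4K chunk per descriptor and carries the remainder to the next pulse, so anything under 4K goes out in a single pulse and only larger bursts get split. The constant and function names below are illustrative, not lifted from Diku or Smaug.

```c
/* Sketch of per-pulse output chunking with a 4K cap. */
#include <stddef.h>

#define OUT_CHUNK 4096

/* Bytes of 'pending' output sent this pulse: everything up to the cap;
 * the remainder waits for the next pass. */
static size_t send_this_pulse(size_t pending)
{
    return pending < OUT_CHUNK ? pending : OUT_CHUNK;
}

/* Pulses needed to drain 'pending' bytes at one chunk per pulse. */
static size_t pulses_to_drain(size_t pending)
{
    return (pending + OUT_CHUNK - 1) / OUT_CHUNK;
}
```

So a 10,000-byte OLC listing takes three pulses to drain, which is exactly the "choppy output" under discussion, while ordinary room descriptions and combat messages fit in one chunk and are unaffected.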
Let's forget about input for now. As I said, I can see potential problems in properly handling the input case. Let's focus on output. What I'd like to hear from you is simply why we should sit idle when there is output we could be sending.