'You can’t even imagine a way that reliability could be implemented on top of UDP that beats TCP? What total bullshit' This.
This 'TCP already does it best in every situation' is a common trope, usually spouted by people who know nothing about what TCP does and why. The easiest way to get over this misconception is to ask yourself: can you think of a protocol that not only works well over all sorts of networks, all the way from two boxes connected by a 10Gig switch (with 10us ping latency) to two hosts talking to each other via a satellite link (with, perhaps, 2 second ping latency), but is the best option across all those situations? The truth is that TCP is the common denominator that's available in pretty much every device you would want to use.
But in almost every individual use case, you could come up with a far better protocol if you took the time and effort to treat the problem seriously. I would hardly call protocols like [SIP]() and [RTP]() 'custom'. There are many more standard protocols out there than just TCP and UDP. There's even a 'better TCP' in the form of [SCTP](). The fact is that TCP was the MVP for reliable streams of data on the 1970s internet, and is woefully inadequate for modern usage, but we're stuck with it (including its oft-disabled parts like Nagle's Algorithm) because it's what's available everywhere.
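As an aside on Nagle's algorithm: it batches small writes to reduce packet overhead, which is exactly the wrong trade-off for latency-sensitive traffic, so games that do use TCP almost always turn it off. A minimal sketch in Python using the standard BSD-socket option names:

```python
import socket

def make_low_latency_tcp_socket() -> socket.socket:
    """Create a TCP socket with Nagle's algorithm disabled.

    Nagle batches small writes into fewer, larger segments, trading
    latency for throughput. For request/response or game traffic you
    usually want each small write on the wire immediately.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s
```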
Game network protocols are pretty easy in comparison with some of the protocols I've encountered in my day job, especially once you've learned the lessons of people like the author of that article.

> I would hardly call protocols like [SIP]() 'custom'.
The meaning of 'custom' in this context was to distinguish it from full TCP or raw UDP, not to imply that there are no standard definitions for VOIP protocols. Quite the contrary, in fact. The point being made is that UDP is just a substrate on which to create other protocols, which also need to be designed for a purpose. And just calling all these protocols 'UDP' is a mistake. Btw, not to argue from authority, but to back my words with some context: I'm responsible for at least one such 'custom' protocol myself.
UDP multicast is widely used by financial trading networks to distribute state-of-the-market updates. Getting those networks to work reliably (i.e. the right groups distributed to the right places with minimal packet loss, reordering, latency and jitter) is orders of magnitude more difficult than getting unicast traffic working. I don't think it's going to happen. Our current networks can't even do unicast well (e.g. bufferbloat, general ISP randomness). But I'd be keen to read the OP's rants if someone tried.

I doubt whether multicast has any use case in gaming. Multicast helps when you are sending the same data to everyone. When online gaming started, it was fairly common to send the entire world state to all clients, because worlds were small (think Quake I levels). However, this type of model does not scale as the world size increases, and eventually the world is too big to send everything to every client. Plus, cheating.
Sending the entire world state to every client opens up cheating vectors. The trend has been for the game server to send only the state that a particular client can actually see or needs. This limits the damage a compromised client can do.
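This 'send only what the client can see' idea is often called interest management. A toy sketch of the server-side filter (the function name and the distance-based visibility rule are my own, purely illustrative):

```python
import math

def visible_entities(client_pos, entities, radius=50.0):
    """Return only the entities within `radius` of the client.

    The server decides, per client, which slice of world state that
    client may receive: this bounds bandwidth and also limits what a
    compromised client can learn (it holds no data on units it can't see).
    """
    cx, cy = client_pos
    return [e for e in entities
            if math.hypot(e["x"] - cx, e["y"] - cy) <= radius]
```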
Plus, it allows the world size to scale up. Don't picture a single multicast connection socket for all clients. Picture a multicast pub/sub topic model: each client would have a regular unicast socket to send to the server, but to receive, each client would subscribe to a set of server-multicast channels, one listener-socket per channel. For each event, the server would find (or create) a multicast-socket channel that has only the correct subscribers, and push the message once over that multicast socket.
The network would then do the job of making that message arrive on every client. And, of course, you can separately encrypt each multicast channel[1], handing out keys over a regular 1:1 TCP+TLS 'control' socket to the clients. (This would usually be merged with the client's unicast 'send' socket.)

Guess what design I have just coincidentally described? Cable set-top-box pay-per-view! (The original kind, not the 'over-the-top' access-on-demand kind, which is effectively equivalent to Netflix.) In legacy STB PPV, each movie stream chunk is an 'event' as described above; each stream is separately encrypted; and each set-top box gets a low-bandwidth TCP-like duplex control channel to the head office to request and receive keys for streams. People requesting the same movie at the same time get put into a queue and then bucketed on ~15min intervals; each bucket gets temporarily allocated a UHF band; and then a stream is broadcast over that band to everywhere that head office reaches, injected on the line similarly to a local public-access cable TV channel, but only existing for two hours.

[1] If your clients are okay with their bandwidth being wasted, you can be a bit sloppier and get away with fewer (or even just one) multicast socket(s).
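On the receive side, joining a multicast 'channel' from a client might look like this sketch (standard BSD-socket multicast options; the group address and port are made-up examples):

```python
import socket
import struct

def make_membership_request(group: str) -> bytes:
    """Pack an ip_mreq struct: 4-byte group address followed by the
    4-byte local interface address (0.0.0.0 = let the OS pick)."""
    return struct.pack("4s4s",
                       socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))

def subscribe(group: str, port: int) -> socket.socket:
    """Open a UDP socket subscribed to one multicast 'channel'."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))  # receive datagrams addressed to this port
    # IGMP join: tells the network to deliver this group's traffic here.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                 make_membership_request(group))
    return s
```

The server then needs only one `sendto(data, (group, port))` per event, and the network fans it out to every subscriber.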
Imagine a multiplayer game like Starcraft: you could push every player's update events to all players. Encrypted with keys in a keybag whose owners are the people the event should be visible to. Clearing fog-of-war would literally involve the client performing an action that the server responds to by sending a key to decrypt previously-received opaque event data. This matches the model of how, say, collaboration software treats group ACLs: being granted access to a new group means being given the key to decrypt the object-change-events within the global event stream that were relevant to that group.
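That keybag model can be sketched with a toy cipher. Everything here is illustrative: a real system would use a proper AEAD cipher (e.g. AES-GCM from a crypto library), not this hash-based XOR keystream, which exists only to show the one-stream, many-keys structure:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key by repeated hashing.
    NOT secure; stands in for a real stream/AEAD cipher."""
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def seal(key: bytes, event: bytes) -> bytes:
    """XOR the event with the keystream; XOR is its own inverse."""
    return bytes(a ^ b for a, b in zip(event, _keystream(key, len(event))))

unseal = seal  # the same operation decrypts

# One key per visibility group; every client receives every sealed
# event but can only open the ones whose group key it holds.
group_keys = {"red": b"red-key", "blue": b"blue-key"}
blob = seal(group_keys["red"], b"unit moved to (3,4)")
```

Clearing fog-of-war then really is just the server handing over `group_keys["red"]` so the client can unseal events it has already received.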
I've worked with enet and I'm a big fan. It is definitely well suited for games. I'm not familiar with the other two. A non-obvious feature of enet is that you can create two channels on a single connection then send permanent info (item 3 has been collected by player 2) reliably on one channel and send short term data (player 2's position is xyz) unreliably on the other channel.
The point is that the packets within a channel are ordered, but different channels have separate timelines and retry queues. That means even if a reliable packet needs a lot of retries and takes a long time getting through, it won't stall the packets on a different channel. If your target market is gamers at home, then ENet is a good way to go.
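The per-channel ordering can be sketched as independent sequencing timelines; this toy receiver (my own naming, not ENet's actual API) shows why a stall on one channel never blocks another:

```python
class Channel:
    """One in-order delivery timeline. Packets are released to the app
    only in sequence; a gap stalls this channel, and only this channel."""

    def __init__(self):
        self.next_seq = 0
        self.pending = {}

    def receive(self, seq, payload):
        """Buffer a packet; return the payloads now deliverable in order."""
        self.pending[seq] = payload
        delivered = []
        while self.next_seq in self.pending:
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered
```

If reliable packet 0 is lost, packet 1 waits on that channel's retry timeline, while the unreliable position channel keeps delivering on its own.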
Of the three different ways: 1) Custom protocol - great, but the internet doesn't support it, so this is not viable. 2) Defined protocol - again, even though there is a spec for this protocol, if you try to use it you will find it's not well supported, meaning it's going to work intermittently or slowly at best, or at worst not work at all. 3) Supported protocol with added layers - this looks like your best bet; you just add a reliability layer on top of UDP (done that myself). The reason I eventually abandoned UDP on my project (a virtual world) was that I needed my app to work within corporate environments, and many corporate routers do NOT support UDP without IT department support.
I sometimes ask the question 'how does the internet work?' in interviews.
The vague and wrong answers I hear are astonishing. I'm looking for a basic understanding of the layering of TCP, or RDP, or ICMP (or anything actually) on top of IP.
I don't get that level of insight much. The people who do have that insight usually can go deeper -- slow-start, BGP, etc. I wonder if we have a modern version of C.P. Snow's mid-twentieth-century 'two cultures' lament: the complaint that educated humanities folks don't know the basics of science. Are we miseducating our so-called 'full-stack' engineers by not offering them a basic understanding of telecom?
Or are they not listening? This goes on the list of interview questions that's great at letting the interviewer claim pretty much any candidate they want to discard is unworthy of the position. It's the tech community's variant of a voting literacy test. If you ask me 'how does the internet work' and what you really want is details on the underlying protocols, at around the range of TCP/IP, why aren't you actually asking for that? Your question is so broad that it's not shocking in the slightest that most people pick a different focal point for their answer, especially given the already hostile interview environment and the added agitation provided by such an unclear question.
If your goal is to elicit an understanding of somebody's background in TCP/IP and the surrounding protocols, ask them a question about that. If that's not relevant for the position (it's almost certainly not for most 'full-stack' engineers), switch to asking questions that are relevant to the work they'll actually be doing. I agree the potential for error needs to be actively corrected against, but I think this class of question can be quite useful for seeing where someone has worked and, ideally, whether they're comfortable saying “I don't know”.
For web developer positions, one of my favorite questions is “I'm looking at a 'Buy Now' button on a web page. What happens when I click it?” with follow-up questions to expand if someone gets stuck on a tangent. I like that more than the form above because it allows anyone to focus on the areas they know more about - e.g. a JS dev will talk about events, client-side validation, etc. a lot more than a backend person.
Some of the best people simply got to a point where they said something like “I know SSL is involved but I haven't had to work on that”; some of the classic overconfident promoters feel they have to explain everything and cook up some very fanciful stories, like the guy who told me HTTPS transactions happen over a leased-line connection separate from the Internet for security. >If you ask me 'how does the internet work' and what you really want is details on the underlying protocols, at around the range of TCP/IP, why aren't you actually asking for that?
Because the more specific the question, the less likely any interviewee is going to be able to answer it correctly. >If that's not relevant for the position (it's almost certainly not for most 'full-stack' engineers), switch to asking questions that are relevant to the work they'll actually be doing. A full-stack engineering position is really broad in scope. You need to be able to handle any type of web-related task they throw at you. From architecting new features to troubleshooting slowness.
You're expected to be a hammer they can swing at all their technical problems, which all look like nails to them. Don't like it?
Too bad, that's the job. Where a general 'how does the Internet work' question fits in is: ok, you've got complaints coming in from customer service about site slowness. How do you start to troubleshoot this problem? At what point can you just say, 'oh, it's not us, it's with the Internet,' and will you be able to explain why it's not us to the satisfaction of the people paying you to know about these things? The more abstract, fundamental knowledge you have about how the Internet works, the more you can start to nail down the scope of the problem. Also, some problems will have you writing custom protocols.
It helps to not get lost. I'll chime in as someone with a skill set that is completely not special (Linux application development, and hardware/firmware/software integration), but in certain circles is unique since it doesn't involve JavaScript (yet). I think the question is too vague, but I've asked similar questions. To kernel hackers (not application developers): tell me what you know about the Linux boot process. To anyone with a 5th-gen language on their resume: how's a hash table implemented? (Use of a feature or function without understanding why and how it works can lead to really interesting performance problems, even if the code is functionally correct.)
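For what it's worth, the answer that question fishes for fits on a page. A minimal separate-chaining hash table (one of the two standard designs; open addressing is the other):

```python
class HashTable:
    """Separate chaining: an array of buckets, each a list of (key, value)
    pairs. Resizing when the load factor passes 0.75 keeps chains short,
    so average lookup stays O(1)."""

    def __init__(self, capacity=8):
        self.buckets = [[] for _ in range(capacity)]
        self.count = 0

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))
        self.count += 1
        if self.count > 0.75 * len(self.buckets):
            self._grow()

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

    def _grow(self):
        # Double the bucket array and re-insert everything, since each
        # key's bucket index depends on the table size.
        old, self.buckets = self.buckets, [[] for _ in range(2 * len(self.buckets))]
        self.count = 0
        for bucket in old:
            for k, v in bucket:
                self.put(k, v)
```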
>To anyone with a 5th-gen language on their resume: how's a hash table implemented? (Use of a feature or function without understanding why and how it works can lead to really interesting performance problems, even if the code is functionally correct.)

Someone recently argued something similar with me, although they were even more emphatic about the importance of taking an interest in how the data structures you're using are put together. Interestingly though, there are a few well known languages where, if you assumed that the default Map was implemented as a hash table, you'd be making bad assumptions, as it's a Hash Array Mapped Trie, a data structure that certainly wasn't taught back when I was studying computer science. Unsurprisingly, despite the strong opinion that you should always find out how the things you use are implemented, my interlocutor didn't know this. And I didn't expect him to. I think that an obsession with particular shibboleths that 'any decent programmer should know' misses the point that you're a decent programmer because of what you can do when you need to, not because of the data structure implementations you've memorised.
People vastly underestimate the amount of useful information that exists that you could learn. If someone asked me why I don't know some micro-fact, I'd want to respond by saying 'because I've learned these other 10 things instead.' If you spend all your time learning some particular domain of knowledge, you're not going to be prepared to ask questions about the domains you don't know.
And most people will have a hard time seeing the value in those things they didn't learn. Especially when they're playing an authority figure. I'd argue that the cases where a hash table's performance characteristics come into play are few and far between for most developers, and when they do, it's way more relevant that the person have a general idea how to profile perf in their code and say 'yup, this is the spot that I need to worry about optimizing first'.
By marking down people who don't know how to implement a hash table, you're cutting out a large chunk of your candidate pool, and I'm willing to stake my hiring choices on the fact that a decent chunk of them could easily be taught that knowledge if it became relevant for their career on your team. I did an interview once where they asked me questions around the implementation of a hashmap. At the time I'd been working about a year out of university, and had not in that job come across any situation where I'd needed to know anything about how hashmaps worked under the hood (if anyone had a first job where you did, I'm happy for you; yours was a lot more interesting than mine). Any remnant of an explanation I'd got from my CS degree, most likely in first year, was way back in the mists of time. In the pressured situation, I simply admitted I didn't know. At this point the interviewer's face soured, and he basically wrapped up the interview straight away - it had been running about 5 minutes at that point, with only one question before that, so it was pretty obviously being cut short.
Immediately after the interview I went to Wikipedia to look up hashmaps, instantly understood the basic mechanics, and 5 minutes later could have given a stellar answer to that question. If that interviewer was looking for someone with an encyclopaedic knowledge of data structures, with flawless recall under pressure, then his question served him well in filtering me out. I sincerely doubt that's what he was looking for, though. Since that experience I've been very careful, doing interviews myself, never to ask straight-up knowledge questions like this - or at least if I veer into a question like that, allow the candidate to explore the idea (even tell them the answer, and just talk about it to see if they get it) if they initially don't know. That will get you a lot closer to what you're actually trying to assess them for. It's very easy to fall into a trap of interviewing with a 'mirror' approach, where you assume that the candidate must know what you know, or else be bad.
Kind of a paint-by-numbers approach, where the finished product looks a lot like you. But everyone learns along a different path.
You need to remember that there was a time when each concept you take for granted every day in your job was new and alien to you, too. You picked it up just fine, and so can they. In fact, this is by far the most core skill you are looking for - the ability to actually pick up new things and use them. Given the choice between two functionally correct data structures with different performance profiles, you should pick the one more appropriate to the task at hand. A lot of performance issues can be proactively avoided with a bit of analytical thinking, but you do need to know how your data structures work and what their trade-offs are. Profilers are wonderful tools, but in the 15 years I've been professionally writing software, the number of people I know that have ever used one numbers in the low 10s.
Of those, the number that can derive meaningful data from a profile is even smaller. And finally, those that can spot 'death from a thousand cuts' situations (quite common) from the proverbial method that takes 20% of all CPU time is even smaller yet. Profiling has a place, of course. But it's tricky to get right, incredibly time consuming without sampling, and often doesn't give you a complete picture.
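For concreteness, a minimal profiling session with Python's built-in cProfile looks like this (used only as an example of 'get a profile, then read it critically'; the workload function is made up):

```python
import cProfile
import io
import pstats

def death_by_a_thousand_cuts():
    # No single hot spot: cost is smeared across many tiny calls,
    # exactly the pattern a naive "top function" reading can hide.
    return sum(len(str(i)) for i in range(20000))

profiler = cProfile.Profile()
profiler.enable()
result = death_by_a_thousand_cuts()
profiler.disable()

# Render the ten most expensive entries by cumulative time.
report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(10)
```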
It's the performance equivalent of a debugger and really should be a fallback, not the only tool in your arsenal. I think perhaps there is a misconception: the idea behind a sufficiently vague question is to see how deep the person can go. A non-answer isn't such a big deal (if someone doesn't know, better for them to say it than to make up some nonsense). It also depends on the role. If I were interviewing for a network engineer, you'd better believe I expect the person to talk about IP/BGP/routing tables, and ideally be able to talk transport protocols and further. If they have no idea, and just know how to 'conf t' or click in a GUI, that's pretty informative.
Which is why, thankfully, it's possible for the interviewer to have a dialogue. You don't have to lead in with 'How does the internet work' to get that level of depth, and in fact it's not usually the best way to find if a candidate can deep dive. You can lead in by asking the candidate how something higher-level works, and then as they provide feedback, take the conversation towards lower-level network protocols and layers.
That way, the person answering doesn't need to guess either your desired starting depth or direction, and they can focus on communicating their knowledge to you. Interviews tend to be far more productive when everybody involved is working to help the candidate provide as much of their knowledge as possible, without setting up confusing questions or overly hostile hurdles to 'challenge' them. The question is a stealth customer-support question. I'm not talking about interviewing for customer support, but unless you're hiring a hermit they'll be talking to other, probably non-technical people. I guess the standard HN car analogy would be asking a car mechanic how suspension works: you want to hire the mechanic that makes the questioner feel happy, comfortable and informed, not the mechanic who says 'F off noob, LMGTFY', or gets 'stuck' in a deep corner of Hooke's law analysis of torsion bars vs axial springs regardless of the questioner's interest, or simply starts making stuff up about left-handed crescent wrenches and frequency grease.
Also, you want a reasonably fast answer, not beating around the bush for an hour without figuring out what the customer is looking for. It's not just about dealing with non-techies, anyway. I could elaborate at length on ye olde FDDI optical line code, making analogies with good old AMI/B8ZS encoding on T1 circuits to maintain clock sync, but I might have to talk to a guy about 2015-era photodiode manufacturing trivia, or the photodiode guy might have to talk trivia with me, and it's interesting to see the reactions. Usually tech people get along better with tech people, even when non-tech people ask the same question, which is kind of interesting. As a strategy to 'win' at an interview, it's pretty hard to beat the table-turning approach of 'you tell me your mental model and I'll be your tour guide; we'll straighten out misconceptions as you steer us down the path to whatever it is you want answered, until you reach your fill of answers.' Gotta be brave for this one if they inevitably take some weird side detour, and yes, they will notice if you start steering away from your own weak spot.
Very little over 10G exists on any single wavelength; it's all done by stacking different wavelengths. Much as your cable TV has a zillion 6MHz-wide channels holding HDTV or whatever, you can run 4, 10, whatever, 10G Ethernet light wavelengths on a piece of fiber at slightly different colors. Note the difference between latency and bandwidth: Infiniband isn't really all that fast, but it is very low latency.
Infiniband-FDR 'fourteen data rate' is technically over your speed criteria. You start having problems with infinitely narrow pulses because glass isn't linear enough and lasers aren't monochromatic enough, and multipath etc. That question might be a conceit. I like to think that stuff is pretty important--I used to do that level of dev for a living. But does it really matter?
That stuff is all pretty well abstracted these days. When was the last time the three-way handshake affected anyone? I'd compare it to the fact that I know jack shit about how cars work. I can't tell a spark plug from a ball bearing--I just take my car to the dealer when it makes a funny noise and keep writing checks until it goes away. And I'm okay with that.
>Are we miseducating our so-called 'full-stack' engineers by not offering them a basic understanding of telecom? Or are they not listening?

What's that 'stack', anyway? Usually I get the impression that people mean frontend (user interface, so usually HTML, JS, maybe a thin web app) and backend (API service, business logic and storage in a database). Is understanding of the OS, of the memory model, of networking protocols (and possibly even how they are implemented), of the specifics of file systems, etc. part of it? What of the instruction set of the processor, of the actual silicon layer? Understanding the physics of the embedded transistors? Understanding the quantum and thermodynamical problems that chip makers run into?
The answer is probably different for everybody who uses that term, since it's a term people use for self-marketing, or for vaguely describing requirements of a job. Every time I've heard the term full stack used, it referred to web developers, and I think it just implies that someone could go from nothing to having a full website without outside help. That is, they can create HTML and JavaScript front-end and a back-end using a web framework and a database. Anything outside of that is pretty much ignoring common usage. Obviously it isn't a situation like 'if you wish to create a website from scratch, you must first create the universe' because that would be a singularly unhelpful definition of full stack. While I totally agree with your point, I think the problem goes both ways.
A lack of understanding of science causes serious problems, especially when you look at policy making, for example. However, there are plenty of scientists and engineers with completely wrong-headed ideas about social issues, history, and so on. I would argue there needs to be a lot more crossover: Liberal Arts majors should probably have to get through Chemistry, Physics, Calculus, Biology and Electronics, and STEM majors should probably have to get through (sorry, US-centric) Women's Issues, Western and World History, African-American Studies, Introductory Constitutional Law, etc.

>Their loss I suppose.

Our loss, really: fewer good-quality games for everyone. I threw in the towel at GameDev for reasons very similar to yours ('premature optimization' also comes up far too frequently) - from what can only be people who have never even worked on the specific components of games that they are claiming knowledge of.
The StackOverflow system somewhat fails on GameDev, because there aren't enough knowledgeable people on the site to issue corrective votes (likely because these people are scarce in general). Incidentally the bad advice is bad on many levels: even if you somehow ignore UDP it's still bad advice to do 'RPC over TCP'. TCP works best (fastest, least latency) when the send buffers are full at both ends - which RPC, by definition, does not do.
This means that these people simply couldn't have researched the fundamentals of the protocol that they assume to be 'good enough'; how, then, could they be aware of how it is better or worse than the alternatives? World of Warcraft is a pretty slow-paced game on the inside; it just looks like a fast real-time game, but it's almost turn-based when you analyze the gameplay. When you cast a spell, you'll instantly see the animation that your character is casting the spell, but the actual effect comes only when the animation is complete. This animation is used to hide the network latency (other players might see the animation played back slightly faster to compensate for the lag). Further, once you cast the spell, it must succeed, and you can't really do anything before it has completed. In other words, reliability is essential and latency can be hidden. Contrast this with a fast-paced, (soft) real-time multiplayer game.
When a player jumps, the jump has to start immediately. If you want to jump and shoot, the shots have to be fired right away, you can't wait for the jump to complete before shots are fired.
With TCP, you'd be stuck on the ground until the 'jump' packet is re-transmitted (two or more network round trips, tens to hundreds of milliseconds, very noticeable) and no shots would be fired until the character is off the ground. In a real time game, old packets are next to useless. Re-transmitting is wasting bandwidth and causing lag by blocking on information that is no longer useful. The networking model (like OP describes) is a constant stream of packets containing redundant information, minimum latency is essential and loss of packets is tolerated. You should be aware that TCP vs. UDP becomes apparent only when network conditions are bad.
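That 'constant stream of redundant state' model is simple to sketch. Each datagram carries a sequence number and the full current state; the receiver just ignores anything older than what it already has (a toy sketch, not any particular engine's API):

```python
class LatestStateReceiver:
    """Keep only the newest snapshot seen so far. Lost packets are never
    retransmitted: the next snapshot supersedes them anyway."""

    def __init__(self):
        self.seq = -1
        self.state = None

    def receive(self, seq, state):
        """Apply a snapshot if it is newer than the current one.
        Returns True if applied, False if stale or duplicate."""
        if seq <= self.seq:
            return False  # old news: useless in a real-time game
        self.seq, self.state = seq, state
        return True
```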
You could choose either and have satisfactory results 90% of the time, but once packets get lost, TCP does the wrong thing when it comes to fast-paced real-time games. If you do take the time to read Gaffer's original article, you should see that his protocol is different from TCP in many ways. Yes, it does similar things (reliability, flow control, etc.) but in a completely different manner, tuned for a completely different use case. I forget where I read this, but WoW is a bit of a special case because several issues were just circumvented through game design.
Example 1: there's no collision detection between players, i.e. you can block neither PCs nor NPCs. Players blocking each other wouldn't have been possible with TCP at all. Example 2: you don't aim and fire and hit/miss shots due to timing and precision (twitch) - you select a target and execute an action, which is sent to the server, which rolls a die to see the outcome. It was fairly usual for the early game to break down when a lot of players congregated in a small area (Hogger or Tarren Mill, anyone?), where an instant spell took several seconds to execute and PCs were constantly teleporting around (after running into a wall for several seconds) because their positioning updates took too long. But because of the style of the gameplay, this still worked well enough in most situations. What I want to say is that WoW worked around TCP's performance issues by adjusting the game design.
I can't fathom the reason for this - it might have to do with their focus on low hardware and connectivity requirements (for countries with unreliable connections). But even then I don't see why TCP would have been the better choice. Check out this presentation on the UDT protocol: it details the problems of TCP, names most of the alternatives, and describes the UDT work. I used to use UDT to eliminate TCP's problems.
It's just bad design, plus the fact that changing it usually requires a kernel modification. Hence all these application-level alternatives, with UDT being the most general-purpose and one of the best. It would be interesting to see if someone can adapt it to games. For now, it's probably best to use a game-specific model as in the article and comments: UDT for other stuff where you control how the app works on both ends, TCP where you control only one side. TCP is suitable for 'flat' static content, which you only retrieve at punctual intervals, and which is relatively large.
A good example is a file which won't change for the next 5 min, and you know that you want that exact file BUT you are ready to wait until it loads perfectly. UDP is for real time, continuous data, when you can't wait for your hardware to check for packet consistency.
A good example is paper mail versus a telephone call. The paper mail will land, but you don't care if your phone line cuts out for 1 second; you just wait for the other guy to check if he can hear you. UDP is really much better when you want performance, especially when it comes to responsiveness and latency, but can afford to discard packets that don't arrive reliably.
I think it boils down to people not realizing how hard it is to transmit data over long distances, and assuming that data transmission is 100% good, which no protocol can take for granted, because it never is. You can have a lot of interference in your network, but TCP will always manage to land those packets.
I think the topics should be split: you have 'networking', which is about connecting computers, typically using TCP, UDP, RDMA; and 'distributed systems', which is about how to use connected computers to achieve something. And yes, both are difficult, but for different reasons:

- networking is hard because the APIs are old, platform-specific and probably don't behave very intuitively, so trying to build messaging systems on top of sockets is not simple (but you will only find that out after you've moved beyond the prototype stage).

- distributed systems are hard because the odds of successfully exchanging messages aren't in your favour.

It's not that networking is really hard, it's that development of synchronized games that run over unreliable links is really hard.
If you want a simple solution for a simple game, here's one. Use a fixed-format UDP packet to pass current location and orientation of the player and whatever else changes rapidly. This should be stateless; if you lose a packet, the next one has a full state update. This is enough for a simple FPS game.
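A fixed-format packet like that is a one-liner with `struct` (the exact field layout here - sequence number, position, orientation as float32s - is just an illustrative choice):

```python
import struct

# '<' little-endian, I = uint32 sequence number, 3f + 3f = position
# and orientation as float32 triples. 28 bytes total, fully stateless.
STATE_FORMAT = "<I3f3f"

def pack_state(seq, pos, orient):
    return struct.pack(STATE_FORMAT, seq, *pos, *orient)

def unpack_state(data):
    seq, px, py, pz, ox, oy, oz = struct.unpack(STATE_FORMAT, data)
    return seq, (px, py, pz), (ox, oy, oz)
```

Because every packet carries the full state, the receiver only needs a 'keep the newest sequence number' rule, and a lost packet costs nothing but one tick of staleness.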
Anything else goes over one or more TCP connections. If you have to load assets (level maps, textures, etc.) just use HTTP over TCP, which means you get to use standard servers and client software for that stuff. If your game is too complex for that model, you're probably going to have to do some serious thinking about distributed synchronization. The single problem with TCP this rightly points out is head-of-line blocking.
But overcoming that still leaves you with the problem of coping with out-of-order or missing events at the app level, with clients and servers seeing events in differing sequences (e.g. the packet sequence move, duck, fire - now shuffle the move around). I witnessed one game engine project implement custom networking on UDP and then disable unreliable and out-of-order messages because of game logic headaches. And the payoff is quite small, since packet loss is rare in healthy networks and TCP handles the rare loss pretty efficiently based on ack clocking, not timeouts (fast retransmit - basically the same as his idea of 'redundantly sending un-acked data').
You shouldn't just compare C++ libraries. First, define your requirements, both functional (what your networking should do) and technical (latency, throughput, available bandwidth, operating systems of both clients and servers). Second, pick a protocol stack. Only then look for those C++ libraries. If on step #2 you choose e.g. Protobuf + UDP, you'll have a totally different set of available libraries than if you choose SOAP + HTTPS. Besides, very few of those libraries work well on multiple platforms, e.g.
Libuv does, while libevent does not. In addition, C++ is not the only choice. I once developed a networking module for a game in C#. I have a few friends who are doing the same in the finance industry. Not every game is sensitive to network latency.
Not every game is tolerant of packet loss. If those 4-8 players in the game are playing 3-dimensional virtual-reality poker, TCP or even HTTP will work just fine. I have read the original thread, and the author says somewhere in the comments, “it is likely to be similar to that you would see in something like Diablo II. Players exist in a world, attack enemies and interact with NPCs (including trading).” Which makes the article irrelevant to the original question. Games like Diablo 2 are not real-time; they can usually use TCP just fine. Not supporting it because why though? IP multicast routing requires end-to-end state for each (source, group).
Further, the boxes on the market for doing multicast (not a little GPCPU router) have limitations concerning how they do multicast within the box that makes scaling multicast a real pain (basically, internally, they will send forwarding engines traffic they don't need and choke up there because it runs out of internal ways to express the separate streams). Some of this can be shortcut by tunneling techniques, but it is just masking the situation and adding another layer of network complexity (in an area where humans still try to manage/troubleshoot mostly by hand). >He could or should have replied with less attitude and more facts, though. I can see his frustration: ignorance frequently has the loudest voice - especially on The Internet.
* 'TCP is good enough for Minecraft, it's good enough for you.' Blissfully ignores the numerous networking-related bugs that have been present in Minecraft for years. * 'Premature optimization is the root of all evil.' Blissfully ignores the remainder of Knuth's quote. * 'UDP is too complex, you won't get it right.' Blissfully ignores any and all challenges, never improving their ability.
* 'The Internet is fast enough.' No, it isn't.[1] * Learns any or all of the above from someone else and believes it because it requires less effort. Here's someone who has been putting out top-notch content on how to solve the hard problems, for nothing but the common good: educating people on how to make games. What do people do? Challenge those facts with superstition and unbelievable ignorance. He's got all reasons to be frustrated; I didn't even write those beautiful articles and I'm frustrated.
He's been putting in so much effort, and when people blissfully ignore it, yeah, it's going to be frustrating. Let him have his rant - at least he's not throwing in the towel. Everyone has their bad days. I don't know if this is a general failing of people's critical thinking abilities, a sloppiness in their reading, or something else, but I have noticed over the past (feels like) few years a growing problem: a lot of folks conflate personal insults ('you, ggambetta, are clearly a dunderhead when it comes to reading comments') with general sentiments about a group or abstract population ('Hacker News posters tend to be overly sensitive to the point of being crybabies'). A great deal of the flavor and comedy in writing, especially pieces that are self-identified rants, comes not from personal insults but from hyperbolic and sometimes vitriolic statements about an abstract other. Which specific behavior concerns you?
My only real complaint is that he's addressing a particular person here. In his shoes I would have anonymized the source of the text and maybe blended material from a few different people.
I'd rather the individual didn't accidentally recognize their words. Other than that, I think I'm ok with people making exasperated rants. For those of us who actually understand the topic or have experience of teaching, I think a rant like this can at least be funny, and often can be cathartic. One shouldn't howl at the noobs, but howling at the moon seems fine by me. I think it's also helpful for novices to see the occasional bile-dump like this.
The question comes across to me as kinda lazy. The querent never really took the time to understand the core technologies and instead just spent a few months implementing broken stuff. But rather than think it through, he just waves away an expert answer as 'kind of silly'. A piece like this can help novices see how frustrating and self-defeating that kind of laziness is.
I know I've benefited from seeing others get roasted for mistakes I could well have made. You are probably right -- it was bad taste. I got a bit annoyed by the tone of the article; I don't think personal insults have a place in a technical discussion. I taught Computer Graphics for several years, and I came across two kinds of students: uninterested, so no matter what you do they probably won't learn; and interested and trying hard but not understanding, in which case it's my responsibility as the teacher to find a way to get the idea into their heads. In this latter case I believe asking someone who is trying (but failing) to understand what I'm saying 'What the fuck is wrong with you?'
doesn't lead to better understanding on their part. Yes, I promote my own work, but this is because I feel it caters to a public that is usually left out -- the absolute beginner. The kind of people who are missing a mental leap or two to get the concepts but are either ignored or essentially called idiots for not understanding.
Gaffer is absolutely not saying 'What the fuck is wrong with you?' to beginners. He's saying it to the so-called experts who posted nonsense like this: 'Yeah.
Gaffer is well-known as a guide, but also kind of horribly flawed. Question 1 for networking development: Do you need everything that TCP offers?
If yes, then use TCP. You're not going to outperform TCP to do what it's good. If not, read on.' This was a gilded +20 comment in the thread despite a) being nonsense and b) having the temerity to call his advice 'horribly flawed.' He is an expert with multiple shipped AAA games. And he prefixed his rant with RANT MODE ON.
Sometimes you just need the cluestick. Even looking at it like that, it's wrong. TCP makes a lot of tuning choices that are baked in, and it assumes that extra round trips are no big deal. Crypto, by contrast, does not have all that many tradeoffs: you just need an expert implementation, which is why doing it yourself is dangerous and unnecessary. For something that is low-bandwidth and latency-sensitive, you can take TCP, make minor adjustments, and come out with something that is far better suited.
Even something as dumb and easy as sending all packets twice could turn a semi-common jitter into an ultra-rare one. If you could open a TCP socket with forward error correction options and disable head-of-line blocking, then you could probably argue against custom TCP-like protocols. But that's not the world we live in. I have mixed feelings about it. A student with character will at least read the material, try to understand it, and then ask questions about confusing points. So, I put it to the test by loading up the article the Redditor saw. The first paragraph explained it's part of a series with this link: the very first link explained some of what the Reddit poster asked, with others covering how to solve the problem and why each method was used.
Reading that person's post shows he or she took a casual glance at Part 4, didn't attempt to read the rest, drew conclusions, then smeared the same article on Reddit, pretended to know what he or she was talking about on advanced aspects, and then asked some questions. This is the kind of behavior that makes the Internet drop verbal bombs on people, as punishment and to deter future instances of laziness and arrogance. So, when seeing this, the writer has several choices: reward the behavior by rewriting the article's points in a new comment just for that lazy person, w/ specific references to things he or she ignored; link back to the original w/ guidance on the specific parts he or she ignored; or blast him or her for being a lazy idiot who writes criticisms of stuff he or she doesn't even read. I think the latter is an entirely valid strategy in such a situation. It has the side benefit of preparing the person for the fact that the Internet will drop bombs on them when they display laziness, ignorance, and a sense of authority at the same time. Hopefully the deterrent effect kicks in, with the poster reading the next article and its links first.
And only then questioning or criticizing part of it. Less likely to get rants that way from that writer, other writers, FOSS projects, and anyone else that doesn't tolerate such bad behavior. You're much more patient than most.
Nothing wrong with that either. Let's just differentiate between people making solid attempt to learn and people like this poster. Nobody's insulting the former. If they did, I'd totally agree with you.
As far as this poster goes, I'd have grilled him or her too. This is not one of them. Frankly, unless you have a reason NOT to use TCP (and a very good one is realtime lossy communication), don't try to implement your own damn custom protocol on top of UDP that's just going to be a shittier TCP. This doesn't mean that TCP is perfect, but if you want lossless communication you will PROBABLY be happier there. If I'm writing a chess game, I could easily just stream the text format of the games over an IRC channel.
Many things, including games, don't necessarily value high bandwidth or low latency over consistency. But the attitude is definitely not warranted. >Do you know if there is a standard way of accomplishing this? Synchronizing clocks is one of the difficult problems in distributed computing. Games tend to avoid doing proper synchronization of clocks by working in discrete time steps (frames). Because games run at 15-60 frames per second (a bit more for simulators), you can get away with synchronizing the clocks to within a few frames, i.e.
an accuracy of tens of milliseconds is good enough to provide a perception of real time. In other words: game time is measured with an unsigned integer that tells how many time steps have elapsed since the game started. Sorry, but that needs a rather large asterisk applied to it. In the current era of 120Hz displays and a billion-dollar market for video games, it's becoming very evident that the 'best' games, not in the AAA-title sense but in the 'so popular it becomes a sport with professional players' sense, are those that have put a LOT more effort into the implications of their network synchronisation algorithms.
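The 'unsigned tick counter' notion of game time described above is usually implemented as a fixed-timestep loop: real elapsed time is accumulated and converted into whole simulation steps. A minimal sketch, assuming a tick rate I picked arbitrarily (the class and method names are mine):

```python
class FixedTimestep:
    """Accumulate real time and emit whole simulation ticks; the tick
    counter itself is the game's notion of time, not any wall clock."""

    def __init__(self, tick_rate=10):  # 10 Hz chosen only for illustration
        self.dt = 1.0 / tick_rate
        self.accum = 0.0
        self.tick = 0  # unsigned counter: time steps since the game started

    def advance(self, frame_seconds):
        """Feed in one rendered frame's real duration; return how many
        simulation steps to run this frame (possibly zero)."""
        self.accum += frame_seconds
        steps = 0
        while self.accum >= self.dt:
            self.accum -= self.dt
            self.tick += 1
            steps += 1
        return steps
```

Two machines running this loop only need to agree on the tick rate and, roughly, on which tick is 'now' - which is why being within a few frames of each other is good enough.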
Some game engines have extremely long tail lives as popular mods take advantage of reliable network behaviour to create gaming experiences that don't infuriate passionate players, keeping them alive many years after the game the engine was written for has moved on. Usually these days we use client/server for most games, meaning that initially the server is taken as being authoritative on time. It can collect and use a running average of pings to/from all clients, which it can then use to order events (i.e. did I do X before you did Y?). But that's where the really hard (and fun!) coding and choices start, such as future prediction and unrolling of actions. Roughly speaking, for prediction (to lessen the effect of perceived lag), the server knows I was moving forward up to the last packet received, so it predicts I'm probably still moving forward.
It can then use the predicted values in all calculations until it receives an actual packet informing it what actually happened (at which point it will need to decide how to handle possibly conflicting information, which is handled the same as below). For unrolling: if, for example, you have a 50ms ping and I have 100ms, then at T+51ms you tell the server that you killed me, but I do the same at T+100ms. Effectively, compensating for the 50ms of lag difference between us, you should die, since I killed you 1ms before you killed me. BUT now it really depends on the server's choices: does it kill me because your message got there first (old-school FPS), or does the server wait for my info, since I'm involved in the action, and then retroactively adjust you? Worse, since the server only gets my info at T+100, you will at best only learn of it at T+150, meaning that you will perceive 99ms of being alive (during which you may even have killed someone else) which the server will then have to discard before killing you (some modern FPSes use this scheme; you'll notice it when you peek out around a corner from safety and duck back, before suddenly being dead from an impossible shot). Also tough is the fact that it's not a single unroll: during the extra 99ms of being alive you would still have been sending packets to the server, which then all have to be unrolled.
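The ordering decision in the duel above can be sketched in a few lines. Note one assumption carried over from the comment's own arithmetic: it treats 'ping' as the full one-way delay (50ms ping means the kill message took 50ms to arrive), so we rewind by the whole figure. All names and numbers are illustrative, not from any real engine.

```python
def action_time(arrival_ms, ping_ms):
    """Estimate when a player actually acted by rewinding their delay
    from the time their message reached the server."""
    return arrival_ms - ping_ms

def resolve_duel(kill_a, kill_b):
    """Each kill report is (player, arrival_ms, ping_ms).
    The lag-compensating server awards the duel to the earlier action,
    not to whichever message arrived first."""
    return min((kill_a, kill_b), key=lambda k: action_time(k[1], k[2]))[0]
```

With the comment's numbers - your kill arriving at T+51 with 50ms ping, mine at T+100 with 100ms ping - the rewound action times are T+1 and T+0, so the server rules that I killed you first, exactly the retroactive adjustment the comment describes.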