On 03/07/2010 11:56 PM, Pete Zaitcev wrote:
> On Sun, 07 Mar 2010 17:51:41 -0500 Jeff Garzik <jeff@xxxxxxxxxx> wrote:
>
>> If a site implements distinct endpoints for each tabled node ("t1.example.com", "t2.example.com", etc.), then redirects should result in directing clients to the current master, assuming that slaves have a deterministic manner of discovering the current master.
>
> I did not implement it, but it's trivial in CLD. Just create a file named "master" in the group. For added cleverness, split up the notions of "group master" (who owns the file and most of the DB, and knows about each master for each bucket) and "bucket master".
>
>> Such a setup also makes use of IP Virtual Server impossible.
>
> Bah, big deal.
It is, if we actually want to attract users.
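To make the redirect scheme above concrete, here is a minimal Python sketch of how a non-master node could bounce a client to the current master under the per-node endpoint scheme ("t1.example.com", "t2.example.com", ...). The function name and signature are hypothetical illustration, not tabled's actual code; only the endpoint naming and the "redirect to current master" idea come from the thread.

```python
def route_request(self_node: str, master_node: str, path: str):
    """Return (status, headers) for an incoming write request.

    If this node is the current master, serve it locally (status 200 as a
    placeholder); otherwise answer with a 307 redirect so the client retries
    against the master. 307 preserves the request method, so a PUT is
    re-sent as a PUT rather than being downgraded to a GET.
    """
    if self_node == master_node:
        return 200, {}
    return 307, {"Location": "http://%s%s" % (master_node, path)}
```

A slave only needs a deterministic way to learn the master's name (e.g. Zaitcev's "master" file in the CLD group) to fill in the second argument.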
>> But that brings us to our second problem, a common problem in computer science: the thundering herd. When a tabled endpoint crashes or loses its master status, clients must move en masse to the new master. As client counts increase, this becomes a "thundering herd" DDoS'ing the new target machine.
>
> Not a problem in practice, I expect. S3 clients are not clients that sit connected and then are notified about a failover. Instead, they connect, perform operations as fast as they can, and quit. Therefore, there is not going to be a spike in traffic because of the failover that is significantly greater than the normal operations rate.
Think standard web browser behavior, including HTTP 1.1 pipelining and persistent connections... Think also about the length of time it takes to negotiate a new master, and what the clients will do in the meantime. Major thundering herd.
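One standard mitigation for the reconnect stampede described above is to spread client retries out with randomized ("jittered") exponential backoff, so that clients do not all hit the new master in the same instant. A small illustrative sketch, not taken from tabled; the function name and default constants are made up:

```python
import random

def reconnect_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter.

    Each client independently picks a uniform delay in
    [0, min(cap, base * 2**attempt)], so retries after a failover are
    spread across a widening window instead of arriving as one herd.
    """
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

The cap keeps long-failing clients from backing off forever, while the jitter is what actually breaks up the herd.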
>> Ideally, we want to enable writing on every tabled node in a cell. Given that the metadata is the only bit that _must_ be performed on the master, it seems like the least-effort, least-cost solution for us is for slaves to send a "write metadata" message to the master, and then perform the data write itself.
>
> I would not do it, at least not yet. A better effect would be to have separate DBs with separate masters for each bucket.
No, that just multiplies the problems already inherent in the current design, as well as creating new ones. BDB simply isn't built for that, so scaling that solution is a major problem. A bucket should be a scalable unit, and per-bucket masters do nothing to make it one. Whereas if we solve the current problem described in $thread, buckets become automatically scalable as well.
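For reference, the "forward only the metadata" write path proposed earlier in the thread could look roughly like the sketch below. The Node class, send_to_master, and the message fields are all invented for illustration (here the master's database is stood in for by a plain list); tabled's real storage and RPC layers differ.

```python
class Node:
    """A hypothetical tabled slave: stores object data locally, forwards
    only metadata updates to the current master."""

    def __init__(self, name, master_log):
        self.name = name
        self.data_store = {}            # local object data (stand-in for chunkd)
        self._master_log = master_log   # stand-in for the master's metadata DB

    def send_to_master(self, msg):
        # In real life this would be an RPC to the master; here we append.
        self._master_log.append(msg)


def handle_put(node, key, data):
    # 1. Write the object data locally -- no master involvement needed.
    node.data_store[key] = data
    # 2. Forward only the small metadata record to the master, which
    #    serializes it into the single (BDB-backed) metadata database.
    node.send_to_master({"op": "write_metadata", "key": key,
                         "size": len(data), "node": node.name})
```

The point of the design is that the serialized, master-only path carries only small metadata records, while bulk data writes fan out across every node in the cell.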
> Another thing, how many clients do you think tabled is going to have accessing it at any given time in any realistic deployments for years to come? How about ONE (although, it may be multi-threaded)? One retarded thing we can do now is to rush into implementing things like slave-to-master metadata forwarding when we do not have a single installation to guide us.
I am glad Apache httpd hackers never set such strict, low goals :) tabled is a web server, with all that entails, because the S3 API is often used to front a web site (or at least the static portion thereof). Standard web browser behavior and multiple, pipelining clients are part of tabled's client base.
If we fail to understand and solve problems that people already solved ten years ago, then tabled certainly will not attract end-user installations. Understanding standard web server design, and the problems and solutions that arose from it, is very important.
Jeff