Re: [forum] About our effort at NoMachine
On Wednesday 26 March 2003 07:15 pm, Jim.Gettys@hp.com wrote:

> > > 7) image transport is a real issue, whether for glyphs or other images.
> > > This is in part due to image sizes having grown due to greater depth
> > > (most people have 32 bits/pixel these days). So besides compressing
> > > glyphs, having the window system able to better take images in pretty
> > > native formats would be a serious win for a lot of applications.
> >
> > It is also due to the inability to do long term caching of images in
> > the server/on disk _between_ server runs on low bandwidth links
>
> Yup: this is what the Web does extensively. The trick is that round trips
> cost you dearly, and how do you know what is cached?

It's simple. Just negotiate the cache at startup and let both sides track
what is added and discarded. Only the X client side manipulates the cache of
requests stored at the X server side, and only the X server side manipulates
the cache of events and replies stored at the X client side. To negotiate the
persistent cache to be restored, just send a list of the MD5 checksums of the
caches available at one side and let the remote side match a local copy. This
is the way we've done it, and it works very, very well.

> If you want to go to really low bandwidth, that is clearly a likely win,
> and proxy pairs can certainly share such information.

Yep. Keith Packard, if I remember correctly, seems not to agree on proxies
being used as a long term solution, but I think they offer the best
compromise. There are optimizations that proxies can do that cannot be built
into the X protocol. In NX I implemented a mechanism to stream images bigger
than a given threshold. When the image has been completely transferred to the
remote side, the X client gets an event and can start using it. To build this
into the X protocol would not make sense. Proxies can efficiently multiplex
the traffic and optimize bandwidth usage according to the type of traffic
being generated.
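The startup negotiation described above can be sketched in a few lines. This
is a toy illustration only, not the actual NX code; all names are made up.
One side offers the MD5 digests of the caches it persisted to disk, and the
other side answers with the subset it also holds, so only those caches are
restored:

```python
import hashlib

def digest(cache_blob):
    # MD5 over a serialized cache, used as its identity in the handshake
    return hashlib.md5(cache_blob).hexdigest()

def negotiate(offered_digests, local_caches):
    """Hypothetical handshake step: keep only the digests both sides hold.
    offered_digests: list of MD5 hex strings sent by the peer at startup.
    local_caches: dict mapping MD5 hex string -> persisted cache blob.
    """
    return [d for d in offered_digests if d in local_caches]

# One side offers the digests of its persisted caches; the other matches
# them against its own copies and restores the intersection.
local = {digest(b"glyph-cache-v1"): b"glyph-cache-v1",
         digest(b"pixmap-cache-v1"): b"pixmap-cache-v1"}
offered = [digest(b"glyph-cache-v1"), digest(b"stale-cache")]
shared = negotiate(offered, local)   # only the glyph cache survives
```

The point is that this costs a single round trip at session startup, after
which both sides track additions and evictions in lockstep.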
Finally, proxies can leverage the cross-client recurrence of common X
operations to achieve significant compression ratios.

> X does work pretty well over broadband for most applications, and
> moderate amounts of work can improve it greatly.
> The question in my mind, given the Web's existence and network
> technology improvement is how far in that direction to go.

I don't think X alone works pretty well on the Internet. The proof of this
is that a lot of people want to get rid of it. I don't even think the Web can
be a replacement for X :-). I rather think X can be a good replacement for
the Web in many cases, while the Web can progressively become a server side
technology. I think X plus good X proxying can work quite well, but X still
lacks a good architecture for making X applications available over the
Internet. This, of course, has nothing to do with the X protocol and can be
built at a much higher level.

> But we want to make sure to only improve the local case (the most common),
> and never make that worse.

Probably the ability to run remote applications at an acceptable speed can
be guaranteed by proxying, while the differences between remote and local
display can be handled by toolkits. After all, toolkits already do the hard
job of choosing the best way to draw according to the available extensions.
In NX we hide some extensions that are not well compressed or are not
implemented, and both GTK and QT are able to deal with this without any
problem. Having to run on so many different X servers, they probably had to
learn the hard way :-).

/Gian Filippo.
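To make the "cross-client recurrence" idea concrete, here is a toy sketch
(again illustrative only, not the NX implementation): each proxy end keeps an
identical table of messages it has seen from any client, so a repeated
operation travels the link as a small slot index instead of the full payload.

```python
class RecurrenceCoder:
    """Toy encoder: a message seen before, from any client, is replaced
    by a short cache index. Both proxy ends build the same table, so
    decode mirrors encode. All names here are hypothetical."""

    def __init__(self):
        self.index = {}   # message bytes -> slot number
        self.slots = []   # slot number -> message bytes

    def encode(self, msg):
        if msg in self.index:
            return ("hit", self.index[msg])   # send a small integer
        self.index[msg] = len(self.slots)
        self.slots.append(msg)
        return ("miss", msg)                  # send the full payload once

    def decode(self, token):
        kind, val = token
        if kind == "hit":
            return self.slots[val]            # look the message up locally
        self.index[val] = len(self.slots)     # register it, like the encoder
        self.slots.append(val)
        return val

# Two clients issuing the same request: the repeat costs only an index.
enc, dec = RecurrenceCoder(), RecurrenceCoder()
msgs = [b"PolyText glyph run", b"CopyArea", b"PolyText glyph run"]
out = [dec.decode(enc.encode(m)) for m in msgs]
```

A single client rarely repeats itself enough to matter; it is the recurrence
across all clients sharing the proxy pair that makes the table pay off.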