Maximizer really shouldn't be run over a WAN connection in its standard form (this is my own personal opinion), as it is a client/server application designed ideally for 10/100 Mb LANs.
The Pervasive engine that is the underlying foundation of Maximizer is a Transactional Engine
Technically, the TCP/IP protocol, whether run over a LAN or a WAN, has a number of built-in limitations like any other protocol. How those limitations affect your application depends on whether it is a transactional or a streaming application.
Transactional applications are affected by the overhead required for connection establishment and termination. For example, each time a connection is established on an Ethernet network, three packets of about 60 bytes each must be sent, and approximately 1 RTT (Round Trip Time) is required for this exchange. When a connection is terminated, four packets are exchanged. This overhead is compounded when an application opens and closes connections often, which Maximizer, in all its wisdom, does.
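To get a feel for this overhead, here is a minimal Python sketch (the host and port are hypothetical placeholders, not real Maximizer settings) that times a burst of connect/close cycles, each of which pays the full handshake and teardown cost:

```python
import socket
import time

HOST, PORT = "192.168.0.10", 1583   # hypothetical server address and port

# Time 50 separate connect/close cycles; each one pays the full
# three-packet handshake (~1 RTT) plus the four-packet teardown,
# just as a chatty client/server application does.
start = time.time()
for _ in range(50):
    s = socket.create_connection((HOST, PORT), timeout=5)
    s.close()
print("50 connect/close cycles: %.3f s" % (time.time() - start))
```

Run this once across the LAN and once across the WAN and the difference in per-connection cost usually speaks for itself.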
In addition, when a connection is established, "slow-start" takes place. This artificially limits the number of data segments that can be sent before acknowledgement of those segments is received, a mechanism designed to limit network congestion.
When a connection over Ethernet is first established, regardless of the receiver's window size, a 4-kilobyte (KB) transmission can take 3-4 RTTs due to slow-start.
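The arithmetic behind that 3-4 RTT figure can be sketched as follows, assuming an initial congestion window of one segment and the classic 536-byte default MSS; real stacks vary:

```python
# Idealized slow-start arithmetic: assumes an initial congestion window
# of 1 segment, the classic 536-byte default MSS, and window doubling
# each round trip. Real stacks (and delayed ACKs) will vary.
MSS = 536
data = 4 * 1024          # the 4 KB transmission from the text
cwnd, sent, rtts = 1, 0, 0
while sent < data:
    sent += cwnd * MSS   # one congestion window's worth per round trip
    cwnd *= 2            # slow-start doubles the window every RTT
    rtts += 1
print("4 KB takes about", rtts, "round trips under slow-start")
```

On a LAN with sub-millisecond RTTs those extra round trips are invisible; on a WAN with 100 ms+ RTTs they add up fast.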
A TCP/IP optimization known as the Nagle algorithm can also limit data transfer speed on a connection. The Nagle algorithm was designed to reduce protocol overhead for applications that send small amounts of data, such as a Telnet session sending a single character at a time. Rather than immediately sending a packet carrying full headers but very little data, the stack waits for more data from the application, or for an acknowledgement, before proceeding.
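Where an application genuinely needs small writes to go out immediately, Nagle can be disabled per socket with the standard TCP_NODELAY option; a minimal Python sketch (with a hypothetical address and port):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable the Nagle algorithm so small writes go out immediately
# instead of being coalesced while the stack waits for an ACK.
# Good for latency-sensitive small messages, wasteful for bulk data.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
s.connect(("192.168.0.10", 1583))   # hypothetical address and port
s.sendall(b"X")                     # a one-byte, Telnet-style write
s.close()
```

Of course, this is only something you can do in software you write yourself; you cannot change how Maximizer opens its sockets.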
Delayed acknowledgements, or "delayed ACK", were also designed into TCP/IP to enable more efficient "piggybacking" of acknowledgements when return data is forthcoming from the receiving-side application. However, if that data is not forthcoming and the sending side is waiting for an acknowledgement, delays of about 200 ms or more per send can be experienced.
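The classic symptom is a write-write-read pattern: Nagle holds the second small write back until the first is acknowledged, while the receiver sits on its delayed ACK because it has nothing to send yet. Here is a rough loopback demo in Python (the port is an arbitrary test value); be aware that loopback stacks often acknowledge immediately, so the full stall may only show up across a real link:

```python
import socket
import threading
import time

def echo_after_two_bytes():
    # Toy server: waits for both 1-byte writes before replying.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 9999))    # arbitrary local test port
    srv.listen(1)
    conn, _ = srv.accept()
    buf = b""
    while len(buf) < 2:
        buf += conn.recv(16)
    conn.sendall(b"ok")
    conn.close()
    srv.close()

threading.Thread(target=echo_after_two_bytes, daemon=True).start()
time.sleep(0.2)                      # give the server time to start

c = socket.create_connection(("127.0.0.1", 9999))
start = time.time()
c.sendall(b"a")   # the first small write goes out immediately
c.sendall(b"b")   # Nagle holds this one until "a" is ACKed; the
                  # receiver's delayed ACK can add tens to ~200 ms
c.recv(16)
print("write-write-read took %.0f ms" % ((time.time() - start) * 1000))
c.close()
```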
When a TCP connection is closed, the connection resources at the node that initiated the close are put into a wait state, called TIME-WAIT, to guard against data corruption should duplicate packets linger in the network (it ensures both ends are done with the connection). This can deplete the resources required per connection (RAM and ports) when applications frequently open and close connections.
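You can watch the TIME-WAIT build-up for yourself; this small Python sketch just shells out to the standard netstat tool (present on both Windows and Unix) and counts the affected sockets:

```python
import subprocess

# Count sockets currently parked in TIME-WAIT. Each line listed here
# is a port and a chunk of kernel memory still tied up by a connection
# that has already been closed.
out = subprocess.run(["netstat", "-an"], capture_output=True, text=True)
waiting = [ln for ln in out.stdout.splitlines() if "TIME_WAIT" in ln.upper()]
print(len(waiting), "sockets in TIME-WAIT")
```

Run it while the application is busy and you will see how quickly a chatty open/close pattern accumulates them.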
In addition to being affected by delayed ACK and other congestion avoidance schemes, streaming applications can also be affected by a default receive window that is too small on the receiving end.
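If a streaming receiver is the bottleneck, the receive buffer (from which the advertised window is derived) can be enlarged before connecting. A minimal sketch, where 64 KB is purely an illustrative figure:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# The advertised TCP receive window is derived from the socket's receive
# buffer, so enlarge it before connecting. The OS may round the value
# or impose its own limits.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
print("receive buffer now:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```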
Add to all this the potential latency issues with the majority of WANs.
In a network, latency (a synonym for delay) is an expression of how much time it takes for a packet of data to get from one designated point to another. In some usages, latency is measured by sending a packet that is returned to the sender, and that round-trip time is considered the latency (a rough way to probe this yourself is sketched after the list below).
The assumption behind latency seems to be that data should ideally be transmitted instantly between one point and another (that is, with no delay at all). The contributors to network latency include:
Propagation: This is simply the time it takes for a packet to travel between one place and another at the speed of light.
Transmission: The medium itself (whether optical fiber, wireless, or some other) introduces some delay. The size of the packet introduces delay in a round trip since a larger packet will take longer to receive and return than a short one.
Router and other processing: Each gateway node takes time to examine and possibly change the header in a packet (for example, changing the hop count in the time-to-live field). The number of gateways involved can be checked with a simple TRACERT to count the hops.
Other computer and storage delays: Within networks at each end of the journey, a packet may be subject to storage and hard disk access delays at intermediate devices such as switches and bridges. (In backbone statistics, however, this kind of latency is probably not considered.)
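To put a number on all of the above, the sketch below estimates round-trip latency by timing TCP connection establishment, which costs roughly one RTT (the host and port are hypothetical placeholders). TRACERT tells you the path; this tells you what the path costs:

```python
import socket
import time

HOST, PORT = "192.168.0.10", 1583   # hypothetical remote host and port

# Estimate round-trip latency by timing TCP connection establishment,
# which costs roughly one RTT. Several samples are averaged to smooth
# out jitter.
samples = []
for _ in range(5):
    start = time.time()
    s = socket.create_connection((HOST, PORT), timeout=5)
    samples.append((time.time() - start) * 1000)
    s.close()
print("approximate RTT: %.0f ms" % (sum(samples) / len(samples)))
```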
In a computer system, latency is often used to mean any delay or waiting that increases real or perceived response time beyond the desired response time. Specific contributors to computer latency include mismatches in data speed between the microprocessor and input/output devices, and inadequate data buffers.
Good tools to test the response times in these situations are AnalogX's NetStat Live and Sysinternals' TCPView.
The best solution in this situation is to implement Terminal Services or Citrix; however, please note that this particular solution is still not supported by the software manufacturers.
With a Terminal Server or Citrix configuration, the server at base stores all the data and carries out all the processing, sending just the monitor display changes (GUI updates) using a protocol such as RDP (Terminal Server) or ICA (Citrix) over TCP/IP to a dumb terminal or thin client, thus reducing the volume of bandwidth-sapping client/server traffic by as much as 50-fold.
Hope this helps
Regards
Maxtalk Administrator
(excerpts of this posting were sourced from WhatIs.com)