You are encountering intermittent errors on requests going through ScaleArc, but the same requests/queries succeed when executed directly against the database servers.
The most common cause is incorrectly configured idle timeouts. In a direct-to-database environment there are two:
- Client-to-database idle timeout (set on the client)
- Client-to-database idle timeout (set on the database server)
In a ScaleArc deployment the situation is more complex, with four idle timeouts:
- Client to ScaleArc (set on the client)
- Client to ScaleArc (set on ScaleArc)
- ScaleArc to database (set on ScaleArc)
- ScaleArc to database (set on the database)
If these four settings are not aligned, the client may transmit a request just as the connection is being closed by the database or by ScaleArc, before the termination packet has reached the client. The client then typically reports that the connection was closed during a request, an error that gives no hint of the underlying timing problem.
To avoid issues like this, the best practice is to order the idle timeouts so that the client-to-ScaleArc timeout set on the client is the lowest, the client-to-ScaleArc timeout set on ScaleArc is the next lowest, and so on. The client then controls the actual timeout behavior, followed by ScaleArc, then the database; when everything is working properly, an idle timeout should only ever occur on the client.
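The ordering rule above can be sketched as a small validation check. The timeout values and dictionary keys below are hypothetical placeholders, not actual ScaleArc, driver, or database configuration names; the real settings live in the client driver, the ScaleArc console, and the database server configuration.

```python
# Hypothetical idle timeout values (in seconds) for the four points in a
# ScaleArc deployment, listed in path order from client to database.
timeouts = {
    "client_to_scalearc_on_client": 300,
    "client_to_scalearc_on_scalearc": 600,
    "scalearc_to_db_on_scalearc": 900,
    "scalearc_to_db_on_database": 1800,
}

def validate_timeout_order(t):
    """Return True if each idle timeout is strictly lower than the next
    one along the path, so only the client ever times out first."""
    order = [
        "client_to_scalearc_on_client",
        "client_to_scalearc_on_scalearc",
        "scalearc_to_db_on_scalearc",
        "scalearc_to_db_on_database",
    ]
    values = [t[name] for name in order]
    return all(a < b for a, b in zip(values, values[1:]))

print(validate_timeout_order(timeouts))  # True for the values above
```

A check like this is useful during deployment reviews: if any value further down the path is lower than or equal to one before it, connections can be torn down behind the client's back.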
Some JDBC/ODBC connectors expose multiple idle timeout settings, for example one that applies while no response is pending and another that applies while a response is pending. For ScaleArc, the larger of these connector timeouts should be set lower than ScaleArc's client-side idle timeout, which in turn should be lower than ScaleArc's server-side idle timeout.
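When a connector has multiple idle timeouts, the one that matters for this rule is the larger of the two. A minimal sketch of that comparison, using illustrative setting names and values (real drivers name these differently):

```python
# Hypothetical connector idle timeouts (seconds): some JDBC/ODBC drivers
# distinguish an idle timeout with no response pending from one with a
# response pending. The keys below are illustrative, not real driver keys.
connector = {"idle_no_response_pending": 240, "idle_response_pending": 280}

scalearc_client_idle = 300   # assumed ScaleArc client-side idle timeout
scalearc_server_idle = 600   # assumed ScaleArc server-side idle timeout

def connector_timeouts_ok(connector, sa_client, sa_server):
    """The larger connector idle timeout must stay below ScaleArc's
    client-side timeout, which must stay below its server-side timeout."""
    return max(connector.values()) < sa_client < sa_server

print(connector_timeouts_ok(connector, scalearc_client_idle, scalearc_server_idle))
```

Taking the maximum of the connector's settings ensures the rule holds no matter which of the two driver timers fires first.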