How does database connection pooling work?
Maintaining inactive or empty pools involves minimal system overhead. A connection pool is created for each unique connection string. When a pool is created, multiple connection objects are created and added to the pool so that the minimum pool size requirement is satisfied. Connections are added to the pool as needed, up to the specified maximum pool size (100 is the default). Connections are released back into the pool when they are closed or disposed.
When a SqlConnection object is requested, it is obtained from the pool if a usable connection is available. To be usable, a connection must be unused, have a matching transaction context or be unassociated with any transaction context, and have a valid link to the server.
The connection pooler satisfies requests for connections by reallocating connections as they are released back into the pool. If the maximum pool size has been reached and no usable connection is available, the request is queued. The pooler then tries to reclaim any connections until the time-out is reached (the default is 15 seconds).
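The mechanics described above can be condensed into a minimal sketch. This is an illustrative toy, not a real driver API: `SimplePool` and `Conn` are hypothetical stand-ins, the pool pre-creates connections up to a minimum size, grows on demand up to a maximum, and queues further requests until a release or a timeout.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class SimplePool {
    public static class Conn {
        final int id;
        Conn(int id) { this.id = id; }
    }

    private final BlockingQueue<Conn> idle;
    private final int maxSize;
    private int created = 0;

    public SimplePool(int minSize, int maxSize) {
        this.maxSize = maxSize;
        this.idle = new ArrayBlockingQueue<>(maxSize);
        // Pre-create connections to satisfy the minimum pool size.
        for (int i = 0; i < minSize; i++) idle.add(new Conn(created++));
    }

    public Conn acquire(long timeoutMs) throws InterruptedException {
        Conn c = idle.poll();                       // reuse an idle connection if any
        if (c != null) return c;
        synchronized (this) {
            if (created < maxSize) return new Conn(created++);  // grow up to the max
        }
        // Pool exhausted: queue behind other requests until a release or timeout.
        c = idle.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (c == null) throw new IllegalStateException("connection pool timeout");
        return c;
    }

    // Close/Dispose in a real driver ends up here: the connection goes back idle.
    public void release(Conn c) { idle.offer(c); }

    public static void main(String[] args) throws Exception {
        SimplePool pool = new SimplePool(2, 3);
        Conn a = pool.acquire(100);
        Conn b = pool.acquire(100);
        Conn c = pool.acquire(100);                 // third connection: pool grows to max
        pool.release(a);
        System.out.println(pool.acquire(100) == a); // the released connection is reused
    }
}
```

Real pools add validation, eviction, and per-connection-string keying on top of this skeleton, but the acquire/release cycle is the core of the technique.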
If the pooler cannot satisfy the request before the connection times out, an exception is thrown. We strongly recommend that you always close the connection when you are finished using it so that the connection will be returned to the pool. You can do this using either the Close or Dispose methods of the Connection object, or by opening all connections inside a using statement in C#, or a Using statement in Visual Basic.
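Java's try-with-resources is the direct analogue of C#'s `using` statement: close runs deterministically even when the block throws, so the connection always returns to the pool. A minimal sketch, where `PooledConn` is a hypothetical stand-in rather than a real driver class:

```java
public class CloseDemo {
    public static class PooledConn implements AutoCloseable {
        public static int returnedToPool = 0;
        @Override public void close() { returnedToPool++; } // back to the pool
    }

    public static void main(String[] args) {
        try (PooledConn c = new PooledConn()) {
            // use the connection; close() runs even if this block throws
        }
        System.out.println(PooledConn.returnedToPool); // 1
    }
}
```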
Connections that are not explicitly closed might not be added or returned to the pool. Do not call Close or Dispose on a Connection, a DataReader, or any other managed object in the Finalize method of your class. In a finalizer, only release unmanaged resources that your class owns directly.
If your class does not own any unmanaged resources, do not include a Finalize method in your class definition. For more information, see Garbage Collection. The connection pooler removes a connection from the pool after it has been idle for approximately 4 to 8 minutes, or if the pooler detects that the connection to the server has been severed.
Note that a severed connection can be detected only after attempting to communicate with the server. If a connection is found that is no longer connected to the server, it is marked as invalid. Invalid connections are removed from the connection pool only when they are closed or reclaimed. If a connection exists to a server that has disappeared, this connection can be drawn from the pool even if the connection pooler has not detected the severed connection and marked it as invalid.
This is the case because the overhead of checking that the connection is still valid would eliminate the benefits of having a pooler by causing another round trip to the server to occur.
When this occurs, the first attempt to use the connection will detect that the connection has been severed, and an exception is thrown.
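This lazy-detection behavior can be sketched as follows. The `Conn` class is an illustrative stand-in: the pool hands the connection out without a validation round trip, and only the first attempt to use it discovers the severed link. (Real JDBC drivers expose `Connection.isValid(timeout)` for callers who do want to pay for an explicit check.)

```java
public class SeveredDemo {
    public static class Conn {
        public boolean serverAlive = true;    // flipped when the server "disappears"
        public void execute(String sql) {
            // No check happens until the connection actually talks to the server.
            if (!serverAlive) throw new IllegalStateException("connection severed");
        }
    }

    public static void main(String[] args) {
        Conn pooled = new Conn();
        pooled.serverAlive = false;           // server goes away while conn sits idle
        try {
            pooled.execute("SELECT 1");       // drawn from the pool, fails on first use
        } catch (IllegalStateException e) {
            System.out.println("detected on first use: " + e.getMessage());
        }
    }
}
```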
A common follow-up question: if only a fixed number of connections exist in the pool (say, four), is the number of connections restricted, and what happens when no connection is free? Does the client have to wait?

Generally, yes. Most pools create a new connection when all existing connections are busy, growing until the pool (or the database server itself) reaches its configured limit; beyond that point, requests wait for a connection to be freed or time out. Some drivers, such as the Oracle JDBC pool, let you specify an "initial size" and a "max size" when the pool is constructed. Connections can also be left idle in the pool after a burst of concurrent activity; many pool managers will close old connections when the usage pattern quietens down.
To ensure that the connection pool is closed correctly when an application stops running, the application must notify the DataDirect Connection Pool Manager when it stops. Depending on the JRE version, this notification may need to be made explicitly by calling the PooledConnectionDataSource.close() method. The close() method can also be used to force the pool to be re-created while the application is running, for example after changes are made to the pool configuration using a pool management tool.
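One common way to wire up that shutdown notification is a JVM shutdown hook. This is a hedged sketch: `PoolHandle` is an illustrative stand-in for a pool object that exposes a `close()` method (as `PooledConnectionDataSource` does), not the DataDirect class itself.

```java
public class ShutdownDemo {
    public static class PoolHandle implements AutoCloseable {
        public volatile boolean closed = false;
        @Override public void close() { closed = true; } // releases pooled connections
    }

    public static void main(String[] args) {
        PoolHandle pool = new PoolHandle();
        // Notify the pool manager when the JVM stops, so physical
        // connections are not abandoned on the server side.
        Runtime.getRuntime().addShutdownHook(new Thread(pool::close));
        System.out.println("shutdown hook registered");
    }
}
```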
To measure the benefit, we opened and closed a connection repeatedly, recording results at several iteration checkpoints. The total elapsed time for each run was measured once with connection pooling enabled and once without.
When connection pooling was used, the first connection took the longest time because a new physical connection had to be created and the pool manager had to be initialized. Once the connection existed, the physical connection was placed in the pool and was reused to create a handle for each subsequent connection. You can see this by comparing the time for the first connection (the first iteration) with the times for its subsequent connections.
NOTE: In our connection pooling example, all subsequent connections were reused because they were used for the same user and pool cleanup had not occurred. Now, compare the pooling results at each iteration checkpoint to the non-pooling results. Clearly, connection pooling represents a significant improvement in performance.
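The effect can be reproduced with a toy benchmark. Everything here is an illustrative assumption rather than the original test harness: the `Conn` class fakes a physical connect costing roughly 5 ms, so reusing one pooled connection beats opening a fresh connection on every iteration.

```java
public class PoolBench {
    public static class Conn {
        // Simulated physical connection setup cost (~5 ms).
        public Conn() { try { Thread.sleep(5); } catch (InterruptedException e) { } }
    }

    public static void main(String[] args) {
        int iterations = 50;

        long t0 = System.nanoTime();
        Conn pooled = new Conn();                  // first connection pays the full cost
        for (int i = 1; i < iterations; i++) {
            Conn handle = pooled;                  // subsequent "connections" reuse it
        }
        long pooledMs = (System.nanoTime() - t0) / 1_000_000;

        long t1 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            new Conn();                            // new physical connection each time
        }
        long unpooledMs = (System.nanoTime() - t1) / 1_000_000;

        System.out.println("pooled=" + pooledMs + "ms unpooled=" + unpooledMs + "ms");
        System.out.println(pooledMs < unpooledMs); // true
    }
}
```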
Connection pooling provides a significant performance improvement by reusing connections rather than creating a new connection for each connection request, without requiring any changes to your JDBC application code. Note also that some of the later pooled runs were faster than earlier ones; this is a JIT effect. With the JIT compiler disabled, one pooled case increases to 94 ms, while the times for the other pooled cases remain the same.

Pgbench is based on TPC-B.
TPC-B measures throughput in terms of how many transactions per second a system can perform. Based on TPC-B-like transactions, pgbench runs the same sequence of SQL commands repeatedly in multiple concurrent database sessions and calculates the average transaction rate. Pgbench uses the following tables to run transactions for benchmarking: pgbench_branches, pgbench_tellers, pgbench_accounts, and pgbench_history.
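For reference, the TPC-B-like sequence that pgbench's built-in script runs against these tables has the following shape (the `:aid`, `:bid`, `:tid`, and `:delta` placeholders are variables pgbench substitutes with random values for each transaction):

```sql
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)
    VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;
```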
As you can see, in our initial baseline test I instructed pgbench to execute with ten different client sessions, each executing the same fixed number of transactions. The result of this run gives us our baseline throughput in transactions per second.
Pgbouncer can be installed on almost all Linux distributions, either from source or using package managers such as apt-get or yum. If you find it difficult to authenticate clients with pgbouncer, the project's GitHub documentation explains how to do so. We will make use of transaction pooling mode, configured inside the pgbouncer.ini file. As in the previous test, I executed pgbench with ten different client sessions.
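A sketch of the relevant pgbouncer.ini settings for transaction pooling follows; the database name, host, paths, and pool sizes are placeholder assumptions to adapt to your environment:

```ini
[databases]
; placeholder database name and backing Postgres server
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; return server connections to the pool at transaction end
max_client_conn = 100
default_pool_size = 20
```

Clients then connect to port 6432 instead of Postgres directly, and pgbouncer multiplexes their transactions over a much smaller set of server connections.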
Each client executes the same number of transactions as before. As you can see, transaction throughput increased substantially over the direct-connection baseline. Unlike pgbouncer, pgpool-II offers features beyond connection pooling. The documentation provides detailed information about pgpool-II's features and how to set it up from source or via a package manager. I changed the relevant pooling parameters in the pgpool.conf file. Like the previous test, pgbench executed ten different client sessions.
Each client executes the same number of transactions against the Postgres database server, so we expect the same total number of transactions from all clients as in the earlier runs. There are several factors to consider when choosing a connection pooler. Although pgbouncer and pgpool-II are both great solutions for connection pooling, each tool has its strengths and weaknesses. If you are interested in a lightweight connection pooler for your backend service, then pgbouncer is the right tool for you.
Unlike pgpool-II, which by default forks 32 child processes, pgbouncer uses only one process, so it consumes less memory than pgpool-II. Apart from pooling connections, pgpool-II can also manage your Postgres cluster with streaming replication, which copies data from a primary node to a secondary node. Pgpool-II supports Postgres streaming replication, while pgbouncer does not; replication is the best way to achieve high availability and prevent data loss. Finally, if you want to add load balancing and high availability to your pooled connections, then pgpool-II is the right tool to use.