Performance Analysis for Java™ Web Sites
Note: Code listings are numbered in a separate series in each chapter and gathered in their own subsection, following the figures.

Figure 1.1 Customers in a traditional bookstore
Figure 1.2 Customers using an on-line bookstore
Figure 1.3 Concurrent users in a traditional bookstore
Figure 1.4 Concurrent web site load
Figure 1.5 Active customers in a traditional bookstore
Figure 1.6 Active web site clients
Figure 1.7 Traditional store customers per hour
Figure 1.8 Throughput dynamics without waiting
Figure 1.9 Throughput dynamics with waiting
Figure 1.10 Throughput curve: brick and mortar store
Figure 1.11 Typical web site throughput curve
Figure 1.12 Total checkout time in a traditional store
Figure 1.13 Response time in an on-line store
Figure 1.14 Think time example
Figure 1.15 Response time queue for a brick and mortar store
Figure 1.16 Throughput curve and response time graph
Figure 1.17 Steady-state measurement interval
Figure 1.18 Reducing the checkout process path length
Figure 1.19 Example bottleneck: gift-wrap department
Figure 1.20 Traditional store handling more customers in parallel
Figure 1.21 Scaling with multiple bookstores
Figure 1.22 Horizontal scaling example
Figure 2.1 Model-View-Controller within a Java web application
Figure 2.2 Small servlets versus front controller strategy
Figure 2.3 Threading/queuing "funnel"
Figure 2.4 A frozen web site
Figure 2.5 Servlet loading versus JSP loading
Figure 2.6 Cookies and HTTP sessions
Figure 2.7 Failover with a persistent HTTP session database
Figure 2.8 Enterprise JavaBean transactions without a façade Bean
Figure 2.9 Enterprise JavaBean transactions using a façade Bean
Figure 2.10 Clones in both vertical and horizontal scaling
Figure 3.1 Router placement in a large web site
Figure 3.2 A typical web site firewall configuration
Figure 3.3 Reverse proxy server used for security
Figure 3.4 Caching reverse proxy servers
Figure 3.5 Different speeds for networks handling different loads
Figure 3.6 A load balancer distributing HTTP traffic
Figure 3.7 An example of load balancer affinity routing
Figure 3.8 Proxy server impacting even load distribution with affinity routing
Figure 3.9 Users accessing a web site through a proxy server farm
Figure 3.10 A simplified version of the SSL handshake
Figure 3.11 Plug-in operation
Figure 3.12 Vertical scaling of the application server
Figure 3.13 A small web site using horizontal scaling
Figure 3.14 A DMZ configuration with servlets, JSPs, and EJBs
Figure 4.1 Two examples of web site traffic patterns
Figure 4.2 A conceptualized JVM heap and corresponding settings
Figure 4.3 Typical garbage collection cycles
Figure 4.4 Garbage collecting too frequently
Figure 4.5 Typical memory leak pattern
Figure 5.1 A brokerage web site's traffic patterns
Figure 5.2 Yearly traffic patterns for an e-Commerce site
Figure 5.3 Web site configuration supporting access by pervasive devices
Figure 6.1 Peak versus average web site load
Figure 6.2 One HTML page assembled from multiple HTTP requests
Figure 6.3 The same daily traffic volume spread over both 8- and 24-hour days
Figure 6.4 An example test environment
Figure 7.1 Java Pet Store demo home page
Figure 7.2 Java Pet Store hierarchy
Figure 7.3 Example of primitive scripts
Figure 7.4 LoadRunner parameter properties GUI
Figure 7.5 Result of Browse for Fish
Figure 7.6 Example of customizing think times
Figure 8.1 Script recording
Figure 8.2 Example LoadRunner ramp-up
Figure 8.3 Example of SilkPerformer V workload configuration
Figure 8.4 Testing environment
Figure 8.5 Sample LoadRunner Summary Report
Figure 8.6 Sample LoadRunner monitor
Figure 9.1 An example test environment
Figure 9.2 Shared network impact
Figure 9.3 Typical DMZ configuration for application servers
Figure 9.4 Single-disk versus multidisk database I/O
Figure 9.5 Testing live legacy systems
Figure 9.6 Master/slave load generator configuration
Figure 9.7 Testing the networks with FTP
Figure 9.8 One possible test ordering for a large test environment
Figure 11.1 Iterative test and tuning process
Figure 11.2 Test run measurement interval
Figure 11.3 Measuring over long runs to simulate steady state
Figure 11.4 Example of high run-to-run variation
Figure 11.5 Example of acceptable run-to-run variation
Figure 11.6 Example of run variation with a downward trend
Figure 11.7 Results after introducing a change
Figure 11.8 Throughput curve to determine saturation point
Figure 11.9 SMP scaling comparisons
Figure 11.10 Vertical scaling using multiple JVMs
Figure 11.11 Load balancing architecture options
Figure 11.12 Testing individual server performance
Figure 11.13 Quantifying load balancer overhead
Figure 11.14 Testing a two-server cluster
Figure 11.15 Two-server cluster scalability results
Figure 12.1 Sample vmstat output
Figure 12.2 Sample Microsoft Windows 2000 System Monitor
Figure 12.3 Sample JVM heap monitor
Figure 12.4 Sample Thread Analyzer screen capture
Figure 12.5 Sample Resource Analyzer output
Figure 13.1 Underutilization example
Figure 13.2 Example of bursty behavior
Figure 13.3 Example of a bursty back-end database system
Figure 13.4 Example of high CPU utilization
Figure 13.5 Example of high system CPU utilization
Figure 13.6 Example of high CPU wait on a database server
Figure 13.7 Uneven cluster loading
Figure 13.8 IP-based versus user-based affinity
Figure 14.1 TriMont performance test configuration
Figure 15.1 Sample user ramp-up performance test results
Figure 15.2 User ramp-up performance test results
Figure 15.3 TriMont scalability results
Figure D.1 Summary test results graph