C++ Network Programming, Volume I: Mastering Complexity with ACE and Patterns


Motivation

Although the ACE_Select_Reactor is flexible, it's somewhat limited in multithreaded applications because only the owner thread can call its handle_events() method. ACE_Select_Reactor therefore serializes processing at the event demultiplexing layer, which may be overly restrictive and nonscalable for certain networked applications. One way to solve this problem is to spawn multiple threads and run the event loop of a separate ACE_Select_Reactor instance in each of them. This design can be hard to program, however, since it requires developers to implement a proxy that partitions event handlers among the reactors to divide the load evenly across threads. Often, a more effective way to address the limitations of ACE_Select_Reactor is to use the ACE Reactor framework's ACE_TP_Reactor class, where "TP" stands for "thread pool."

Class Capabilities

ACE_TP_Reactor is another implementation of the ACE_Reactor interface. This class implements the Leader/Followers architectural pattern [POSA2], which provides an efficient concurrency model where multiple threads take turns calling select() on sets of I/O handles to detect, demultiplex, dispatch, and process service requests as they occur. In addition to supporting all the features of the ACE_Reactor interface, the ACE_TP_Reactor provides the following capabilities:

  • It enables a pool of threads to call its handle_events() method, which can improve scalability by handling events on multiple handles concurrently. As a result, the ACE_TP_Reactor::owner() method is a no-op.

  • It prevents multiple I/O events from being dispatched to the same event handler simultaneously in different threads. This constraint preserves the I/O dispatching behavior of ACE_Select_Reactor, alleviating the need to add synchronization locks to a handler's I/O processing.

  • After a thread obtains a set of active handles from select(), the other reactor threads dispatch from that handle set instead of calling select() again.

Implementation overview. ACE_TP_Reactor is a descendant of ACE_Reactor_Impl, as shown in Figure 4.1 (page 89). It also serves as a concrete implementation of the ACE_Reactor interface, just like ACE_Select_Reactor. In fact, ACE_TP_Reactor derives from ACE_Select_Reactor and reuses much of its internal design.

Concurrency considerations. Multiple threads running an ACE_TP_Reactor event loop can process events concurrently on different handles. They can also dispatch timeout and I/O callback methods concurrently on the same event handler. The only serialization in the ACE_TP_Reactor occurs when I/O events occur concurrently on the same handle. In contrast, the ACE_Select_Reactor serializes all its dispatching to handlers whose handles are active in the handle set.

Compared to other thread pool models, such as the half-sync/half-async model in Chapter 5 of C++NPv1 and Section 6.3 of this book, the leader/followers implementation in ACE_TP_Reactor keeps all event processing local to the thread that dispatches the handler. This design provides the following performance enhancements:

  • It enhances CPU cache affinity and eliminates the need to allocate memory dynamically and share data buffers between threads.

  • It minimizes locking overhead by not exchanging data between threads.

  • It minimizes priority inversion since no extra queueing is used.

  • It doesn't require a context switch to handle each event, which reduces latency.

These performance enhancements are discussed further in the Leader/Followers pattern description in POSA2.

Given the added capabilities of the ACE_TP_Reactor, you may wonder why anyone would ever use the ACE_Select_Reactor. There are two primary reasons:

  1. Less overhead. Although the ACE_Select_Reactor is less powerful than the ACE_TP_Reactor, it also incurs less time and space overhead. Moreover, single-threaded applications can instantiate the ACE_Select_Reactor_T template with an ACE_Noop_Token-based token to eliminate the internal overhead of acquiring and releasing tokens completely.

  2. Implicit serialization. The ACE_Select_Reactor is particularly useful when explicitly writing serialization code at the application level is undesirable. For example, application programmers who are unfamiliar with synchronization techniques may prefer to let the ACE_Select_Reactor serialize their event handling, rather than using threads and adding locks in their application code.

Example

To illustrate the power of the ACE_TP_Reactor, we'll revise the main() function from page 96 to spawn a pool of threads that share the Reactor_Logging_Server's I/O handles. Figure 4.5 illustrates the architecture of this server. This architecture is nearly identical to the one in Figure 4.4 (page 96), with the only difference being the pool of threads that call ACE_Reactor::handle_events(). This example is in the TP_Reactor_Logging_Server.cpp file. The C++ code for main() is shown below.

Figure 4.5. ACE_TP_Reactor Logging Server with Controller Thread

 1 #include "ace/streams.h"
 2 #include "ace/Reactor.h"
 3 #include "ace/TP_Reactor.h"
 4 #include "ace/Thread_Manager.h"
 5 #include "Reactor_Logging_Server.h"
 6 #include <string>
 7 // Forward declarations
 8 ACE_THR_FUNC_RETURN controller (void *);
 9 ACE_THR_FUNC_RETURN event_loop (void *);
10
11 typedef Reactor_Logging_Server<Logging_Acceptor_Ex>
12   Server_Logging_Daemon;
13
14 int main (int argc, char *argv[]) {
15   const size_t N_THREADS = 4;
16   ACE_TP_Reactor tp_reactor;
17   ACE_Reactor reactor (&tp_reactor);
18   auto_ptr<ACE_Reactor> delete_instance
19     (ACE_Reactor::instance (&reactor));
20
21   Server_Logging_Daemon *server = 0;
22   ACE_NEW_RETURN (server,
23     Server_Logging_Daemon (argc, argv,
24       ACE_Reactor::instance ()), 1);
25   ACE_Thread_Manager::instance ()->spawn_n
26     (N_THREADS, event_loop, ACE_Reactor::instance ());
27   ACE_Thread_Manager::instance ()->spawn
28     (controller, ACE_Reactor::instance ());
29   return ACE_Thread_Manager::instance ()->wait ();
30 }

Lines 1–12 Include the header files, define some forward declarations, and instantiate the Reactor_Logging_Server template with the Logging_Acceptor_Ex (page 67) to create the Server_Logging_Daemon type definition.

Lines 16–19 Create a local instance of ACE_TP_Reactor and use it as the implementation of a local ACE_Reactor object. For variety, we then set the singleton ACE_Reactor to the address of the local reactor. Subsequent uses of ACE_Reactor::instance() will now use our local reactor. When reassigning the singleton reactor, the caller becomes responsible for managing the lifetime of the previous singleton. In this case, we assign it to an auto_ptr so it's deleted automatically when the program ends.

Lines 21–24 Dynamically allocate an instance of Server_Logging_Daemon.

Lines 25–26 Spawn N_THREADS threads, each of which runs the event_loop() function (page 97). The new singleton reactor's pointer is passed to event_loop() (ACE_TP_Reactor ignores the owner() method called in that function).

Lines 27–28 Spawn a single thread to run the controller() function (page 98).

Line 29 Wait for the other threads to exit and save the status as main()'s return value.

Line 30 When the main() function returns, the tp_reactor destructor triggers calls to the Logging_Acceptor::handle_close() (page 58) and Logging_Event_Handler_Ex::handle_close() (page 70) hook methods for each logging handler and logging event handler, respectively, that are still registered with it. By default, the ACE_Object_Manager (Sidebar 23 on page 218 of C++NPv1) deletes the singleton ACE_Reactor during shutdown. Since we replaced the original singleton with our local reactor object, however, ACE won't delete either the original instance (because we assumed ownership of it on line 18) or our local one (because ACE won't delete a reactor it didn't create, unless specifically directed to).

The primary difference between this example and the example on page 96 is the number of threads executing the event loop. Although multiple threads can dispatch events to Logging_Event_Handler and Logging_Acceptor_Ex objects, the ACE_TP_Reactor ensures that the same handler won't be invoked from multiple threads concurrently. Since the event handling classes in the logging server are completely self-contained, there's no chance for race conditions involving access from multiple threads. We therefore needn't make any changes to them to ensure thread safety.

