.. _casestudy_clientserver:

Case Study: Client-Server
=========================

In this example we demonstrate how an experimenter can set up client and server traffic generators. First we illustrate the example with only one server and one client. Then we show how the same procedure can be used for a significantly larger topology in the tutorial :ref:`casestudy_clientserver55`.

We demonstrate three aspects of MAGI: specifying multiple event streams, synchronizing with triggers, and a special target called *exit* to unload agents.

Event Streams
-------------

This example has three event streams: the **server stream**, the **client stream**, and the **cleanup stream**. The coordination between the events can be illustrated as follows:

.. image:: ../_magiimages/casestudy_clientserver/cs_workflow.png
    :width: 300px

Event streams can be synchronized using event-based triggers or time-based triggers. The triggers are indicated as wait states in gray. Group formation and agent loading, which are also automated by the orchestrator tool, are not illustrated above.

Server Stream
^^^^^^^^^^^^^

The server event stream consists of three states. The start state generates a trigger, called serverStarted, once the server agent is activated on the experiment nodes. The stream then enters the wait state, where it waits for a trigger from the client event stream. Once the trigger is received, it enters the stop state, where the server is deactivated or terminated. The AAL description is as below:

.. code-block:: yaml

    serverstream:
        - type: event
          agent: server_agent
          method: startServer
          trigger: serverStarted
          args: {}

        - type: trigger
          triggers: [ {event: clientStopped} ]

        - type: event
          agent: server_agent
          method: stopServer
          trigger: ServerStopped
          args: {}

Client Stream
^^^^^^^^^^^^^

The client event stream consists of five states. First, the client agent implementation is parameterized by the configuration state. This occurs as part of the agent loading process.
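The parameterization happens when the agent is loaded: the *execargs* in the agent definition are passed to the agent implementation. As a minimal sketch, in the style of the agent definitions used later in this case study, a client agent definition might look like the following. The node name and module path here are hypothetical examples, not taken from this experiment's actual AAL file:

.. code-block:: yaml

    # Hypothetical fragment: "clientnode" and the module path are
    # illustrative placeholders; the real values come from your
    # experiment's topology and the agent module you deploy.
    client_group: [clientnode]

    client_agent:
        group: client_group
        path: /share/magi/modules/http_client/http_client.tar.gz
        execargs: {}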
The client stream then synchronizes with the server stream by waiting for the serverStarted trigger from the server nodes. Once it receives the trigger, the client agent is activated in the start state. Next, the client stream waits for a period ∆t and then terminates the client agents in the stop state. On termination, the client agents send a clientStopped trigger, which allows the server stream to synchronize and terminate the servers only after all the clients have terminated. The AAL description is as below:

.. code-block:: yaml

    clientstream:
        - type: trigger
          triggers: [ {event: serverStarted} ]

        - type: event
          agent: client_agent
          method: startClient
          args: {}

        - type: trigger
          triggers: [ {timeout: 60000} ]

        - type: event
          agent: client_agent
          method: stopClient
          trigger: clientStopped
          args: {}

Cleanup Stream
^^^^^^^^^^^^^^

The last event stream, the cleanup stream, consists of two states. First, it waits for all the servers to stop, and then it enters the exit state. The exit state unloads the agents and tears down all the communication mechanisms between them. The exit state is entered via the key *target*, which transfers control to a reserved state internal to the orchestrator. It causes the orchestrator to send agent unload and group disband messages to all the experiment nodes, and then the orchestrator exits.

.. code-block:: yaml

    cleanup:
        - type: trigger
          triggers: [ {event: ServerStopped, target: exit} ]

Running the Experiment
----------------------

* Swap in the experiment using the network description file given below.

* Set up your environment. Assuming your experiment is named myExp, your DETER project is myProj, and the AAL file is called procedure.aal:

  .. code-block:: bash

      PROJ=myProj
      EXP=myExp
      AAL=procedure.aal

* Once the experiment is swapped in, run the orchestrator, giving it the AAL above. The orchestrator needs an AAL file, and the experiment and project names. The example output below uses the project "montage" with experiment "caseClientServer".

  .. code-block:: bash

      > /share/magi/current/magi_orchestrator.py --experiment $EXP --project $PROJ --events $AAL

Once run, you will see the orchestrator step through the events in the AAL file. The output will be as follows:

.. image:: ../_magiimages/casestudy_clientserver/cs_orch.png
    :width: 600px
    :target: ../_images/cs_orch.png

The orchestration tool runs an internally defined stream called *initialization* that is responsible for establishing the server_group and the client_group and loading the agents. Once the agents are loaded, as indicated by the received trigger AgentLoadDone, the *initialization* stream is complete.

Now the serverstream, the clientstream, and the cleanup stream start concurrently. The serverstream sends the *startServer* event to the server_group. All members of the server_group start the server and fire a trigger *serverStarted*. The clientstream, on receiving the trigger *serverStarted* from the server_group, sends the *startClient* event to the client_group. One minute later, the clientstream sends the event *stopClient* to the client_group and terminates. All members of the client_group terminate the client agent and generate a *clientStopped* trigger, which is sent back to the orchestrator.

Once the serverstream receives the *clientStopped* trigger from the client_group, it sends out the *stopServer* event to the server_group. Once all the servers are stopped, the members of the server_group respond with a *ServerStopped* trigger, which is forwarded to the cleanup stream. On receiving the *ServerStopped* trigger, the cleanup stream enacts an internally defined stream called *exit* that is responsible for unloading agents and tearing down the groups.

The experiment artifacts, the procedure and topology files that were used for the case study, are attached below.

:Procedure: :download:`[casestudy_clientserver.aal] <../_magiimages/casestudy_clientserver/cs_procedure.aal>`.
:Topology: :download:`[casestudy_clientserver.tcl] <../_magiimages/casestudy_clientserver/cs_topology.tcl>`.

:Archived Logs: :download:`[casestudy_clientserver.tar.gz] <../_magiimages/casestudy_clientserver/cs_exparchive.tar.gz>`.

Visualizing Experiment Results
------------------------------

In order to visualize the traffic on the network, we modify the above procedure to add another stream called "monitorstream". This stream deploys a packet sensor agent on the server node to measure the traffic on the link in the experiment. The packet sensor agent records the traffic data using MAGI's data management layer.

.. code-block:: yaml

    monitor_group: [servernode]

    monitor_agent:
        group: monitor_group
        path: /share/magi/modules/pktcounters/pktCountersAgent.tar.gz
        execargs: {}

.. code-block:: yaml

    monitorstream:
        - type: trigger
          triggers: [ { event: serverStarted } ]

        - type: event
          agent: monitor_agent
          method: startCollection
          trigger: collectionServer
          args: {}

        - type: trigger
          triggers: [ { event: clientStopped } ]

        - type: event
          agent: monitor_agent
          method: stopCollection
          args: {}

The recorded data is then pulled out by the tools below to create a traffic plot. To populate the traffic data, re-orchestrate the experiment using the updated procedure. The updated procedure file and the corresponding logs are attached below. The traffic can then be plotted in two ways:

**Offline**: A plot of the traffic on the link connecting the client and the server can be generated by the :ref:`magigraph`.

.. code-block:: bash

    > GRAPHCONF=cs_magi_graph.conf
    > /share/magi/current/magi_graph.py -e $EXP -p $PROJ -c $GRAPHCONF -o cs_traffic_plot.png

.. image:: ../_magiimages/casestudy_clientserver/cs_traffic_plot.png
    :width: 400px

**Real Time**: A real time simulated traffic plot using canned data from a pre-run experiment can be visualized `here `_.
A similar plot using live data can be plotted by visiting the same web page and additionally passing it the hostname of the database config node of your experiment. You can find the database config node for your experiment by reading your experiment's configuration file, similar to the following:

.. code-block:: none

    > cat /proj/myProject/exp/myExperiment/experiment.conf
    dbdl:
        configHost: node-1
    expdl:
        experimentName: myExperiment
        projectName: myProject

Then edit the simulated traffic plot URL, passing it the hostname.

.. code-block:: none

    host=node-1.myExperiment.myProject
    http:///traffic.html?host=node-1.myExperiment.myProject

The procedure, graph configuration, and archived log files that were used for the visualization of this case study are attached below.

:Procedure: :download:`[casestudy_clientserver_monitor.aal] <../_magiimages/casestudy_clientserver/cs_procedure_monitor.aal>`.

:Archived Logs: :download:`[casestudy_clientserver_monitor.tar.gz] <../_magiimages/casestudy_clientserver/cs_exparchive_monitor.tar.gz>`.

:Graph Config: :download:`[cs_magi_graph.conf] <../_magiimages/casestudy_clientserver/cs_magi_graph.conf>`.

Scaling the Experiment
----------------------

Now suppose you wanted to generate web traffic for a larger topology. We discuss how the above AAL can be applied to a topology of 55 nodes in the next tutorial.
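Because every event in an AAL targets a group rather than individual nodes, scaling a procedure typically amounts to enlarging the group membership while leaving the event streams untouched. As a hedged sketch, assuming node names of the form server-N and client-N (the actual names come from your topology file), the larger experiment might only need group definitions like:

.. code-block:: yaml

    # Hypothetical group definitions for a larger topology; node names
    # are assumed for illustration. The serverstream, clientstream, and
    # cleanup streams from this case study can remain unchanged.
    server_group: [server-1, server-2, server-3, server-4, server-5]
    client_group: [client-1, client-2, client-3, client-4, client-5,
                   client-6, client-7, client-8, client-9, client-10]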