This article is a simplified illustration of the product. It is intended to help analysts visualize which entity fulfills which role. Readers are encouraged to check our documentation for detailed information on the topics covered in this article.



For a small cluster, one FSU is often sufficient to fulfill all management roles. As your cluster grows, the Primary Server can become overwhelmed. Adobe recommends adding an FSU if the Primary Server becomes the bottleneck.
Note:
For other FSU roles (Logging Server, Source List Server, and File Server) and components (Sensor and Repeater), please check our documentation here.


At first, DPUs do not know where to find the source material or how to process it. Through synchronization, they fetch the processing instructions, schema, and resource map from the Primary Server (FSU).
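The synchronization step above can be sketched roughly as follows. This is an illustrative model only: the resource names and the dictionary-based "server" are invented for this example and do not reflect Data Workbench's actual protocol or file layout.

```python
# Hypothetical sketch of DPU synchronization. The resource paths and the
# dict-based "Primary Server" are illustrative, not the product's real API.
def synchronize(primary_server):
    """Fetch processing instructions, schema, and resource map from the Primary Server."""
    return {
        "instructions": primary_server["dataset/instructions"],
        "schema": primary_server["dataset/schema"],
        "resource_map": primary_server["dataset/resource_map"],
    }

# A toy Primary Server exposing the three things a DPU needs before it
# can start processing:
primary = {
    "dataset/instructions": ["decode", "transform", "index"],
    "dataset/schema": {"visitor_id": "string", "timestamp": "datetime"},
    "dataset/resource_map": {"dpu-1": "logs/a", "dpu-2": "logs/b"},
}

config = synchronize(primary)
```

After this step, every DPU holds an identical copy of the instructions and schema, which is what lets them process their shares of the input independently.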


If a decoded event belongs to an existing visitor, it is appended to that visitor's card. If it belongs to a visitor on another DPU, it is forwarded there. Finally, if the event belongs to no existing visitor, a new card is created.
Visitors are evenly distributed across all DPUs. For example, on a 10-DPU cluster with 5 million visitors, each DPU holds data for 500,000 visitors. However, because visitors differ in data size, the size of temp.db will not be identical across DPUs (though the sizes tend to be very close).
As each DPU processes its input, dimension elements are simultaneously indexed on the Normalization Server.
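The append-forward-create decision described above can be sketched as below. Note this is a simplified model: the hash-mod ownership rule and all function names are assumptions for illustration; the product's actual visitor distribution scheme is internal.

```python
# Illustrative sketch (not actual Data Workbench code) of routing a decoded
# event. Visitor ownership is modeled with a simple hash-mod rule, which is
# an assumption; the real distribution scheme is internal to the product.
NUM_DPUS = 10

def owner_dpu(visitor_id: str) -> int:
    """Every DPU applies the same rule, so ownership is unambiguous."""
    return hash(visitor_id) % NUM_DPUS

def handle_event(event, my_dpu_id, cards, forward):
    vid = event["visitor_id"]
    if owner_dpu(vid) != my_dpu_id:
        forward(owner_dpu(vid), event)   # visitor lives on another DPU
    elif vid in cards:
        cards[vid].append(event)         # append to the existing visitor card
    else:
        cards[vid] = [event]             # first event seen: create a new card
```

Because every DPU evaluates the same ownership rule, an event always ends up on exactly one card, no matter which DPU first read it from the logs.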

Finally, the dataset is ready for analysts. In-depth analysis, where a series of queries is executed as one question leads to another, is best done using the Data Workbench Client.
First, the Data Workbench Client connects to the Primary Server, and one of the DPUs is assigned as its Query Server.

The client application is now "Online" with the dataset profile. From this point on, this DPU is the point of contact for this client.
The client sends a query string to its Query Server. Once the query is received, the Query Server forwards the same request to the other DPUs. It also runs the query against its own temp.db (card holder) and returns its result to the client along with the results from the other DPUs.
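The fan-out pattern described above can be sketched as follows. The function names and the per-key count merge are assumptions made for illustration; the actual query protocol and result format are internal to the product.

```python
# Hedged sketch of the query fan-out described above; names are invented.
# The assigned Query Server forwards the query to its peer DPUs, runs it
# against its own temp.db, and merges all partial results for the client.
def run_query(query, local_db, peers):
    partials = [peer.execute(query) for peer in peers]  # forward to other DPUs
    partials.append(local_db.execute(query))            # run against own temp.db
    merged = {}
    for partial in partials:                            # combine per-key counts
        for key, count in partial.items():
            merged[key] = merged.get(key, 0) + count
    return merged
```

The key point is that each DPU only ever answers for its own visitors, so the merged result covers the whole dataset without any DPU reading another's temp.db.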

Back on the client side, query results are translated into various forms of visualization as they stream in. The finished workspace can also be saved as a template for the Report Server.
The Report Server automates query execution. At a specified time, it picks up a report set from the Primary Server, executes it, and then delivers the results through various methods such as email.

Specific segments of the dataset can be exported as delimited text files. Based on the export definition file (*.export), each DPU filters out its share of the data and sends it to the Segment Export Server.

The exported data from the DPUs are combined into one file on the Segment Export Server and uploaded to a specified location. The file is typically imported into third-party analysis tools or custom applications for further analysis.
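The combining step can be sketched as below. This is illustrative only: the header and line layout are invented, and the real Segment Export Server is driven by the *.export definition rather than a function like this.

```python
# Illustrative only: merging per-DPU delimited text into a single export
# file, as the Segment Export Server does. The header and layout here are
# invented for the example, not the product's actual output format.
import io

def combine_exports(dpu_outputs, header):
    """dpu_outputs: one list of delimited lines per DPU, already filtered."""
    out = io.StringIO()
    out.write(header + "\n")
    for chunk in dpu_outputs:       # each DPU contributed only its own visitors
        for line in chunk:
            out.write(line + "\n")
    return out.getvalue()
```

Because the visitor data is partitioned, a plain concatenation is enough; no deduplication across DPUs is needed.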
Note:
Just like the Normalization Server role, one FSU can fulfill both the Primary and Segment Export roles on a small cluster. However, because segment export involves large batch data transmissions, it can easily strain the FSU's resources.
An FSU can also work as a Transform Server. Unlike segment export (a dataset-to-text export), Transform is a simple text-to-text conversion. It takes one or more types of text input, such as Sensor data (.vsl), log files, XML files, and ODBC (text) data, and merges them into a single text file. It is often used to pre-process raw data before feeding it to a dataset.
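The text-to-text idea can be sketched roughly as follows. This is not Adobe's Transform implementation: the two-format handling and the tab-delimited output are assumptions chosen to keep the example small.

```python
# Rough sketch of a text-to-text merge in the spirit of Transform (not the
# product's implementation). Lines from several text sources are normalized
# to one delimited form and merged into a single output.
import csv
import io

def transform(sources):
    """sources: list of (format, iterable-of-lines); returns merged TSV text."""
    out = io.StringIO()
    writer = csv.writer(out, delimiter="\t", lineterminator="\n")
    for fmt, lines in sources:
        for line in lines:
            if fmt == "csv":
                # re-parse CSV fields, then re-emit as tab-delimited
                writer.writerow(next(csv.reader([line])))
            else:
                # treat anything else as already tab-delimited
                writer.writerow(line.split("\t"))
    return out.getvalue()
```

The merged file can then be fed into dataset construction as a single, uniform input, which is the pre-processing role the text describes.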
