The purpose of this document is to provide a technical overview of the Service-Flow solution. The intended audience is technical stakeholders within customer organizations and Service-Flow partners.
About Service-Flow Corporation and the Service-Flow SaaS Solution
The Service-Flow SaaS solution is a ready-to-use, fast-to-implement, easy-to-manage and cost-effective solution that enables a whole new service management ecosystem. It connects the service management tools and processes of service providers, service buyers, subcontractors and partners, making service integration and management (SIAM) possible.
The company was founded in 2011 and has its headquarters in Helsinki, Finland.
Figure 1: Service-Flow Architectural Overview
Service-Flow has been developed from the ground up as a multi-tenant platform running in the cloud. The following is a description of its architectural components.
The Service-Flow Platform is a multi-tenant, cloud-based platform that consists of three main components: the broker, adapters and the user interface. This is where the actual integration processes are created, executed and managed. The Service-Flow Platform is hosted within the EU in a data center that has successfully passed multiple security certifications, e.g. SAS70 Type II, ISO 27001 and PCI DSS Level 1.
Broker routes messages between systems using content-based rules and applies transformations to the data.
Adapters deliver messages between external systems and the broker. They transform system-specific messages into the Service-Flow canonical format, enabling routing and data transformation rules. Adapters are either system-specific or more generic, and are developed and maintained by Service-Flow. Service-Flow is constantly on the lookout for more ITSM tools that could be connected to the Service-Flow ecosystem through adapters.
The Service-Flow user interface is a web application that allows users to view the messaging between their services. It also provides tools for administering systems, routing rules and mappings.
Customer systems are the IT service management (ITSM) tools connected to one another through the Service-Flow solution. These tools are usually located either inside customers' internal networks or in the cloud as SaaS applications.
Service-Flow is made highly available and fault-tolerant by building the system as an asynchronous, staged event-driven architecture (SEDA), where components communicate by transmitting messages and responding to events. Communication is further enhanced by persistent queues, which make the system fault-tolerant: the receiver does not even need to be running for the sender to send a message.
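To illustrate why persistent queues decouple sender and receiver, here is a minimal sketch, not Service-Flow's actual implementation, of a file-backed queue: the sender appends messages to durable storage, so the receiver can pick them up later even if it was down at send time.

```python
import json
import os
import tempfile

class PersistentQueue:
    """Toy persistent queue: messages survive until the receiver drains them."""

    def __init__(self, path):
        self.path = path

    def send(self, message):
        # Append the message to durable storage; returns immediately,
        # regardless of whether any receiver is currently running.
        with open(self.path, "a") as f:
            f.write(json.dumps(message) + "\n")

    def drain(self):
        # Called by the receiver, possibly much later or after a restart.
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            messages = [json.loads(line) for line in f]
        os.remove(self.path)  # acknowledge: these messages are consumed
        return messages

path = os.path.join(tempfile.mkdtemp(), "queue.jsonl")
q = PersistentQueue(path)
q.send({"type": "incident", "op": "create"})   # receiver may be down here
q.send({"type": "incident", "op": "update"})
received = q.drain()                           # receiver catches up later
```

A production system would of course use replicated message brokers rather than a local file, but the decoupling principle is the same.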
24/7 monitoring is used to detect problems as they occur and to alert the Service-Flow operations team. The Service-Flow database runs in a replicated setup where the crash of a single database server does not bring the whole system down. Replication is done between separate data centers.
Service-Flow is maintained in real time, i.e. without any customer-visible downtime.
Adapters act as the communication bridge between end systems and the broker, performing two main responsibilities. First, adapters transform system-specific messages into the Service-Flow canonical format, enabling routing and data transformation rules within Service-Flow. Second, adapters convert messages into an adapter-specific representation and deliver them to the end system using the system-specific communication protocol. As there are great differences between end systems, this can mean anything from sending the message as an email to converting it into a complex system-specific XML representation sent to a SOAP interface using a custom communication protocol.
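The two adapter responsibilities can be sketched as a pair of transformation functions. The field names and the canonical schema below are illustrative assumptions, not Service-Flow's actual formats:

```python
def to_canonical(servicenow_msg):
    """Inbound: transform a system-specific message into a canonical form."""
    return {
        "entity_type": "incident",
        "operation": servicenow_msg["sys_action"],
        "attributes": {
            "summary": servicenow_msg["short_description"],
            "priority": servicenow_msg["priority"],
        },
    }

def to_target(canonical_msg):
    """Outbound: convert a canonical message into an adapter-specific payload.

    For an email adapter this could be a subject/body pair; for a SOAP
    adapter it would be a system-specific XML document instead.
    """
    attrs = canonical_msg["attributes"]
    return {
        "subject": f"[{canonical_msg['entity_type']}] {attrs['summary']}",
        "body": f"Priority: {attrs['priority']}",
    }

msg = to_canonical({"sys_action": "insert",
                    "short_description": "Disk full",
                    "priority": "1"})
mail = to_target(msg)
```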
Adapters are either system-specific (e.g. ServiceNow adapter) or more generic (e.g. generic SOAP/REST adapter). New adapters are developed by Service-Flow when there is a new kind of system that needs to be added to the Service-Flow ecosystem.
The broker is the central service that handles the actual message routing and stores the integration conversations between the related parties. A conversation is a context where all messages and routing information for a single integrated entity (e.g. an incident) are stored. For example, in an integration between ticketing systems the conversation holds all the messages passed between the systems for a single ticket. The conversation makes it much easier to understand what has happened and what stage the integration is in.
Figure 2: Service-Flow UI showing the latest messages in "Message queue".
Figure 3: Service-Flow UI showing the ticket lifecycle in "Ticket Conversation".
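The conversation concept can be sketched as a simple grouping of messages by entity identifier. The structure below is illustrative only, not Service-Flow's actual data model:

```python
from collections import defaultdict

# All messages for one integrated entity (e.g. one incident) are grouped
# under a shared conversation id, so the full history is in one place.
conversations = defaultdict(list)

def record(conversation_id, message):
    conversations[conversation_id].append(message)

record("INC-1001", {"source": "ServiceNow", "op": "create"})
record("INC-1001", {"source": "Service-Flow", "op": "route", "target": "JIRA"})
record("INC-1001", {"source": "JIRA", "op": "update", "status": "In Progress"})

# The whole lifecycle of one ticket can now be inspected at a glance:
history = conversations["INC-1001"]
```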
Routing is based on rules. These rules contain the following parts:
- Route information (e.g. source system, source entity type, target system, target entity type)
- Operation condition (e.g. create, update)
- Attribute and conversation conditions (e.g. content-based conditions that have to be met for the route to apply)
- Attribute mappings (e.g. the actual mappings from source to target data)
Multiple rules can match an incoming message, in which case it may be routed to multiple target systems.
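Rules with the parts listed above might be evaluated roughly as follows. The rule structure and field names here are assumptions made for the sake of the example, not Service-Flow's configuration format:

```python
# Content-based routing rules: route information, an operation condition
# and attribute conditions that must all hold for the rule to apply.
rules = [
    {
        "source_system": "ServiceNow", "source_type": "incident",
        "target_system": "JIRA", "target_type": "issue",
        "operation": "create",
        "conditions": {"category": "software"},
    },
    {
        "source_system": "ServiceNow", "source_type": "incident",
        "target_system": "email", "target_type": "notification",
        "operation": "create",
        "conditions": {},   # no attribute conditions: always applies
    },
]

def matching_rules(message):
    return [
        r for r in rules
        if r["source_system"] == message["source_system"]
        and r["source_type"] == message["source_type"]
        and r["operation"] == message["operation"]
        and all(message["attributes"].get(k) == v
                for k, v in r["conditions"].items())
    ]

msg = {"source_system": "ServiceNow", "source_type": "incident",
       "operation": "create", "attributes": {"category": "software"}}
# Both rules match, so the message is routed to two target systems.
targets = [r["target_system"] for r in matching_rules(msg)]
```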
Attribute mappings are the actual workhorse of transforming data between separate systems. Service-Flow supports several kinds of mappings, such as direct field-to-field copy, template, append, 1-to-1 translation and n-to-n translation.
Figure 5: Service-Flow mapping editor
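A few of the mapping types listed above can be sketched in plain code. The mapping definitions and field names are illustrative, not Service-Flow's mapping syntax:

```python
source = {"short_description": "Disk full", "priority": "1", "host": "db01"}

# Direct copy: field-to-field
summary = source["short_description"]

# Template: combine several source fields into one target field
description = "Host {host}: {short_description}".format(**source)

# 1-to-1 translation: map source values to target values via a lookup table
priority_map = {"1": "Critical", "2": "High", "3": "Medium"}
target_priority = priority_map[source["priority"]]
```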
Best Practices for Connecting to Service-Flow
Extending your ITSM processes outside the boundaries of your organization requires integrating with the service provider's ITSM system, for instance by using Service-Flow. Establishing a secure bi-directional network connection typically requires changes to the network infrastructure in which the ITSM tool has been deployed.
Connecting a SaaS ITSM tool that runs outside the customer network
If the ITSM tool is already running outside the customer network, Service-Flow can usually connect to it over HTTPS without any additional network settings.
Figure 6: SaaS connectivity
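Such a connection boils down to an authenticated HTTPS request against the SaaS tool's API. The sketch below builds, but does not send, a hypothetical ticket-creation request; the URL, endpoint, token and payload fields are assumptions for illustration:

```python
import json
import urllib.request

payload = {"short_description": "Disk full", "priority": "1"}

# Build an HTTPS POST request to a hypothetical ITSM REST endpoint.
req = urllib.request.Request(
    "https://itsm.example.com/api/incidents",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},   # placeholder credential
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted here
# because this sketch is run without network access.
```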
Connecting an ITSM tool that runs in the customer's internal network
A reverse proxy acts as a controlled gateway between internal and external network resources by publishing internal services (here, the ITSM tool) to external users (here, Service-Flow). By placing the reverse proxy in a demilitarized zone (DMZ), access to it can be controlled and fine-tuned to match your requirements. Typically, access from the public internet is limited to HTTPS from Service-Flow, while the published service is the web service API of the ITSM tool. By using a reverse proxy, you retain full control of your network and security while gaining the additional benefit of using external services.
Figure 7: Secure reverse-proxy-based connectivity between the customer and Service-Flow
Connecting to a cloud-based ITSM tool via email
The most common integration type is sending email between separate systems. This method is also commonly used with event management and monitoring systems, which are configured to send events in email format. Because email content carries little structure, it is challenging for the receiving ITSM tool to automatically create the correct ticket types, update the correct information and handle large volumes of events.
With Service-Flow you can easily connect an event management system to your ITSM tool's web API (or similar). In this example the connection is uni-directional: the event management system creates events, the events are sent to Service-Flow, and Service-Flow transforms the event messages into the correct format according to the chosen ready-made Service Adapter.
Figure 8: Example of connecting event emails to an ITSM tool
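The uni-directional email flow can be sketched as parsing an event email into a ticket-creation payload. The subject convention, the key/value body format and the payload field names below are illustrative assumptions:

```python
from email.message import EmailMessage

# A monitoring email as an event management system might send it.
raw = EmailMessage()
raw["Subject"] = "ALERT: CPU high on web01"
raw.set_content("Severity: 2\nHost: web01")

def email_to_ticket(msg):
    """Parse 'Key: value' lines from the body into a ticket payload."""
    fields = {}
    for line in msg.get_content().splitlines():
        key, _, value = line.partition(":")
        if value:
            fields[key.strip()] = value.strip()
    return {
        "short_description": msg["Subject"].removeprefix("ALERT:").strip(),
        "priority": fields.get("Severity"),
        "configuration_item": fields.get("Host"),
    }

ticket = email_to_ticket(raw)
```

The resulting payload could then be delivered to the ITSM tool's web API by the chosen adapter.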
We at Service-Flow take security very seriously. For a SaaS solution that communicates with several services across network boundaries, security is a critical aspect that must be carefully scrutinized on several layers.
Service-Flow needs to store data in its database. This data includes all the messages passing through the system, user account information and configuration. Service-Flow stores message information at least until the message has been processed and sent to the target system. All data routed by Service-Flow is stored securely within the certified data centers with strict access control.
All communication between Service-Flow solution components is always encrypted. Service-Flow uses SSL both between web browsers and the components running in the cloud, and between adapters and brokers. Service-Flow uses SSL certificates signed by trusted, well-known certificate authorities.
Service-Flow is deployed to Amazon Web Services (AWS) cloud in Ireland. AWS data centers have passed numerous security audits like SAS70 Type II. AWS has also achieved ISO 27001 certification.
Communication between your ITSM tools and the Service-Flow platform is secured by strong certificates signed by well-known Certificate Authorities. For most cases, HTTPS over port 443 is the preferred communication protocol, providing strong security with minimal changes to your infrastructure.
The Service-Flow cloud is a distributed system where access to the services is controlled by Access Control Lists. The services are developed and tested according to industry best practices, with security, fault tolerance and availability in mind. Service-Flow application security is audited yearly by an external security company.
Follow the links below for detailed information about the available options.