Developing a hyper-scalable network analytics design, called the Flowmon Distributed Architecture, was one of the biggest technology challenges we've faced to date. What were the drivers behind this resource-demanding development project?
The deployment scenario, where multiple Flowmon Collectors each act as a processing and storage unit, all controlled by one or more master units, required building extensive orchestration and management communication processes. All in all, we can now not only generate flow data at 100G line rates, but also process and analyze millions of flows per second in real time, proving our position as the market performance leader once again.
People have asked me why we decided to invest in such a resource-demanding development when, globally, only a handful of organizations reach millions of flows per second. But is it really just a "handful"? And do only customers with multi-100G links need a Distributed Architecture? To answer this, let's explore some of the questions that convinced us to develop the Flowmon Distributed Architecture.
Question #1: Today we use packet analysis to monitor only mission-critical applications. With flow monitoring we could extend the monitoring scope. But your most powerful Flowmon Collector supports 400k flows/s, and our single data center generates over a million.
Answer: Our collectors are already state-of-the-art, reaching the limits of current technology. So, instead of pushing for ever more powerful appliances and hitting the same ceiling again in two or three years' time, we decided such customers would be better served by multiple Flowmon Collectors pooling their performance to act as a single powerful instance.
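To illustrate the general idea, here is a minimal scatter-gather sketch in Python. None of the names below are Flowmon's actual interfaces; they are hypothetical stand-ins showing how a master node can fan the same query out to several collector nodes in parallel and merge their partial results into one answer.

```python
# Hypothetical scatter-gather illustration; these names do NOT
# correspond to actual Flowmon interfaces.
from concurrent.futures import ThreadPoolExecutor

COLLECTORS = ["collector-1.example.net",
              "collector-2.example.net",
              "collector-3.example.net"]  # assumed worker units

def query_collector(host: str, flow_filter: str) -> list[dict]:
    """Stand-in for a per-node flow query; a real system would call
    the node's API here and return its matching flow records."""
    return []  # placeholder result

def distributed_query(flow_filter: str) -> list[dict]:
    """Fan the same query out to every collector in parallel and
    merge the partial results into a single answer for the user."""
    with ThreadPoolExecutor(max_workers=len(COLLECTORS)) as pool:
        partials = pool.map(lambda h: query_collector(h, flow_filter),
                            COLLECTORS)
    merged: list[dict] = []
    for part in partials:
        merged.extend(part)
    return merged

print(len(distributed_query("dst port 443")))
```

The point of the pattern is that query latency stays roughly flat as nodes are added, because each node only searches its own slice of the data.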
Question #2: We have multiple data centers and require at least 2 months of data retention. A single collector would only give us 3 weeks.
Answer: We already support backing up primary data outside Flowmon and restoring it whenever needed. But to keep the data ready for analysis at any time, we decided to extend the available storage with the Distributed Architecture, pooling the capacity of multiple collectors.
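As a quick back-of-the-envelope check of the retention math in the question above, assuming flow data is spread evenly across nodes (a simplification; real retention also depends on per-site traffic volume):

```python
# Rough retention estimate, assuming incoming flow data is spread
# evenly across collector nodes (a simplifying assumption).
import math

single_node_retention_days = 21   # ~3 weeks on one collector
target_retention_days = 60        # ~2 months required

nodes_needed = math.ceil(target_retention_days / single_node_retention_days)
print(nodes_needed)  # -> 3 collectors to cover the retention target
```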
Question #3: We provide professional services with Flowmon and need to ensure 100% uptime. How can we build an effective DA deployment?
Answer: Some of our customers simply cannot afford any downtime, as Flowmon is a business-critical service for them. Before the Distributed Architecture, they could only deploy redundant instances, each collecting the same information and each configured and managed separately. This is now all done through a single interface: any configuration or policy applied on the master is automatically synced to the slaves, including upgrades, reports, alerts, profiles and much more.
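The sync pattern itself is a classic one. Below is a generic, hypothetical sketch of how a master can detect and correct configuration drift on its slaves; it is not Flowmon's actual mechanism, just the shape of the idea.

```python
# Generic master-to-worker configuration sync pattern; hypothetical,
# not Flowmon's actual implementation.
import hashlib
import json

def config_digest(config: dict) -> str:
    """Stable hash of a configuration, used to detect drift."""
    payload = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def sync(master_config: dict, workers: dict[str, dict]) -> list[str]:
    """Push the master's config to every worker whose digest differs;
    returns the names of the workers that were updated."""
    want = config_digest(master_config)
    updated = []
    for name, cfg in workers.items():
        if config_digest(cfg) != want:
            workers[name] = dict(master_config)  # apply master's policy
            updated.append(name)
    return updated

workers = {"slave-1": {"alerts": "v1"}, "slave-2": {"alerts": "v2"}}
print(sync({"alerts": "v2"}, workers))  # -> ['slave-1']
```

Hashing the desired state and reconciling only the nodes that diverge keeps the sync cheap and idempotent: running it twice changes nothing the second time.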
Question #4: We have different NPMD tools in every data center and separate IT teams supporting operations in each location. We're now consolidating services under a single global IT team. We want to keep the existing flow sources in every location, but use a single instance to monitor all the sites.
Answer: Some of our customers have tried to concentrate feeds from different solutions into SIEMs. This works for events. But to carry out troubleshooting, engineers still need to jump from the SIEM to the respective tools and investigate there. The same problem appears in cross-team operations. To reduce operational costs and streamline cooperation on troubleshooting, companies look for shared tools and shared pools of data.
Trends like Cloud-centric Architecture, Hyperconverged Infrastructure or Software Defined Networking have enabled enterprises to collocate their data and services. This inevitably led to an explosion of east-west bandwidth within a single location, calling for hyper-scalable network traffic analytics. Oddly enough, the same applies when distributing your load with Edge computing: in that scenario we have to deal with multiple data centers, each resource-demanding, yet we still want a single-pane-of-glass visibility solution. And very much the same scalability requirements apply to SaaS providers.

Companies look for agile visibility solutions that can grow with their infrastructure needs and support their Hybrid IT environment, which is exactly what you get with the Distributed Architecture. Standardizing the analytics toolset and data source used for NetOps also leads to a substantial cost reduction, as it takes less knowledge and time than implementing and operating multiple solutions.
More information can be found in this product brief (PDF).