- Retained graphics mode; drawing code does not paint anything immediately but updates an internal model (the scene graph). This internal model is maintained by the operating system, and all optimizations and fine details are handled transparently by the graphics pipeline. All variations of XAML (WPF, Silverlight, Windows Phone) use retained graphics. In essence, the XAML markup is a partial mirror of the visual tree maintained by XAML, and this visual tree is managed by .NET. The advantages of using retained graphics are manifold;
- You don't need to worry about pixels, refreshing (invalidating) bits and pieces, using coordinates (except the single coordinate which represents the root of the drawn object) and so on.
- Things are vectorial, and all sorts of manipulations (scaling, rotation…) are rather easy; in any case, far easier than in immediate mode.
- Performance and optimizations are handled by the framework. This doesn't mean the result is always perfect, but at least you can focus on application logic rather than having to dig into sometimes difficult rendering issues.
- Device independence; you don't need different rendering logic for different form factors.
The disadvantages are essentially the advantages you find in the immediate mode below.
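The retained-mode idea can be sketched in a few lines: application code only mutates a tree of nodes and flags them dirty, while the "framework" decides what to repaint. This is a minimal illustration, not WPF's actual visual-tree implementation; all class and method names here are invented for the sketch.

```python
# Minimal retained-mode sketch: the app edits a scene graph; the
# "framework" (the render function) decides what needs repainting.
# Names are illustrative, not the actual WPF API.

class Node:
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y          # single anchor coordinate
        self.children = []
        self.dirty = True              # framework tracks invalidation

    def add(self, child):
        self.children.append(child)
        self.dirty = True
        return child

    def move(self, dx, dy):
        self.x += dx
        self.y += dy
        self.dirty = True              # no manual repaint by the app

def render(node, depth=0, out=None):
    """The 'framework' walks the tree and repaints only dirty nodes."""
    out = [] if out is None else out
    if node.dirty:
        out.append(f"{'  ' * depth}draw node at ({node.x}, {node.y})")
        node.dirty = False
    for child in node.children:
        render(child, depth + 1, out)
    return out

root = Node()
shape = root.add(Node(10, 20))
shape.move(5, 0)                       # the app only mutates the model
print("\n".join(render(root)))
```

Note that the application never touches pixels or issues a repaint; it edits the model and the pipeline does the rest, which is exactly the trade the bullet points above describe.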
- Immediate (or direct) graphics mode; your code draws directly on a canvas, and the operating system does not keep a scene graph of what is being drawn. Usually the application or API has its own internal model of the scene. The (older but still very much used) GDI and GDI+ rendering of WinForms is the typical example, and it closely resembles the way one uses a writeable bitmap to draw and animate things. The advantages of direct mode are;
- The only memory consumed is that of the drawing itself and the application's scene graph (if any). The retained graphics mode consumes much more memory because the drawing instances (the Control class, the Shape class…in XAML) usually contain much more than the application needs. In the case of XAML, everything related to styling, templating, triggers…is part of the rendering API whether or not you need it. The application's own scene graph (usually necessary) also absorbs memory, but it is typically more lightweight and more efficient for the case at hand.
- You have more control over potential optimizations and over how optimizations inherent to the business case can be fed into the drawing process. Retained graphics is business-agnostic and relies only on purely technical (low-level rendering) knowledge. Referring to the two diagramming paradigms below: the retained graphics pipeline doesn't know whether you want a particle system with large-scale topology or a small-scale UML diagram with rich interactivity. Using a writeable bitmap or something similar is likely more efficient if you have thousands or millions of shapes/items in your diagram.
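The immediate-mode counterpart looks very different: the application owns a raw pixel buffer (think WriteableBitmap or a GDI canvas) and is responsible for clearing and redrawing the whole scene itself on every change. A minimal sketch, with invented names and a text "bitmap" standing in for real pixels:

```python
# Minimal immediate-mode sketch: the app owns the pixel buffer and
# must redraw everything itself each frame; the OS keeps no model
# of what was drawn. Names and sizes are illustrative.

WIDTH, HEIGHT = 16, 8

def clear(buffer):
    for y in range(HEIGHT):
        for x in range(WIDTH):
            buffer[y][x] = "."

def draw_rect(buffer, x0, y0, w, h, ink="#"):
    # Per-pixel work: moving the rectangle later means clearing
    # and redrawing the scene, since only pixels are remembered.
    for y in range(y0, min(y0 + h, HEIGHT)):
        for x in range(x0, min(x0 + w, WIDTH)):
            buffer[y][x] = ink

frame = [["."] * WIDTH for _ in range(HEIGHT)]
# "Animation": each frame the app clears and redraws at a new position.
for step in range(3):
    clear(frame)
    draw_rect(frame, x0=step * 2, y0=2, w=4, h=3)
print("\n".join("".join(row) for row in frame))
```

The per-pixel cost is obvious, but so is the appeal: memory is exactly one buffer, and any business-specific shortcut (only redrawing a region, batching shapes) is entirely in your hands.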
- Diagrams where individual shapes and connections matter. These are diagrams where the user needs to click and select individual shapes, alter their properties, create new connections and so on. This is the Visio-like and RadDiagram paradigm. The API is rich and offers a framework which can be modelled according to the needs of the business context. Diagrams in this category benefit from the retained graphics pipeline (XAML, SVG…) since it articulates a RAD methodology and adapts to the widely different business contexts in which diagrams are used.
- Diagrams which aim at giving a global (bird's-eye) view of a certain topic, where global topology matters more than local links. These diagrams are about seeing clusters and broad relationships (e.g. LinkedIn and Twitter networks), about graph layout on a large scale (e.g. what does the internet look like globally?), and about how particle systems represent the dynamics of a certain (business) system (e.g. air traffic control on a global scale). This is the data-visualization paradigm, which aims at representing data and giving insight into big data sets. In this context, the result is usually more important than the creation process (see e.g. the LinkedIn diagram below; who cares how it's created? Only the resulting topology is interesting, and the data is not in the diagram but the result of data stored and edited elsewhere). This paradigm benefits from the direct rendering pipeline (Canvas, bitmap, GDI…), and the result is often quite static (tooltips being the prototypical 'interaction').
The problem zone is the gray area between the Visio and the data-visualization paradigms; what if you want to display huge diagrams and keep interactivity to the max? Certain business domains are indeed susceptible to this dilemma: forensic data analysis, social network analysis, security and anti-terrorism agencies and the like. What can be done in this case? Let's first of all be clear about the RadDiagram framework: it sits in the Visio paradigm and will not scale to the millions. There are various reasons for this:
- By design; the RadDiagram framework was designed (and this is the general spirit of Telerik's suite of XAML controls) to let you rapidly create great diagrams with a minimum of knowledge about diagram drawing and graph theory. It was not developed for large-scale diagrams like the LinkedIn sample above. The shapes (RadDiagramShape) and connections (RadDiagramConnection) are rich controls which, on top of the already loaded .NET framework API (i.e. the ContentControl and Control classes), add an additional layer of interactivity and customization. This enables rich, interactive diagrams but sacrifices (to some extent) memory and processing. This doesn't mean performance is sacrificed lightly, but the choice is consistently for breadth and scope of applications (workflow, organization charts and so on) rather than for scalability.
- The XAML framework is inherently a bad choice for scaling things to the millions. Even when using virtualization techniques, each and every instance of, say, the Control class comes with a wealth of machinery (templating, styling, triggers, events…) which consumes memory whether or not you need it in your concrete business context (application). The XAML framework is rich in scope, but at a price. On top of this one could add that, even though it has become less of an argument than ten years ago, the managed programming paradigm can also be an issue: some situations still benefit more from a pure C++ approach than from a managed one.
- RadDiagram can articulate a wide variety of diagramming tasks, but at the same time it does not focus on any type in particular. If your application is all about tree graphs and large hierarchies, then there are certainly ways in which the tree-layout code could be optimized for the data you wish to display. That is, the graph layout and the internal engine managing shapes and connections are not geared to anything in particular, while there are definitely shortcuts possible if some knowledge (properties) of the data to be displayed is available. For example, testing for graph cycles in the layout could be omitted if the data is guaranteed to be acyclic. On many levels there are ways in which a custom implementation could give specific applications a performance or scalability boost. This customization is part of the Telerik consulting services and not something which can be done by customers, due to the End User License Agreement.
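The cycle-check example mentioned above can be made concrete. The sketch below shows a generic layered-layout step that normally runs a DFS cycle test, plus a flag letting the caller skip it when the data is guaranteed acyclic; this is purely illustrative and not RadDiagram's actual engine or API.

```python
# Sketch of a data-knowledge shortcut: a layout step that normally
# checks for cycles can skip that work when the caller guarantees
# the graph is acyclic (e.g. a pure tree or hierarchy).

def has_cycle(edges, nodes):
    # Standard DFS cycle test: this is the cost we want to avoid.
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
    state = {n: 0 for n in nodes}   # 0 = unseen, 1 = on stack, 2 = done
    def dfs(n):
        state[n] = 1
        for m in adj[n]:
            if state[m] == 1 or (state[m] == 0 and dfs(m)):
                return True
        state[n] = 2
        return False
    return any(state[n] == 0 and dfs(n) for n in nodes)

def layer_assign(edges, nodes, assume_acyclic=False):
    """Assign each node a layer = longest path from a root."""
    if not assume_acyclic and has_cycle(edges, nodes):
        raise ValueError("layout requires an acyclic graph")
    layer = {n: 0 for n in nodes}
    for _ in range(len(nodes)):        # relax longest-path layers
        for a, b in edges:
            layer[b] = max(layer[b], layer[a] + 1)
    return layer

nodes = ["root", "a", "b", "leaf"]
edges = [("root", "a"), ("root", "b"), ("a", "leaf"), ("b", "leaf")]
print(layer_assign(edges, nodes, assume_acyclic=True))
```

A general-purpose engine must always pay for the safety check; an application that knows its data is, say, an organization chart can skip it, and that is the kind of case-specific tuning the paragraph above refers to.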
- What does my end user gain from being able to see a million shapes and connections? Data is not the same as information; displaying a lot of shapes is not (and usually does not result in) better information visualization. In most business applications a diagram is just one way, not the only way, to gain insight into a dataset. Usually it has to be combined with other visualizations (timeline, Gantt, pies…) in order to give a fuller picture of the question at hand. Will your user really see the needle in the huge diagram? Wouldn't it be better to guide the user to a smaller set first and then display the diagram?
- What is the business question the user tries to answer using the diagram? A great visualization is not an application where data is just presented; an application should invite the user to follow a certain path to answer a business-related question, and it should present a workflow (screen flow) and user experience (UX). In much the same way, it's not very useful to display a million rows in a data grid if there is no way to filter and experiment with the data; one should focus on the aim of a representation (solving a business-related question) rather than just bluntly offering a lot of data and a lot of widgets.
- Do the details matter, or only the result? In many situations data is either noisy or needs pre-processing before being visualized, and this is true for diagrams as well. In various business domains it's necessary to pre-process data using SQL Server's Analysis Services, StreamInsight (or the like) or OLAP techniques before sending the result to the (visualizing) client. Many situations where large diagrams and interactivity are expected can be solved by delegating the filtering and selection process to the backend. The UX aimed for, and a study of what the application is intended to do, often dictate how big diagrams can be reduced to their essence. Blaming hardware and software performance is too often an excuse for not digging into the tougher (business and UX) questions.
- What is the long-term vision of the application, and how will the data scale in the long term? The choice of data visualization controls (not just diagramming) should be made with tomorrow in mind, not with how your application is today (or has been in past years). The importance of this question resides in the difficulty one faces when trying to shift a diagramming visualization between the two paradigms. Because the approaches (i.e. the rendering techniques) are so fundamentally different, it takes quite a turn to shift things from one to the other.
- If you have moved from a direct rendering context to a retained context (e.g. you upgraded a WinForms app to a WPF app), you might have discovered that you gained in interactivity but that the scalability is not what you expected. If this is the case, you need to question the shift, look for a compromise, or consider whether a rethinking of the UX is in order.
- If you have a big data warehouse and wish to use XAML, you need to think about how to fully exploit backend processing before naively sending terabytes of data to the client (with unrealistic scalability expectations).
- If you do not want to compromise on the number of items in the diagram and you consider targeting mobile platforms, you might discover that web technologies nowadays offer solid performance and scalability. The price to pay in this case is the shift in programming paradigm and the lack of a rich (diagramming and non-.NET) API.
- As explained above, much fine-tuning is possible in RadDiagram for concrete business cases and with more knowledge of the data. The customization of RadDiagram through consulting services might be what you need.