REST is a powerful tool for integrating external applications with your community, but there are aspects of REST that can affect your application's performance if you are not conscious of them from the beginning. This guide will help you navigate those hurdles so you can build a powerful and responsive REST application.
[toc]
There are two main REST-specific factors that can create bottlenecks in an application. The first is the simple fact that REST is based on making requests over HTTP. This means you are not only depending on the community to process your request; you also have to send that data to and from the community over the network, which adds latency to the equation. The second is payload size: the more data you request, the larger the payload sent over the network, which can also slow down your application.
Synchronous vs Asynchronous
There are two approaches to using asynchronous patterns: one specific to server-side interaction, the other related to interface responsiveness.
Async Patterns and Practices for Server Side Code
The following applies to integrations written using .NET. If you are using another technology consult that technology's documentation for similar functionality.
.NET 4.5 introduced the async programming model, which greatly reduced the effort it takes to write asynchronous code. This model was extended to common classes such as HttpClient, which are key when communicating with REST over HTTP. Essentially, this pattern allows key execution methods, specifically communication-based methods, to run asynchronously, so you can start them without waiting for them to complete before continuing execution. This way you can process unrelated logic, or subsequent requests that do not depend on each other, independently. The details of implementing such a pattern are beyond the scope of this article, but you can read about how it works in the documentation on MSDN.
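The same idea applies in other server-side technologies. As a minimal sketch in JavaScript (the endpoint paths are hypothetical examples, and `fetchImpl` is a parameter so the pattern works with any fetch-compatible client), the key is to start independent requests first and await them together, so the total wait is roughly the slowest request rather than the sum of all of them:

```javascript
// Thin wrapper: request a URL and parse the JSON body, surfacing HTTP errors.
async function fetchJson(url, fetchImpl = fetch) {
  const response = await fetchImpl(url);
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json();
}

// Start both independent requests before awaiting either one.
// The forums/users endpoints here are illustrative placeholders.
async function loadDashboard(baseUrl, fetchImpl = fetch) {
  const forumsPromise = fetchJson(`${baseUrl}/api.ashx/v2/forums.json`, fetchImpl);
  const usersPromise = fetchJson(`${baseUrl}/api.ashx/v2/users.json`, fetchImpl);
  const [forums, users] = await Promise.all([forumsPromise, usersPromise]);
  return { forums, users };
}
```

The important detail is that both promises are created before either is awaited; awaiting each request immediately after creating it would serialize them again.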
If you are using the REST SDK, async/Task-based methods are already available to use.
Async Patterns for The User Interface
The worst thing that can happen in an application is a user stuck waiting an unreasonable amount of time for a page to load. If you are making several requests while a page loads, that can be the result. When designing your application, load the minimum amount of useful information (which might be nothing) when the page loads, then load secondary information asynchronously via JavaScript. The term coined for this in the web world is AJAX (Asynchronous JavaScript and XML). For example, if you have a page that lists threads, consider loading only the forum information first, then making that information available to the page's JavaScript so it can load the threads. This way you provide information about the forum to the user immediately, while loading the threads after the fact in a way that gives the user feedback that the page is still working (a spinner or loading animation of some kind). This practice is used in many places in the community interface as well, the activity story stream being an important example. Listing activity stories is an expensive operation, so we load them asynchronously after the page has loaded.
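The thread-listing example above can be sketched as follows. This is an illustration only: the endpoint path, element ids (`thread-spinner`, `thread-list`), and the `Threads`/`Subject` property names are assumptions for the sketch, not documented names.

```javascript
// Turn a list of thread objects into list-item markup.
function renderThreads(threads) {
  return threads.map((t) => `<li>${t.Subject}</li>`).join("");
}

// After the page has rendered the forum information, fetch the threads
// asynchronously, showing a spinner while the request is in flight.
async function loadThreadsAsync(forumId, fetchImpl = fetch) {
  const spinner = document.getElementById("thread-spinner");
  const list = document.getElementById("thread-list");
  spinner.hidden = false;
  try {
    const response = await fetchImpl(
      `/api.ashx/v2/forums/${forumId}/threads.json?PageSize=5`
    );
    const data = await response.json();
    list.innerHTML = renderThreads(data.Threads);
  } finally {
    spinner.hidden = true; // hide the loading indicator whether or not the request succeeded
  }
}
```

Because the fetch happens after the initial render, the forum details appear immediately and the thread list fills in when the request completes.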
Keep in mind that, for security reasons, JavaScript only allows requests to the same domain unless you enable CORS (Cross-Origin Resource Sharing) or use an on-domain proxy such as a custom HTTP handler (.NET). You can enable CORS for your community in Administration > Integration > CORS, but be sure to learn more about CORS if you are not already familiar with it, so that you understand any potential security risks. If you choose to write a proxy and you are using .NET, you can also utilize the async programming pattern discussed earlier in the server-side implementation.
Batching
If you do have to make multiple requests at once and asynchronous loading is not an option, you can potentially collapse all those requests into a single HTTP call using the Batch REST endpoint. This single call processes the included child requests in-process on the server instead of making a network request for each one. Use caution here as well: as you add child requests to a batch request, you increase the size of both the request and the response. You can learn how to do batching in REST in the batch endpoint section here, or via the SDK here in the batching section.
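As a rough sketch of the idea, the client builds one POST whose body describes each child request, then sends it in a single HTTP round trip. The field names below (`Request0.Url`, `Request0.Method`) are hypothetical placeholders; the actual parameter format is defined in the batch endpoint documentation linked above.

```javascript
// Collapse a list of child requests into one form-encoded POST body.
// The "RequestN.*" field names are illustrative, not the documented format.
function buildBatchBody(requests) {
  const params = new URLSearchParams();
  requests.forEach((req, i) => {
    params.set(`Request${i}.Url`, req.url);
    params.set(`Request${i}.Method`, req.method);
  });
  return params.toString();
}

// Send the batch as a single HTTP call to the (assumed) batch endpoint URL.
async function sendBatch(baseUrl, requests, fetchImpl = fetch) {
  const response = await fetchImpl(`${baseUrl}/api.ashx/v2/batch.json`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: buildBatchBody(requests),
  });
  return response.json();
}
```

Whatever the exact wire format, the trade-off is the same: one network round trip instead of many, at the cost of a larger single request and response.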
Controlling HTTP Payload
Aside from simply limiting your requests and being careful with batches, you can control the payload coming back from your community in a few other ways, including returning different types of data such as pre-formatted HTML. There are four main ways to manage payload; some require varying degrees of additional development effort, while others are available automatically or with a simple change to the request itself.
Paging
This only applies to LIST requests, that is, requests for data sets rather than individual items. The majority of LIST requests are paged sets, so you do not necessarily get all of the items in the set you are requesting. What you get back is the current page in the set, the number of items in each page, and the total number of items in the entire data set. Here is an example from the user list endpoint:
```json
{
  "PageSize": 10,
  "PageIndex": 0,
  "TotalCount": 2345,
  "Users": [...]
}
```
Or in XML:
```xml
<Response>
  <Info />
  <Warnings />
  <Errors />
  <Users PageSize="10" PageIndex="0" TotalCount="2345">
    ...
  </Users>
</Response>
```
Using this information you can create a pager in your user interface. Creating a pager is beyond the scope of this topic, but this information is enough to develop a custom one, or you can look at third-party libraries. What is important is that paging limits the payload size. REST allows a maximum page size of 100, so a list of 101 items or more will span at least two pages. If you try to request more than 100, the page size defaults to 100; if you don't specify a page size, it defaults to 20.
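The paging arithmetic above can be sketched with two small helpers. This is a minimal sketch; the `PageIndex`/`PageSize` query string names match the example response above, and the 100-item cap is the limit just described.

```javascript
// Number of pages needed to show every item, given the TotalCount and
// PageSize values returned by a LIST request.
function totalPages(totalCount, pageSize) {
  return Math.ceil(totalCount / pageSize);
}

// Build the URL for one page of a LIST endpoint, clamping PageSize to the
// API's maximum of 100 (larger values fall back to 100 server-side anyway).
function pagedUrl(baseUrl, pageIndex, pageSize) {
  const size = Math.min(pageSize, 100);
  return `${baseUrl}?PageIndex=${pageIndex}&PageSize=${size}`;
}
```

For the example response above (`TotalCount` 2345, `PageSize` 10), a pager would need 235 pages.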
Before you run off and set your page size to 100, or even accept the default, understand the size of the response. The easiest way to think about it is: total size = size of one object * page size, at minimum, and that assumes equally sized objects, which will never be the case. If you look at the example response of a single user REST request, you can see how this might be a significant amount of data. Also consider your actual needs. Is it truly necessary to show 20 blog posts in a list immediately, or is 5 enough with a pager, since most people only care about the newest content?
You can further improve paged results by applying asynchronous loading when switching pages as discussed earlier.
Include Fields
The easiest way to reduce the payload is the include fields feature on REST requests. By adding IncludeFields to the query string, along with a comma-separated list of property names, you can reduce the response to those fields and top-level objects only. This only works for basic properties; you cannot exclude full objects. If a response contains two objects that have a similarly named property, it will include both. For example, if we call the Info REST endpoint we normally get back four full objects: SiteSettings, ApplicationConfiguration, AccessingUser and InfoResult (not including the standard Info, Errors and Warnings). If we only want the Id, Username and TimeZone from the AccessingUser and only the SiteName from InfoResult, we can format the URL like this:
http://yourcommunity.com/api.ashx/v2/info.json?IncludeFields=SiteName,Username,Id,Timezone
Our result would look something like this:
```json
{
  "InfoResult": { "SiteName": "Your Community Name", "TimeZone": -6 },
  "SiteSettings": null,
  "ApplicationConfiguration": null,
  "AccessingUser": { "Username": "wally", "Id": 2210, "TimeZone": -6 },
  "Info": [],
  "Warnings": [],
  "Errors": []
}
```
Notice the main objects are still there, but only the properties matching our list came back. Also note that both AccessingUser and InfoResult have a TimeZone property, so even though it wasn't intentional, you get both back.
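Building the query string above is simple enough to wrap in a helper. A minimal sketch (the helper name is ours, not part of the API):

```javascript
// Append an IncludeFields parameter to an endpoint URL from a list of
// property names, handling URLs that already have a query string.
function withIncludeFields(url, fields) {
  const separator = url.includes("?") ? "&" : "?";
  return `${url}${separator}IncludeFields=${fields.join(",")}`;
}
```

Calling it with the Info endpoint and the four field names reproduces the URL shown above.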
**Note:** IncludeFields does not work on some endpoints, specifically new endpoints added in version 8.0 and later. This is a known issue that will be addressed in a future release.
Scripted Custom Endpoints
Scripted custom endpoints are powerful in two ways. The biggest is that a scripted endpoint essentially executes a widget on the platform and returns the output via REST. There is no C# code to write, compile and deploy; instead you use the widget editor in Telligent Community to design your logic, then easily make adjustments right from the administration interface, just as you would when customizing the UI. This method gives you access to all the widget APIs, so you can make a widget API call in the widget and reformat the response into your own JSON payload. If you want to display something in your external site but manage the markup from the community, without worrying about dealing with JSON, you can return pre-formatted HTML that has been populated from community APIs. The only thing you cannot do is rely on any community JavaScript APIs or libraries; you should stick to plain HTML only. You can see an example and learn how to use this feature here in the scripted endpoint section.
Custom REST Endpoints
Lastly, you can create your own endpoint if you choose. Custom REST endpoints aren't just for custom data; you can also create your own response based on data from the In-Process API. Simply create a wrapper object containing the fields you want, then populate it with information you retrieve from one or more In-Process API calls.
Caching Response Data
Also consider caching the data you retrieve. For .NET applications this could simply be the ASP.NET runtime cache or a simple memory cache. If you are not using .NET, consult the documentation for your specific technology for caching options.
Community itself caches data extensively to prevent constant database trips. REST benefits from this in the sense that once a request is received, the data may not need to be queried again, but it doesn't help the actual transmission of that data. Your application can take common requests and cache them for short durations to avoid making unnecessary HTTP calls. For example, for a publicly available list of forums that doesn't change, you can cache the response to avoid constantly requesting it via REST each time the page is accessed. You probably don't need to cache it for more than a minute or two to avoid staleness, but if it's seldom-changing data, the duration could be longer. Also be aware of user-specific data. You can cache this as long as you differentiate by user (usually by including a user identifier in the cache key) to avoid an information disclosure vulnerability caused by showing one user's cached data to another.
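A minimal in-memory TTL cache, sketched below, illustrates both points: short expirations for shared data and a user id baked into the key for user-specific data. The class and key format are our own illustration, not part of any SDK; the `now` parameters exist only to make expiry explicit.

```javascript
// Simple in-memory cache with per-entry time-to-live.
class RestCache {
  constructor() {
    this.entries = new Map();
  }
  // Return the cached value, or undefined if missing or expired.
  get(key, now = Date.now()) {
    const entry = this.entries.get(key);
    if (!entry || entry.expires <= now) return undefined;
    return entry.value;
  }
  // Store a value that expires ttlMs milliseconds from now.
  set(key, value, ttlMs, now = Date.now()) {
    this.entries.set(key, { value, expires: now + ttlMs });
  }
}

// For user-specific responses, include the user identifier in the key so
// one user's cached data is never served to another.
function userCacheKey(endpoint, userId) {
  return `${endpoint}::user:${userId}`;
}
```

A forum list might be cached under a plain endpoint key for a minute or two, while anything fetched on behalf of a signed-in user goes under a `userCacheKey`.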