
2 posts tagged with "dashboard"


2 min read

We push changes to production almost every day, sometimes multiple times a day. These changes include new product features, bug fixes, and other high-priority updates.

Based on customer feedback, we rolled out several changes to the monitoring dashboards and surfaced a few new metrics around execution time and success rate for monitoring checks.

Here is an example screenshot of the new monitor results page:

[Screenshot: Monitoring metrics]

The page shows the result trend for a monitoring check, with the success rate over 24-hour, 3-day, and 7-day intervals.

We also show the 90th percentile runtime for a monitoring check. The 90th percentile is widely used to identify performance issues when testing application functionality. A warning is shown if the 90th percentile for the 24-hour window exceeds the 3-day or 7-day value by more than 20%.
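
As an illustration, here is a minimal sketch in plain JavaScript of how these metrics could be derived from raw check results. This is not DevRaven's actual implementation; the function names and the nearest-rank percentile method are assumptions for the example.

  // Success rate for a window: percentage of checks that passed.
  function successRate(results) {
    const passed = results.filter((r) => r.success).length;
    return (passed / results.length) * 100;
  }

  // Nearest-rank percentile of an array of runtimes (ms).
  function percentile(values, p) {
    const sorted = [...values].sort((a, b) => a - b);
    const rank = Math.ceil((p / 100) * sorted.length);
    return sorted[Math.max(0, rank - 1)];
  }

  // Warn when the 24-hour P90 exceeds the 3-day or 7-day P90 by more than 20%.
  function p90Breach(runtimes24h, runtimes3d, runtimes7d) {
    const p90 = (values) => percentile(values, 90);
    return p90(runtimes24h) > 1.2 * p90(runtimes3d) ||
           p90(runtimes24h) > 1.2 * p90(runtimes7d);
  }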

There is also a new chart on the page that shows the execution time of the monitor's latest 20 checks across all selected monitoring locations. The chart lets you visualize performance by location and spot trends in recently executed checks.

Collection results

Collections let you group monitoring checks to test an end-to-end flow, or group checks to run against multiple environments.

With the recent introduction of Collections, we realized that the experience of viewing results for a monitor within a specific Collection was complex.

This week's changes simplify that experience: you can now view the results of a monitor for a specific Collection.

Here is an example:

[Screenshot: Monitoring metrics]

The screenshot shows a monitor named User Profile Checker configured to run on Staging and Production using Collections.

The results for this monitor on Staging or Production can now be accessed using the new Collection tiles. Each tile also shows the recent results for the monitor in that Collection.

Changes to notifications

The Synthetic Testing and API Monitoring notifications for email, Slack, and Teams now include the Collection name. This gives you more context about a monitor reporting a failure or recovery so you can prioritize accordingly.

Other updates include:

  • Inconsistent use of shadows vs. borders for tile and card layouts has been fixed. We now consistently use bordered flat layouts.
  • Bug fixes

2 min read

Today we are announcing the availability of the No-Code Editor for defining Synthetic Tests in DevRaven.

Previously, adding a new synthetic test required users to directly write the JavaScript code for executing Playwright tests. However, we received feedback that writing code directly is hard, and that some users are not familiar with JavaScript and use other languages for test automation.
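
For context, a scripted synthetic test looked roughly like the following Playwright snippet. The URL and selectors here are purely illustrative, not taken from an actual DevRaven test.

  const { chromium } = require('playwright');

  (async () => {
    const browser = await chromium.launch();
    const page = await browser.newPage();
    // Hypothetical login flow; URL and selectors are placeholders.
    await page.goto('https://example.com/login');
    await page.fill('#email', 'user@example.com');
    await page.fill('#password', 'secret');
    await page.click('button[type="submit"]');
    // Fail the check if the flow did not land on the expected page.
    if (!page.url().includes('/dashboard')) {
      throw new Error('Login flow did not reach the dashboard');
    }
    await browser.close();
  })();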

Based on this feedback, we added the No-Code Editor, which offers ready-to-use actions that can be composed into an end-to-end flow.

Browser interactions, assertions, miscellaneous actions, and custom scripts can be added to a flow with a single click. Steps can also be dragged and dropped to change the execution sequence. For complex scenarios, we continue to support the full Scripting Mode.

[Screenshot: Editor Actions]

We also published content, including a video, for users who prefer to use a browser-based recorder to generate the code. Refer to Recording Tests for more details on using the recorder to generate Synthetic tests.

Other updates include: