
· 3 min read

We are announcing the closure of DevRaven, which will take effect on June 30th, 2023. For paying customers, the closure will take effect at the end of their current paid term following June 30th, 2023.

Here are some example scenarios to illustrate how the closure will take effect for your Workspace:

  • Free Workspace: Effective June 30th, 2023.
  • Paid Workspace (Monthly): At the end of the current term after June 30th, 2023.
  • Paid Workspace (Annual): At the end of the current term after June 30th, 2023.

The decision to shut down has not been easy for us. However, after carefully evaluating current market conditions and our product-market fit, we have determined that continuing to operate the service is no longer sustainable.

We started building DevRaven based on our experience as part of fast-paced engineering teams, and after talking to prospective customers we believed that automation around the most important workflows and use cases would help teams ship to production faster. But we grossly underestimated the effort it takes to sell to enterprises and to complete Vendor Risk Assessments (VRAs). Even though we offer features for monitoring complex workflows, we found that enterprises are not willing to invest time in yet another monitoring tool. Overall, it's fair to say we did not achieve the right product-market fit or find the differentiator that would get prospective customers excited.

Here's what you can expect in the coming days:

  1. Service Access: You will be able to access and use the service until the effective closure date for your Workspace. We encourage you to retrieve any data, tests, or information before the effective closure date for your Workspace.
  2. Account Deletion: Starting in the first week of July 2023, we will begin the process of deleting the Workspaces (or you may proactively initiate the Workspace deletion yourself from the Workspace Settings page). This process is irreversible, and once the Workspace is deleted, all associated data will be permanently removed.
  3. User Account Deletion: Once all associated Workspaces for your user account are deleted, we will delete your user account as well.

While DevRaven is shutting down, based on our learnings and the problems we encountered while building and running the service, we have launched an open-source, MIT-licensed, self-hostable integration and data pipeline platform. Additionally, we will be open-sourcing some of the technology we built for DevRaven in the coming days. We hope you will find these technologies useful.

Thank you for your understanding and cooperation. We are incredibly grateful to have had the opportunity to serve you, and we sincerely apologize for any inconvenience caused.

Wishing you all the best for your future endeavors.

· One min read

We are announcing the retirement of the Sydney monitoring location. A very limited number of customers currently use this location, but we still incur costs to maintain it. Retiring resources that are not effectively used allows us to cut costs and redirect those resources toward offering better services.

What does this mean?

  • Effective today, you will not be able to select the Sydney monitoring location for any new monitors.
  • If you already have monitors running at the Sydney location, they will continue to run.
  • You will be able to deselect the Sydney location for currently configured monitors, but once deselected you won't be able to select the Sydney location again.

If you have any questions, please contact [email protected] for more details.

· 2 min read

I am excited to announce the availability of two new features today.

Distributed Tracing

We rolled out support for distributed tracing using the OpenTelemetry traceparent header (per the W3C Trace Context specification). Distributed tracing can be enabled for Browser Tests or API Tests. When distributed tracing is enabled for a test, the test results section shows a unique Trace ID that can be used to trace the transactions in your distributed tracing tool.

Here is an example Trace ID generated for a Browser Test:

Trace ID

Using the Trace ID, you can trace the requests, their latencies, performance, associated logs, and other data. The screenshot below is a view from Google's Cloud Trace (the tool we use) showing the trace for the executed test.

Distributed Trace
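
To make the mechanics concrete, here is a small sketch of how a W3C `traceparent` header can be parsed to recover the Trace ID shown in the results. The header value below is the illustrative example from the W3C specification, not a real DevRaven trace:

```javascript
// Parse a W3C Trace Context `traceparent` header.
// Format: <version>-<trace-id>-<parent-id>-<trace-flags>
function parseTraceparent(header) {
  const match = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header);
  if (!match) throw new Error(`Invalid traceparent header: ${header}`);
  const [, version, traceId, parentId, flags] = match;
  // Bit 0 of the flags indicates whether the trace was sampled.
  return { version, traceId, parentId, sampled: (parseInt(flags, 16) & 0x01) === 1 };
}

const ctx = parseTraceparent('00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01');
console.log(ctx.traceId); // this is the ID you would look up in your tracing tool
```

The `trace-id` segment is what the test results surface, so you can paste it directly into your tracing tool's search.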

Services

Services allow you to quickly visualize Service Health based on the status of associated checks. Services can be associated with SSL Monitors, API Monitors, Browser Tests, or even one or more Collections. When an associated test, or a test that is part of an associated Collection, fails, the Service status changes to reflect the failed test.
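
The health derivation described above boils down to aggregating check statuses. A minimal sketch of the idea (the status names here are illustrative, not DevRaven's exact terms):

```javascript
// Derive a Service's health from the status of its associated checks.
// Status names ('unknown', 'operational', 'degraded') are illustrative.
function serviceStatus(checks) {
  if (checks.length === 0) return 'unknown'; // nothing associated yet
  return checks.every((c) => c.passing) ? 'operational' : 'degraded';
}

const checks = [
  { name: 'ssl-monitor', passing: true },
  { name: 'login-browser-test', passing: false }, // one failing check degrades the Service
];
console.log(serviceStatus(checks));
```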

Here is an example Overview page for Services in a Workspace.

Service Overview

· 2 min read

A browser test recorder has been one of the features most requested by our customers. Today I am happy to announce the launch of DevRaven Recorder.

DevRaven Recorder is a free Chrome extension that you can install to quickly and easily record a browser test. The extension automatically generates the code for your test scenario as you perform the operations, and the generated code can simply be copied to set up a synthetic monitor for your scenario. Zero coding skills required!

The extension currently supports the following features:

  • Captures mouse interactions including click, hover, and double-click.
  • Captures data inputs to fields such as text, password, single- and multi-select, and textarea fields.
  • Captures interactions with checkboxes and radio buttons.
  • Captures keystrokes, including the Meta (Command), CTRL, and ALT (Option) keys.
  • Detects page navigations such as full web page, SPA or hash change events and automatically waits for navigation to complete.
  • Support for capturing screenshots while recording the tests.
  • Support for adding waitForSelector.
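
To give a feel for the output, here is a sketch of the kind of Playwright code the recorder might generate for a simple login flow. The URL and selectors are hypothetical, not from a real recording, and running this requires Playwright installed:

```javascript
// Hypothetical recorder output for a simple login flow.
// URL and selectors are illustrative only.
const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/login');       // recorded page navigation
  await page.waitForSelector('#login-form');          // added waitForSelector step
  await page.fill('#email', 'user@example.com');      // recorded text input
  await page.fill('#password', 'secret');             // recorded password input
  await page.check('#remember-me');                   // recorded checkbox interaction
  await page.click('button[type="submit"]');          // recorded click
  await page.screenshot({ path: 'after-login.png' }); // screenshot captured while recording
  await browser.close();
})();
```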

Here is a demo of the recorder in action:

We have also decided to open-source the extension under the Apache 2.0 license. If you are interested in contributing, feel free to send a pull request. The git repo for the extension is available at


The extension will be available on the Chrome Web Store very soon, following the review process. However, you can side-load the extension today to try it out by following the installation instructions. There is a recording of the installation process as well.

· 2 min read

Developers and quality teams spend a lot of resources automating the testing and monitoring of their web applications. Even so, it's pretty common to see failing network calls or console error messages while all the automated tests pass.

These failing network calls or errors can happen for a variety of reasons. It is simply not humanly possible to keep track of network failures and errors every time you execute your tests, let alone when executing them from multiple locations. So these errors go unnoticed until a customer reports an issue.

Today we are introducing tracing capabilities that allow automatic capture of network requests and console log messages while executing your browser-based tests. Your test could be as simple as logging in and visiting all your application's web pages or you might have a test covering a complex scenario. You will get visibility into all the network requests that happen while executing your flow.
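
The core idea is an event collector that accumulates network and console activity during a run. Here is a minimal sketch of that idea in plain JavaScript (this illustrates the concept, not DevRaven's implementation); the comments show roughly how it would be wired to Playwright's page events, which require a real browser:

```javascript
// Collect network requests and console messages emitted during a test run.
function createTraceCollector() {
  const requests = [];
  const consoleMessages = [];
  return {
    onRequest: (url, status) => requests.push({ url, status }),
    onConsole: (type, text) => consoleMessages.push({ type, text }),
    summary: () => ({
      requestCount: requests.length,
      failedRequests: requests.filter((r) => r.status >= 400).length,
      errors: consoleMessages.filter((m) => m.type === 'error').length,
    }),
  };
}

// With Playwright this would be wired up roughly as:
//   page.on('response', (res) => collector.onRequest(res.url(), res.status()));
//   page.on('console', (msg) => collector.onConsole(msg.type(), msg.text()));

const collector = createTraceCollector();
collector.onRequest('https://example.com/api/user', 200);
collector.onRequest('https://example.com/api/stats', 500);
collector.onConsole('error', 'Uncaught TypeError: x is undefined');
console.log(collector.summary());
```

A summary like this is what surfaces a failing `/api/stats` call even when the test itself passed.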

Here are a couple of screenshots:

Network requests: A familiar user interface similar to Developer Tools that allows you to search/filter the network requests.

Network Requests

Console log messages: UI showing console messages with an ability to search or filter the messages.

Console Logs

Refer to our documentation for more details on enabling tracing for your browser-based tests.

Other updates:

  • No-code editor now supports executing tests on Chromium, Firefox, and WebKit browsers with simple configuration.
  • No-code editor now supports the waitForLoadState operation.
  • No-code editor now supports changing the waitUntil option for the Go To Url operation.
  • Other minor fixes and enhancements.

· 2 min read

We push changes to production almost every day, and sometimes multiple times a day. The changes include new product features, bug fixes, and other priority changes.

Based on customer feedback, we rolled out several changes to the monitoring dashboards and also surfaced a few metrics around execution time and success percentage for monitoring checks.

Here is an example screenshot of the new monitor results page:

Monitoring metrics

We clearly show the results trend with the success percentage for a monitoring check over 24-hour, 3-day, and 7-day intervals.

We also show the 90th percentile (p90) runtime for a monitoring check. The 90th percentile is widely used to identify performance issues while testing application functionality. We also show a warning if the p90 value for the 24-hour window exceeds the 3-day or 7-day value by more than 20%.
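
As a rough illustration of the math behind that warning, here is a sketch using the nearest-rank percentile method (the runtime values are made up; DevRaven's exact percentile method is not specified here):

```javascript
// Nearest-rank percentile: the smallest value such that at least p% of the
// data is at or below it.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// Warn when the current p90 exceeds the baseline p90 by more than 20%.
function breaches(current, baseline, thresholdPct = 20) {
  return current > baseline * (1 + thresholdPct / 100);
}

// Illustrative check runtimes in milliseconds.
const last24h = [820, 790, 1450, 900, 860, 1300, 880, 950, 1400, 910];
const last7d = [800, 780, 820, 850, 830, 810, 790, 840, 860, 800];

const p90Today = percentile(last24h, 90);
const p90Week = percentile(last7d, 90);
console.log(p90Today, p90Week, breaches(p90Today, p90Week));
```

With these sample values the 24-hour p90 is well above 120% of the 7-day p90, so the warning would fire.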

There is also a new chart on the page showing the execution time of the monitor's latest 20 checks for all selected monitoring locations. The chart lets you visualize performance by location and see a trend for the recently executed checks.

Collection results

Collections allow you to group monitoring checks to test an end-to-end flow, or to run the same checks against multiple environments.

With the recent introduction of Collections, we realized that the experience of viewing results for a monitor within a specific Collection was complex.

We have simplified this experience with this week's changes. You can now view the results of a monitor for a specific Collection.

Here is an example:

Monitoring metrics

The screenshot shows a monitor named User Profile Checker configured to run on Staging and Production using Collections.

The results for this monitor on Staging or Production can now be accessed using the new Collection tiles. Each tile also shows the recent results for the monitor in that Collection.

Changes to notifications

The Synthetic Testing and API Monitoring notifications for email, Slack, and Teams now include the Collection name. This gives you more context about a monitor reporting a failure or recovery so you can prioritize accordingly.

Other updates include:

  • Inconsistent use of shadows vs. borders for tile and card layouts has been fixed. We now consistently use bordered flat layouts.
  • Bug fixes.

· 2 min read

Today we are announcing the availability of No-Code Editor for defining Synthetic Tests in DevRaven.

Previously, adding a new synthetic test required users to directly write the JavaScript code for executing Playwright tests. However, we received feedback from our users that writing code directly is hard, and that some users are not familiar with JavaScript and use other languages for automating tests.

Based on this feedback, we added the No-Code Editor, which offers ready-to-use actions that can be composed to create an end-to-end flow.

Browser interactions, assertions, miscellaneous actions, and custom scripts can be added as part of the flows with a simple click. Steps can also be dragged and dropped to change the execution sequence. For complex scenarios, we continue to support the full Scripting Mode.
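
Conceptually, a no-code flow is just an ordered list of actions that gets translated into script steps. A minimal sketch of that idea (the step shapes and the stub executor are hypothetical, not DevRaven's internal format):

```javascript
// A flow as an ordered list of steps; drag-and-drop reordering in the editor
// amounts to reordering this array.
const flow = [
  { action: 'goto', url: 'https://example.com' },
  { action: 'click', selector: '#start' },
  { action: 'assertText', selector: 'h1', expected: 'Welcome' },
];

async function runFlow(steps, executor) {
  const executed = [];
  for (const step of steps) {
    await executor(step); // each action maps to one browser operation
    executed.push(step.action);
  }
  return executed;
}

// A stub executor stands in for real browser automation here.
runFlow(flow, async () => {}).then((order) => console.log(order.join(' -> ')));
```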

Editor Actions

We also published content, including a video, for users who prefer to use a browser-based recorder to generate the code. Refer to Recording Tests for more details on how to use the recorder for generating Synthetic Tests.


· One min read

Today we launched support for Multi-Factor Authentication (MFA) for all DevRaven user accounts.

Any user can optionally enable MFA for their account to prevent unauthorized access due to weak or leaked passwords.

User accounts with MFA enabled are required to enter an MFA token during the local (username/password based) login process.

Please note, you will not be prompted for an MFA token if you use a third-party login provider (such as Google) to access your DevRaven account.

Refer to the Multi-Factor Authentication documentation for more details about this feature.

We also pushed a few UX improvements to Monitor Scheduling: schedule changes now require the monitor status to be enabled. This should prevent unintentionally leaving the scheduler off when scheduling monitors.

We also published more content about Continuous Web Page Monitoring and Multi-environment monitoring, including video guides!

· 2 min read

I am very excited to announce our newest feature, Collections. Collections allow you to create lists of API tests and browser tests that can be executed in sequence.

Collections allow you to orchestrate tests against your web apps or APIs, making it possible to monitor complex user scenarios. Add tests to a Collection, drag and drop to change the order, and perform operations just as your real end-users would.

Here are a few possibilities:

Workflow Monitoring

Monitor your workflows by composing API tests and Playwright-based browser tests with an optional delay between each step. Example scenarios include:

  • Monitor your long-running workflows - ensure your end-to-end flows are working as expected and no regressions are introduced with newer changes
  • Monitor your ML training jobs - ensure your training servers are up and running
  • Monitor integrations - perform operations on your source service and assert the expected output on the target service.
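
A rough sketch of what running a Collection amounts to, sequential execution with an optional delay and an overall pass/fail status (the test names and runner shape are illustrative, not DevRaven's implementation):

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Run a Collection's tests in order, pausing between steps, and derive an
// overall status from the individual results.
async function runCollection(tests, delayMs = 0) {
  const results = [];
  for (const test of tests) {
    results.push({ name: test.name, passed: await test.run() });
    if (delayMs > 0) await sleep(delayMs); // optional delay between steps
  }
  return { passed: results.every((r) => r.passed), results };
}

// Illustrative steps: an API check followed by a browser check.
const steps = [
  { name: 'create-order-api', run: async () => true },
  { name: 'order-visible-in-ui', run: async () => true },
];
runCollection(steps, 10).then((outcome) => console.log(outcome.passed));
```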

The video walks you through a quick example of setting up workflow monitoring.

Monitor multiple environments

Create monitors or tests for your application functionality and continuously run them against your dev, staging, pre-prod, and prod environments. By associating Collections with specific Environments, the same tests can run in all your environments, and any regressions in existing features can be identified.

Refer to Multi-environment monitoring for more details.

We will be producing more content and videos covering a variety of use cases in the coming days. Subscribe to our YouTube channel to receive our updates.

· One min read

I am happy to announce support for the Opsgenie integration. All DevRaven Workspaces (including free-plan Workspaces) can use the integration without additional charges.

The integration also supports dispatching events from one monitor to multiple Opsgenie Teams. Documentation with step-by-step instructions for enabling the integration is available here.

A few other updates that were rolled out:

  • An example recipe for using Jest's expect for assertions.
  • The monitoring results page now shows the time taken for execution. Please note that if you have multiple monitoring locations enabled for a monitor, this value may come from any of those locations.
  • A few minor bug fixes.
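
To illustrate the assertion style the recipe refers to, here is a toy stand-in for Jest-style `expect` (this is not the actual `expect` library, just a sketch of the pattern; the response body is made up):

```javascript
// Toy Jest-style matcher: throws on mismatch, so a failing assertion fails
// the monitor check.
function expect(actual) {
  return {
    toBe(expected) {
      if (actual !== expected) throw new Error(`Expected ${expected}, got ${actual}`);
    },
    toContain(item) {
      if (!actual.includes(item)) throw new Error(`Expected ${JSON.stringify(actual)} to contain ${item}`);
    },
  };
}

// Example assertions against an (illustrative) API response body.
const body = { status: 'ok', regions: ['us-east', 'eu-west'] };
expect(body.status).toBe('ok');
expect(body.regions).toContain('eu-west');
console.log('assertions passed');
```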