Program documentation is constantly updated with every new build. It is also available online.
In this chapter, we explore the foundational principles of network monitoring through the concept of the monitoring pyramid. We introduce its three essential layers: Observability, Monitoring, and Notification, and explain their distinct roles and interconnections.
Describing key monitoring layers and their role
Effective monitoring is the cornerstone of maintaining a healthy and efficient infrastructure in network management. The concept of monitoring can be visualized as a pyramid, where each layer builds upon the previous one to create a comprehensive strategy.
At the foundation lies observability, which involves collecting monitored data and events. This layer is crucial because information that is not observed is lost, and extensive data collection is essential for thorough analysis. Observability means gathering detailed insights about the system's behavior without immediately assessing or acting upon it.
The second layer is monitoring, which begins with alerting. This is commonly misunderstood; alerts are not merely notifications, but events the monitoring system observes and can act upon. These alerts, particularly those categorized as warnings or informational, provide a detailed picture of network behavior over time. They are vital for automation and triggering corrective actions, helping to maintain network stability and performance.
At the pinnacle of the pyramid is the notification layer. Notifications should only include critical alerts concerning crucial parts of the network infrastructure, demanding immediate action. This distinction is significant as many users mistakenly disable monitoring instead of just the notifications, analogous to "cutting off the leg because the toe hurts."
Understanding and implementing this pyramid approach ensures a robust and effective monitoring system, enabling proactive management and swift issue resolution.
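The three layers can be expressed as a simple routing policy: everything is recorded, warnings and critical alerts can trigger automated actions, and only critical alerts reach a human. The sketch below is illustrative only; the severity names and handler signatures are assumptions, not NetCrunch's internal API.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    severity: str   # "info", "warning", or "critical" (illustrative values)
    message: str

def route(event, store, automate, notify):
    """Route an event through the pyramid layers."""
    store(event)                                  # observability: everything is recorded
    if event.severity in ("warning", "critical"):
        automate(event)                           # monitoring: alerts can trigger actions
    if event.severity == "critical":
        notify(event)                             # notification: humans see only critical
```

Note how disabling `notify` would still leave observability and automation intact, which is exactly the point of the "cutting off the leg because the toe hurts" warning above.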
Types of data collected across the network for a comprehensive overview
Data collection, often called observability in modern network management, forms the base of the monitoring pyramid. This layer continuously gathers various data and events from across the network. Observability is critical because it provides the raw information for analysis, troubleshooting, and decision-making. As the saying goes, "You can't manage what you don't measure." Collecting more data than initially needed ensures that no crucial information is missed, which can be invaluable for future analysis and understanding of network behavior.
A robust data collection strategy involves capturing a wide range of information, including:
Network Traffic: Data packets traveling across the network help understand bandwidth usage, identify bottlenecks, and detect potential security threats. Monitoring network traffic is crucial for maintaining network performance and identifying unusual patterns that might indicate issues such as congestion or malicious activity.
System Performance Metrics: CPU usage, memory utilization, disk I/O, and other performance indicators from servers and network devices. These metrics are essential for assessing the health and efficiency of the infrastructure. Monitoring these metrics helps predict potential failures and optimize resource allocation.
Application Logs: Logs generated by applications running on the network. These logs provide insights into application behavior, errors, and user interactions. Application logs are invaluable for diagnosing application-specific issues, understanding user activity, and ensuring application security.
User Activity: Data on user access and actions within the network. Monitoring user activity is crucial for security monitoring and compliance, helping detect unauthorized access, track changes, and ensure user actions align with organizational policies.
Status Data: Information describing the state of various elements within the network and beyond, such as IoT devices. Status data can be determined internally, like the availability of a service, or received and checked from external sources. This data is essential for understanding the operational state of devices and systems, ensuring they function correctly, and detecting any deviations from expected behavior.
Collecting these types of data ensures a comprehensive view of the network's operation, enabling proactive management and swift resolution of issues. Organizations can maintain optimal network performance and security by integrating network traffic, system performance metrics, application logs, user activity, and status data into a unified monitoring strategy.
A comprehensive data collection strategy pays off in several ways: it supports analysis and troubleshooting, preserves information that may only prove valuable later, and provides the historical record needed for informed decision-making.
In conclusion, the data collection layer is the foundation of effective network monitoring. By prioritizing comprehensive data gathering and implementing robust storage and retention practices, organizations can ensure they have the necessary information to maintain network health, proactively address issues, and support informed decision-making.
Identify common misconceptions about alerts. Learn about their role in network management and workflow automation
Monitoring, often associated with alerting, is the second layer in the monitoring pyramid. This layer involves the continuous observation of the network and the generation of alerts based on predefined conditions. It is crucial to distinguish alerts from notifications: alerts are events the monitoring system observes and can act upon, often automatically, while notifications are the subset of critical alerts escalated to human operators.
Alerts fall into three main types, each serving a distinct purpose: critical alerts demand immediate attention, while warnings and informational alerts build a detailed picture of network behavior over time.
Effective alert management involves careful configuration and ongoing adjustments to ensure relevance and accuracy.
Implementing a well-structured alerting system provides several key benefits, from timely detection to automated remediation.
In conclusion, the monitoring/alerting layer is essential for maintaining network health and performance. By effectively configuring and managing alerts, organizations can ensure timely responses to potential issues, automate corrective actions, and gain a comprehensive understanding of their network behavior. This approach enhances the ability to maintain a stable and efficient network infrastructure.
Criteria and best practices for effective notifications in network management
Notifications represent the top layer of the monitoring pyramid, where the most critical information is communicated to human operators. Unlike general alerts that can trigger automated responses, notifications are reserved for critical alerts that require immediate human intervention. These notifications ensure that relevant personnel promptly address issues impacting the network's core functionality.
To ensure the effectiveness of notifications, it is essential to adhere to strict criteria.
Implementing notifications effectively requires adherence to best practices designed to maximize their utility and minimize potential drawbacks.
Avoiding common pitfalls in the notification process is essential for maintaining an effective monitoring strategy.
In conclusion, the notification layer is critical to an effective monitoring strategy. Organizations can ensure their network infrastructure remains stable and resilient by focusing on critical alerts, adhering to best practices, and avoiding common pitfalls. Properly configured notifications enable timely human intervention, safeguarding the network's core functions and maintaining overall system performance.
Case studies and research on the layered monitoring approach for building a network monitoring strategy
The layered monitoring approach is grounded in extensive research, case studies, and established industry standards. Here are some key references and insights supporting this approach:
Research on Monitoring Best Practices
Effective network monitoring relies on comprehensive data collection, proactive alerting, and targeted notifications. Keysight’s white paper on network monitoring emphasizes the importance of technologies like network packet brokers to filter and manage data efficiently.
Keysight White Paper – Best Practices for Network Monitoring
Case Studies Demonstrating Successful Implementation
Apriorit discusses how detailed monitoring and proper data analysis improve network performance and issue resolution.
Apriorit Case Study on Network Monitoring
Industry Standards and Guidelines
Established frameworks like NIST SP 800-137 and ITIL promote a structured monitoring strategy emphasizing data collection, alerting, and notification.
NIST SP 800-137 – Continuous Monitoring Guidelines
ITIL Best Practice Solutions
Adopting a layered strategy yields several key advantages:
Comprehensive Monitoring and Data Availability
Extensive observability ensures critical data is captured for deeper analysis and anomaly detection. This supports informed, strategic decisions.
Keysight White Paper – Best Practices for Network Monitoring
Facilitates Proactive Network Management
Monitoring and alerting detect real-time issues and trigger automated responses to minimize impact.
Apriorit Case Study on Network Monitoring
Efficient Response and Resolution
Critical notifications are directed to the right personnel to ensure rapid intervention and minimal downtime.
NIST SP 800-137 – Continuous Monitoring Guidelines
In summary, the layered monitoring model—comprising observability, alerting, and notifications—is a validated, research-backed framework that enhances the ability to maintain healthy network infrastructure.
Links to books, articles and industry guidelines
Read how NetCrunch goes beyond typical network monitoring to deliver a comprehensive service-based monitoring solution for industrial monitoring, smart city monitoring, IoT, and physical security systems
Network monitoring has evolved beyond traditional approaches to encompass a service-based model in today's interconnected world. This shift is driven by the increasing variety of devices and systems that connect to networks. NetCrunch exemplifies this modern approach by offering the capability to monitor virtually any device or service, from coffee machines to traffic lights, lab equipment, and physical security devices like alarms and cameras. Here's how NetCrunch goes beyond typical network monitoring to deliver a comprehensive service-based monitoring solution.
Traditional network monitoring focuses primarily on ensuring the performance and availability of network infrastructure—servers, switches, routers, and similar hardware. While essential, this approach is no longer sufficient in an era where an ever-growing array of devices and services are network-connected. NetCrunch expands the scope of monitoring to include IoT devices, industrial equipment, smart-building systems, and physical security devices such as alarms and cameras.
NetCrunch's flexible architecture allows it to monitor any device or service that can connect to a network, treating them as services. Several key features facilitate this service-based approach:
Custom Scripting and Telemetry: NetCrunch allows users to write custom scripts and send telemetry data to the platform. This capability is critical for integrating unique or non-standard devices into the monitoring framework.
REST Protocol Integration: Using the ubiquitous REST protocol, NetCrunch can receive data from virtually any device or application. This open and flexible approach ensures that NetCrunch can adapt to various monitoring needs and device types.
Expanding Protocol Support: With each new version, NetCrunch adds support for more protocols and extends its abilities to connect with an increasing variety of device categories. This continuous enhancement ensures that NetCrunch remains compatible with emerging technologies and devices.
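A REST-based integration like the one described above can be sketched in a few lines of standard-library Python. The endpoint URL and the payload field names below are purely illustrative assumptions; consult your server's configuration for the actual REST API details.

```python
import json
from urllib import request

def build_telemetry(node, counters):
    """Build a JSON telemetry payload (field names are illustrative)."""
    return json.dumps({"node": node, "counters": counters}).encode()

def send_telemetry(url, payload):
    """POST the payload to the monitoring server's REST endpoint
    (the URL is an assumption; use your server's actual address)."""
    req = request.Request(url, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return resp.status
```

For example, a script on a lab controller could periodically call `send_telemetry("https://monitoring.example.local/api/telemetry", build_telemetry("coffee-machine-1", {"temperature": 92.5}))` to feed a non-standard device into the monitoring framework.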
- Smart buildings: monitoring HVAC systems, lighting, and security devices. NetCrunch can provide real-time data on energy usage, environmental conditions, and security alerts.
- Industrial monitoring: supervising the performance and status of industrial equipment and sensors. This includes real-time monitoring of production lines, equipment health, and environmental conditions in manufacturing facilities.
- Laboratories: monitoring critical lab equipment and environmental conditions. NetCrunch can ensure that lab conditions remain within specified parameters, preventing costly equipment failures or compromised experiments.
- Smart cities: managing and monitoring public infrastructure such as traffic lights, street lighting, and public transportation systems. NetCrunch can help cities optimize traffic flow, enhance public safety, and improve energy efficiency.
NetCrunch's commitment to expanding its monitoring capabilities ensures it remains a versatile and future-proof solution. As new devices and technologies emerge, NetCrunch's ability to integrate and monitor these innovations ensures that organizations can maintain comprehensive visibility and control over their entire network ecosystem.
In conclusion, NetCrunch's service-based monitoring approach allows it to go beyond traditional network monitoring, providing comprehensive oversight of various devices and services. By leveraging custom scripting, telemetry data, REST protocol integration, and continuous protocol expansion, NetCrunch offers a robust solution that adapts to the ever-evolving landscape of network-connected devices. This flexibility and extensibility make NetCrunch an invaluable tool for modern network management.
Discover all of NetCrunch capabilities, concepts, and components.
NetCrunch employs unique concepts for data organization and monitoring settings management to monitor today's complex networks.
Overview of NetCrunch Server architecture. Learn more about Monitoring Engines, NetCrunch Consoles, databases, additional tools, and critical concepts of advanced network visualization.
NetCrunch is a comprehensive system consisting of many components that communicate with each other. Most of them run on the NetCrunch Server; configuration is done with the Administration Console, which users can (and preferably should) run remotely. An additional NetFlow Collector with its database is also part of the NetCrunch Server environment. The Monitoring Engine performs monitoring tasks using the same set of processes included in the Monitoring Probe. The dashboard server, in the form of GrafCrunch Server, can run on another machine.
The server works best on a dedicated machine (virtual or physical) with the appropriate resources assigned. If you want to process gigabytes of data, you need SSD disks and a multicore machine. Read more in System Requirements.
NetCrunch Server also works well with vSphere Fault Tolerance, which provides continuous availability for NetCrunch Server.
The complete list of NetCrunch services is as follows:

- Monitoring Engine,
- NC Services,
- NC Event DB.
The monitoring engine process is the monitoring probe integrated into the server. It collects all monitoring data. It runs as a separate entity and manages separate engines for monitoring Network Services, SNMP, Interfaces, OS Monitors, Virtualization, and hundreds of sensors.
Operating system monitoring for Windows, macOS, Linux, Solaris, BSD, ESXi, or Hyper-V depends on appropriate node settings. See: Automatic Monitoring and Organizing.
You can configure NetCrunch with the Desktop Administration Console, which you can install on any Windows system.
The console can use three different encrypted connection methods.
We recommend using an SSL certificate for the NetCrunch Web Server, even for local use. This allows your Desktop Console to establish a secure SSL connection as well. For remote access, you can safely use the console over the internet through NetCrunch Connection Cloud.
The console caches large amounts of data and transfers only changes over the network, ensuring that updates appear instantly without requiring a manual refresh. It also supports the creation and saving of complex screen layouts, including multi-screen setups.
NetCrunch comes with a fork of the open-source project Grafana v7, one of the top open-source performance visualization projects. Grafana dramatically increases the possibilities of creating live performance dashboards and allows you to present data from various sources. Grafana has a separate installer and integrates with NetCrunch. It gives you an easy way to create dashboards from multiple NetCrunch Servers and other sources that Grafana supports.
The Web Console is a modern HTML console that allows instant access to the server. It requires an evergreen browser no more than a year old. You can manage access to the console through user accounts and access rights profiles.
The Web Console provides mostly browsing capabilities; you may need the Administration Console to edit monitoring configuration and monitoring policies. Graphical Data views can, however, be edited in the Web Console.
NetCrunch was initially designed to monitor hundreds of devices and thousands of parameters. However, it scales seamlessly to monitor thousands of devices and hundreds of thousands of parameters using a single server. Our approach to scaling emphasizes both performance and ergonomics.
The policy-based configuration makes managing complex infrastructures straightforward. Rather than setting individual alerts and reports for each monitored node, which can be time-consuming in other programs, NetCrunch automatically applies these settings based on predefined policies. This significantly reduces the time required to configure and manage each node.
Network Atlas is a central database containing all your network data, organized by the hierarchy of Atlas Node Views. It helps you organize that data into various views, many of which are created automatically.
The fundamental element of the Atlas is a network node—a single-address network endpoint. The Atlas Tree shows the hierarchy of all views and helps you quickly recognize the status of each element.
Atlas Node View shows various aspects of the group of nodes in the Network Atlas and consists of multiple pages such as nodes, maps, dashboards, and others.
Atlas begins with a top root view of all nodes. This view shows top-level dashboards such as Status, Top Charts, and Flows.
The rest of the views are divided into sections:
This section consists of IP network views/maps. Each network can be periodically re-scanned to reflect its current state, and you can create a custom graphical map for each view. By default, the node view shows node icons automatically arranged by device model and OS name.
We introduced the concept of sites (aka address spaces) to prevent confusion when monitoring nodes with the same network addresses.
When two locations use the same private network address, they are represented as two distinct sites.
This section contains views regarding network topology. It includes logical (routing) and physical connection maps (layer 2).
The view shows connections between IP networks and the devices providing these connections (routers).
The top-level view shows connections between switches, and then each switch port mapping is represented on a separate view. Each segment (single switch) view automatically presents the traffic summary on each switch port.
NetCrunch offers views of ports and interfaces of the switch and provides live status for the particular interface.
This section allows you to organize your network data in any way you need. It contains both user-created views and predefined automatic views.
Based on typical customer atlases, we prepared many automatic (and dynamic) views for you.
The views are dynamic, which means they are automatically updated as needed.
Graphical views are designed to present various performance and status data in graphical form. They can be diagrams or maps, where you can put many small elements. If they do not fit on the screen, you can drag and zoom as you would on regular maps.
Another option is to create a panel with given proportions and put elements inside it. Such a panel will fit a screen of a given proportion regardless of size. Panels are not scrollable and are always scaled to fit into available space.
Monitoring Dependencies reflect network connections and help prevent false alarms by disabling monitoring of unreachable network components.
NetCrunch allows setting dependencies upon node routes, virtualization hosts, and known switch Layer 2 connections.
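The dependency idea can be sketched as a walk up the dependency chain: if any upstream node (router, virtualization host, or switch on the route) is already down, the node's own "down" alert is suppressed. This is a minimal illustration of the concept, not NetCrunch's actual implementation.

```python
def suppressed(node, parents, down):
    """Return True if any upstream dependency of `node` is down,
    meaning the node's own 'down' alert should be suppressed.
    `parents` maps each node to the node it depends on."""
    seen = set()
    current = parents.get(node)
    while current is not None and current not in seen:
        if current in down:
            return True          # an upstream device is the real problem
        seen.add(current)        # guard against cycles in the map
        current = parents.get(current)
    return False
```

With `parents = {"server1": "switch1", "switch1": "router1"}` and `router1` down, `server1`'s alert is suppressed: only the router, the actual root cause, raises an alarm.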
Monitoring Pack is a group of performance parameters and events monitored and collected for the reports.
Monitoring Packs can be assigned to a node automatically (by a specific rule like for every Windows Server or for every Cisco switch) or manually. NetCrunch comes with many predefined Monitoring Packs.
Many predefined Monitoring Packs are automatically assigned to nodes based on the node's Device Type setting. The setting can be either automatically discovered or set manually.
Setting a proper Device Type is one of the essential tasks in configuring NetCrunch.
Device types will likely be set automatically for network components retrieved from Active Directory. NetCrunch can also automatically discover many SNMP devices.
Other devices, such as printers or Linux machines, need proper Device Types to monitor.
When you need to monitor a new macOS device, there are only two easy steps: add the node to the Atlas and set its Device Type to macOS. The node will then automatically receive the macOS Monitoring Pack.
It's that simple.
Alerts are an essential part of the monitoring program and one of its fundamental use cases. NetCrunch allows advanced alert processing, including correlation, conditional events, conditional actions, and escalation.
To clarify: events happen whether we watch or not. An "event" becomes an "alert" when we assign a reaction to it (it becomes an element of interest).
The simplest (default) action stores information about the event in the NetCrunch Event Log. We can assign a different list of actions to each event. The actions can include a notification (email, SMS texting) or some corrective actions, like executing scripts or programs (also on a remote machine). NetCrunch executes actions after the alert starts and when it's closed (finished).
As a monitoring program, NetCrunch is a primary source of status events and performance metrics alerts (counters). The program can also monitor external events. It matches incoming events with rules and triggers alerting actions for them. This feature allows you to trigger alerts and actions on SNMP traps, syslog messages, text logs, or Windows Event Log entries.
As many alerts are short-lived and self-correcting (like a brief connection or power loss), administrators should concentrate on existing problems instead of constantly looking through the log.
NetCrunch simplifies alert management by correlating all internal alerts so they disappear from the Active Alerts view if closed.
The program allows for correlating external events (SNMP Traps, syslog, etc.) by defining the list of closing events for each external alert.
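The closing-event mechanism for external alerts can be illustrated with a tiny state machine. The event names (`linkDown`/`linkUp`) and the mapping format are illustrative assumptions, not NetCrunch's configuration syntax.

```python
def apply_event(event, open_alerts, closes):
    """Open or close an alert. `closes` maps a closing event to the
    alert it clears, e.g. {"linkUp": "linkDown"} (names illustrative)."""
    if event in closes:
        open_alerts.discard(closes[event])   # correlated: removed from Active Alerts
    else:
        open_alerts.add(event)               # treated as a new active alert
```

An incoming `linkUp` trap thus clears the earlier `linkDown` alert from the Active Alerts view instead of piling up as a separate entry.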
Simple alerts work when alerting conditions are met, such as "node is down" or when some external notification has been received.
What about something that did not happen, or is not happening regularly? You can solve such problems with conditional alerts, which allow more complex scenarios such as: notifying when a syslog message was not received, or when an event happened in a specified time range.
Available conditions include an expected event not occurring at all, and an event occurring (or not occurring) within a specified time range.
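The "expected message did not arrive" condition boils down to a timeout check against the last time the message was seen. This is a hedged sketch of the idea, not NetCrunch's condition engine; the parameter names are assumptions.

```python
import time

def heartbeat_missing(last_seen, timeout, now=None):
    """Alert condition: the expected message has NOT arrived
    within `timeout` seconds (timestamps in epoch seconds)."""
    now = time.time() if now is None else now
    return last_seen is None or (now - last_seen) > timeout
```

Evaluated periodically, this turns the *absence* of a syslog heartbeat into a positive alerting condition.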
Advanced correlation allows you to trigger events only if multiple events (from different nodes) have happened within a given time range or all are active simultaneously. Active event correlation requires all correlated alerts to be in an active state. This feature easily allows you to define an alert when two redundant interfaces are down.
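Active event correlation, such as the redundant-interfaces example, reduces to requiring every member of a set of alerts to be active at once. A minimal sketch (the alert identifiers are illustrative):

```python
def correlated_alert(active, required):
    """Raise a combined alert only while EVERY required alert is active."""
    return all(alert in active for alert in required)
```

With `required = {("sw1", "if1 down"), ("sw1", "if2 down")}`, the combined alert fires only when both redundant interfaces are down simultaneously, staying quiet while one of them still carries traffic.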
In response to an event, NetCrunch can execute a sequence of actions. Actions can be executed immediately or with a delay (if the alert is not cleared), and the last action can be repeated. For example, you can send a notification to a particular person and then execute a server restart operation if the event remains active after some time.
See: Alerting Actions
Each action can be limited to run only if a triggering network node belongs to a given atlas view (these can be created by rules or manually) or within a given time range. This ability allows you to create flexible alerting scripts, such as sending different notifications depending on the node location. Alerting scripts can be used for multiple alerts, so you can limit actions to executing only when an alert is of a given severity.
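An escalation sequence like "notify first, restart later if still active" can be modeled as delay thresholds against the alert's age. This is an illustrative sketch; the step format is an assumption, not NetCrunch's alerting-script syntax.

```python
def escalation_plan(alert_age, steps):
    """Return the actions due at `alert_age` seconds. `steps` is a list
    of (delay_seconds, action_name) pairs, evaluated in order."""
    return [action for delay, action in steps if alert_age >= delay]
```

For `steps = [(0, "notify admin"), (300, "restart service")]`, the restart only becomes due once the alert has stayed active for five minutes.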
NetCrunch uses various techniques to avoid false alerts and to protect against alert floods, which are sometimes caused by device malfunctions. When a device sends syslog messages or SNMP traps, NetCrunch waits several seconds before reacting; if the same message arrives several times, it won't trigger multiple alerts. NetCrunch also uses an event suppression technique to filter out false events caused by intermittent connection failures.
IP Tools is a set of network monitoring tools that allow testing the availability of devices and network services on a host, scanning ports, and checking the routes of test packets or connection bandwidth.
A tool for accessing NetCrunch performance data. You can analyze trend charts and data distributions for a given time. You can compare multiple parameters on a single chart.
This program allows compiling MIB files to extend NetCrunch's MIB library.
Allows for viewing and managing various NetCrunch reports.
NetCrunch supports multiple types of monitoring techniques, each suited to different levels of control, flexibility, and scale. This topic explains the key architectural types — network services, monitors, and sensors — and clarifies when to use each for best results.
NetCrunch defines three core monitoring categories: network services, monitors, and sensors.
Each category serves a different purpose and optimizes different aspects of the monitoring workflow.
Network services operate at the application protocol level, not just the transport level.
These are useful for confirming basic service availability and performance without requiring full sensors. They’re widely used in discovery and network-level health checks.
Monitors are high-level modules that use shared configuration to efficiently manage a wide range of systems.
Monitors are powerful building blocks. They act as gateways for monitoring packs, enabling broad observability with minimal configuration effort.
Sensors are individual monitoring objects applied directly to nodes or templates.
Sensors are perfect when monitors are not available or when fine-grained control is required. They can monitor APIs, run remote scripts, or analyze logs from external systems.
| Situation | Recommended Monitoring Method |
|---|---|
| Verify web server availability | Network Service (HTTP) |
| Monitor CPU, memory, disk on all Windows nodes | Windows Monitor + Monitoring Packs |
| Track backup job status from Veeam | Veeam Sensor |
| Test REST API for JSON metric | REST HTTP Sensor |
| Confirm that DNS resolves a domain | DNS Network Service or DNS Query Sensor |
Some legacy network services (e.g., CHARGEN, FINGER, QOTD) remain for compatibility but may be deprecated in future versions. These services reflect older protocols and rarely-used checks.
Future monitoring logic favors monitors and sensors, which offer more flexibility, scalability, and integration options.
NetCrunch separates monitoring into three architectural types: network services, monitors, and sensors.
Choosing the right type depends on your scale, granularity needs, and monitoring strategy.
The basic requirements are a 64-bit Windows Server, 2 processors, and 3.5 GB of RAM; an SSD drive is also recommended. NetCrunch is designed to run efficiently on both virtual and physical server machines.
NetCrunch must be installed on a 64-bit Windows Server (Windows Server 2016, 2019, 2022, 2025). It includes a web server and an embedded SQL database for storing monitoring data and events.
NetCrunch can be installed on a virtual machine, provided you assign at least 4 cores and 4 GB RAM.
More processors are better for monitoring 1000+ nodes; the recommended number in such cases is at least 8 CPU cores.
Monitoring a large volume of performance metrics (100,000 network interfaces) requires additional RAM (500,000 performance metrics will need an extra 4GB).
The other important component is the hard drive. We strongly recommend using SSD drives.
The Architecture and Concepts section explains why this is so important.
The NetCrunch Console runs on 32-bit or 64-bit Windows 10 or later with at least 4 GB of RAM. It requires 24-bit color depth and a high-resolution display; it should be run on at least a Full HD screen or multiple monitors. The console also works excellently with touchscreen Windows tablets.
Web Console is compatible with modern, evergreen browsers, including the latest versions of Chrome, Opera, Edge, Firefox, and Safari.
NetCrunch keeps part of the data in memory, while some are written to trend log files (NetCrunch opens thousands of them), and other data goes to an SQL database. A problem can occur when NetCrunch writes files containing a snapshot of in-memory data and antivirus software tries to access the file simultaneously.
Antivirus software sometimes causes high disk and processor utilization and may prevent NetCrunch from accessing data.
We know that sometimes you can't change the company policy, and after Solorigate, it seems even better practice to run security software on servers. In this case, you must exclude all data directories where NetCrunch writes data.
Our experience shows that servers sometimes behave strangely and uniquely, depending on the antivirus vendor.
Please also note that disabling antivirus software resembles disabling stability control in your vehicle—it's only partially off. The hooks installed in the system are still in place and unpredictably change the system's behavior. The other problem is that antivirus software might sometimes cause 100% utilization of the single processor.
Depending on the processed data, NetCrunch can heavily utilize a server machine. Avoid competing for resources with other programs. NetCrunch contains many servers, such as the database, monitoring, web server, etc., so we are already putting a heavy load on a single machine.
In some cases, NetCrunch cannot process all data (events) due to hardware limitations. Remember that a single machine's speed is limited by its slowest components (hard drive, memory, or lack of cores).
Assign the machine an appropriate amount of processing time. Memory must also be physically available; disk swapping should not occur. You need to reserve at least 4 cores and 4 GB of RAM, and we recommend doubling these numbers when monitoring more than 100 nodes.
NetCrunch is a real-time Network Monitoring System.
NetCrunch can monitor nearly anything: devices, applications, systems, databases, and files. The program can be extended using scripts; data from various sources can be sent to NetCrunch or polled from files, databases, or websites.
There are many different usage scenarios for NetCrunch. In general, NetCrunch retrieves and processes three types of data: status data, performance metrics, and external events.
The server allows you to set various conditions to filter incoming events or set alerts on performance counters. You can even create new calculated counters. See the Managing Calculated Performance Counters topic for details.
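A calculated counter derives a new metric from raw ones. The sketch below shows the idea with a memory-usage percentage; the counter names are illustrative, not NetCrunch's actual counter paths or its calculated-counter syntax.

```python
def memory_used_percent(counters):
    """Example calculated counter: derive a percentage from two raw
    metrics (counter names are hypothetical)."""
    used = counters["memory.total"] - counters["memory.free"]
    return 100.0 * used / counters["memory.total"]
```

An alert threshold can then be set on the derived value (e.g., above 90%) just as on any raw performance counter.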
NetCrunch can be used just for network monitoring, where we mainly pay attention to SNMP devices, such as printers, switches, routers, cameras, and others. NetCrunch supports SNMP v1/v2c/v3, including encryption and authentication.
NetCrunch monitors the availability of over 70 predefined TCP/UDP network services, including DNS, FTP, HTTP, POP3, SMTP, and more.
The program can monitor network service performance by counting packets sent and received, calculating response times, and computing the percentage of packets lost.
The program checks connectivity, validates service response, and measures response time for each monitored service. For each sensor, the program allows monitoring of various conditions (for example, the text contains some pattern, a file exists, and so on) and performance metrics (such as response time or data size).
You can create custom service definitions or duplicate an existing definition and change its port. Services support TCP, UDP, and SSL connections. Response patterns can be defined as text, binary data, or regular expressions.
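A custom service check of this kind can be sketched in a few lines: connect, optionally send a probe, time the exchange, and validate the response against a text or regex pattern. This is an illustrative sketch, not NetCrunch's internal implementation; the function names are hypothetical.

```python
import re
import socket
import time

def match_response(data: bytes, pattern: str, use_regex: bool = False) -> bool:
    """Check a service response against an expected text or regex pattern."""
    text = data.decode("utf-8", errors="replace")
    if use_regex:
        return re.search(pattern, text) is not None
    return pattern in text

def check_service(host: str, port: int, probe: bytes = b"",
                  pattern: str = "", use_regex: bool = False,
                  timeout: float = 5.0):
    """Connect, optionally send a probe, and validate and time the response.

    Returns (ok, response_time_ms); response_time_ms is None on failure.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            if probe:
                sock.sendall(probe)
            data = sock.recv(4096) if pattern else b""
    except OSError:
        return False, None
    elapsed_ms = (time.monotonic() - start) * 1000.0
    ok = match_response(data, pattern, use_regex) if pattern else True
    return ok, elapsed_ms
```

For example, an SMTP check could send nothing and expect a banner matching `220`, while an HTTP check could send a `HEAD` request and match `HTTP/1.x 200` with a regular expression.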
NetCrunch determines node up/down status from network service status and, in the case of servers, other monitors. A node is considered "down" when no services respond and "up" when the leading service responds. While a node is down, only the leading service is monitored.
DNS is the most critical service in a network. Without it, nothing works at all. Therefore, monitoring the DNS service to check its availability is an obvious task for a monitoring system. However, availability monitoring only verifies whether the service is responding and what its response time is.
In addition to pure availability monitoring, NetCrunch allows you to verify DNS responses to given queries, which can enable you to discover unexpected (unauthorized) DNS changes.
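The idea behind detecting unauthorized DNS changes is simple: resolve the name and compare the answers against a list of addresses you consider legitimate. A minimal stdlib sketch (the function names are mine, not NetCrunch's):

```python
import socket

def unexpected_answers(resolved: list, authorized: set) -> list:
    """Return resolved addresses that are not on the authorized list."""
    return sorted(ip for ip in set(resolved) if ip not in authorized)

def verify_dns(hostname: str, authorized: set) -> list:
    """Resolve a hostname and report any unauthorized A-record answers.

    An empty result means every answer was expected.
    """
    _, _, addresses = socket.gethostbyname_ex(hostname)
    return unexpected_answers(addresses, authorized)
```

Any non-empty result from `verify_dns` would correspond to an alert condition such as "DNS response differs from expected value."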
NetCrunch supports switch and router monitoring, including the status of network interfaces, errors and discards, and bandwidth. It allows monitoring traffic on interfaces, mapping ports, and creating Layer 2 graphical maps.
NetCrunch allows you to monitor Cisco IP SLA operations. The program tracks both the status of operations and their performance metrics. Cisco IP SLA lets you monitor VoIP jitter and other protocols and parameters.
SNMP is ubiquitous, but implementations vary. NetCrunch includes a MIB compiler that allows you to add vendor-specific MIBs.
Since basic MIBs have only been partially defined in RFC documents, vendor MIBs are sometimes tricky to compile. If you have no experience compiling MIBs and find it difficult, please ask AdRem support for help. We will try to help you, and if the device is popular on the market, we can add it to the set of pre-compiled MIBs. Please note that NetCrunch's built-in database contains more than 8,850 vendor MIBs already.
NetCrunch supports agentless monitoring of the major operating systems, including Windows, macOS, Linux, BSD, Solaris, and VMware ESXi. On Windows, the program additionally supports application monitoring by tracking performance parameters and service status.
You can also use SNMP to monitor these systems, but please be advised that using SNMPv2 can create a security loophole in operating systems, as SNMPv2 transmits data in plain text.
NetCrunch allows you to monitor all Windows performance counters, including disk counters remotely. The list of available counters depends on the particular system and applications installed. Nine different trigger types can be used to set alert triggers on counters.
Monitoring Windows services is essential for monitoring most applications installed on Windows Server. The most frequent alert set on services is Service is not running. NetCrunch also offers a Windows services view in the node status window, allowing remote service control.
NetCrunch can remotely gather, filter, and analyze data from multiple Windows machines using WMI.
The program allows you to define simple alert filters to convert event log events into NetCrunch alerts. These filters are automatically converted into complex WQL queries.
NetCrunch includes 16 WMI sensors. WMI Perform resembles perfmon using the WMI protocol; the WMI Query Object sensor lets you write your own query and add alert triggers on result object properties. The Process and Process Group Summary sensors are handy for monitoring processes and their resources. Using the Process Group Summary sensor, you can easily track the total resources used by a web browser or any other program that runs multiple process instances.
NetCrunch can collect hardware and software inventory information from Windows computers. The program shows detailed information about each machine and lists installed fixes. NetCrunch allows you to compare each audit and show hardware and software changes. The program includes a software summary view for multiple nodes.
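Comparing two audits boils down to diffing two inventory snapshots. A minimal sketch of that comparison, assuming each snapshot maps an item name to its version (the function name and data shape are illustrative):

```python
def audit_diff(previous: dict, current: dict) -> dict:
    """Compare two inventory snapshots (name -> version) and report changes."""
    added = sorted(set(current) - set(previous))
    removed = sorted(set(previous) - set(current))
    changed = sorted(name for name in set(previous) & set(current)
                     if previous[name] != current[name])
    return {"added": added, "removed": removed, "changed": changed}
```

The same logic applies equally to software lists, installed fixes, or hardware components.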
This type of monitoring is available for Linux (and other Unix-family systems) and Windows. File sensors let you monitor a file's presence, size, and whether and when it was modified. They can also search file contents, find new text log entries, and convert them into NetCrunch alerts.
The Folder sensor allows you to monitor specific folder contents, such as when a new file is added or if any files are removed.
These sensors support FTP/s, HTTP/s, SSH/bash, SFTP, and Windows/SMB protocols.
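Finding new log entries typically works by remembering the read offset between polls and scanning only the text appended since then. A minimal sketch of that pattern (the function names are illustrative, not NetCrunch internals):

```python
import re

def scan_stream(stream, offset: int, pattern: str):
    """Scan a text stream from a saved offset; return matches and the new offset."""
    stream.seek(offset)
    matches = []
    while True:
        line = stream.readline()
        if not line:
            break
        if re.search(pattern, line):
            matches.append(line.rstrip("\n"))
    return matches, stream.tell()

def scan_new_entries(path: str, offset: int, pattern: str):
    """Open a log file and scan only the entries added since the last poll."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        return scan_stream(fh, offset, pattern)
```

Each returned match would correspond to a potential alert; the returned offset is stored and passed to the next poll.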
NetCrunch can track over 100 performance counters to determine the health of Linux servers running kernel 2.4 or newer. The program has been tested to monitor the following Linux distributions: CentOS, RedHat, Fedora, Novell OES, Ubuntu Desktop, and Server.
NetCrunch also offers fully integrated macOS monitoring. All macOS versions are supported, including the latest one.
The most important parameters being monitored:
NetCrunch supports ESXi versions 5.5 and later. It can connect directly to ESXi servers or through the vCenter server. When NetCrunch works in vCenter mode, and vCenter becomes unavailable, it can automatically switch to direct ESXi monitoring if you provide proper credentials for each ESXi server.
NetCrunch comes with pre-configured Automatic Monitoring Packs to monitor ESX when the device type is set to ESX.
NetCrunch allows monitoring of mailboxes (IMAP or POP3), checking email content (extracting data or events from emails) with the Data Email sensor, and verifying full mail server functionality by sending and receiving a control email (Email Round-Trip sensor).
NetCrunch offers two sensors. The first checks a single-row answer from an SQL query, which can be treated as a status object. The second can interpret multiple rows as a list of metrics. This way, NetCrunch can monitor database connectivity and authentication (with an empty SQL query), query execution time, and query results, which can be either a single row representing a status object (so you can track changes to its properties) or metrics that can be kept for trends or used in performance triggers. NetCrunch natively supports Oracle, SQL Server, MySQL, and MariaDB connections, or any ODBC (system) source.
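The two interpretations can be sketched with stdlib `sqlite3` as a stand-in for any supported database (the function names are mine; NetCrunch uses its own drivers):

```python
import sqlite3

def query_status(conn, sql: str):
    """Interpret the first row of a query as a status object (column -> value)."""
    cur = conn.execute(sql)
    row = cur.fetchone()
    if row is None:
        return None
    return dict(zip([col[0] for col in cur.description], row))

def query_metrics(conn, sql: str) -> dict:
    """Interpret rows of (name, value) pairs as a list of named metrics."""
    return {name: value for name, value in conn.execute(sql)}
```

With `query_status` you can watch a property change state; with `query_metrics` each row becomes a counter suitable for trends or performance triggers.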
Every web sensor in NetCrunch can report an invalid certificate. NetCrunch also contains a separate SSL Certificate sensor, which can be applied to validate any SSL/TLS-based protocol certificate. The sensor can be used for any TLS-based service, not just HTTP/S.
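Validating a certificate on any TLS-based service can be done with the standard library alone: open a TLS connection, fetch the peer certificate, and compute how many days remain before `notAfter`. A hedged sketch (function names are illustrative):

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(cert: dict) -> int:
    """Days remaining before the certificate's 'notAfter' date (UTC)."""
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def check_certificate(host: str, port: int = 443) -> int:
    """Fetch and validate the peer certificate of any TLS-based service."""
    ctx = ssl.create_default_context()  # also verifies the chain and hostname
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return days_until_expiry(tls.getpeercert())
```

Because the handshake itself verifies the chain and hostname, an invalid certificate raises an `ssl.SSLError` before any expiry math is done, which maps naturally onto an "invalid certificate" alert.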
NetCrunch contains a universal sensor for monitoring device uptime, connecting to the device using the WMI, SSH, or SNMP protocols.
The program allows you to monitor printers using a printer sensor, which accesses all printer statuses and metrics.
The sensor checks the RADIUS protocol response and availability.
NetCrunch provides multiple sensors for monitoring Amazon AWS, Azure, and Google cloud services. You can add each sensor by creating a Cloud Service node.
Read more about Cloud Monitoring
NetCrunch can collect, store, and detect changes in device configuration. The configuration is stored as a text file that can be later used for editing or restoring purposes. The sensor and code are based on the Oxidized open-source project, ported to a NetCrunch environment.
Similarly, NetCrunch collects the hardware configuration of Windows machines. The hardware configuration summary is available in the Nodes tab view.
NetCrunch includes an advanced Web Page monitor that can load and render dynamic web pages containing JavaScript as if a browser loaded them. It also allows you to check pages requiring a login (supporting standard HTML or custom login forms).
Available Web Page alerts:
Available performance metrics:
Read about monitoring Windows, macOS, Linux, BSD, Solaris, and ESXi systems.
NetCrunch Monitoring Packs allow efficient management of monitoring settings. You can use them to create monitoring policies by setting node filters. They can also be assigned manually to the node (or multiple nodes using multiselection). Currently, the program includes more than 223 ready-to-use Monitoring Packs for monitoring devices, applications, and operating systems.
This sensor is better suited to REST requests: it simply retrieves data over HTTP and checks the response, including the response content. It supports GET, HEAD, and POST requests.
Allows defining a sensor on the node to receive data from an external source (device, script, app). Data can be sent using the REST API, and you can set alerts on collected metrics and status objects.
Read how to send data to NetCrunch and create a custom monitor. You can easily turn any application or script into a NetCrunch agent.
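As a sketch of the pattern, a script can assemble a small JSON document and POST it to the Data Receiver endpoint. The payload shape and URL below are illustrative assumptions, not the official NetCrunch schema; consult the Sending Data to NetCrunch topic for the actual format.

```python
import json
import urllib.request

def build_payload(counters: dict, status: str = "ok") -> bytes:
    """Assemble a JSON payload (shape is illustrative, not the official schema)."""
    return json.dumps({"status": status, "counters": counters}).encode("utf-8")

def send_to_netcrunch(url: str, counters: dict, status: str = "ok") -> int:
    """POST metrics to a Data Receiver sensor; returns the HTTP status code.

    The URL is whatever endpoint your Data Receiver sensor exposes.
    """
    req = urllib.request.Request(
        url,
        data=build_payload(counters, status),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```

Any script that can produce such a document, in any language, effectively becomes a NetCrunch agent.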
This topic provides instructions for adding monitoring targets to NetCrunch, including nodes, sensors, and data collectors. It covers configuring network services, operating systems, SNMP, virtualization platforms, and setting up alerting rules and dependencies. Additionally, it guides users on adding new nodes and using monitoring templates for efficient management.
Explore a hierarchical data schema to organize monitoring across devices, sensors, services, and other monitoring targets, providing detailed insights into network health.
The NetCrunch data schema is a hierarchical structure consisting of several status objects. Top-level objects are nodes of various types; below them are other objects, such as monitoring packs or sensors.
Object state names are designed to reflect their current status accurately:
The most common node type represents a device or virtual machine with an IP address.
This object represents a remote device without an IP address; you can send data and events to it using the REST protocol.
The object represents a single cloud service monitored. NetCrunch provides ready-to-use sensors to monitor over 30 cloud services from Azure, AWS, Google, and others.
It represents a remote monitoring engine; it does not represent the device running the probe. To monitor that device, you need to add it to the probe separately.
An internal element that provides generic access to thousands of metrics through SNMP, WMI, VMware, etc. A monitor requires the monitoring target to be specified further; monitors utilize Monitoring Packs to monitor specific elements. Monitors are too generic for their status to be relied on.
Each monitoring pack is a group of alerts or metrics that must be collected or monitored for a specific element, such as an application, service, or hardware. NetCrunch provides over 260 packs for monitoring SNMP, Windows, Linux, macOS, BSD, Solaris, and VMware.
Monitoring Pack is a group of performance parameters and events monitored and collected for the reports.
Network services are sensors specialized in protocol checks. They are lightweight, check the connectivity and response time, and verify the response from a given application. Network services are similar to IP SLA or NQA operations but are monitored from the NetCrunch Monitoring Engine (Server or Probe).
Sensors are focused on specific monitoring needs such as monitoring a process, text log file, pending update, camera, SQL query, or web page.
Monitoring Sensor is a software module focused on monitoring a single object, service, or device (web page, file, folder, query, etc.).
Sensor data objects are specific to a sensor. For a camera sensor, it is the Snapshot Image: the last frame collected from the camera. Other objects might contain data collected by the sensor. Sensor objects represent both the status of the monitored object and its specific data, and they may have a different status than the sensor itself. Other examples are the Web Page and Process objects. When the tooltip for an object is displayed, it shows the data in JSON form.
Alert - the condition being watched for action to react to potential danger or get attention.
The status of an alert is designated by its severity level. Informational and minor alerts are often considered OK, as they do not require attention.
This object is only available if you add an alert on an event for an SNMP variable value on a given node. The object is always in a success state. If you need a status, look at the alert associated with this object.
First, you need to enable IP SLA or NQA monitoring on a node and add monitoring operations. Then, you can refer to these operations' status.
Technically, this is a node - a top-level object. It represents the summary (calculated) status of objects of any type, including Composite Status. See more in Composite Status
You don't have to set up views and maps by hand or node by node. NetCrunch takes care of these tasks automatically.
One essential part of the NetCrunch configuration process is setting proper Device Types for all nodes. NetCrunch detects a Device Type whenever possible by getting information from SNMP or Active Directory. If it can't, the Device Type can be set manually. Currently, operating system monitoring no longer depends on the node Device Type.
NetCrunch allows OS monitoring to be enabled in the node settings. The monitor can be enabled automatically if the device type is detected. Otherwise, you must manually enable the respective OS monitor for monitoring Windows, Linux, macOS, Solaris, BSD, or ESXi systems.
Settings NetCrunch System Monitoring
SNMP monitoring depends on the profile you select for each node. The profile specifies the protocol's version and protocol-specific settings, such as community or username and password (v3).
To receive SNMPv3 traps, you must create SNMP notification profiles to provide authentication and encryption parameters. The profiles are matched with a trap by the User field. SNMPv1 and v2 traps do not need any profiles. All incoming traps are visible in the External Events window, simplifying the definition of alerts for incoming traps.
NetCrunch has many automatic Monitoring Packs configured to be added to nodes when they meet certain conditions.
For example:
- Device Class is "Hardware Router" or a Switch
- Manufacturer name contains "Cisco"
- Operating System equals "Windows Server"
- Network Service List contains LDAP or "Secure LDAP" services
You can view the complete list in the Monitoring Packs article.
Networks are dynamic, and new devices connect over time. NetCrunch can automatically add them to the Network Atlas and start monitoring them.
NetCrunch can run the auto-discovery process for each IP network and Active Directory container. All discovered nodes can be added automatically, or the program can display the results in a Server Tasks Notification Window so you can later decide which nodes to add.
The program automatically runs service discovery when a node is added to the Atlas. It checks only the list of services set in Settings → Monitoring → Auto Discovered Services.
The program also detects device type based on the information read from SNMP or Active Directory. If the device type can't be detected, it should be set manually.
The program also discovers ESX/i machines.
Read more about: Auto Discovery
NetCrunch automatically identifies the status of every network service on a node; at least one service must be monitored. If any service is not in the OK (green) state, the node's status changes to Warning (yellow). When no service responds, or all monitoring engines are DOWN, the node becomes DOWN (red).
When a node is in the DOWN state, only the service marked as *leading* is monitored. When that service responds again, monitoring of all the others resumes.
The view's status is calculated based on all node statuses included in the given view. If any node is in the Warning or DOWN state, the View Status is Warning.
When all nodes are DOWN, the view is DOWN, and when all nodes are OK, the view is OK.
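The node and view rollup rules described above can be expressed as two small functions. This is a sketch of the documented logic, not NetCrunch code; the function names are mine:

```python
def node_status(service_states: dict) -> str:
    """Roll up a node's status from its monitored network services.

    service_states maps a service name to "OK" or "DOWN".
    """
    if not service_states:
        return "UNKNOWN"
    if all(state != "OK" for state in service_states.values()):
        return "DOWN"      # no service responds at all
    if any(state != "OK" for state in service_states.values()):
        return "WARNING"   # at least one service is failing
    return "OK"

def view_status(node_states: list) -> str:
    """Roll up a view's status from its member nodes' statuses."""
    if not node_states:
        return "UNKNOWN"
    if all(state == "DOWN" for state in node_states):
        return "DOWN"
    if any(state in ("WARNING", "DOWN") for state in node_states):
        return "WARNING"
    return "OK"
```

Note that a view is DOWN only when every node is DOWN; a single failing node only degrades the view to Warning.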
Based on typical Atlases used by our customers, we have prepared many automatic views in NetCrunch. You can delete or edit them if they don't match the way you organize your data. Grouping nodes makes it easy to create reports and watch the status of a given group.
Each group's status is calculated based on the included elements (nodes or map links).
The dynamic view presents a group of nodes based on the filtering criteria (query).
Predefined Dynamic Views:
In addition to views, there is another level of grouping: The Dynamic Folder. It creates Dynamic Views automatically.
For example, we can quickly create the following hierarchy:
List of predefined and automatic folders:
The views are dynamic, which means they are automatically updated as needed.
You can customize NetCrunch to suit your needs. Consider adding more MIBs and counters, creating views, and incorporating new device types, node fields, and node icons. Think about adding new background images for views, modifying notification message formats, defining new calculated counters, extending the node database, and developing alert action scripts.
You can add more backgrounds, icons, device types, MIBs, Monitoring Scripts, Calculated (Virtual) Counters, SNMP Views, Notification Message Formats, and Additional Node Fields.
NetCrunch comes with many predefined resources, like pre-compiled MIBs, Monitoring Packs, icons, etc. However, this might not be enough in the specific production environment, so we let you extend these resources accordingly.
Today you can find a MIB for almost any device on the internet; collections of more than 65 thousand MIB definitions are available.
NetCrunch has its own MIB compiler, allowing you to compile any MIB. Because some MIBs contain bugs, they may be cumbersome to compile.
If you have any problems compiling MIBs, please let us know, and we will try to compile MIBs for you.
NetCrunch contains definitions of forms and tables displayed based on SNMP data from a device. There are groups of specific, detailed views for certain devices, such as printers, switches, and others.
Using these forms, you can change SNMP variables. You can also create custom view forms and tables using SNMP View Editor.
Network nodes in NetCrunch are kept in an in-memory database and stored in XML files.
You can easily extend node data by adding additional fields.
This lets you classify and group network data to suit your needs. You can also create dynamic views based on these additional fields and use them to manage alerting actions.
Sometimes the device returns parameters that need to be calculated before further processing. For instance, the data needs to be divided, or we want to have percentages instead of raw values. You can solve such a problem by adding a Calculated Counter calculated upon the given expression. See Managing Calculated Performance Counters.
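As an illustration of the idea, a calculated counter that turns two raw values into a percentage is just a small expression applied before further processing (the function name and rounding are mine):

```python
def calculated_counter(used: float, total: float) -> float:
    """Derive a percentage counter from two raw counters, e.g. used/total memory.

    Guards against division by zero when the device reports total = 0.
    """
    if not total:
        return 0.0
    return round(100.0 * used / total, 2)
```

In NetCrunch itself, you would express the same calculation as a counter expression rather than code; see Managing Calculated Performance Counters.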
You can extend NetCrunch monitoring using external scripts or programs, or create a custom parser for data processing. The script can be run locally on the NetCrunch Server or remotely on other machines. You can also poll data using HTTP or let the parser process data received from a remote location through the REST API. Read more in Sending Data to NetCrunch.
In NetCrunch, you can create your own views of node groups, which makes managing alerts and reports easier. You can create graphical views with customized icons and widgets.
There are separate tables for each node group view showing data specific to a given Monitoring Engine. You can customize each view easily.
The Administration Console can be very flexible: it can run on multi-monitor systems, and you can create a custom multi-monitor layout.
As a second option, you can divide space and dock several windows to be visible on a large monitor. The layout is automatically stored, so NetCrunch will bring all windows to the same place when you run the console next time.
Most events contain a large number of details describing the event context. You would not be able to see all of them in the text message on your smartphone or even in the email. You can create message formats suitable for notification types and specific alert types. Include the information you need. The program automatically creates HTML emails (you can also customize them) when sending an email, and it uses plain text for SMS/text messages.
You can easily integrate NetCrunch with existing management systems to extend its capabilities.
If you are already using a management system and would like to team it with NetCrunch as an extension of your overall network management, there are several ways to do it.
If your system uses SNMP, the best approach is to monitor and collect data in NetCrunch and send alerts to the external system as SNMP traps. NetCrunch can automatically define an SNMP trap for each defined alert, so after importing the NetCrunch-generated MIB, the external system can successfully receive these traps.
(hamburger menu) Settings → Alerting & Notifications → Alerting Scripts → Add Alerting Script → Add Logging → Write to unique file

This is the most effective method of transferring events from NetCrunch to an external program. You can use an alerting action that writes each alert to a separate, uniquely named file.
You can then write a program that periodically scans the folder and imports the event files into the external system. In this way, the disk serves as a queue for alerts. Each event can be stored in XML format.
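The consumer side of this disk-as-queue pattern can be sketched as a small importer that reads each queued file, hands it to the external system, and removes it afterwards. The function names and `.xml` suffix filter are assumptions for illustration:

```python
import os

def import_pending_alerts(folder: str, handler) -> int:
    """Process and remove alert files queued on disk by the alerting action.

    handler is any callable that forwards one event document to the
    external system. Returns the number of events imported.
    """
    count = 0
    for name in sorted(os.listdir(folder)):   # process in a stable order
        if not name.endswith(".xml"):
            continue
        path = os.path.join(folder, name)
        with open(path, encoding="utf-8") as fh:
            handler(fh.read())
        os.remove(path)   # consume the queue entry only after a successful import
        count += 1
    return count
```

Removing a file only after its handler succeeds means a crash mid-import leaves unprocessed events safely on disk for the next run.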
(hamburger menu) Settings → Alerting & Notifications → Alerting Scripts → Add Alerting Script → Add Logging → Trigger Webhook
A webhook is a simple action that sends an HTTP request to a given URL with all event data as JSON or XML.
Unlike the previous methods, this one should be used only for a small number of events, as it requires running a process for each transferred alert, which can be quite slow.
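On the receiving end, the external system only needs a small HTTP endpoint that accepts the POSTed event document. A stdlib sketch, assuming a JSON payload with a `message` field (the payload shape is an assumption, not the documented NetCrunch format):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_alert(body: bytes) -> dict:
    """Decode the JSON event document carried by a webhook request."""
    return json.loads(body.decode("utf-8"))

class AlertHandler(BaseHTTPRequestHandler):
    """Minimal webhook endpoint that accepts NetCrunch-style alert POSTs."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        alert = parse_alert(self.rfile.read(length))
        print("received alert:", alert.get("message", "<no message>"))
        self.send_response(204)   # acknowledge with no content
        self.end_headers()

# To run the listener:
# HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```

The handler acknowledges quickly (204) and defers any heavy processing, so the sender is never kept waiting.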
(hamburger menu) Settings → Monitoring → Export NetCrunch SNMP MIB
After defining all alerts, you might decide to generate the NetCrunch MIB, which contains SNMP trap definitions for all NetCrunch-defined alerts.
You can also periodically export performance data from NetCrunch and import it to the other systems.
It makes agentless Windows monitoring much easier.
NetCrunch runs on Windows Server and needs to integrate with Active Directory in order to properly access other Windows machines in the domain.
Starting with Windows 2003, each newer Windows Server version raises security settings to a higher level, making it impossible to access Windows machines without explicitly setting access rights.
The default option is to run the program under the local system account (which makes it easy to access local resources and to communicate between server components) and to set up default credentials, a common profile, or credentials for each machine (this is easy, as you can use multi-selection). The only drawback is that it might cause a security warning on some server systems: a process running under the local system account first attaches as a machine object (machines are separate objects in AD) and then logs in with the given credentials.
You can run the NetCrunch Server services on a domain account that is a member of the local Administrators group on each monitored computer (including the server running NetCrunch Server). Sometimes this is a good solution, but it can also be hard or impossible to configure. It requires modifying AD security policies (remember that they need time to replicate).
You will still be able to access other Windows machines by providing local credentials for them, but this means you have to enter the necessary settings for each machine individually.
NetCrunch can use Active Directory user accounts as NetCrunch users. This keeps a single password per user and makes management easier.
All you need to do is tick the Active Directory User checkbox when adding a new user. The username should be in the format <Domain>\<username>.
You can manage access to NetCrunch by assigning access profiles to Active Directory groups.
Members of such a group will be able to log in to NetCrunch with their AD credentials and will automatically receive a NetCrunch account. NetCrunch will assign an access profile according to the group setting. If a user is a member of multiple AD groups, the profile is assigned according to the order of the groups.
Learn about monitoring probes that expand on-premises monitoring into distributed environments across multiple sites and address spaces.
NetCrunch allows distributed monitoring using Monitoring Probes. A probe is a single agent installed remotely that can monitor an isolated network without additional agents.
Monitoring Probes can also be installed in the same location as the NetCrunch Server, and you can offload monitoring of part of the network to additional probes with a single click.
We designed distributed monitoring to be seamlessly integrated into the existing monitoring concept. If the node is located in a separate address space, its address gets an additional suffix with its name.
The above diagram has been created in NetCrunch.
For example, 192.168.0.234 is a network address local to NetCrunch (Local - NetCrunch Server), while 192.168.0.234@New York is an address located in the New York address space.
Site is a group of private networks usually behind NAT. When two locations use the same private network address, they create two distinct address spaces.
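The address-plus-suffix convention is easy to work with programmatically: everything after the `@` names the address space, and its absence means the address is local to the NetCrunch Server. A small illustrative parser (the default label mirrors the example above):

```python
def split_address(address: str):
    """Split an 'ip[@space]' address into (ip, address_space).

    Addresses without a suffix are local to the NetCrunch Server.
    """
    ip, sep, space = address.partition("@")
    if not sep:
        return ip, "Local - NetCrunch Server"
    return ip, space
```

This is why two sites can reuse the same private range: `192.168.0.234@New York` and `192.168.0.234@Boston` remain distinct addresses.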
Monitoring Probe is a monitoring agent software installed on a separate machine to increase the monitoring capabilities of the server or monitor a remote location within isolated networks otherwise not accessible by the primary monitoring system. It connects to the parent system.
The monitoring probe provides all monitoring and scanning features except the flow collector.
You can add sensors to the node and install the probe agent anytime.
NetCrunch monitoring probe uses a native client protocol connection to NetCrunch that uses AES256 encryption with the Diffie-Hellman key exchange algorithm. Because the probe connects to the server, it can be located behind NAT and use the dynamic IP address.
So how can you use it? Add the Data Receiver sensor and send data to it. Read more about Sending Data to NetCrunch. You can build your own agent script (in any language) or use an existing one and create a parser for its data format.
Learn how clients and remote components connect to the NetCrunch Server — locally, through HTTPS, or via the NetCrunch Connection Cloud (NCC). Understand supported protocols, security levels, and available connection types.
NetCrunch supports various connection types to enable access from consoles, browsers, remote probes, and external systems. Depending on your environment, you can use direct TCP, HTTPS, or NCC to securely connect.
NetCrunch allows connections from several types of clients:
The NetCrunch Administration Console is the most powerful and optimized client, designed for fast, real-time interaction. It caches a large amount of data locally and synchronizes only the changes with the server, providing high responsiveness and reduced load.
It supports fine-grained access control, so it is no longer limited to administrators. Users with proper access rights can perform limited administration tasks — such as managing specific nodes or views — without requiring full administrative privileges.
- Direct TCP connection to the server (default port 12009)
- Secure WebSocket (wss://your-server)
- NetCrunch Connection Cloud, using the license ID (e.g., YZ-123-456) as the connection key

The Web Console provides access through modern HTML5 browsers and is ideal for remote visibility and lightweight tasks.
The NetCrunch Probe is a remote monitoring engine designed to operate behind NAT/firewalls in branch offices or customer sites.
External data can be pushed into NetCrunch through:
NetCrunch supports multiple access paths. Each has a different level of security and use case:
Connections over HTTPS use secure WebSocket (wss://). When connecting through NCC, use @<license-id> as the server address (e.g., @YZ-123-456).
The following diagram summarizes the three common deployment scenarios:
Diagram created using NetCrunch Graphical Views.
A reliable platform for secure remote connections to NetCrunch Server
The Azure API Management sensor allows you to monitor the performance and behavior of the Azure API Management service. Azure API Management is a reliable, secure, and scalable way to publish, consume, and manage APIs running on the Microsoft Azure platform.
This sensor monitors Azure Cosmos Database Service using Azure Monitor metrics.
The sensor monitors the estimated cost of resources in an Azure subscription and the progress of expenses within the budgets defined in that subscription.
Azure Insights Components Sensor monitors Azure Insights Components resource, using Azure Monitor metrics. Application Insights is a feature of Azure Monitor and an extensible Application Performance Management (APM) service. You can use it to monitor your live applications. It will automatically detect performance anomalies and includes powerful analytics tools to help in diagnosing issues and understanding what users do with the app.
Azure Logic Apps sensor allows you to monitor the performance and behavior of the Microsoft Cloud technology called Azure Logic Apps. Azure Logic Apps is a service in Microsoft Cloud that allows you to schedule, automate, and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations.
Azure Server Farm Sensor monitors Azure Server Farm resources, using Azure Monitor metrics. The Microsoft Server Farm simplifies the provisioning, scaling, and management of multiple servers for administrators and hosting providers.
Azure Service Bus sensor allows monitoring the metrics of the cloud apps connected to the Service Bus namespace. Azure Service Bus is a messaging service on the cloud used to connect any applications, devices, and services running in the cloud to any other applications or services. As a result, it acts as a messaging backbone for applications available in the cloud or across any device.
The Azure Storage Account Sensor monitors the Azure Storage Accounts service, using Azure Monitor metrics. The Microsoft Azure Storage Accounts service allows you to create a group of data management rules and apply them all at once to the data stored in the account: blobs, files, tables, and queues.
With the Azure Web Site sensor, you can monitor the usage and performance of the web applications that are deployed to the cloud with Azure Web Apps. Azure Web Apps is a managed cloud service that allows the deployment of a web application and makes it available to customers on the Internet in a very short amount of time.
The Alarm Sensor monitors the status of Amazon alarms with CloudWatch API.
AWS Auto Scaling Sensor monitors the parameters of AWS Auto Scaling Group. AWS Auto Scaling automatically adjusts capacity to maintain steady, predictable performance for your applications.
This sensor allows you to monitor the performance and behavior of the databases managed in Azure SQL DB. Azure SQL DB is Microsoft's cloud database service that enables organizations to store relational data in the cloud and quickly scale the size of their databases up or down as business needs change.
The sensor monitors the estimated cost of services in the AWS cloud and the progress of expenses within AWS Budgets.
The sensor allows monitoring of key metrics of AWS Elastic Block Store.
AWS EC2 Sensor allows monitoring usage of Amazon Cloud service Elastic Compute Cloud (EC2) Instance resources.
AWS ElastiCache Sensor allows monitoring of the performance of the AWS ElastiCache service.
AWS ELB Sensor allows monitoring metrics of AWS Elastic Load Balancers.
AWS SQS Sensor allows monitoring of key metrics of AWS Simple Queue Service (SQS). SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
This sensor allows monitoring of Microsoft OneDrive storage usage.
This sensor allows you to monitor the health of Microsoft 365 services.
See how to create an alert for Syslog messages, SNMP traps, Web Messages, and Windows Event Log entries.
The window allows you to see all incoming traps and syslog messages and define alerts with a single click.
certificate, cloud, connection, console, data receiver, http, web
A resilient, secure platform for zero-configuration access to NetCrunch from any location — ideal for modern, zero trust environments.
NetCrunch Connection Cloud (NCC) provides a reliable, secure, and cost-effective way to access the NetCrunch Server from any location without requiring firewall changes or VPN configuration. Much like point-to-point VPNs, NCC establishes an encrypted HTTPS tunnel from the server to the cloud — enabling remote access without exposing any inbound ports.
Unlike traditional VPNs, NCC does not expose the internal network. Instead, it functions as a relay, brokering HTTPS connections from clients (e.g., Console, Browser, Probe) to the server.
The only requirement is that the server must be able to make an outbound connection on port 443.
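Because NCC needs only this single outbound connection, verifying readiness amounts to one TCP check. The sketch below is not part of NetCrunch; it simply tests whether an outbound connection on port 443 succeeds (the host name is taken from the connection URL used later in this topic):

```python
import socket

def can_reach_ncc(host: str = "ncconsole.net", port: int = 443,
                  timeout: float = 5.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False from the NetCrunch Server machine, an egress firewall rule is likely blocking port 443.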
NCC supports secure connections from any of these clients. Enter @<license-id> instead of an IP address, or open https://ncconsole.net/rc/connect/<license-id> in your browser. The <license-id> is your NetCrunch license number, visible in the About box of the Console.
NCC is designed from the ground up to comply with strict privacy laws like the GDPR.
By meeting these criteria, NCC helps organizations stay compliant while maintaining top-tier network security.
To connect using NCC, enter @<license-id> (for example, @YZ-123-456) in the server field, or open https://ncconsole.net/rc/connect/<license-id> in your browser. The <license-id> is your NetCrunch license number.
Excerpt from the NetCrunch Connections diagram — highlighting the Cloud Relay (NCC) communication model.
Diagram created using NetCrunch Graphical Views.
Overview of NetCrunch Server architecture. Learn more about Monitoring Engines, NetCrunch Consoles, databases, additional tools, and critical concepts of advanced network visualization.
NetCrunch can monitor nearly anything: devices, applications, systems, databases, and files. The program can be extended using scripts; data from various sources can be sent to NetCrunch or polled from files, databases, or websites.
Learn how clients and remote components connect to the NetCrunch Server — locally, through HTTPS, or via the NetCrunch Connection Cloud (NCC). Understand supported protocols, security levels, and available connection types.
DNS Query, System Uptime, RADIUS, SSL Certificate & SSH Remote Ping.
Reading this topic will make your Windows monitoring experience much better.
NetCrunch can monitor requests, pages, and data on the web.
The sensor enables tracking changes to device configurations and stores multiple backups of device configurations using the telnet or ssh protocol.
Azure API Management sensor allows you to monitor the performance and behavior of the Azure API Management service. Azure API Management is a reliable, secure, and scalable way to publish, consume and manage APIs running on the Microsoft Azure platform.
This sensor monitors Azure Cosmos Database Service using Azure Monitor metrics.
The sensor monitors the estimated cost of resources in Azure subscription, and the progress of expenses within budgets defined in Azure subscription.
Google Analytics sensors allow monitoring of various metrics provided by Analytics Reporting API.
cloud, connection cloud, ncc, secure connection, vpn, zero trust
NetCrunch licensing is node-based and allows monitoring of switch port interfaces whose number equals or is lower than the number of nodes.
For instance, with a license for 500 nodes, you can monitor 500 nodes AND 500 interfaces. If you want to monitor 500 nodes and 700 interfaces, you'll need 500 node licenses AND an additional 200 interface license for the excess.
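The arithmetic above can be captured in a small helper (hypothetical, for illustration only; not a NetCrunch API):

```python
def extra_interface_licenses(node_licenses: int, monitored_interfaces: int) -> int:
    """Each node license covers one node AND one interface; interfaces
    beyond the node count require additional interface licenses."""
    return max(0, monitored_interfaces - node_licenses)

# 500 node licenses, 700 monitored interfaces -> 200 additional interface licenses
```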
If you do not monitor switches, you only need to count the total number of monitored nodes (devices). If you want to monitor the layer 2 connections of your switches, you also need a license covering any switch ports you wish to monitor in excess of your node count.
NetCrunch licenses are additive. This means that, for example, if you purchase a 100 node/interface license and then decide to get a 50 node/interface license, you will have a total 150 node/interface license. You can purchase an additional node pack at any time and add it to your NetCrunch; there is no need to reinstall or restart the software, you just need to refresh the license.
By default, interface monitoring is enabled only on SNMP devices. However, interface monitoring can be enabled on all monitored nodes, if desired. Interface monitoring can be adjusted on a per-node basis within NetCrunch preferences.
By default, only All Active type interfaces are monitored. 'Other' or 'Loopback' type interfaces do not count for license size unless you adjust the monitoring schema manually.
It's possible to create an interface monitoring schema to include only interfaces that match the filter you need.
To change the default schema or create additional schemas, go to (hamburger menu) Settings Monitoring Interface Monitoring Settings
interface, licensing, node, port
Everything you should know about configuring NetCrunch, including Initial configuration, alerts and reports.
Configuration of NetCrunch can be easy or hard - depending how you start. There are many configuration possibilities, and if you know which one to choose in a given situation - everything becomes simple.
This article will help you get through network discovery and the initial configuration process.
To monitor your network, NetCrunch must know device addresses, their names, and how to connect to them (credentials, types, etc.). The Network Atlas is a database holding all that information, so the first step is adding nodes to your Network Atlas.
You can add nodes from a file, and then you can do the rest of the setup manually. It's even possible to create an empty Atlas and add every node manually, but it's unlikely you would need that feature.
Nodes can be added from a text file containing either names or addresses of nodes - one per line.
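Such an import file is easy to produce or validate. A minimal sketch of the expected format (one name or address per line, with blank lines and stray whitespace ignored; the function name is illustrative):

```python
def read_node_list(text: str) -> list[str]:
    """Parse a node-import file: one node name or address per line.
    Blank lines and surrounding whitespace are ignored."""
    return [line.strip() for line in text.splitlines() if line.strip()]

# Example file contents:
# 10.0.0.1
# server1.example.local
```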
What to do after importing nodes? See the manual configuration topic.
Discovery Method tab
The program searches Active Directory and adds all devices, including other operating systems like macOS machines.
The program scans only the Domain, where the NetCrunch Server machine object is located.
All machines from Active Directory will be added even if they are currently not connected to the network.
The program will scan a given range of network addresses using ICMP (ping) packets. Only connected and responding nodes can be discovered in this way, so results may vary depending on when you perform the discovery.
Network Selection tab
NetCrunch fills the list with known networks; you can add more networks as desired.
The program uses ICMP PING packets to discover nodes in a given network, and only nodes responding to ICMP PING packets are added.
During the discovery process, NetCrunch finds connections to neighborhood networks. Enable this option if you want NetCrunch to follow these links automatically. You can limit the scanning depth by specifying the number of maximum hops to remote networks.
The program will try to find and add as many devices as it can find. It will look into SNMP and use PING sweep.
The program will skip workstations.
Build a filter query that will include desired devices. The program will try to communicate with any given SNMP profiles and determine which profile needs to be used for the device.
SNMP Mode and SNMP Settings tabs
This is the point where you should enter all the SNMP profiles used by your SNMP devices.
If you fail to do this now, you can enter them later, but you will have to assign them to each device one by one.
If you enter the SNMP profiles now, the program will automatically determine which profile is used by which device, and devices will be marked as SNMP manageable. Additionally, device types should be discovered (by sysObjectId SNMP variable or device SNMP name), and automatic monitoring will be set.
SNMP Discovery assumes that you use only one SNMP port for all devices in your network. By default, SNMP agents respond on port 161. If you use different ports for different devices, you have to set them up manually.
Network Services tab
NetCrunch recognizes and can monitor about 70 network services. Discovering all of them might be time-consuming and might generate unnecessary network traffic. As a result, we decided to include 14 common services by default, and you can add more services as desired.
Here you can also set the initial parameters for each monitored network service.
You can decrease timeouts if you monitor network devices in a network with low latency. Default values should work great, even for internet connections.
After NetCrunch discovers all your devices, it starts discovering network services on each device in the background.
This is a time for further configuration: automatic Monitoring Packs, setting credentials, setting NetCrunch users and administrator profiles, and default alerting scripts.
Go to: Configuration Wizard
create atlas, discovery, network discovery
Configure automatic Monitoring Packs, set credentials for different systems, set up NetCrunch users, and profile for the Administrator. Customize the default alerting script.
Monitoring Areas tab
Each Monitoring Pack contains the definition of alerts and data that need to be collected by NetCrunch. NetCrunch contains predefined Monitoring Packs and binds them to nodes based on filtering rules.
Credentials tab
To monitor anything, NetCrunch must connect to various systems using the proper credentials. You can set all default credentials (to be used if you do not set a specific one for the node) for supported operating systems (Windows, Linux, BSD, macOS, Solaris, ESX/i).
If NetCrunch runs in the Domain, it connects using NetCrunch Server account credentials by default.
See: Windows Monitoring Setup
Alerting Script tab
All predefined alerts use the Default Alerting Script. You can edit it later in detail, but you can make some preliminary settings here.
Go to: Configuration Tips
config-wizard
Read about types of views and how they organize your data.
Network Atlas is a central database containing all your network data. It's organized by the hierarchy of the Atlas Node Views.
The Network Atlas is a part of the Automatic Monitoring and Organizing concept. It is a central repository of all views, grouping network nodes by categories such as nodes from the same network, a single layer 2 segment, or nodes within the same area.
The fundamental element of the Atlas is a network node: a single-address network endpoint. Because many devices use multiple interfaces, they can be grouped, and then you can decide to monitor only the primary interface.
Read more in Devices with Multiple Network Interfaces
Atlas Node View shows various aspects of the group of nodes in the Network Atlas and consists of multiple pages such as nodes, maps, dashboards, and others.
The Atlas Views hierarchy helps you recognize each node group's status. A top-level (root) view of the Atlas contains all nodes, sub-views, and dashboards showing aggregated information for all nodes.
Below the Top Level Views are four main sections of the Atlas Views.
This section consists of IP network views/maps. Each network can be periodically re-scanned to reflect its current state. Network maps are usually automatically arranged, and nodes are grouped by device model and OS name.
Each IP node is a part of some network view. Deleting a node from any IP Network View will cause deletion from the Atlas. Deleting the View will delete all nodes contained in the View.
Networks are grouped into sites. By default, local networks are those accessible directly by NetCrunch Server's network interfaces.
Sometimes, a node is specified by its name, and its address has not been resolved yet. In this case, it belongs to the empty network view.
Automatic node rediscovery - You can set (enabled by default) each network to be automatically scanned to find active nodes. The minimum rediscovery time is 1 hour. You can also specify the exclusions list.
Discover New Nodes - you can manually start the discovery of network nodes.
An automatically created view shows a map of logical connections between IP networks. The view is updated automatically when NetCrunch detects changes in topology or on demand. You can edit the map and adjust node positions. A routing map is automatically created for each address space.
A routing map can be automatically laid out as a graph or hybrid map.
This section contains a hierarchy of views showing Layer 2 connection segments. Each view is automatically arranged and shows the traffic summary for each switch port.
To start monitoring Physical Connections, you need to point the switches to be used to a network topology map. Switches must be defined in the Atlas and have correctly set SNMP profiles.
After you click on (hamburger menu) Settings Monitoring Physical Connections Monitoring, the Physical Segments Configuration Wizard will start. It will try to find switches that match the above requirements. If the wizard can't find a switch that you know supports the RFC 1493 MIB, check that it's in the Atlas and that its device type is set to Switch.
To create topology maps, NetCrunch primarily uses protocols such as STP, Cisco CDP, and LLDP; as a fallback, it uses SNMP Forwarding Tables (RFC 1493). Make sure you have enabled CDP or LLDP to ensure the best results.
NetCrunch can create a layout depending on the topology, such as a tree, star, or graph.
This section allows you to organize your network data. Here, you can add your own Node Group Views and manage them using folders. You can create graphical maps for the view and create links between them. You can also create automatic folders to manage automatically created views.
The primary goal of NetCrunch is to monitor the network's state as it changes over time. It’s better to set up and configure views based on rules to maintain dynamic network relations.
Based on typical customer Atlases, we prepared some Automatic Views (aka dynamic views) for you:
All Automatic Views are dynamic; they are automatically updated as needed. By default, all empty views are hidden. These views can be modified, and you can treat them as examples of how easy it is to create dynamic views in NetCrunch.
You can create a set of automatically updated views as data changes. For example, you want a separate view of each city where devices are located.
You need to specify the node field used for creating views, and you may also decide to create separate views only for groups with more than five elements; otherwise, nodes will be placed on a single view.
As these views are automatically created and deleted, you can't manage alerts and reports using them.
The views are managed through specified filtering conditions (query).
For example: Domain equals ad.adrem
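The grouping behavior described above (one view per field value, with undersized groups collapsed into a single view) can be sketched as follows. The function and the "Other" view name are illustrative assumptions, not NetCrunch internals:

```python
from collections import defaultdict

def build_automatic_views(nodes: list[dict], field: str, min_size: int = 5) -> dict:
    """Group nodes by the value of `field`; groups smaller than
    min_size fall into a single combined view (here called 'Other')."""
    groups = defaultdict(list)
    for node in nodes:
        groups[node.get(field, "")].append(node)
    views, leftover = {}, []
    for value, members in groups.items():
        if len(members) >= min_size:
            views[value] = members
        else:
            leftover.extend(members)
    if leftover:
        views["Other"] = leftover
    return views
```

For example, grouping by a "city" field yields one view per city with at least five devices, and a single combined view for the rest.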
graphical-views
Graphical Views allow creating live visual dashboards, maps, and diagrams to represent network and system status. They combine diagrams and widgets with interactive, responsive rendering that adapts to user roles and device type.
Graphical Views are flexible, real-time visualizations used in dashboards, maps, and diagrams. They support both panel-style dashboards and boundless map canvases, offering a unified visual experience.
Panel Views – designed with a fixed aspect ratio and responsive layout. Panels automatically scale to fit the display area, making them ideal for dashboards optimized for desktops, tablets, phones, or large displays. Two scaling modes are available:
Boundless Maps – scrollable and zoomable freeform canvases ideal for visualizing layouts without screen constraints. Background images (e.g., floorplans or logical schematics) can be freely positioned. These maps are designed for manual panning and zooming, useful when content intentionally exceeds screen size.
Dynamic Content – elements like shapes, lines, icons, and text can be linked to live NetCrunch objects such as nodes, services, or metrics. Their appearance dynamically changes based on object state (e.g., color, visibility, shape).
Theming – views automatically adapt to the user's light or dark theme preference. NetCrunch uses a single view definition across themes, unlike other platforms that require duplicating views per style.
Each view type supports specific layout and interaction patterns:
Used to build logical or status-oriented network diagrams. Objects can be positioned manually and connected using live status lines.
Image: Simple network diagram
Used for large, navigable visualizations with no fixed layout size. Suitable for: - Floorplans - Wiring diagrams - Device placement maps
Image: Operating boundless diagram
Users can add a background image and freely arrange visual elements without zoom constraints.
Responsive visual containers designed with a fixed aspect ratio. They scale to fit display dimensions while preserving layout. Ideal for: - Mobile dashboards - Status displays - Kiosk and wall-mounted views
Image: Simple dashboard example
Two smart scaling modes allow for optimizing visibility either by content or full panel size.
Use a background image (e.g., rack, datacenter, topology drawing) placed inside a fixed-aspect panel. The image can be scaled and positioned relative to the panel, with live elements overlaid.
Image: Sample floor plan with camera widget
Predefined region-based maps with geo-location support. Over 100 maps are available (e.g., countries, states, regions). Unlike embedded browser maps (e.g., Google Maps), these are vector-based contour maps. - Users can place elements based on coordinates. - Locations can be resolved via Google’s geolocation service. - Suitable for regional monitoring dashboards or location-aware topologies.
Image: Camera locations across the world
atlas, boundless views, charts, dashboards, diagrams, geo map, graphical, graphical views, image panel, ip network, layer 2, maps, responsive design, segments, theming, view, view types, visualization
Read about scheduling reports, the difference between an event and alert, Monitoring Packs, and message formats.
Monitoring Packs, Alerting Scripts, the Escalation Process, Inheritance (overriding alert definitions for specific nodes)
Although alerting and reporting serve different purposes, their settings are very similar.
NetCrunch manages alerts and data collectors in the same place through Monitoring Packs and Node Settings.
To collect data for reports, add a data collector to specific Monitoring Packs or nodes. To create a report, select one of the defined Report Scheduling Schemes (or define a new one) and specify the user or group that should receive the report.
Read more about Customizing NetCrunch Reports
An event is a thing that happens or takes place, especially one of importance.
As we assign an event condition to be watched or received by the program, it turns into an alert containing a log of operations taken and the response to the event.
Alert - a condition being watched so that the program can react to potential danger or bring it to attention.
In other words: the program is the alert guard watching for specified event conditions. When we decide to create a new alert, the default action is to write it to the NetCrunch Event Log. You can assign a common Action List to an alert or create custom sequences of each alert's actions.
Each Monitoring Engine defines its own set of events to watch. There are many predefined event conditions, mainly to track well-known object states such as Windows Services, Network Services, Nodes, etc.
There are many more events than defined in the software. For instance, when you monitor external Syslog events, you need to describe which ones you want to become NetCrunch events. If you decide to turn all Syslog messages into a single event definition, you won't be able to set different alerts for different messages.
The most important types of events you can define are Event Triggers for Counters, which can be set on any performance counter and let you define logic for observed counter values.
When you create a new event condition to set an alert, you can save it for later use and add it later to another node or policy. Both nodes (or Monitoring Packs) will share the same event condition. You can modify it for one node or all nodes sharing the same condition when you want to change it.
By default, NetCrunch saves all new rules as common definitions.
If you want to change this setting, uncheck Save as common definition before saving a new event.
If you want to manage common definitions or remove unused ones, go to NetCrunch Alerting & Notifications Monitoring Packs and Policies Common Alerts
Setting alerts using Monitoring Packs: Settings Alerting & Notifications Monitoring Packs and Policies.
You can override or add alerts and Monitoring Packs to a node or multiple nodes by clicking on a node (or selecting multiple nodes) and opening Node Settings Monitoring
See Managing Multiple Node Settings
There are two main types of reports: aggregated for a group of nodes and single node reports. Both of them need data.
Data collection management is very similar to alert management. It needs to be specified for a specific node. You can do it through Monitoring Packs, Atlas Views, or set it directly in the Node Settings window.
Monitoring Pack is a group of performance parameters and events monitored and collected for the reports.
Automatic monitoring packs specify a node filtering condition, allowing you to automatically apply the Monitoring Pack to nodes.
Most predefined Automatic Monitoring Packs bind by specifying an operating system type and some additional conditions.
For example, the Active Directory Monitoring Pack applies its settings to a node if the node matches the pack's filtering condition. Each Automatic Monitoring Pack has an Exclusion List, which specifies nodes that should be excluded even if they match the given condition.
You can add a Static Monitoring Pack manually to a node using Node Settings Monitoring, or you can open the properties of the Monitoring Pack and click on the Assigned to page.
See the list of predefined Monitoring Packs
Settings Alerting & Notifications Monitoring Packs and Policies
In the NetCrunch Monitoring Packs & Policies window, you can find the Global group.
It contains a list of special predefined Monitoring Packs. Some apply to all nodes; some are Monitoring Packs that refer to globally collected data such as NetFlow traffic summary. When you modify the Node Status pack, be aware that each alert will be automatically monitored for all nodes.
When you add Monitoring Packs to the node (or if they've been added as automatic Monitoring Packs), the node settings become a sum of settings from multiple packs applied to the node.
You may override the settings for a specific node. Select a node (or multiple nodes), open Node Settings Monitoring, and click on the desired Monitoring Pack; you will then be able to disable or override alert actions defined by the given Monitoring Pack. An automatic Monitoring Pack can be disabled on a particular node.
Actions are executed as a reaction to an alert. Actions are always grouped in the Action List sequence.
See Alerting Actions
The action list is the sequence of actions executed in response to the alert. It's grouped according to the execution delay time.
Some actions may be executed immediately, and others may wait several minutes to start. The last action in the list can be repeated until the alert is closed (the issue is resolved). You can also define a list of actions executed when an alert is closed. Each action can have restrictions that allow executing it only under certain conditions: the alert occurs within a certain time range, the node is a member of a given Atlas View, or the alert has a certain severity.
Settings Alerting & Notifications Message Formats
Event descriptions are very different. There are several fields common to each NetCrunch event, but most of the data come from various external sources like Syslog, SNMP traps, Windows Event Log, or different Monitoring Engines.
Defining a single message format for each event and notification target is hard. It's rather apparent that sometimes you might expect to receive an HTML email full of content and other times a short SMS with only the most essential info identifying the problem.
Another application for Message Formats is passing parameters to various external actions like executing a program or writing event data to a file.
Internally, NetCrunch uses the XML format for event representation. Although it's a text format, you can hardly call it a "human-readable" format.
You can see the default message format assignments for all actions in this window.
There are eight predefined message formats used by different actions:
Each action type has a default message format assigned. You can change the assignment by clicking on a format name in the Message Format column.
Switch to page Message Definitions. Here you can see message definitions grouped by Message Format. You can define a custom message format for a specific event or an event class for each message format.
For example, to create a custom SMS text message for the Node State Event that includes the Location field's value, click Add Parameter to add the Location parameter. You can remove the fields you do not want to include in these alerts.
alert, custom, format, managing, message, reports
Read about what custom reports you can create and how it can be done.
Settings Resources Reports
Settings Alerting & Notifications Monitoring Packs and Policies
You can add new reports to the Monitoring Pack or directly to the node. For instance, to add a PING availability report for a specific node, type PING; this will add the predefined [PING] Availability Report.
NetCrunch contains several template reports, predefined reports, and the ability to create custom reports. Template reports are predefined reports that need only parameterization.
The report contains:
The report summarizes and compares a given number (10 by default) of top nodes.
The report contains a comparison of:
The report compares nodes with the lowest and highest average response processing time.
Response Processing Time is the estimate of the service's time spent generating the response. It's calculated by subtracting an average PING RTT from the service response time.
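The estimate can be expressed directly as a subtraction; flooring the result at zero is an added assumption here for cases where RTT jitter exceeds the measured response time (the function itself is illustrative, not a NetCrunch API):

```python
def response_processing_time(service_response_ms: float, avg_ping_rtt_ms: float) -> float:
    """Estimate the time the service spent generating the response by
    subtracting the average network round-trip time, floored at zero."""
    return max(0.0, service_response_ms - avg_ping_rtt_ms)

# A 120 ms HTTP response with a 20 ms average PING RTT suggests
# roughly 100 ms of server-side processing.
```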
create, customize, reports
This report allows for presenting multiple counters' trends in one document. It shows a single-line trend chart for a given time range. You can choose to have 4 charts per page or just one.
This report is similar to a single node, but trends from multiple nodes (and the same counter) can be grouped on a single chart. You can customize the number of trends put on a single chart.
This report includes the list of monitored counters with their current values.
You can add a custom report for any counter you want to monitor for a given node or group of nodes.
You can have each report you add to a Monitoring Pack or Node Setting be automatically created and scheduled.
Settings Resources Report Scheduling Scheme
Each report can be scheduled using a predefined schema with criteria for each report type (daily, weekly, monthly). You need to specify the recipients.
create, customize, reports
NetCrunch allows for the setting up of various threshold conditions on performance data regardless of origin. This works for all performance data channels, from SNMP data to data received through the REST API.
Performance Trigger generates an event upon the condition set on the performance counter value.
Thresholds trigger an event when a value crosses a given border. Depending on the direction of the change, it can be a rising or falling threshold condition.
A simple threshold specifies only the threshold value and the direction.
% Processor Utilization > 50%
% Free Disk Space < 10%
Hysteresis can be used to avoid generating too many threshold events on fast-changing values. This is done by setting an additional resetting value.
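A minimal sketch of a rising threshold with a resetting value (this is an illustration of the hysteresis idea, not NetCrunch's implementation):

```python
class RisingThreshold:
    """Rising threshold with a lower resetting value (hysteresis).
    An event fires when the value crosses `trigger` upward, and the
    condition re-arms only after the value falls below `reset`, so a
    fast-changing value hovering around the threshold does not
    generate a flood of events."""

    def __init__(self, trigger: float, reset: float):
        assert reset < trigger
        self.trigger, self.reset = trigger, reset
        self.armed = True

    def feed(self, value: float) -> bool:
        if self.armed and value > self.trigger:
            self.armed = False          # fired; suppress repeats
            return True
        if not self.armed and value < self.reset:
            self.armed = True           # value calmed down; re-arm
        return False
```

With `trigger=50` and `reset=40`, a CPU series of 45, 55, 52, 48, 38, 60 fires only twice (at 55 and at 60), instead of on every sample above 50.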
Thresholds can be configured based on the last value or the average value calculated over a specified time range.
This trigger generates an event when the counter has (or has not) the same value in the given period.
% Processor Time = 25 for the last 5 samples
This trigger generates an event when the monitored value changes from one value to another. You need to specify at least one value.
Network Card Error State changes from <any> to 5
The event can be reset when the value changes to the previous state or any other value.
This trigger generates an event if a value exists or is missing (can't be read or received) in a given time range.
There might be no value until an error occurs, so the program can react to a "value exists" condition instead.
This trigger alerts you when the current counter value keeps growing or decreasing by a given value. You can also define the opposite condition when the counter is not growing or decreasing as expected. Delta is the difference between the last and the previous value.
Device Internal Timer Delta < 1
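The delta condition can be illustrated with a short sketch (our own simplification, using the Device Internal Timer example above):

```python
def delta_alert(samples, min_delta: float):
    """Flag each sample whose delta (difference from the previous
    value) falls below `min_delta` -- the 'not growing as expected'
    condition, e.g. Device Internal Timer Delta < 1."""
    alerts = []
    for prev, curr in zip(samples, samples[1:]):
        alerts.append((curr - prev) < min_delta)
    return alerts
```

A timer series of 100, 101, 102, 102, 104 produces one alert, at the sample where the counter stopped increasing.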
This trigger allows you to specify how a counter value can differ from the calculated average over a given period. The deviation can be set as a percentage or by absolute value.
This trigger simplifies configuration. Instead of two separate thresholds for a low and high boundary, you can specify a range. The event triggers if a counter value is in the range or outside the range. Additionally, you can specify the reset tolerance value.
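A sketch of the out-of-range variant with a reset tolerance (illustrative only; NetCrunch also supports the in-range variant):

```python
class RangeTrigger:
    """Fires when the value leaves [low, high]; with `tolerance`, the
    alert resets only after the value returns at least `tolerance`
    inside the range -- a hysteresis band around both boundaries."""

    def __init__(self, low: float, high: float, tolerance: float = 0.0):
        self.low, self.high, self.tol = low, high, tolerance
        self.active = False

    def feed(self, value: float) -> bool:
        fired = False
        if not self.active and (value < self.low or value > self.high):
            self.active = fired = True      # value left the range
        elif self.active and (self.low + self.tol) <= value <= (self.high - self.tol):
            self.active = False             # deep enough inside; reset
        return fired
```

For a range of 10–20 with a tolerance of 2, a value of 19 after an excursion to 25 does not reset the alert; the value must return to 12–18 first.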
This trigger allows setting thresholds for deviation from observed baseline data. The program collects baseline data over a week and stores them for reference. The baseline is calculated for each hour and each day of the week.
The user can specify the allowed deviation from the baseline value (by a number or a percentage).
Averages can be calculated only if more than 20% of data exists in the time range.
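A simplified sketch of per-slot baseline learning and deviation checking (the class and its 3-sample minimum are our illustration, standing in for the weekly collection and the 20%-of-data rule):

```python
from collections import defaultdict

class BaselineDeviation:
    """Keeps an average per (weekday, hour) slot and reports whether a
    new sample deviates from that slot's baseline by more than `pct`
    percent. A slot needs some history before it can be judged."""

    MIN_SAMPLES = 3   # illustrative stand-in for the 20%-of-data rule

    def __init__(self, pct: float):
        self.pct = pct
        self.slots = defaultdict(list)

    def learn(self, weekday: int, hour: int, value: float):
        self.slots[(weekday, hour)].append(value)

    def deviates(self, weekday: int, hour: int, value: float) -> bool:
        history = self.slots[(weekday, hour)]
        if len(history) < self.MIN_SAMPLES:
            return False                      # not enough baseline data
        baseline = sum(history) / len(history)
        return abs(value - baseline) > baseline * self.pct / 100.0
```

With a 20% allowed deviation and a learned Monday-9:00 baseline of 100, a sample of 115 is normal while 130 triggers; a slot with no history never triggers.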
This trigger allows you to set an alert on a limit of the aggregated value of a given parameter. For example, it might be the number of bytes transmitted per day or week. It works both with counters representing a value per second and with accumulating (increasing) counters.
Available periods:
The alert has no auto-close condition; a new alert will be generated for each period. For this type of trigger, you can define Alert Correlation (by default, the alert will be closed after 2 days).
average, baseline, event, hysteresis, limit, performance, range, threshold, triggers
NetCrunch supports a flexible, role-based user system with access control for views, nodes, and features. You can integrate accounts with Active Directory, define granular access rights, and control notifications per user or group. This topic explains how to manage user accounts, access profiles, and notifications effectively.
access profiles, active directory, admin access, desktop console, notification groups, notification profiles, oauth, organization, password reset, shared user, user management, user rights, web console
NetCrunch allows fine-grained control over what users can see and do. This applies to both the Desktop Console and Web Console, with different capabilities:
Both consoles require users to authenticate. Access rights, personalization settings, and notification profiles are stored per user on the server.
NetCrunch includes a predefined Admin user that always has full access—similar to a root account. It cannot be deleted. Its password can be reset from the Desktop Console using User & Access Rights Manager → Change Password.
If the Desktop Console is unavailable or the built-in Admin account becomes inaccessible, you can reset the password using the NetCrunch Command-Line Interface (NCCLI).
To do this:
nccli.exe reset-admin-password your_new_password

Replace your_new_password with the desired password. This resets the password for the Admin account immediately. You must run this command with administrative OS privileges.
This method ensures password recovery even when no console access is available.
Every standard user has:
You can assign users full or read-only access, or create custom access profiles for fine control.
You can create shared login accounts (e.g., noc-operator) with restricted permissions. To prevent unauthorized changes:
This is useful for rotating operators, kiosks, or NOC stations.
Users can authenticate directly with credentials defined in NetCrunch.
NetCrunch supports full AD integration, allowing centralized account management.
You can link a NetCrunch user to an AD account via the Link with AD Account option. This syncs login identity and delegates password handling to AD.
Assign access profiles to AD groups to enable role-based control. When an AD user logs in:
You can control group evaluation order and priorities.
NetCrunch supports multi-tenant visibility via organizations.
The special <root> organization sees all nodes. This model is ideal for MSPs, large enterprises, or restricted internal teams.
Access Profiles define what a user is allowed to do or see. They control:
You can create custom profiles and assign them to users or AD groups.
Each user can define notification rules for themselves or receive alerts via assigned Notification Groups.
Use groups to assign notifications by role or function (e.g., “On-call Tier 1” or “Network Engineers”). Users inherit group notifications and can override or disable them.
Password resets and security actions are managed in the User & Access Rights Manager.
NetCrunch provides a flexible, secure user model:
Whether you're managing a single admin or dozens of operational users, NetCrunch ensures each user has the right view, the right tools, and the right alerts—nothing more, nothing less.
Access Profiles in NetCrunch define what users can see and do in the system. They form the foundation of role-based access control and allow scalable, high-performance delegation of privileges to individuals and groups. This topic explains how to design and apply Access Profiles to achieve the right balance between security and usability.
access control, access profile, access scope, active directory integration, atlas access, netcrunch permissions, node access, role-based access, server rights, user permissions, user rights, view access
Access Profiles in NetCrunch represent reusable, role-based permission sets that define access to:
Unlike many systems that tie permissions directly to users (and slow down as the number of users increases), NetCrunch applies permissions through shared profiles. This makes access evaluation extremely fast, even in large environments, and simplifies administration.
Each user—whether local or AD-authenticated—can have one access profile, either assigned directly or inherited through group membership.
NetCrunch supports two core strategies for designing access:
This is the default model:
This approach is safer, easier to audit, and minimizes accidental overexposure.
In this model:
This approach can be useful in smaller environments or for quick setup, but it's less secure by nature.
Both strategies can be implemented in the same UI by adjusting the Atlas Defaults and View/Node Overrides in the profile editor.
Access Profiles should be treated as roles, not per-user settings.
Changing access rights in one profile instantly affects all assigned users, avoiding duplication and misalignment.
There is no limit to the number of access profiles you can define.
NetCrunch uses a hierarchical, path-based access model. Each object or operation can be granted:
Access rules are evaluated from longest path to shortest. This means more specific rules override broader defaults.
Node → basic access to the node
Node > Events → access specifically to the node's event view
Atlas View > Nodes → access to all nodes within a view, but not necessarily to the view itself

Access profiles can define rights in three scopes:
Controls access to core system features:
Set default rights for all nodes or views unless overridden.
Here you define specific overrides—either granting or denying access at the element or feature level.
This distinction is key: having access to a view doesn't mean the user can see every node inside it.
Currently, the Desktop Console requires the Server Administration access right to run. It is not restricted by view or node-level permissions. However, NetCrunch now supports delegated use of the Desktop Console—you can assign limited admin rights via access profiles, allowing trusted users to configure selected nodes or views.
The Web Console, on the other hand, fully honors access profiles for all Atlas objects and supports granular restriction. While it currently lacks some administrative features, they are planned for future versions.
Access Profiles in NetCrunch are lightweight, scalable, and built for performance. Designed around roles, not individuals, they allow fast permission evaluation even in large user environments. Whether you're using a deny-first model for strict control or an allow-all model with exceptions, NetCrunch keeps access control simple, auditable, and efficient—with no artificial limits on how many profiles you define.
See how you can create calculated counters from existing ones.
Settings Resources Calculated Performance Counters
Calculated Counters let you define new counters computed from existing source counters using an arithmetic expression. For example, you may need a counter representing a percentage of free memory, but a particular Cisco device delivers only raw memory values.
Calculated counters extend the counters of a given Monitoring Engine with arithmetic calculations.
After defining them, you can access them like any other counters from the given Monitoring Engine. For existing calculated counters, the counter expression can be edited. You can also add new calculated counters.
Select from the list the monitor to which the calculated counter will be added.
Each counter is defined by its Object, Name, and Instance. Since instances are specific to each object's performance counter, you should specify the object for the counter first.
Calculated Counters can inherit instances from source counters only if they extend the existing source object.
For example:
To create a counter, add the desired counters and use an arithmetic operation to create an expression. When you add a new counter to the expression, the program automatically creates a variable name and puts it into the expression.
Example:
100 * (ciscoMemoryPoolUsed / (ciscoMemoryPoolUsed + ciscoMemoryPoolFree))
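The expression above computes the percentage of the memory pool in use; the free percentage is simply its complement. In plain Python (function names are ours, for illustration):

```python
def memory_used_percent(pool_used: float, pool_free: float) -> float:
    """Mirror of the calculated-counter expression
    100 * (used / (used + free)), built from the two raw counters
    a Cisco device actually exposes."""
    return 100.0 * pool_used / (pool_used + pool_free)

def memory_free_percent(pool_used: float, pool_free: float) -> float:
    """The complementary counter: percentage of the pool still free."""
    return 100.0 - memory_used_percent(pool_used, pool_free)
```

For a pool with 75 MB used and 25 MB free, the expression yields 75% used and 25% free.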
calculated counters, counters, performance, virtual counters
Monitor multi-interface devices with granular control and flexible grouping options
In NetCrunch, each IP Node represents a single network interface, not the whole device. This design enables granular monitoring, as each interface may run different services and require distinct monitoring profiles.
To simplify the view and reduce data collection on less critical interfaces, NetCrunch allows you to group multiple interfaces under one primary node:
There are several ways to combine multiple interfaces into a single logical device:
This approach keeps monitoring efficient and organized, especially for servers or infrastructure devices with multiple NICs.
device-grouping, interfaces, monitoring, networking
Custom scheduling options for monitoring, alerts, and notification, allowing for precise control over various program activities.
The program uses time restrictions to define time conditions for various elements, such as node monitoring, alerting conditions, or notification schemes.
In recent versions, the time restriction scheme has been extended to allow a different scheme for each weekday. You can include or exclude specific time ranges, providing greater flexibility in scheduling.
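Conceptually, such a per-weekday scheme is just a mapping from weekday to allowed hour ranges. A minimal sketch of the check (our own simplification, not NetCrunch's data model):

```python
from datetime import datetime

def in_schedule(ts: datetime, schedule: dict) -> bool:
    """Check a timestamp against a per-weekday schedule, where each
    weekday (0 = Monday) maps to a list of (start_hour, end_hour)
    ranges. A missing or empty list excludes the whole day."""
    for start, end in schedule.get(ts.weekday(), []):
        if start <= ts.hour < end:
            return True
    return False
```

With `{0: [(8, 17)]}`, Monday 9:00 is inside the schedule, Monday 18:00 and any Saturday hour are outside.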
Tailored action restrictions for complex and situational action execution in response to alerts, such as time-specific activities, location-based notifications, and critical condition responses.
Each action can have its restrictions.
This makes action lists more flexible as you can limit the execution of the action to:
This allows for defining complex conditions and action lists.
Define two actions and set different times of the day or different weekdays.
You may want to notify different groups of administrators depending on the node location. NetCrunch creates Atlas views for locations. Also, you can create custom views based on custom fields (be it additional information like department or some organizational unit name) and assign each view to each action.
You might assign a restart action to a specific alert, but you might also decide to restart servers after a certain amount of time if the issue remains unresolved. This requires limiting the action to nodes that are members of the Servers view. As each action has a delay time, counted from when the alert started, you can decide to run the restart after some time. Additionally, you might want to repeat these steps only on weekend nights.
Automatic and customizable event log views with a visual query builder for efficient filtering, enabling tailored analyses without the need for time-consuming text searches.
Several views are created automatically and can be selected from the drop-down menu. These views are generated for Monitoring Packs and Sensors, providing quick access to relevant information.
NetCrunch can store millions of events in a history event log. These events are fully searchable, but searching them in the text form would be very slow.
To make it easy, we added a visual query builder, which allows you to create filtering conditions for the database. Once you define your view, you can store it and select it later.
You can create views using many event fields. You can also search for event parameters.
We recommend creating a bracket condition and adding parameter names and their values as desired.
The view definition doesn't contain the time range, which is selected for each view separately.
Read how to change node settings, such as monitoring time and device type for multiple nodes, and how to manage the alerting and reporting settings differently.
You can always select multiple nodes, but if you want to manage nodes as a group, consider creating a separate Atlas view for the group.
Creating a new Atlas View by selecting multiple nodes and dragging them into the Custom Views section is effortless.
NetCrunch allows you to manage multiple node settings easily. Select multiple nodes in a table or in the graphical icon view, and then select Node Settings from the context menu.
You can use existing Atlas views to narrow your selection.
For instance, if you want to select all Windows servers, open Atlas Tree → Custom Views → Server Types → Windows Server and press Ctrl+A to select all nodes in the view. Then, select Node Settings from the context menu or press Shift+F2.
We strongly encourage you to create Monitoring Packs with the same settings for multiple nodes. Then, you can select multiple nodes and add your new Monitoring Pack to them.
On every single node, you can override settings assigned by Monitoring Packs. For instance, you can change actions (add some) or even disable some alerts.
When you select multiple nodes, however, you can only add or remove Monitoring Packs.
managingnodes
Read to configure emails and text messages (SMS) with NetCrunch notifications.
You can receive notifications about alerts and reports as emails or text messages (SMS). First, you need to set up the configuration parameters.
Go to the Options page and click the Notification option.
You can use the built-in NetCrunch mechanism or you can define a list of external SMTP mail servers. The system also supports the TLS encryption protocol, if needed.
For alert notifications via text messages (SMS), you need to select the COM port used to communicate with the mobile phone, SMS settings and options related to the GSM device such as the PIN of the SIM card.
You may also need to enter AT+C commands if the device requires additional configuration.
You can even use a standard cable attachment, which after installation will be visible in the system as one of the computer's COM ports.
device, email, gsm, modem, notification, options, sms, smtp
You can easily extend the NetCrunch node database by adding custom fields to each node. This allows for creating views and controlling alerting action execution.
NetCrunch allows you to add custom fields to node properties. There are several predefined fields. Some fields (like Info1 and Info2) are there for compatibility with old versions. Fields can be used to organize data.
You can add fields of these types: number, text, date, time, or picklist.
Additionally, a text field can be converted to a picklist at any time, and vice versa.
Custom node fields can be used to create dynamic views (defined by filtering conditions). Atlas View membership also controls alerting and action execution. See Action Restrictions
Read what configuration problems you should fix to make sure everything works fine.
NetCrunch does many things automatically, but it needs your input in some places. For instance, it's necessary to provide proper credentials to access operating systems and valid SNMP profiles.
Monitoring Issues are problems related to the monitoring process. They require your attention. In most cases, they are related to invalid or missing credentials.
NetCrunch uses Device Types to implement automatic monitoring. For example, if a device is not set to be a Windows type, Windows monitoring will not start or even be accessible for the node.
Open a network map and look for icons with a question mark, which indicates that they are missing a device type. NetCrunch automatically detects device vendors, which can help set the device type.
Only Windows systems (and other systems added to Active Directory) and SNMP devices have their device type set automatically. For other devices (those displayed with an unknown device type), you must set the device type manually.
Please check the Settings page to see what is enabled or disabled in your monitoring configuration.
For example, you can enable monitoring of the Physical Segments or NetFlow here.
NetCrunch uses node.js for secure SSL/TLS & HTTPS connections. Some certificate authorities might not be included. You can add additional root certificates to the external\Root Certificates folder in the NetCrunch Server data directory.
The certificate must be in PEM format.
certificate, configuration, issues, root, tips
Use templates for streamlined setup. Templates can be created from existing nodes or from scratch, with customizable section inclusions, and are easy to apply to IP nodes. Interactive sensor configurations require reference nodes for detailed settings.
[+] Monitoring Template
Node Monitoring Template is a settings node. Its sole purpose is to provide settings to other nodes. When parameters change, they propagate automatically to associated nodes.
You can create a template from scratch or choose the existing node as a source for the template. Additionally, you can assign the template to the prototype node.
The template can contain all sections or sections that can be excluded, so when the template is applied to the node, it won't override these parts.
Except for sensors, you can set up all node monitoring settings similarly, like on the real monitored node. When setting a sensor, you need to select a reference node. Many sensor configurators are interactive and need to pull some data from the actual node. For example, the WMI sensor may need to read WMI classes; the SQL sensor needs a list of databases.
You can use templates for IP nodes monitored by NetCrunch Server. Monitoring probe nodes are not supported. Using templates is even easier than creating them.
monitoring, templates
Everything about monitoring configuration. Read how you can monitor various aspects of the network hardware and software.
Review basic concepts like Monitoring Engines, Node State Monitoring, and how to disable monitoring.
Monitoring Packs, Monitoring Dependencies, Event Suppression, Monitoring Issues
The main purpose of NetCrunch is to monitor your network. It also offers you documenting and organizing services.
So, everything in NetCrunch is well organized - the Atlas root is at the top, the views go below, then there are nodes, and finally, on the lowest level, there are various objects and their performance metrics and status objects.
Monitoring Engine is a software component responsible for the specific type of monitoring.
Monitoring Engines simplify the monitoring configuration, as each is responsible for a certain monitoring channel.
For example, you might define many things being monitored using the SNMP Engine. Still, all of them share the same configuration parameters for a particular node such as SNMP port and profile to be used. A very similar situation happens with other operating systems where the configuration contains credentials for the connection.
If you are familiar with other concepts such as sensors or probes, please be advised that they are much more comparable to Monitoring Packs in NetCrunch.
The list of sensors is constantly growing.
See full list in Sensors
The central part of monitoring is determining the node state. Everything being monitored on the node depends on its state.
When the node is considered to be DOWN, the node's monitoring almost stops until NetCrunch recognizes that the node is responding again.
NetCrunch determines the node state upon monitoring of network services (see: Network Services Monitoring).
Read more about Monitoring of Network Nodes
When you go to Node Settings → Monitoring, you will see the list of monitoring engines available (bound) to the node. There you can manage engine properties for the node.
Some monitoring packs use Device Type information (especially for SNMP devices) to be enabled for a specific device only. In the case of SNMP monitoring, the program automatically detects the Device Type upon SNMP object data.
Read more in Automatic Monitoring and Organizing
Monitoring Issue is a problem related to the monitoring process, like missing credentials or improper response from the device.
We decided to make issues related to the monitoring process stand out from other problems related to your network services and devices. They are usually caused by wrong or missing credentials or Windows security settings.
See also: Configuration Tips
As the monitoring is enabled by default, it's more interesting to know how you can disable it. You can disable it at any level.
Atlas Properties
You can disable monitoring for the entire Atlas. You can disable it for some time (for maintenance) or schedule it to be disabled for a future time range.
IP Network → Properties
You can go to IP Networks and disable monitoring of a specific Network indefinitely, or just for a given time period.
NetCrunch manages monitoring dependencies that should reflect connection dependencies. If properly set, NetCrunch automatically disables the monitoring of nodes depending on a certain connection. This helps to lower monitoring traffic and also prevents false alerts.
Read more in Preventing False Alarms
NetCrunch supports multiple structured data formats for monitoring input, sensor results, and external integrations. This topic explains the native data formats, internal data storage mechanisms, and how to send data into the system using these formats.
NetCrunch processes both input and output data using well-defined formats to ensure interoperability, performance, and clarity. You can send structured data using file-based sensors, telemetry nodes, or API endpoints. Internally, NetCrunch uses purpose-built databases to store event logs, trends, status states, and atlas definitions.
These formats are supported by file-based and script-based sensors, including Data File Sensor, Script Sensor, and Telemetry Node.
The most flexible and recommended format. It allows sending multiple counters and complex status objects with metadata.
{
  "cpu.usage": 47.5,
  "disk.free": {
    "value": 13000,
    "message": "13 GB free",
    "critical": true,
    "class": "critical",
    "retain": true
  }
}
Field | Description
---|---
value | Numeric or textual value to display
message | Optional descriptive message
critical | Boolean indicator for an alert
class | One of: normal, warning, critical, unknown
retain | Boolean to prevent data from being auto-cleared
data | Optional object for additional key-value data
Same schema as JSON, but expressed in XML. Useful in legacy systems.
<values>
  <cpu.usage>47.5</cpu.usage>
  <disk.free class="critical" message="13 GB free">13000</disk.free>
</values>
Lightweight format for numeric counters. Uses key-value pairs.
cpu.usage,47.5
disk.free,13000
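As an illustration, a payload in the JSON shape shown above can be assembled with a few lines of Python (the helper name is ours, not a NetCrunch API):

```python
import json

def status_payload(counters: dict, statuses: dict) -> str:
    """Build a JSON payload in the native NetCrunch shape: plain
    numbers for counters, and objects with value/message/class/...
    fields for status objects."""
    doc = dict(counters)
    doc.update(statuses)
    return json.dumps(doc, indent=2)

payload = status_payload(
    {"cpu.usage": 47.5},
    {"disk.free": {"value": 13000, "message": "13 GB free",
                   "critical": True, "class": "critical", "retain": True}},
)
```

The resulting string can then be written to a file for a Data File Sensor or sent to a Telemetry Node.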
If your data is not in a native format, you can define a custom parser.
Supported parsing methods:
NetCrunch uses purpose-built internal databases to separate concerns and improve performance:
Database | Description |
---|---|
Atlas | In-memory topology and configuration model; periodically saved as XML/JSON |
Event Log | SQL database accessible via ODBC; stores event history |
Status DB | In-memory NoSQL store; tracks current alert and monitor state |
Trend DB | Append-only store optimized for performance metrics; millions of records per day |
DocDB | RocksDB used for rich content like node notes and documentation |
Scenario | Recommended Format |
---|---|
Custom device sending metrics | JSON (via Telemetry Node or REST) |
Basic sensor script | CSV or JSON |
Logs or status checks | JSON with class, message, retain |
File-based integrations | JSON, XML, or CSV |
API input | JSON body with counters or status objects |
NetCrunch supports multiple structured formats and interprets them intelligently to update counters, trigger alerts, and display statuses. For best results, use JSON and include full metadata such as status class and messages.
Telemetry enables systems to push metrics and logs into NetCrunch without polling. This topic explains when to use telemetry and how NetCrunch supports it via Telemetry Nodes and the OTLP cloud gateway.
Read how to send data to NetCrunch and create a custom monitor. You can easily turn any application or script into a NetCrunch agent.
csv, custom parsers, data formats, integration, json, netcrunch storage, status db, telemetry, trend db, xml
Read about node state evaluation and node monitoring settings.
A node represents a single network endpoint (not the whole device) and is the primary monitoring subject; everything else in monitoring depends on the node state.
NetCrunch defines the following node states:
The program determines node and service state by checking network service availability (PING) without tracking performance metrics.
In some cases, the node status can be determined within seconds.
Read more about Network Services Monitoring
The monitoring of each node can depend on its parent node (network switch or router). This causes an automatic disabling of node monitoring in case the parent link or device is DOWN.
Read more in Preventing False Alarms
In large networks with remote intermediate routers, NetCrunch organizes monitoring by priorities. By default, the nodes being closer to NetCrunch Server and intermediate routers are monitored before others.
Event Suppression is the technique of preventing false alarms caused by network intermediate connection failure.
When a node is active in certain hours or days, you might limit monitoring to a given time range and weekdays.
Availability and performance monitoring of 70 network services such as FTP, HTTP, SMTP, etc.
The monitoring of network services is the basic monitoring type in NetCrunch. A node state is determined basically by the availability of network services. When a node is in the DOWN state, it's only monitored by a single network service.
Service monitoring basically checks:
NetCrunch sends a request appropriate for a given service protocol and then checks whether the response matches the defined response. The process is repeated at least 3 times to measure response time, and then the average response time is calculated. For each request, you can set an appropriate time to wait for a response.
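The timing loop described above can be sketched as follows, with a plain TCP connect standing in for a full protocol request/response check (the function name is illustrative, not a NetCrunch API):

```python
import socket
import time

def avg_tcp_response_ms(host: str, port: int, attempts: int = 3,
                        timeout: float = 2.0) -> float:
    """Time a plain TCP connect to a service several times (3 by
    default, as described above) and return the average response
    time in milliseconds. A real service check would also send a
    protocol-specific request and validate the response body."""
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            total += time.perf_counter() - start
    return total / attempts * 1000.0
```

Averaging over several attempts smooths out one-off network jitter in the measured response time.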
Each monitored service provides the following performance metrics:
Leading Service is a network service designed to be checked as the only service when the Node is DOWN.
For some critical nodes, you might want to react in seconds instead of minutes. Then you can check for leading service monitoring Monitor in seconds interval.
The option lets the leading service be monitored more often than once per minute, whether the node is DOWN or alive. As an option, you can set NetCrunch to determine the node state solely upon the state of the leading service; otherwise, it checks other services immediately after the leading service fails.
NetCrunch monitors network services on their respective default ports. When you need to monitor a service on a different port, go to Settings → Resources → Network Services and choose New Service. Then you will be able to duplicate the service with a new name and a different port. For example, define HTTP_8080 for checking HTTP on port 8080.
In some cases, all you need is simple TCP port connection checking without sending any further data. To create a simple TCP port check, go to Settings → Resources → Network Services, click New Service, and select the desired option.
As a third option, you might decide to create a full request/response checking service definition. To create such a definition, go to Settings → Resources → Network Services, click New Service, and select the Create from Scratch option.
As the node status depends on the services, each time a new node is added to the atlas, NetCrunch automatically discovers services running on the node. By default, NetCrunch is configured to check only a subset of all defined services.
You can manage the list of services being automatically discovered in Settings Monitoring Auto Discovered Services
To get some monitoring services to work properly, an additional configuration task should be performed.
NetCrunch must be able to open UDP port 68 (DHCP client) to receive a response to the DHCP inform request. The monitoring will not work if NetCrunch is running on a machine where a DHCP Server is installed, as it uses the same port.
The IP address of the NetCrunch machine must be included in any checked DHCP Server scope (tested with the Windows DHCP Server). For example, if the machine where NetCrunch is running has the address 192.168.1.100, and the DHCP Server is on 192.168.88.10, then the server must have a scope with a range of 192.168.1.x addresses. A Linux-based DHCP Server must have the authoritative option enabled.
To monitor MSSQL Express, the TCP/IP protocol must be enabled, and the server instance must accept a remote connection on TCP port 1433. Please refer to the MSSQL Express documentation for information about enabling the TCP/IP and allowing the remote connection to the server instance.
NetCrunch allows monitoring of the SSH service using protocol version 1 and/or version 2. By default, the SSHv2 protocol is monitored on network nodes. Use the SSHv1 service when the node supports the older version of SSH.
ftphttpmonitoringnetworkservices
NetCrunch supports many flow protocols, as well as application monitoring using Cisco NBAR.
NetCrunch supports two technologies that allow network traffic monitoring. The first gathers information from switches about the traffic on particular ports, and the second can collect flow data from routers and switches.
Switches collect statistics about traffic on their ports, which are available in Physical Connections views. These views can give an overview of traffic between two switches or between the switch and servers.
You can see aggregated traffic information in the last hour or the last 24 hours. When you select a link on the map, you can open the Connection Status details and the traffic history.
There are many flow-export formats on the market today. Netflow is a Cisco trademark (aka cflow), but similar protocols include jFlow, rFlow, NetStream, AppFlow, and sFlow.
For the NetFlow suite of protocols, we often see version 5 (supported by most devices), some combined v5/v7 (the Catalysts), and version 9 on newer devices.
NetCrunch supports the following flow protocols:
NetCrunch collects and analyzes received flows for aggregation in the 15-minute and 1-hour ranges. This allows you to analyze data in a short period and store long-term performance trends.
Currently, NetCrunch supports a single flow aggregation: it can receive data from multiple flow sources, but they are aggregated together on a single dashboard.
Currently, NetCrunch flows can receive up to:
~ 3000 packets/sec
~ 35000 flows/sec (average 12 flows per packet)
NetCrunch is capable of receiving sFlow, but the protocol's statistical nature requires different processing and data aggregation over a much longer period.
Overview Flows
NetCrunch shows global flow traffic statistics on the top Atlas Dashboard. When you click on any chart bar, you can see traffic details for a selected element.
The view shows the current aggregated statistics from all flow sources sending data to NetCrunch.
NetCrunch allows you to analyze traffic using various criteria. The program allows you to create custom application definitions and supports Cisco NBAR technology for application monitoring.
The program also allows the creation of custom application definitions based on protocol and ports.
flowmonitoringnetnetflowswitchtraffic
NetCrunch provides complete SNMP monitoring, including support for v3, traps, and MIB compiler.
SNMP was developed on UNIX back in 1988 - so it's a pretty mature technology now. Despite various new protocols, it's probably the most widely implemented management protocol today. SNMP is defined in RFC (Request For Comments) by IETF (Internet Engineering Task Force) and is used everywhere: servers, workstations, routers, firewalls, switches, hubs, printers, IP phones, appliances...
Today, most devices support SNMPv2c. Agents are available for operating systems, but they can be a security loophole if they don't use SNMPv3. On the other hand, network virtualization helps separate traffic, so VLANs can isolate older SNMP devices.
High-end hardware devices usually support SNMPv3, which adds more security to the protocol (authentication and encryption). NetCrunch supports all SNMP versions: SNMPv1, SNMPv2c, and SNMPv3, including decoding traps. The NetCrunch SNMPv3 implementation supports the following encryption algorithms: DES, 3DES, AES 128, AES 192, and AES 256.
Please note that SNMPv3 requires more processing power (because of encryption) on both sides: the device and the SNMP manager application.
To avoid device overload, we recommend limiting the number of pending SNMP requests in SNMP Settings (the icon in the SNMP section) on any nodes that use SNMP.
NetCrunch uses SNMP profiles to manage communities and passwords for SNMP. The profiles allow you to use the same SNMP security settings for multiple nodes. SNMP Profiles encapsulate settings necessary to communicate with the particular SNMP Agent.
Profiles define the SNMP protocol version and the related security settings: the community string (SNMPv1/v2c), or the authentication user, password, and encryption to be used (SNMPv3).
Additionally, the profile allows specifying different protocols for reading and writing operations. For example, you can set up reading operations to use SNMPv2 and disable writing.
To receive and decode SNMPv3 traps, you need to define a separate SNMPv3 Traps & Trap Info profile.
Because SNMPv2 is much more efficient (it allows asking for multiple values at once), we recommend using SNMPv2 for monitoring.
To access SNMP data on a particular node, you must make it SNMP-enabled in Node Settings Monitoring SNMP. To create an alert on an SNMP counter value, you need to set up a Performance Trigger alert.
Select the node, open Node Settings Monitoring, and you'll see the SNMP section grayed out. Enable it; NetCrunch will use the default profile. Set the proper profile in the SNMP monitor settings.
Now you can click on the Custom monitoring pack, add New Event for SNMP Performance Counter, choose one of Event Triggers for Counters, and select the SNMP variable as the counter.
We have the following options for SNMP counters:
Besides numerical values, you can also monitor text values returned by the SNMP agent. To do this, you have to select <New Event for SNMP Variable Value> and enter OID or select the object from the MIB database. Then you can check the following conditions:
Settings Alerting & Notifications Monitoring Packs and Policies
Select SNMP only from the restrictions drop-down menu when creating a new monitoring pack.
SNMP only describes variables and get/set operations, so browsing the MIB OID tree is somewhat tricky. SNMP tables often refer to other tables, and raw data is not readable.
NetCrunch SNMP Views allow creating human-readable tables and forms for reading and entering SNMP data.
Additionally, views are automatically managed according to device type and supported MIB.
Check if the SNMP trap listener is enabled. Settings Monitoring SNMP Trap Receiver
Node Settings Monitoring or Settings Alerting & Notifications Monitoring Packs and Policies
NetCrunch can receive SNMPv1, SNMPv2c, and SNMPv3 traps, including encrypted ones. To turn a trap into an alert, you must define an alert on the node sending the SNMP trap message.
NetCrunch receives all traps and puts them in the External Events window. You can add the necessary node and trap with one click in this window, even if the node didn't exist in the atlas before.
To receive SNMPv3 traps, you need to define SNMPv3 Traps & Trap Info first. Otherwise, the program won't be able to decode trap data.
The profile also contains the 'Remote SNMP Engine Id' field, which you have to set according to the SNMPv3 trap specification ([see RFC2570](http://www.ietf.org/rfc/rfc2570.txt)).
Read more in Receiving SNMPv3 Notifications
Settings Monitoring SNMP Trap Receiver
After receiving an SNMP trap, NetCrunch can forward it "as is" to another SNMP manager.
NetCrunch MIB compiler allows you to extend the NetCrunch MIB database used to select SNMP traps and variables during configuration and resolve OID to names for incoming SNMP data (traps).
It's an advanced multi-pass compiler that can set up module name aliases to compile otherwise incompatible modules.
mib compilersnmpsnmp trapsnmp viewstrap
Manage custom SNMP Views for specific device types to make displaying and setting SNMP data much easier.
There are two types of views. The form view can contain separate SNMP variables like sysLocation or sysUpTime, and the grid view can represent an SNMP table, e.g., ifTable.
The Form view is used to display a set of SNMP variables.
name: Example
type: form
fields:
- caption: Description
  var:
    type: oid
    ref: 1.3.6.1.2.1.1.1.0
- caption: Location
  var:
    type: oid
    ref: 1.3.6.1.2.1.1.6.0
  writable: true
Add the writable: true attribute to make a field editable. To edit SNMP values, you need to have a read-write profile defined on the node.
NetCrunch can read the variable periodically every given number of seconds.
Add the autoRefresh: 15 attribute to refresh the value every 15 seconds.
You can use the concatenated field to display two SNMP values in one field.
fields:
- caption: Machine
  var:
    type: concat
    refs:
    - type: oid
      ref: 1.3.6.1.2.1.1.5.0
    - type: oid
      ref: 1.3.6.1.2.1.1.6.0
    concat: $1 in $2
The default format for the concat attribute is $1 $2, but you can add some text between the variables.
You can calculate the field value from two SNMP variables.
fields:
- caption: Total ICMP packets
  var:
    type: expr
    refs:
    - type: oid
      ref: 1.3.6.1.2.1.5.1.0
    - type: oid
      ref: 1.3.6.1.2.1.5.14.0
    expression: "+"
Possible expressions:
- + (sum)
- - (subtraction)
- * (multiplication)
- / (division)
- % (percent calculation)

The Table view is used to display a set of SNMP columns. All specified columns must belong to the same SNMP table. You can join a column from another SNMP table, but the indexing of the rows in both tables must be consistent, or you have to specify the lookup reference.
name: Example Table
type: grid
columns:
- caption: Description
  var:
    type: oid
    ref: 1.3.6.1.2.1.2.2.1.2
  display:
    width: 20
    sort: asc
    summary: count
- caption: In Octets
  var:
    type: oid
    ref: 1.3.6.1.2.1.2.2.1.10
    dataType: int
  display:
    width: 20
    summary: sum
  autoRefresh: 15
- caption: Out Octets
  var:
    type: oid
    ref: 1.3.6.1.2.1.2.2.1.16
    dataType: int
  display:
    width: 20
    summary: sum
  autoRefresh: 15
  format:
    type: convert
    format:
      from: B
- caption: ifType
  var:
    type: oid
    ref: 1.3.6.1.2.1.2.2.1.3
To group data by a column, add the groupBy attribute with the column caption:

groupBy: ifType
Optionally, you can set default column properties:

display:
  width: 20
  sort: asc
  summary: count
Add the writable: true attribute to make a column cell editable. A read-write profile must be defined on the node to edit SNMP values.
NetCrunch can refresh column data periodically every given number of seconds.
Add the autoRefresh: 15 attribute to refresh all column rows every 15 seconds.
Use the concatenated column to display row values from two SNMP columns in one column.
columns:
- caption: Machine
  var:
    type: concat
    refs:
    - type: oid
      ref: 1.3.6.1.2.1.1.5.0
    - type: oid
      ref: 1.3.6.1.2.1.1.6.0
    concat: $1 in $2

The default format for the concat attribute is $1 $2, but you can add some text between the variables.
You can create a column in which each row contains a value calculated from the corresponding rows of two other columns.
columns:
- caption: In + Out
  var:
    type: expr
    refs:
    - type: oid
      ref: 1.3.6.1.2.1.2.2.1.10
    - type: oid
      ref: 1.3.6.1.2.1.2.2.1.16
    expression: "+"
Possible expressions:
- + (sum)
- - (subtraction)
- * (multiplication)
- / (division)
- % (percent calculation)
In SNMP, the values in a given column often refer to data from another table.
For example, the ipNetToMediaIfIndex (1.3.6.1.2.1.4.22.1.1) column of the ipNetToMediaTable (1.3.6.1.2.1.4.22) contains the ifIndex of the interface on which the entry is effective. We need to display the name of the interface taken from ifDescr (1.3.6.1.2.1.2.2.1.2). To do this, we need to define the lookup reference:
- caption: Interface name
  var:
    type: lookup
    ref: 1.3.6.1.2.1.4.22.1.1
    lookup: 1.3.6.1.2.1.2.2.1.2
You can display columns from different SNMP tables if the indexing of the rows in both tables is consistent (that is, the second table is an extension of the first table).
Examples are the tables ifTable (1.3.6.1.2.1.2.2) and ifXTable (1.3.6.1.2.1.31.1.1).
In the example below, two columns will be displayed: ifDescr(1.3.6.1.2.1.2.2.1.2) from ifTable, and ifName(1.3.6.1.2.1.31.1.1.1.1) from ifXTable.
name: Join Example
type: grid
columns:
- caption: ifDescr
  var:
    type: oid
    ref: 1.3.6.1.2.1.2.2.1.2
- caption: ifName
  var:
    type: lookup
    ref: index
    lookup: 1.3.6.1.2.1.31.1.1.1.1
You can add a unit symbol to the value:
format:
  type: str
  format: $1 Unit
You can multiply the value if necessary:
var:
  type: oid
  ref: 1.3.6.1.2.1.2.2.1.16
  dataType: int
  scale: 0.001
  precision: 2
You can limit the number of decimal places by adding the precision attribute.
You can set up unit conversion automatically. For example, if the value is in bytes:
format:
  type: convert
  format:
    units: Digital
    from: B
The displayed value will automatically be changed to KB, MB, etc.
Possible input units:
- bytes (B, KB, MB, GB)
- bits (b, Kb, Mb, Gb)
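As an illustration of the conversion rule described above (not NetCrunch's actual code), the digital-unit scaling can be sketched in JavaScript; the 1024 step factor is an assumption:

```javascript
// Sketch of automatic digital-unit conversion: a value in bytes (from: B)
// is scaled up to KB, MB, GB for display. The 1024 step is an assumption.
function convertDigital(value, from = 'B') {
  const units = ['B', 'KB', 'MB', 'GB'];
  let ix = units.indexOf(from);
  while (ix < units.length - 1 && value >= 1024) {
    value /= 1024;
    ix += 1;
  }
  return `${value.toFixed(2)} ${units[ix]}`;
}

console.log(convertDigital(2621440)); // 2.50 MB
```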
Converting SNMP TimeTicks (hundredths of a second) to a human-readable time format:

- caption: Uptime
  var:
    type: oid
    ref: 1.3.6.1.2.1.1.3.0
    dataType: time10msec
  format:
    type: time
    format: elapsed
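The underlying arithmetic is simple: TimeTicks count hundredths of a second. A hedged JavaScript sketch of the elapsed-time conversion (the exact output format NetCrunch produces may differ):

```javascript
// Convert SNMP TimeTicks (1/100 s) into an elapsed-time string.
function timeTicksToElapsed(ticks) {
  let seconds = Math.floor(ticks / 100); // TimeTicks are hundredths of a second
  const days = Math.floor(seconds / 86400);
  seconds -= days * 86400;
  const hours = Math.floor(seconds / 3600);
  seconds -= hours * 3600;
  const minutes = Math.floor(seconds / 60);
  seconds -= minutes * 60;
  return `${days}d ${hours}h ${minutes}m ${seconds}s`;
}

console.log(timeTicksToElapsed(8640000)); // 1d 0h 0m 0s
```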
snmp views
NetCrunch supports Cisco IP SLA technology.
Node Settings Monitoring Add IP SLA Operation IP SLA Single Operation
This sensor allows monitoring of the status of IP SLA operations on the Cisco devices. Select the operation previously defined on a Cisco device when you add the sensor. Operations are grouped by protocol type.
Additionally, you can set performance triggers (thresholds) on the following metrics:
Node Settings Monitoring Add IP SLA Operation IP SLA Multi-Operation
The sensor can monitor all operations of a given type, or it can select operations by their parameters:
Unlike with the Single IP SLA Operation sensor, the operation configuration cannot be modified.
In the node status window, NetCrunch shows the status of all IP SLA operations defined on the Cisco device.
ciscoipslasnmp
DNS Query, System Uptime, RADIUS, SSL Certificate & SSH Remote Ping.
dns-query
Node Settings Monitoring Add Monitoring Sensor DNS Query

The DNS service is one of the most vital parts of every infrastructure. NetCrunch allows checking DNS query results and detecting unwanted changes.
The DNS sensor allows you to check if DNS responses are valid. This helps identify DNS problems or detect whether records have been altered.
This sensor allows you to send a query for a name or an address to a given DNS server (you should add it to the DNS server node). You can enter a name to check (a domain name), and you can define alerts to check if DNS records match expected results. Predefined sensor alerts include
Also, you can set an alert on Response time and Check time. You can query IPv4 and IPv6 addresses.
You can match a value of the following record types:
Each record can be tested for conformance with a given pattern.
system-uptime
Node Settings Monitoring Add Monitoring Sensor System Uptime

The sensor supports WMI, SSH/Bash, and SNMP.
The sensor monitors device uptime measured in seconds.
Predefined sensor alerts:
Also, you can set an alert on Response Time and Check Time.
radius
The sensor checks the user authentication process and validates the response from the RADIUS server. If you only need to check service responsiveness, you can use Network Services and select the RADIUS service from the list. See Network Services Monitoring.
Predefined sensor alerts:
Sensor measures:
The sensor allows setting an alert on whether the response equals or does not equal a given value.
ssl-certificate
The sensor can check any SSL/TLS connection (SSH and any other protocol working over TLS). It checks the SSL certificate expiration date and the certificate properties. The sensor allows checking all certificate fields.
Predefined sensor alerts:
The sensor shows warning status if the certificate is not authorized or the public key is too short. You can set an alert on the certificate field conformance and status object change for the certificate or public key.
Additionally, you can set an alert on metrics such as Response Time, Check Time, and the number of Days Until Certificate Expires.
ssh-remote-ping
The sensor checks connectivity from a remote system by running Ping remotely via SSH.
It allows setting thresholds on the following counters:
Predefined sensor alerts:
You can also add an alert when the Ping Response object status changes.
certificatednsradiusremote pingsecuritysshssluptime
Composite Status is an atlas node representing an aggregated state of the group of other status objects. The status depends on group type, which can be critical, redundant, or influential.
Although the above statement is perfectly valid and exactly describes how the Composite Status works, it doesn't answer why and when you might need it.
Because the names of object states are different, we need to clarify them: a node that is Down is treated as Critical, and a node that is up is treated as OK.
Let's start with an example.
As the user, I want to see the internal system's status, which depends on two internet connections, DNS, AD, and the web server. Internet connections are redundant, but all other elements are critical to the system.
In such a case, you might decide to create a Business Status Node and add each element to the appropriate group. As the node can be part of another Business Status, you can even build up a tree of statuses.
Besides placing such a status on the dashboard map and alerting on it, you can document the logical dependencies of process elements.
The overall status of the Composite Status is calculated from the element groups and is the highest status among the groups.
The order of statuses: Unknown < OK < Warning < Error.
If any element of a critical group is in the Warning state, the group state is Warning; if any element is in the Error state, the group state is Error.
If any element of a redundant group is in the Warning or Error state, then the composite status object is in the Warning state. Only if all elements are in the Error state is the composite status Error.
If any element of an influential group is in the Error state, then the resulting state is Warning. Even if all elements are in the Error state, the whole group state will be at most Warning.
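The aggregation rules can be sketched in JavaScript. This is an illustration only, not NetCrunch's actual implementation: statuses are ordered Unknown < OK < Warning < Error, a redundant group reaches Error only when all its elements fail, and an influential group never raises the status above Warning.

```javascript
// Illustrative composite-status aggregation sketch.
const ORDER = ['Unknown', 'OK', 'Warning', 'Error'];
const rank = s => ORDER.indexOf(s);
const highest = states => states.reduce((a, b) => (rank(b) > rank(a) ? b : a), 'Unknown');

function compositeStatus(groups) {
  // groups: [{ type: 'critical' | 'redundant' | 'influential', states: [...] }]
  const groupStates = groups.map(({ type, states }) => {
    let s = highest(states);
    // influential groups never raise the status above Warning
    if (type === 'influential' && s === 'Error') s = 'Warning';
    // redundant groups fail only when all elements fail
    if (type === 'redundant' && s === 'Error' && !states.every(x => x === 'Error')) s = 'Warning';
    return s;
  });
  // the composite status is the highest status among the groups
  return highest(groupStates);
}

console.log(compositeStatus([
  { type: 'critical', states: ['OK', 'OK'] },
  { type: 'redundant', states: ['Error', 'OK'] },
])); // Warning
```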
businessbusiness statuscomposite
The sensor allows getting data from various sources and processing them to obtain metrics and status values.
FTP/S, HTTP/S, SSH/Bash, SFTP, Windows/SMB, TFTP
The sensor loads data from a file or URL using one of the predefined file formats (including custom formats). It can alert on retrieved metrics and status values.
Available alerts:
NetCrunch can receive counters and status objects at once. It supports JSON and XML formats that allow passing both counters and statuses; additionally, it supports a rather simplistic CSV format for counter metrics.
This is a very straightforward format. Each counter must be a numeric value. Status objects can be represented by simple string values or by complex objects, including custom data. This data can be displayed later in the widgets and status window.
{
"counters": {
"crm/emails-in-1h": 10.2,
"crm/emails-out-1h": 231
},
"statuses": {
"AC" : "On",
"Power": "On"
}
}
Complex status objects may look like this:
{
"statuses" : {
"Disk C:" : {
"value" : "ok",
"message" : "Working fine",
"critical": true,
"data" : {
"type" : "SDD",
"upTimeSec" : 123431
}
}
}
}
The status object can contain fields such as:
Value - a status value that can be any string. When one of the standard values is used, NetCrunch will use the value for calculation in alert conditions. The field is required to recognize the status object; otherwise, the whole object will be treated as a text string. Standard values are: ok, error, warning, disabled, unknown.
Name - the name of the object; it does not have to be unique. (optional)
<nc>
<counters>
<counter path="xobj/cnt.1">123</counter>
<counter path="xobj/cnt.2">245</counter>
</counters>
<statuses>
<status name="AC" message="Everything fine">OK</status>
<status name="Fan">
<value>OK</value>
<message>OK</message>
<critical>true</critical>
<data>
<type>High speed</type>
</data>
</status>
</statuses>
</nc>
This is a simple format to pass a list of counters and their values.
object,counter,instance,value
Additionally, you can pass only two values where the first one will be treated as a counter path and the second one as its value.
counterpath, value
Fan Speed,123
SensorB,Temperature,Room,22
SensorC,Temperature,Outside,15
SensorD/Temperature.Rack 1,26
As you can see, you can mix formats inside a single document.
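As a sketch of how these row shapes can be normalized, the following JavaScript maps the four-column form onto the <Object>/<Counter>.<Instance> path convention; this mapping is our illustration, not a documented NetCrunch function:

```javascript
// Normalize one CSV row to a { path, value } counter record.
function rowToCounter(line) {
  const parts = line.split(',').map(s => s.trim());
  if (parts.length === 2) {
    // counterpath, value
    return { path: parts[0], value: Number(parts[1]) };
  }
  if (parts.length === 4) {
    // object, counter, instance, value
    const [object, counter, instance, value] = parts;
    return { path: `${object}/${counter}.${instance}`, value: Number(value) };
  }
  return null; // unsupported row shape
}

console.log(rowToCounter('Fan Speed,123'));
// { path: 'Fan Speed', value: 123 }
console.log(rowToCounter('SensorB,Temperature,Room,22'));
// { path: 'SensorB/Temperature.Room', value: 22 }
```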
You can make NetCrunch understand any format using Data Parsers, allowing parsing and analyzing external data. The result is always a list of counters and status objects.
See Data Parsers
customdataparser
NetCrunch delivers sophisticated printer monitoring capabilities by utilizing SNMP protocols, facilitating detailed tracking and management of printer performance.
The Printer Sensor in NetCrunch goes beyond simple supply-level monitoring. It tracks various printer statuses and can alert you to alarms and performance issues, ensuring a comprehensive oversight of printer health. It's most effective with printers that support Printer MIB v2, a standard that offers in-depth insights into printer status and performance metrics.
NetCrunch's Printer Sensor offers detailed and specific monitoring through several alert types:
The Printer Sensor also tracks the following metrics:
Set a Limit Threshold to receive an alert when the number of pages printed exceeds the threshold value within a specified period (e.g., day, week).

NetCrunch includes pre-configured alerts for common issues, enabling immediate responses:
Intelligent Platform Management Interface (IPMI) is a set of computer interface specifications for an autonomous computer subsystem that provides management and monitoring capabilities independently of the host system's CPU, firmware (BIOS or UEFI) and operating system.
IPMI supports two types of authentication: one-key and two-key. One-key authentication requires only the user password; the encryption key consists of all zeros (the default). Two-key authentication requires both the user password and a non-zero encryption key. NetCrunch sensors support both types of authentication. The encryption key can be set in the credential profile window.
basic-ipmi
IPMI v2.0/RMCP+
This sensor can monitor various hardware parameters such as system temperature, fan speed, power supply voltages, etc. The availability of these parameters depends on the hardware monitored and the IPMI configuration of the monitored device.
Common parameters:
Available parameters for the particular device will be displayed when selecting the counters while setting up the threshold. Some counter descriptions may include threshold values.
Enter your username and password for the IPMI.
contains the following metrics:
generic-ipmi
IPMI v2.0/RMCP+
This IPMI sensor can monitor all possible parameters received from the monitored device. A filter allows specifying the exact parameters to be monitored. Parameters can be filtered based on their type, entity (i.e., location), and name.
Common numeric parameters:
The sensor also reads non-numeric parameters containing status messages.
Credentials
Enter your username and password for the IPMI.
ibm-imm-ipmi
This IPMI sensor is configured to monitor all essential metrics from an IBM IMM device.
Monitored parameters:
contains the following metrics:
hp-ilo-ipmi
This IPMI sensor is configured to monitor all essential metrics from an HP iLO device.
Monitored parameters:
contains the following metrics:
dell-idrac-ipmi
The sensor is configured to monitor all essential metrics from a Dell iDRAC device.
Monitored parameters:
contains the following metrics:
ipmi-log
The IPMI Log sensor can retrieve logs from the System Event Log (SEL), a non-volatile repository for system events, and certain system configuration information. Alerts can be set on new log entries that match user-defined expressions. Logs can be filtered based on their IPMI sensor type, event message, and event transition direction (event_dir).
A log entry with the following fields:
This indicates that the event was generated because a processor has become present (inserted). The same log entry but with the transition direction field set to "Deassertion Event" would indicate that the event was generated because the processor had become not present (removed).
Enter your username and password specified for the Intelligent Platform Management Interface.
basic ipmidelldell idracgeneric ipmihpibmidraciloimmipmiipmi loglogsensor
The Dell EMC Sensor is designed to monitor the health status and power consumption of a Dell EMC storage device. This is achieved using the Unisphere Management REST API. With this sensor, users can monitor vital information related to file systems, Logical Unit Numbers (LUNs), and storage pools present within the Dell EMC system.
Connection Profile: This allows you to specify details about how to connect to the Dell EMC system.
Authentication Profile: Define the authentication credentials required to access the Unisphere Management REST API.
dellemc
The parsing expression's task is to extract named values (key-value pairs) from the given text. These fields allow later message filtering and can be treated as counters or status objects. Each expression requires declaring the list of variables that it will return.
This is probably the simplest format to parse. Variable names describe the columns, and the default separator is the comma.
This format is useful when data is encoded as a series of key-value pairs. For example:
from:John;val:10
In this case, the Pair Separator would be ; and the Value Separator :
Another example is INI encoding:

speed = 10
acceleration = 0
In this case, the Pair Separator would be a newline (\n) and the Value Separator =
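A minimal JavaScript sketch of this kind of parsing, with configurable Pair and Value Separators (illustrative only, not NetCrunch's parser):

```javascript
// Split text into key-value pairs using the given separators.
function parsePairs(text, pairSep, valueSep) {
  const vars = {};
  for (const pair of text.split(pairSep)) {
    const ix = pair.indexOf(valueSep);
    if (ix < 0) continue; // skip malformed pairs
    vars[pair.slice(0, ix).trim()] = pair.slice(ix + 1).trim();
  }
  return vars;
}

console.log(parsePairs('from:John;val:10', ';', ':'));
// { from: 'John', val: '10' }
console.log(parsePairs('speed = 10\nacceleration = 0', '\n', '='));
// { speed: '10', acceleration: '0' }
```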
Regular expressions are powerful and very useful in pattern matching. NetCrunch uses the JavaScript implementation of regular expressions and matches variables by the order of the capturing groups.
We created a regexp pattern for a popular Log4J format.
(\d{4}-\d{2}-\d{2}) (\d{2}:\d{2}:\d{2},\d{3}) \[(.*?)\] ([^ ]*) +([^ ]*) - (.*)$
We assigned to the following variables (fields):
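To see how the groups line up, here is the pattern applied in plain JavaScript; the field names used here (date, time, thread, level, logger, message) are illustrative, matched by group order:

```javascript
// Log4J-style pattern with six capturing groups.
const pattern = /(\d{4}-\d{2}-\d{2}) (\d{2}:\d{2}:\d{2},\d{3}) \[(.*?)\] ([^ ]*) +([^ ]*) - (.*)$/;

const line = '2023-05-01 12:00:01,123 [main] INFO com.example.App - started';
const m = line.match(pattern);
if (m) {
  // groups map to fields in order
  const [, date, time, thread, level, logger, message] = m;
  console.log(date, level, message); // 2023-05-01 INFO started
}
```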
This allows decoding Apache logs using the Common Log Format. All you need to do is provide the format string used to configure the given log.
Refer to Apache documentation for more.
NetCrunch supports XPath for retrieving data from XML. Only paths returning single values are supported.

<doc>
  <amount>10.2</amount>
  <user name="John" id="123"/>
</doc>
We can extract all variables using the following paths:
/doc/amount
//user/@name
//user/@id
You can select the element's attribute by selecting the node element using another attribute. In our example, we could get the user's name by selecting the element by its id.
//doc/user[@id='123']/@name
The XPath expression also supports DOM selectors (aka CSS selectors), which you can use to select node values. However, if you need an attribute value, you cannot specify it using a standard CSS selector. Therefore, we extended the syntax by adding the '|' pipe character, after which you can add the attribute name.
So, the following path will produce the same result as the XPaths above.
doc > amount
user|name
user|id
NetCrunch supports JSONPath expressions for retrieving data from JSON. Only paths returning single values are supported.

{
  "doc": {
    "amount": 22,
    "user": {
      "name": "John",
      "id": 100
    }
  }
}
We can extract all variables using the following paths:
$.doc.amount
$..user.name
$..user.id
JavaScript is a natural solution for parsing data: you can use regular expressions and the full power of the JavaScript engine (V8), which implements the latest language standard.
Your script executes in strict mode in a separate and sandboxed NodeJS engine and is killed if it exceeds the time limit.
Script input is in the text variable.
As a result of the script, NetCrunch expects the result variable to be an object with properties matching the declared variables (fields).
Let's imagine that we are receiving an email containing JSON. It will be quite simple to use Javascript to extract the data we need.
{ "probe_1" : { "temp" : 10 } }
// Parse input data from the text variable
const data = JSON.parse(text);

// set result or return result object
result = { temperature: data.probe_1.temp }
The parser allows you to write custom scripts in Python (v. 3.7.0) to parse text data. In addition to the regular runtime (you can't load any external modules), we provide the most useful utilities for parsing JSON and XML data formats (json, ElementTree).
Script input is in the text variable.
As the script's result, NetCrunch expects the result variable to be a Python dictionary object with properties matching declared variables (fields).
{ "probe_1" : { "temp" : 10 } }
# Parse data from input
data = json.loads(text)

# set result properties
result['temperature'] = data['probe_1']['temp']
domemailexpressionjsonpathlogparsingpythonselectortexttext logxpath
Data Parsers transform and analyze external data, returning NetCrunch counter metrics and status objects.
Data parsers allow the transformation of external data into NetCrunch JSON format, describing counters and status objects. Several sensors can use Data Parser to retrieve these data from external sources.
You can define a list of counters, status objects, and their respective paths. You can use either XPath or CSS selectors, depending on your preference. While CSS selectors are typically used to select DOM nodes, we have extended the syntax to include attribute names by adding a '|' pipe character after the selector.
<doc>
  <amount>10.2</amount>
  <user name="John" age="123"/>
</doc>
//doc/amount or DOM selector doc > amount
//doc/user[@name='John']/@age or extended DOM selector doc > user[name='John'] | age
JSONPath is similar to XPath and offers comparable capabilities. For simple paths, you can use simple dotted paths.
{
"main" : {
"temperature" : 10.2,
"humidity" : 66
}
}
This is a simple example but very similar to the actual data provided by one of the public weather APIs.
main.temperature
$..humidity
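Dotted-path semantics can be sketched in a few lines of JavaScript, where each segment descends one level into the object (an illustration of the concept, not the NetCrunch implementation):

```javascript
// Resolve a dotted path like 'main.temperature' against an object.
const get = (obj, path) => path.split('.').reduce((o, k) => (o == null ? o : o[k]), obj);

const data = { main: { temperature: 10.2, humidity: 66 } };
console.log(get(data, 'main.temperature')); // 10.2
```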
JavaScript is a good solution for parsing data, as you can use regular expressions and the whole power of the JavaScript engine (V8), which implements the latest ECMAScript language standard.
Your script executes in strict mode in a separate and sandboxed NodeJS engine and is killed if it exceeds the time limit. In addition to regular runtime (you can't load any external modules), we provided the most desirable utilities for parsing JSON, XML, and HTML data formats.
Data Parser editor is a small development environment that allows you to test a script by running it in an actual NetCrunch scripting engine. You can provide test data and use console objects to log messages to the test console.
Your script receives two parameters:
The counter metric represents numerical values observed by NetCrunch. This is one of the fundamental building blocks of NetCrunch. The counter path describes:
<Object>/<Counter>.<Instance>
Process/% Processor Utilization.NCServer.exe
WebSite/Current Visitors
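A hypothetical helper illustrating how such a path splits into its parts; the rule used here (the first '/' ends the object name, and the first '.' after it starts the instance) is our assumption based on the examples above:

```javascript
// Split '<Object>/<Counter>.<Instance>' into its components.
function parseCounterPath(path) {
  const slash = path.indexOf('/');
  const object = path.slice(0, slash);
  const rest = path.slice(slash + 1);
  const dot = rest.indexOf('.');
  if (dot < 0) return { object, counter: rest }; // no instance part
  return { object, counter: rest.slice(0, dot), instance: rest.slice(dot + 1) };
}

console.log(parseCounterPath('Process/% Processor Utilization.NCServer.exe'));
// { object: 'Process', counter: '% Processor Utilization', instance: 'NCServer.exe' }
console.log(parseCounterPath('WebSite/Current Visitors'));
// { object: 'WebSite', counter: 'Current Visitors' }
```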
The Status Object describes the state of the monitored object. It can be a simple string or more complex data. Unlike counters, there are no special requirements for the status object name.
"Disk C:" : {
"value" : "ok",
"message" : "Working fine",
"retain" : 5,
"critical": true,
"data" : {
"type" : "SDD",
"upTimeSec" : 123431
}
}
The status object can contain fields such as:
Value - a status value that can be any string, but if one of the standard values is used, NetCrunch will use it for calculating alert conditions. The field is required to recognize the status object; otherwise, the whole object will be treated as a text string. Standard values are: ok, error, warning, disabled, unknown.
Name - a name for the object, it does not have to be unique. (optional)
All result functions can be chained.
result.counter('Value', 10).counter('Value 2', 20);
result.counter(path,value)
Add counter value to the sensor result.
result.counter('% Utilization', 23.5);
result.counter(obj)
You can add multiple counters as a single object.
result.counter( {
"Temperature" : 33,
"Humidity" : 65
});
result.status(name,value[,message])
Set the status of the object:
result.status('Door 1', 'closed' ).status('Door 2', 'opened');
result.status(obj)
Set complex status object.
result.status('Light 1', {
  value : 'ok',
  data : {
    color: 'red',
    switchedOn : "2021-03-11T22:20:59.903Z"
  }
});
result.error(message)
You can set the '@parser' status object, which will change the sensor status because it is marked as critical. This way, if the data is not what you expect, you can pass the error to the sensor.
result.error('Missing section')
result.warning(message)
Use it to set the sensor status to 'warning'.
We assume you are familiar with ES6. You can use the older syntax as JavaScript is backward compatible.
CSV is one of the simplest formats for data encoding: just a series of values without names, separated by commas.
10.2, 30, 100
data.split(',').forEach((value, ix) => result.counter(`Value ${ix}`, parseFloat(value)));
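To experiment with a script like the one above outside the Data Parser editor, a minimal stand-in for the `result` object can be sketched. Note this is an assumption for local testing only; the real object is provided by the NetCrunch scripting engine, and this sketch only mimics the chainable counter() call.

```javascript
// Minimal stand-in for the NetCrunch `result` object, for local testing.
// The real object is provided by the NetCrunch scripting engine.
const result = {
  counters: {},
  counter(path, value) {
    this.counters[path] = Number(value); // coerce CSV strings to numbers
    return this;                         // keep the API chainable
  }
};

const data = '10.2, 30, 100';
data.split(',').forEach((value, ix) => result.counter(`Value ${ix}`, value.trim()));

console.log(result.counters); // e.g. { 'Value 0': 10.2, 'Value 1': 30, 'Value 2': 100 }
```

Running this in Node.js produces one counter per CSV value, which is what the real engine would receive.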
JavaScript does not natively offer an XML parser, so we added one. You can use XPath and CSS selectors to select elements or attributes.
NetCrunch includes a DOM parser. You can use it at a low level (please refer to xmldom.js on the Internet) by using the DOMParser constructor or by using a convenient DOM class we provided.
doc.nodes(xpath)
returns all elements matching the given XPath.
<doc>
<value name="Temp">10.2</value>
<value name="Fan Speed">100.2</value>
</doc>
Script
const doc = new DOM(data);
doc
.nodes('//value')
.map(e => [e.getAttribute('name'), e.firstChild.data])
.forEach(([name,value]) => result.counter(name, value));
The script above will return all values with proper names as counters.
doc.selectByXPath(path)
returns values of elements matching the path.
doc.valueByXPath(path)
returns a single (first) value regardless of the number of matching elements.
valueByCSS(path)
converts the CSS selector into XPath and calls valueByXPath to retrieve a single value. To select an attribute value, add the attribute name after the pipe | character.
For example:
val[name='Temperature'] | value
Allows you to select the value from the following element:
<val name="Temperature" value="23.5"></val>
cssToXPath(selector)
This function converts a CSS selector to an XPath. To select an attribute value, you can append the attribute name after the pipe separator.
For example:
val[device='Fan'] | rpm
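The idea behind such a conversion can be illustrated with a simplified sketch. This covers only the `tag[attr='value'] | attr` form shown above; the name `cssToXPathSketch` is hypothetical, and NetCrunch's real cssToXPath is more general.

```javascript
// Simplified sketch of CSS-to-XPath conversion for selectors of the
// form tag[attr='value'], optionally followed by "| attr" to pick an
// attribute value. Illustration only; not NetCrunch's implementation.
function cssToXPathSketch(selector) {
  const [css, attrOut] = selector.split('|').map(s => s.trim());
  const m = css.match(/^(\w+)\[(\w+)='([^']*)'\]$/);
  if (!m) throw new Error('unsupported selector: ' + selector);
  const [, tag, attr, value] = m;
  let xpath = `//${tag}[@${attr}='${value}']`;
  if (attrOut) xpath += `/@${attrOut}`;  // the pipe selects an attribute value
  return xpath;
}

console.log(cssToXPathSketch("val[device='Fan'] | rpm"));
// //val[@device='Fan']/@rpm
```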
The DOM parser can also be used to parse HTML documents, even if they do not conform to XML.
<doc>
<value1>13.10</value1>
<value2>345</value2>
</doc>
const doc = new DOM(data);
result
.counter('Values/Value 1', doc.valueByXPath('//value1'))
.counter('Values/Value 2', doc.valueByXPath('//value2'));
You can also use familiar CSS selectors using doc.selectByCSS
doc.selectByCSS('value1')
JavaScript has a built-in parser for JSON data and converts it to native JavaScript objects.
Your script can use two methods to access the attributes of objects.
selectProperty(obj,path)
Select a property by a dotted path.
selectByJSONPath(obj,path)
Select the property element by JSONPath.
Let's try to parse data in a way that is similar to weather API services.
{
"main" : {
"Temperature" : 22,
"Humidity" : 66
}
}
const doc = JSON.parse(data);
result
.counter('Temperature', selectProperty(doc, 'main.Temperature'))
.counter('Humidity', selectByJSONPath(doc, '$..Humidity'));
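To try the dotted-path lookup outside the scripting engine, `selectProperty` can be approximated with a short local stand-in (the real helper is built into NetCrunch; this sketch assumes only the dotted-path behavior described above):

```javascript
// Local stand-in for NetCrunch's selectProperty helper: walk an
// object along a dotted path, returning undefined for missing keys.
function selectProperty(obj, path) {
  return path.split('.').reduce(
    (cur, key) => (cur == null ? undefined : cur[key]),
    obj
  );
}

const data = '{ "main": { "Temperature": 22, "Humidity": 66 } }';
const doc = JSON.parse(data);
console.log(selectProperty(doc, 'main.Temperature')); // 22
console.log(selectProperty(doc, 'main.Missing'));     // undefined
```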
The parser allows you to write custom scripts in the Python language (v. 3.7.0) to parse data from your sensors. The runtime is restricted (you can't load any external modules), but we provide the most commonly needed utilities for parsing the JSON and XML data formats (JSON, ElementTree).
Data Parser editor is a small development environment that allows you to test a script by running it in an actual NetCrunch scripting engine. You can provide test data and use console objects to log messages to the test console.
Your script receives two parameters:
data - contains the input data. It can be text or a buffer object, depending on the data type you expect from the source.
result - a script result object that simplifies providing a list of counters and status objects.
The counter metric represents numerical values observed by NetCrunch. This is one of the fundamental building blocks of NetCrunch. The counter path describes:
Object - if left blank, the sensor identification will be used as the object. It describes the measured subject; in programming terms, it is the class of the measured subject, for example Process or Disk.
Counter - the name of the metric. For example, Temperature or % Utilization.
Instance - identifies an instance of the object. It's optional; it can be a disk id, process name, etc.
<Object>/<Counter>.<Instance>
Process/% Processor Utilization.NCServer.exe
WebSite/Current Visitors
The status object describes the state of the monitored object. It can be simply a string, or it can be more complex data. Unlike for counters, there are no special requirements for the status object name.
"Disk C:" : {
    "value" : "ok",
    "message" : "Working fine",
    "retain" : 5,
    "critical": True,
    "data" : {
        "type" : "SSD",
        "upTimeSec" : 123431
    }
}
The status object can contain fields such as:
Value - a status value that can be any string, but if one of the standard values is used, NetCrunch will be able to use the value when calculating alert conditions. The field is required for the object to be recognized as a status object; otherwise, the whole object will be treated as a text string. Standard values are: ok, error, warning, disabled, unknown.
Name - a name for the object; it does not have to be unique. (optional)
Message - this can be a message describing the state (for example, error message). (optional)
Received - a time when the status has been read. (optional)
Retain - how long the status remains valid if no new status is received. After this time, the status becomes unknown.
Class - status class. If specified, status history will be saved to the database. The class helps the UI understand the data associated with the status and display it appropriately.
Data - custom data. It can be any dictionary object; however, if the class points to one of the well-known classes, the data must conform to that class's data format.
All result functions can be chained.
result.counter('Value', 10).counter('Value 2', 20)
result.counter(path,value)
Add a counter value to the sensor result.
result.counter('% Utilization', 23.5)
result.counter(obj)
You can add multiple counters as a single object.
result.counter( { "Temperature" : 33, "Humidity" : 65 })
result.status(name,value[,message])
Set the status of the object:
result.status('Door 1', 'closed' ).status('Door 2', 'opened')
result.status(obj)
Set a complex status object:
result.status('Light 1', { "value" : 'ok', "data" : { "color": 'red', "switchedOn" : "2021-03-11T22:20:59.903Z" } })
You can set the '@parser' status object, which will change the sensor status because this status is marked as critical. This way, if the data is not what you expect, you can pass the error to the sensor.
result.error(message)
result.error('Missing section')
Test Data
10, 15, 4.5
Script
for idx, val in enumerate(data.split(', ')):
    result.counter('Value ' + str(idx), float(val))
Scripts that use 'import', 'exec', 'eval', or 'compile' are not allowed.
This sensor can retrieve data from various sources such as:
You can run scripts on the NetCrunch Server machine, or remotely using a shell on the target machine via SSH.
Read more...
The Data Sensor allows receiving data from an external source (agent). Data can be sent as JSON (the native NetCrunch format) or in any other format that can be transformed using the NetCrunch Data Parser.
Read more...
It can send an HTTP request, including custom requests with URL query parameters. It also allows setting custom headers and cookies for the request. In combination with Data Parser, it can get any data from an external source.
Read more...
The sensor sends a series of ICMP packets and calculates Jitter based on RTT.
Jitter is a typical problem of connectionless, packet-switched networks. Each packet can be transmitted over a different path and arrive at a different time from the emitter to the receiver.
Jitter means interpacket delay variance. When multiple packets are sent consecutively from a source to a destination, for example, 100ms apart, and if the network is behaving ideally, the destination should receive the packets 100ms apart. But if there are delays in the network (like queuing, arriving through alternate routes, and so on), the arrival delay between packets might be greater than or less than 100ms.
If the difference is positive, it is counted in positive jitter. A negative value is counted in negative jitter.
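The split into positive and negative jitter can be sketched as follows. This is a simplified model assuming jitter is the difference between consecutive RTT samples; the sensor's actual calculation may differ.

```javascript
// Sketch: accumulate positive and negative jitter from consecutive
// RTT samples (in ms). Assumes jitter = difference between adjacent
// RTT values, a simplification of the sensor's calculation.
function jitterFromRtt(rtts) {
  let positive = 0, negative = 0;
  for (let i = 1; i < rtts.length; i++) {
    const diff = rtts[i] - rtts[i - 1];
    if (diff > 0) positive += diff;  // packet arrived later than the previous one
    else negative += -diff;          // packet arrived earlier
  }
  return { positive, negative };
}

console.log(jitterFromRtt([100, 103, 99, 99, 105]));
// { positive: 9, negative: 4 }
```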
The sensor can alert when the number of hops to the destination has changed.
The sensor executes traceroute to host using ICMP Echo requests.
Traceroute - will be in the Warning state if the trace can't be completed (the target address is unreachable or the maximum number of hops has been reached).
NetCrunch provides detailed monitoring of network interfaces for SNMP-enabled devices, combining flexible policy-based selection, full duplex-aware metrics, real-time delta calculations for errors and discards, VLAN and MAC mapping, CDP/LLDP-based topology view, and a best-in-class interface visualization and alerting system.
NetCrunch monitors network interfaces using a policy-driven approach that enables precise, automatic selection of interfaces to monitor. Monitoring is based on SNMP, allowing support for a wide range of network devices including switches, routers, firewalls, servers, and other equipment.
Each monitored interface provides detailed traffic, status, error, and topology-aware data, supporting advanced visualization and alerting capabilities.
When SNMP monitoring is enabled on a device, NetCrunch automatically adds an Interface Sensor to the node. This sensor uses a Monitoring Policy to determine which interfaces are included. Not all interfaces are monitored — typically, only those matching the configured policy (such as active, Ethernet, or IP interfaces) are selected.
Monitoring includes:
Interface selection is driven by Monitoring Policies. Policies use filter expressions to automatically include or exclude interfaces based on criteria like type, status, or description.
Common built-in policies:
Policies can be selected during node configuration at Node Settings > Monitoring > Interfaces > Edit Sensor Settings.
NetCrunch prefers 64-bit counters when available (device must support SNMPv2c or SNMPv3). 64-bit counters prevent overflow issues on high-speed links (such as 1 Gbps, 10 Gbps, or faster).
When 64-bit counters are unavailable, NetCrunch falls back to 32-bit counters but provides warnings if overflow risk is detected.
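The overflow problem can be illustrated with a sketch of wraparound-aware rate calculation. This is a common convention, not necessarily NetCrunch's exact logic, and it is valid only if polling is frequent enough that a counter cannot wrap twice between samples.

```javascript
// Sketch: rate calculation with 32-bit counter wraparound handling.
// If the current raw value is smaller than the previous one, assume
// a single wrap at 2^32.
const WRAP32 = 2 ** 32;

function ratePerSec(prev, curr, intervalSec) {
  const delta = curr >= prev ? curr - prev : curr + WRAP32 - prev;
  return delta / intervalSec;
}

// Normal case: 60 bytes over 60 s -> 1 byte/sec.
console.log(ratePerSec(100, 160, 60));
// Wrapped case: the counter passed 2^32 between samples; the delta
// is still computed correctly (17296 bytes over 60 s here).
console.log(ratePerSec(4294960000, 10000, 60));
```

At full load, a 32-bit byte counter on a 1 Gbps link wraps in roughly half a minute, which is why 64-bit counters are strongly preferred on fast links.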
Correctly identifying interfaces across device reboots is crucial. SNMP indexes (interface IDs) can change, but names like ifAlias or ifDescr usually remain stable.
You can customize the identification template using tokens:
$ifAlias
$ifDescr
$ifName
$ifIndex
Example template:
$ifAlias|$ifDescr|$ifName
This instructs NetCrunch to use ifAlias first, then ifDescr, then ifName as fallback.
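The fallback behavior can be sketched as below. The function name and the assumption that an empty value means "try the next token" are illustrative; NetCrunch's template engine may support more syntax.

```javascript
// Sketch: resolve an identification template like
// "$ifAlias|$ifDescr|$ifName" by taking the first token whose field
// has a non-empty value on the interface record.
function resolveIdentity(template, iface) {
  for (const token of template.split('|')) {
    const field = token.trim().replace(/^\$/, '');
    if (iface[field]) return iface[field];  // first non-empty value wins
  }
  return String(iface.ifIndex);             // last resort: the SNMP index
}

const iface = { ifAlias: '', ifDescr: 'GigabitEthernet0/1', ifName: 'Gi0/1', ifIndex: 3 };
console.log(resolveIdentity('$ifAlias|$ifDescr|$ifName', iface));
// GigabitEthernet0/1 (ifAlias is empty, so ifDescr is used)
```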
Template configuration is available at Node Settings > Monitoring > Interfaces > Identification Settings.
NetCrunch monitors a wide range of interface metrics. Deltas are calculated automatically to provide rate-per-second values.
Counter | Description |
---|---|
Admin Status | Configured status (enabled or disabled by administrator). |
Operational Status | Actual operational state (up, down, testing). |
Last Change | Timestamp when interface state last changed. |
Speed | Current or overridden speed of the interface. |
Bytes Received | Total bytes received on the interface. |
Bytes Received/Sec | Rate of bytes received per second (delta). |
Bytes Sent | Total bytes sent from the interface. |
Bytes Sent/Sec | Rate of bytes sent per second (delta). |
Errors | Total packet errors detected. |
Errors/Sec | Rate of packet errors per second (delta). |
Discards | Total number of packets discarded. |
Discards/Sec | Rate of packet discards per second (delta). |
Output Queue Length | Number of packets currently queued for transmission. |
% FD Bandwidth Utilization | Full-Duplex bandwidth usage as a percentage of interface speed. |
% HD Bandwidth Utilization | Half-Duplex bandwidth usage as a percentage of interface speed. |
Unknown Protocols | Total number of packets received with unknown protocols. |
Unknown Protocols/Sec | Rate of unknown protocol packets per second (delta). |
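The full-duplex vs. half-duplex utilization counters in the table can be illustrated with the common convention: full duplex measures the busier direction against the link speed, while half duplex sums both directions. This is an assumption for illustration; the document does not specify NetCrunch's exact formula.

```javascript
// Sketch of duplex-aware bandwidth utilization (illustrative only).
// Full duplex: each direction has the full link speed, so take the
// busier one. Half duplex: both directions share the medium, so sum.
function utilization(bytesInPerSec, bytesOutPerSec, speedBps, fullDuplex) {
  const inBits = bytesInPerSec * 8;
  const outBits = bytesOutPerSec * 8;
  const used = fullDuplex ? Math.max(inBits, outBits) : inBits + outBits;
  return (used / speedBps) * 100;
}

// 100 Mbps link, 5 MB/s in and 2 MB/s out:
console.log(utilization(5e6, 2e6, 100e6, true));  // % FD utilization
console.log(utilization(5e6, 2e6, 100e6, false)); // % HD utilization
```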
In real-world scenarios, the interface speed reported by SNMP might not match the actual usable link speed because:
To ensure accurate bandwidth utilization calculations, NetCrunch allows manual overriding of interface speed.
Configuration path:
Settings > Monitoring > Interface Monitoring Settings
You can manually set:
NetCrunch tracks delta values for errors and discards:
NetCrunch displays VLAN membership and connected devices directly within the interface view.
All this data is visible without external tools or config parsing.
NetCrunch uses CDP and LLDP protocols to map physical Layer 2 links, including:
Interface links are drawn as network segments, and clicking them reveals:
Interfaces can be grouped for better navigation and analysis.
An upcoming release adds full grouping by VLAN across the Atlas — improving visibility into broadcast domains and tagged links.
NetCrunch includes alerting options for:
Threshold types include:
Alerts are policy-driven and can be reused across multiple nodes and groups.
NetCrunch doesn’t just read interface counters — it understands network topology and presents actionable, integrated insights.
Telemetry enables systems to push metrics and logs into NetCrunch without polling. This topic explains when to use telemetry and how NetCrunch supports it via Telemetry Nodes and the OTLP cloud gateway.
Telemetry is the process of automatically collecting measurements from remote systems and sending them to a central monitoring platform. Unlike traditional polling (e.g., SNMP, WMI), telemetry:
Telemetry is the right choice when:
NetCrunch implements telemetry collection using two key mechanisms:
A virtual node type that is used to represent any data endpoint that sends data via REST.
NetCrunch offers native support for OTLP (OpenTelemetry Protocol) for receiving telemetry from OpenTelemetry-compatible agents or exporters.
Logs:
https://otlp.netcrunch.io/v1/[serverId]@[sensorId]@[nodeId]/logs
Metrics:
https://otlp.netcrunch.io/v1/[serverId]@[sensorId]@[nodeId]/metrics
Traces are not currently supported.
Event for Received Telemetry Event
OTLP Field | Mapped to NetCrunch Parameter |
---|---|
log body | message |
service name | description (if available) |
attributes | Additional parameters |
OTLP metrics are converted to NetCrunch counters using a standardized path model:
OTLP Metric Element | NetCrunch Counter Path Segment |
---|---|
Service or scope name | Object |
Metric name | Counter |
Attributes | Instance (joined key=value CSV) |
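The mapping in the table above can be sketched as a small function. The exact escaping and ordering rules for attribute keys are assumptions; only the path model (Object/Counter.Instance with a key=value CSV instance) comes from the table.

```javascript
// Sketch: map an OTLP scope name, metric name, and attribute set to a
// NetCrunch counter path <Object>/<Counter>.<Instance>, where the
// instance is a key=value CSV built from the attributes.
function otlpToCounterPath(scopeName, metricName, attributes) {
  const instance = Object.entries(attributes || {})
    .map(([k, v]) => `${k}=${v}`)
    .join(',');
  return instance
    ? `${scopeName}/${metricName}.${instance}`
    : `${scopeName}/${metricName}`;
}

console.log(otlpToCounterPath('payment-service', 'http.requests', { method: 'GET', code: 200 }));
// payment-service/http.requests.method=GET,code=200
```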
Histograms are not yet implemented; in the future they will be supported and stored in a dedicated high-volume time-series database.
Telemetry in NetCrunch enables modern, scalable monitoring for dynamic environments. Whether through the lightweight Telemetry Node or OTLP integration, you can now collect rich logs and metrics from virtually any platform, without needing traditional polling.
A Telemetry Node is a special NetCrunch node type designed to receive metrics, statuses, and events from external systems using REST or OTLP. It acts as the anchor for telemetry data and replaces the older REST Receiver node type with a unified, more capable implementation.
A Telemetry Node is a NetCrunch node type for receiving metrics, statuses, and events from external systems via REST or OTLP. It anchors telemetry data for cloud, IoT, or custom systems, and replaces the older REST Receiver with a unified, event-capable design.
In NetCrunch, every monitoring object must be tied to a node. The Telemetry Node is purpose-built for ingesting external metrics, statuses, and events—ideal for workloads and devices that cannot be polled or exist outside the direct network (e.g., cloud, scripts, IoT, embedded).
Common uses:
When you create a Telemetry Node, NetCrunch assigns it a nodeId and creates the default Telemetry Sensor (sensorId). The sensor is always created and cannot be removed, only disabled. All telemetry data uses the sensorId as the main identifier (not just the nodeId).
Protocol | Endpoint Type | Data Format |
---|---|---|
REST (JSON) | Local/Cloud REST | NetCrunch JSON |
OTLP | OTLP Gateway | OpenTelemetry (HTTP/gRPC) |
All data is routed to the Telemetry Node Sensor (nodeId + sensorId).
https://<nc-server>/api/rest/1/sensors/<sensorId>@<nodeId>/update
https://gw.netcrunch.io/tm/v1/<serverId>@<sensorId>@<nodeId>/update
{
  "counters": { "system/cpu.load": 0.75 },
  "statuses": {
    "uptime": {
      "value": "ok",
      "data": { "statusCode": 1 },
      "message": "System running for 3 days"
    }
  }
}
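A client can assemble the cloud REST update request like this. The serverId/sensorId/nodeId values are placeholders, and the request-builder function is a sketch following the endpoint scheme shown above, not an official client library.

```javascript
// Sketch: build the cloud REST update request for a Telemetry Node.
// The URL scheme follows the gateway endpoint shown above; the IDs
// here are placeholder examples.
function buildTelemetryUpdate(serverId, sensorId, nodeId, counters, statuses) {
  return {
    url: `https://gw.netcrunch.io/tm/v1/${serverId}@${sensorId}@${nodeId}/update`,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ counters, statuses })
  };
}

const req = buildTelemetryUpdate('SRV-1', 'sensor42', 'node91',
  { 'system/cpu.load': 0.75 },
  { uptime: { value: 'ok', message: 'System running for 3 days' } });

console.log(req.url);
// https://gw.netcrunch.io/tm/v1/SRV-1@sensor42@node91/update
```

The returned object can be passed to any HTTP client (e.g. fetch) to perform the actual POST.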
https://otlp.netcrunch.io/v1/<serverId>@<sensorId>@<nodeId>/metrics
https://otlp.netcrunch.io/v1/<serverId>@<sensorId>@<nodeId>/logs
Aspect | REST Endpoint | OTLP Gateway |
---|---|---|
Format | NetCrunch JSON | OTLP Metrics/Logs |
Protocol | HTTPS (JSON) | HTTP/HTTPS (OTLP JSON/Binary) |
URL | gw.netcrunch.io | otlp.netcrunch.io |
Processing | Direct ingestion | Translated to internal format |
Notes:
Important: The endpoint for sending events is different for Telemetry Nodes compared to classic Web Message Sensors. Using the wrong endpoint means events will not be processed.
For Telemetry Nodes, events must be sent to the sensor endpoint:
https://<nc-server>/api/rest/1/sensors/<sensorId>/event
Use the sensor ID (sensorId) created for the Telemetry Node.
For legacy Web Message Sensors (non-telemetry), events must be sent to the node endpoint:
https://<nc-server>/api/rest/1/node/<nodeId>/event
Use the node ID (nodeId). In short: Telemetry Nodes are addressed by sensorId, Web Message Sensors by nodeId.
If you send events for a Telemetry Node to the node/<nodeId>/event endpoint, they will not be processed.
Node Type | Correct Endpoint |
---|---|
Telemetry Node | https://<nc-server>/api/rest/1/sensors/<sensorId>/event |
Web Message Sensor | https://<nc-server>/api/rest/1/node/<nodeId>/event |
Always use the sensorId endpoint for Telemetry Nodes; use nodeId only for classic Web Message Sensors.
curl -X POST https://gw.netcrunch.io/tm/v1/SRV-1@sensor42@node91/update \
  -H "Content-Type: application/json" \
  -d '{"counters":{"system/disk.freeMB":12800},"statuses":{"system/fan":{"value":"ok","message":"normal"}}}'
curl -X POST https://<nc-server>/api/rest/1/sensors/sensor42/event \
  -H "Content-Type: application/json" \
  -d '{"message": "failed login"}'
message, description, attributes (additional params like user, time, etc.)
Empty filters match all events. Applies to both Telemetry Node and Web Message sensors.
Telemetry Nodes deliver a push-based model for observability—ideal for remote/cloud/IoT data, with robust REST and OTLP support and direct event ingestion. They simplify integrating external or serverless systems with NetCrunch monitoring.
Agentless monitoring of operating systems, virtualization, logs, applications, and services.
NetCrunch supports monitoring many applications by Monitoring Packs. It's also easy to create agents for sending data to NetCrunch.
NetCrunch contains predefined monitoring packs for common Windows applications.
You can look at various Monitoring Packs defined in NetCrunch. Any other application can be monitored in a very similar way - if it supports perfmon counters; otherwise, you can rely on monitoring Windows services and processor and memory parameters for specific processes.
This is another option for monitoring Windows applications and any other application, including hosted remote applications or even websites.
You can create a script, which will poll necessary information for NetCrunch, or modify the existing application to send REST requests to NetCrunch.
It doesn't involve much programming, as you can use cURL, an open-source project available for almost any platform.
You can find it at http://curl.haxx.se.
Read more about Sending Data to NetCrunch.
To access monitored systems, NetCrunch needs valid credentials for each system. You can manage these credentials using profiles or set custom credentials for each node and system separately.
To maintain compatibility with previous versions, one default profile for each operating system is defined. You can create more profiles by pressing the plus button at the window's top right corner.
You can manage credentials used for connecting to monitored systems in Settings > Monitoring > Monitoring Credentials Manager.
All credentials are stored securely on the NetCrunch Server, and the console can never retrieve passwords.
Read about monitoring Windows, macOS, Linux, BSD, Solaris, and ESXi systems.
NetCrunch monitors all operating systems without installing any agents. It's convenient but sometimes requires extra settings to be set on monitored systems.
Monitoring Windows computers without agents depends on two factors:
OS Monitoring in node properties – it must be enabled and set to Windows.
NetCrunch is designed to monitor Windows systems securely. It:
This ensures compatibility across Windows versions and deployments, even with strict domain policies.
Windows is the most common desktop OS, but its security configuration can complicate monitoring. Many challenges are automatically resolved when systems operate in an AD domain.
Before enabling monitoring, review the detailed setup steps in the Windows Monitoring Setup topic. We also provide a shell script for configuring standalone systems.
Windows monitoring includes Windows services, Event Logs, and performance counters. Additional insights are possible with WMI sensors for software, hotfixes, and hardware. You can explore example configurations in the available Monitoring Packs for Windows.
Enable Windows-specific monitoring via:
Node Settings > Monitoring > Windows
Windows Services monitoring tracks both service states and installation lifecycle.
NetCrunch observes and alerts on the following service states:
It also detects service installation or removal events, which are distinct from state changes:
These are treated as event-based alerts, not state-based ones. This distinction is important for software that installs new versions by uninstalling and reinstalling the service — where NetCrunch captures that transitional moment.
Alerts can be defined for:
Manual service inspection is available at:
Node Status > Windows > Windows Services
NetCrunch provides a set of configuration sensors (formerly inventory monitor) for monitoring hardware and software configuration changes. Although Windows machines can be monitored with RPC, these sensors require access to WMI, so make sure WMI is enabled and not blocked on the destination machine.
Using this sensor, you can download and monitor for changes in the hardware configuration of Windows-based hosts. The hardware configuration includes information about the processor, memory, installed disks (storage), video (graphic card), and monitor.
The sensor collects information about installed software, including installation date, version, and vendor. It can notify you when new software is installed, uninstalled, or updated.
The sensor collects information about installed hotfixes. It can notify you when a hotfix is installed or uninstalled.
NetCrunch can monitor Event Log entries on a given computer. It does this with a WQL query that NetCrunch automatically builds from your parameters. You can set up this monitoring by adding an alert to the Windows Event Log sensor in Node Settings > Monitoring. Many of the predefined Monitoring Packs also enable Event Log monitoring.
Specify the narrowest query possible. Windows Event Log entries are large, and monitoring all Windows Event Log entries, even on a relatively small number of computers, might overload the NetCrunch Event Log database.
NetCrunch allows monitoring Hyper-V services, but you must configure the node for Windows monitoring first, as Hyper-V runs on Windows. NetCrunch then automatically detects that the system is running Hyper-V services and adds the appropriate monitoring.
NetCrunch provides several WMI sensors, which include executing custom queries and monitoring processes, performance counters, or file shares.
Windows offers many built-in performance counters, which installed applications extend. NetCrunch allows defining several types of Event Triggers for Counters on Windows counters. You can set up triggers by adding a new alert to a node or to a Monitoring Pack: Settings > Alerting & Notifications > Monitoring Packs and Policies.
There are several Monitoring Packs that you can use for monitoring different aspects of your Windows environment.
NetCrunch monitors Linux without agents using an SSH script, which is automatically copied to a remote machine.
Open Node Settings > Monitoring and enable OS monitoring by selecting Linux from the drop-down menu.
Now, you can see the parameters to monitor Linux. Enter SSH credentials unless you use default settings for Linux or add a new credentials profile.
Monitor the most important Linux performance indicators such as processor and memory utilization, free disk space, and available swap, and create a Linux Server Report.
NetCrunch monitors BSD without agents using an SSH script, which is automatically copied to a remote machine.
Everything described in the Linux section above also applies to BSD.
Monitor the most important BSD performance indicators, such as processor and memory utilization, and free disk space, and create a BSD Report.
NetCrunch monitors Solaris without agents using only an SSH script, automatically uploaded to the remote machine.
Monitor the most important Solaris performance indicators, such as processor and memory utilization and free disk space, and create a Solaris Report.
NetCrunch monitors macOS without agents using only an SSH script, automatically uploaded to the remote machine.
Monitor the most important macOS performance indicators, such as processor and memory utilization and free disk space, and create a macOS Report.
NetCrunch supports direct monitoring of ESXi or using vCenter. Read more in VMware Monitoring.
IBM AIX and AS/400 systems can be monitored via SNMP.
As each NetWare system has a pre-installed SNMP agent, monitoring is possible using SNMP.
Read what disk performance metrics can be monitored on Linux, macOS, BSD, and Solaris.
Reads completed - The total number of reads completed successfully.
Reads merged - Reads adjacent to each other may be merged for efficiency. Thus two 4K reads may become one 8K read before it is ultimately handed to the disk, and so it will be counted (and queued) as only one I/O. This field lets you know how often this was done.
Sectors read - The total number of sectors read successfully.
Read Bytes - The total number of bytes read successfully.
Milliseconds spent reading - The total number of milliseconds spent by all reads (as measured from __make_request() to end_that_request_last()).
Writes completed - The total number of writes completed successfully.
Writes merged - Writes which are adjacent to each other may be merged for efficiency. Thus two 4K writes may become one 8K write before it is ultimately handed to the disk, and so it will be counted (and queued) as only one I/O. This field lets you know how often this was done.
Average queue length - The average queue length of the issued requests to the device.
Average time for I/O requests (ms) - The average time (in milliseconds) for I/O requests issued to the device to be served. This includes the time spent by requests in the queue and the time spent servicing them.
Average read requests time (ms) - The average time (in milliseconds) for read requests issued to the device to be served. This includes the time spent by requests in the queue and the time spent servicing them.
Average write requests time (ms) - The average time (in milliseconds) for write requests issued to the device to be served. This includes the time spent by requests in the queue and the time spent servicing them.
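Metrics like the average read request time are derived from deltas of cumulative counters between two samples. The sketch below illustrates that calculation; the field names are assumptions for the example, not the exact counter names used internally.

```javascript
// Sketch: derive "Average read requests time (ms)" from two samples
// of cumulative disk counters (reads completed, ms spent reading).
// Field names here are illustrative.
function avgReadTimeMs(prev, curr) {
  const reads = curr.readsCompleted - prev.readsCompleted;
  const ms = curr.msReading - prev.msReading;
  return reads > 0 ? ms / reads : 0;  // avoid division by zero when idle
}

const sampleA = { readsCompleted: 1000, msReading: 5000 };
const sampleB = { readsCompleted: 1200, msReading: 5800 };
console.log(avgReadTimeMs(sampleA, sampleB)); // 4 (ms per read)
```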
Solaris 10 and Solaris 11 are supported.
Reading this topic will make your Windows monitoring experience much better.
NetCrunch can monitor Microsoft Windows systems without installing additional agents. However, tightened security rules make remote monitoring possible only after the initial configuration, depending on your Windows environment.
NetCrunch Server can be installed on Windows Server 2016 or later. If you manage most of the servers by Active Directory, installing NetCrunch on a machine within an Active Directory domain is the better option. This method makes configuration much easier.
Most server systems come with the firewall enabled, which blocks remote administration. Configuring it is the first step you need to take. This can be done either from Active Directory Group Policies or manually, one by one. We suggest using a simple script.
Download it here: www.adremsoft.com/download/SetWinForNC.zip.
If you manage your workstations by Active Directory, preparing them for monitoring will be the same as for the servers (by Active Directory Group Policies or using the script).
Monitoring of workstations in Workgroups requires manual configuration. You can choose to use the built-in local Administrator account or create a new account and manually assign necessary rights directly to this monitoring account.
Setting Access Rights
NetCrunch needs a user account for monitoring that has proper access rights to DCOM, WMI (root\cimV2), and Read Access to the registry key HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib. The easiest way (though not the only one) is to add this user to the local Administrators group.
Setting Firewall Rules
Firewall rules must allow traffic of RPC, Performance Monitoring, Named Pipes, and WMI.
The procedure below requires a working knowledge of Active Directory Users and Computers and of the Group Policy Management Administrative Tools.
If you manage most of the servers by Active Directory, the best solution is installing NetCrunch on a server in the Active Directory domain and creating a dedicated user for monitoring. If you have not yet created such a user in your Active Directory, you should abort your NetCrunch installation now and configure your Active Directory first, allowing time for the user to propagate across your machines. You can start the NetCrunch installation again after your configuration has been propagated to all servers – it takes approximately 2 hours.
This is done for NetCrunch to discover all servers in the AD and automatically set up monitoring for them. Other servers in untrusted domains or workgroups can be configured separately (see Configuration of Separate Windows Server section below).
Create an Active Directory user account (for example, nc-mon-user) used by NetCrunch Server for monitoring. You will be asked later for this user's credentials during the NetCrunch installation.
The user account needs administrative rights to all monitored Windows computers (including the server where NetCrunch Server is installed). There are two different ways to accomplish this, depending on your Active Directory architecture and your needs:
Create an Active Directory group (e.g., Monitoring Users) and add the previously created nc-mon-user account. Then use Group Policy to add that group to the local Administrators group on each monitored computer, using Restricted Groups.
The best approach is to use Group Policy to modify the local Administrators group on each monitored Windows machine.
a) Create an Active Directory group named Monitoring Users and add a previously created user account (nc-mon-user) to it.
In the multi-domain forest, the default Active Directory group scope (which is Global) should be sufficient for this group because global groups can assign permissions to resources in any domain in a forest.
b) Create a new Group Policy Object (GPO) and name it, for example, Local Administrators group membership for NetCrunch.
c) Create the rule for Monitoring Users' group membership.
Go to: Computer Configuration > Policies > Windows Settings > Security Settings > Restricted Groups
and add Monitoring Users to the local Administrators group using the 'This group is a member of' section.
d) Link the Local Administrators group membership for NetCrunch GPO to the appropriate Organizational Unit(s) (OU) in your Active Directory domain(s).
Create a new Group Policy Object and name it, for example, "Windows Firewall rules for monitoring by NetCrunch."
For Windows Server 2016 or later, go to:
Computer Configuration > Policies > Windows Settings > Security Settings > Windows Defender Firewall with Advanced Security, and add these rules to Inbound Rules, choosing them from the predefined list:
File and Printer Sharing,
Windows Management Instrumentation (WMI-In),
Remote Event Log Management,
Performance Logs and Alerts.
Link the Windows Firewall rules for monitoring by NetCrunch GPO to the appropriate Organizational Unit(s) (OU) in your Active Directory domain(s). For security reasons, it is recommended to customize the remote administration rules to narrow the list of allowed IP addresses to the address of your NetCrunch Server only.
By default, the Windows built-in firewall doesn't block outgoing traffic; if you have changed this behavior, add rules with the same names from the predefined list above to the Outbound Rules.
Create the nc-mon-user account using shell commands and add it to the local Administrators group:
net user /add nc-mon-user <Password>
net localgroup Administrators /add nc-mon-user
You can create a rule for the IP address of the NetCrunch Server only:
New-NetFirewallRule -DisplayName "NC-Mon-In" -Direction Inbound -RemoteAddress %IP% -Action Allow -Protocol TCP
New-NetFirewallRule -DisplayName "NC-Mon-Out" -Direction Outbound -RemoteAddress %IP% -Action Allow -Protocol TCP
Set the Remote Registry service startup type and start the service:
Set-Service -Name RemoteRegistry -StartupType Automatic
Start-Service RemoteRegistry
This step is no longer required for modern systems and has been removed in accordance with current best practices.
Windows technologies have been built layer by layer, one on top of another. For example, RPC works on top of Named Pipes, Remote Registry needs RPC, and WMI uses DCOM, which in turn uses RPC for communication. Each layer requires proper firewall and security settings. Here is a short list of the technologies used by NetCrunch that need adequate configuration:
It is the most straightforward and fastest approach when the user designated for monitoring is a member of the local Administrators group, as described in this document.
It is the simplest way to configure servers for monitoring, but not the most secure. When you need to tighten your security settings even more, setting up specific rights for the monitoring account is possible but may vary depending on your configuration. Please contact Microsoft support for help.
You may also contact AdRem support for examples of how other NetCrunch users have modified their Windows monitoring setup.
Tags: access rights, active directory, firewall, monitoring, perfmon, setup, windows, workstations
Monitor summary of hypervisor services and state of virtual machines run by Hyper-V
Hyper-V services run on Windows, so you must configure the node for Windows monitoring first. NetCrunch then automatically detects that the system is running Hyper-V services and adds the Hyper-V Server Monitoring Pack, which tracks the state of Hyper-V services.
You can automatically add all Hyper-V guests to the NetCrunch Atlas and configure alerts or collectors on Hyper-V metrics in the 'Hyper-V' settings.
You can also go to Custom settings and add standard metrics visible through the Perfmon Windows monitor.
In the Node Status window, you can see the state of all virtual machines located on a given server.
If you monitor Hyper-V servers, NetCrunch recognizes the machine running on the Hyper-V server and shows the additional information in the Node Status window.
You can also add various alerts regarding the state of the virtual machine and its guest performance metrics.
You can do this by clicking on Hyper-V/VM
in the Node setting section.
The view gives you a quick overview of monitored Hyper-V hosts. When NetCrunch discovers any Hyper-V server, it creates the view automatically.
You can see all the information about all monitored Virtual Machines per map and on the Atlas level.
Guest status is taken directly from Hyper-V; therefore, a machine that won't respond to any service check may be marked as 'down' while its guest status shows 'running.'
Tags: hyper-v, virtual, windows
NetCrunch offers a collection of WMI sensors. Basic sensors serve specific purposes such as monitoring perfmon counters, processes, shares, or time difference. The advanced sensors are more general and allow you to explore WMI classes or create custom WQL queries.
WMI sensors do not require the Windows OS monitor to be enabled on the node. In that case, you must select an appropriate credential profile for each sensor.
These sensors don't require any knowledge of WMI or CIM. In particular, WMI Perfmon uses the same object and counter names as Perfmon.
wmi-perfmon
WMI performance data comes from the perfmon provider, so we decided to use the same UI for WMI performance monitoring as perfmon. You don't need to use cryptic class names to get to the data. The sensor does this job. If you feel using class names is necessary, use the WMI Data sensor.
file-shares
This sensor monitors the status of a Windows share. You can enter the share name manually or select it from a list.
process-group-summary
The sensor extends the Process sensor functionality by tracking multiple processes and using the wildcard (*) in the process name. It also supports monitoring child processes of a specified process when the Include child processes option is enabled, allowing you to view combined metrics for an entire process tree.
You can specify multiple processes and separate them by commas. You can also use partial names using *. For example, 'nc*' will select all processes starting with 'nc'.
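As an illustration of the wildcard selection described above (the matching itself happens inside NetCrunch, and the process names here are made up), a pattern such as 'nc*' behaves like this shell sketch:

```shell
# Sketch of wildcard matching like NetCrunch's 'nc*' process filter.
# Process names are examples; the lowercase conversion mirrors the
# case-insensitive matching of process names.
matches_nc() {
  case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
    nc*) return 0 ;;
    *)   return 1 ;;
  esac
}

for p in NCServer.exe ncagent.exe chrome.exe; do
  if matches_nc "$p"; then echo "match: $p"; else echo "skip: $p"; fi
done
```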
In addition to tracking top-level processes, the Include child processes option lets the sensor monitor all child processes spawned by a given parent process. This is particularly useful for monitoring services or applications that create multiple subprocesses or worker threads under distinct process identifiers.
Key Points:
When monitoring either individual or grouped processes (including child processes), the following counters are available:
Note: Process names are case-insensitive in WMI.
By leveraging the ability to group processes and include their children, you gain a comprehensive view of resource usage and performance, simplifying monitoring complex applications and services that rely on multiple subprocesses.
process
The Process Sensor allows you to monitor the health and performance of a specific process running on a target machine. By tracking various system resource counters, you can ensure that critical processes remain stable, efficient, and responsive over time.
The sensor provides detailed insights into several performance metrics, including:
These counters help you proactively detect unusual resource consumption, such as memory leaks, CPU spikes, or excessive handle usage, enabling timely troubleshooting and corrective action.
When multiple processes share the same executable name or run with similar parameters, uniquely identifying the correct process instance is essential. The Process Sensor supports multiple identification methods to ensure accurate and reliable selection:
By Name:
Specify the process name (e.g., chrome.exe) to monitor. By default, the sensor will generate a monitoring issue if multiple instances are found, unless further criteria are provided.
By Name and Parameters (Regular Expression):
Combine the process name with command-line parameters to distinguish between different instances. A regular expression can be applied to the full process name (including extension) and its command-line arguments, helping you isolate a specific instance when multiple identical processes are running.
By Command Line Regular Expression:
Instead of relying on the process name, use a regular expression directly on the command-line arguments. For instance, you might match a particular Java application instance by its unique JAR file name or a specific parameter that differentiates it from other JVM instances.
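For illustration, here is how a regular expression applied to the command line could isolate one of two otherwise identical JVM instances; the jar names and parameters are hypothetical:

```shell
# Two hypothetical JVM command lines; the regex selects only the
# instance launched with app-a.jar.
printf '%s\n' \
  'java -Xmx512m -jar app-a.jar --port 8080' \
  'java -Xmx512m -jar app-b.jar --port 8081' \
  | grep -E -e '-jar app-a\.jar'
```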
The sensor will report a monitoring issue if more than one process instance matches the specified criteria. This safeguard prevents ambiguous situations where the sensor cannot determine which process instance to track.
To avoid conflicts:
By leveraging these identification methods and tracking critical performance counters, the Process Sensor helps maintain stable and predictable operations, offering a clear path to identifying and resolving performance-related issues with key processes in your infrastructure.
ad-replication
This sensor checks a Windows Domain Controller for replication errors.
To use this sensor, add it to a machine that is a Domain Controller, and provide the credentials necessary to access the root\MicrosoftActiveDirectory namespace. The sensor shows one status for each discovered AD partition.
Microsoft documentation about AD partitions
The total number of statuses depends on the Domain Controller configuration. Three statuses always occur:
Each status contains the following data (properties):
You can use any credential that allows access to the root\MicrosoftActiveDirectory namespace. Administrative rights might be required.
pending-reboot
Checks whether the machine is waiting for a reboot because of updates.
windows-updates
The sensor monitors the status of Windows updates on a computer. It allows triggering an alert on a failed update.
registry
This sensor uses WMI to monitor Windows Registry objects specified by path.
The registry sensor alerts you when a subkey or value list is changed.
Possible alerts:
registry-counters
This sensor uses WMI to get and monitor numeric values from the Windows Registry.
This sensor can utilize thresholds in NetCrunch to monitor different scenarios:
remote-ping
The sensor checks connectivity from a remote system by running Ping remotely.
It allows setting a threshold on the following counters:
time-difference
The sensor checks the time difference between a remote machine and the reference machine (NetCrunch Server or NTP server).
The sensor provides one counter: Difference sec.
wmi-hdd-health
This sensor can monitor the HDD health state if the disk implements SMART technology, which predicts disk failure.
It provides many counters, which depend on the specific vendor implementation.
battery
The sensor monitors a battery connected to the computer system. It can be an internal battery or any external battery (such as a UPS) connected to the computer.
Sensor alerts:
windows-task-scheduler
The sensor monitors the status of Windows Task Scheduler tasks. It allows triggering an alert if task configuration has changed or if a task was not run on time.
You can use various alerts to monitor tasks.
One sensor can monitor several tasks specified by name or regular expression.
iis-application-pool
The sensor monitors the state and performance of the selected IIS Application Pool, providing insight into its operational status, resource utilization, and error conditions. This enables administrators to address issues proactively before they impact end-users.
IIS Application Pool Data Collector – This collector provides key performance counters and operational health metrics for the selected Application Pool. It includes:
By leveraging these metrics, you can ensure your IIS Application Pool remains stable, responsive, and securely accessible to users.
These sensors require a bit of knowledge about querying WMI objects.
wmi-data
The sensor allows the selection of the WMI class and instance key to retrieve object properties. The sensor enables processing multiple objects (and that's why it needs an instance key property).
The sensor allows setting alerts only on performance counters. Each numeric property of the selected class becomes a counter.
Beware that some classes may return large datasets as the sensor does not filter any instance.
another way to monitor processes
wmi-object
The sensor monitors a specific WMI object without writing a WQL query. Unlike the WMI Data sensor, it requires an instance value to be set. The value is compared for equality, so you can't filter objects by several properties.
The sensor also allows setting alerts on performance counters and treats each object's numeric property as a counter. Additionally, it allows for tracking status properties.
A status alert tracks a change from one value to another, so status values are tracked continuously.
Finally, you can store the last object data in the program database. Widgets or the UI can later access this data.
wql-query$object
The WMI Object sensor has some limitations. For example, you can filter instances by a single field only. Unfortunately, simple things tend to be limited, while flexible ones quickly become complicated.
another way to monitor the process
SELECT * FROM win32_Process WHERE name = 'NCServer.exe'
Now, you can access counters of the given process or set the status alert for property changes.
You can use a State Trigger to track value changes if the property is numeric and has discrete values.
Tags: battery, charge, file, hdd, microsoft, object, perfmon, ping, query, registry, remote, scheduler, shares, smart, summary, task, ups, windows, wmi, wql
WBEM stands for Web-Based Enterprise Management. It's a set of technologies that describe and share management information about various devices across different platforms. NetCrunch includes three WBEM sensors (Data, Object, and WQL Query: Object) that allow access to all available WBEM classes.
WBEM sensors are similar to their WMI counterparts: WMI Data, WMI Object, and WMI WQL Query: Object.
A CIM Server (CIMOM) must be running on the monitored machine, along with CIM Providers, to use these sensors.
Sensors support HTTP or HTTPS connections.
Credentials may be required, but it depends on the implementation of WBEM installed on a given machine.
wbem-data
The sensor allows selecting a WBEM class and instance key to retrieve object properties. The sensor can process multiple objects (which is why it needs an instance key property).
The sensor allows setting alerts on performance counters only. Each numeric property of the selected class becomes a counter.
Some classes can return large datasets because the sensor does not filter any instance.
Requirements (standard for all WBEM sensors): To use this sensor, add it to a machine running a CIM Server (CIMOM) and CIM Providers.
Process monitoring:
wbem-object
The sensor monitors a specific WBEM object without writing a WQL query. Unlike the WBEM Data sensor, it requires an instance value to be set. The sensor checks it for equality, so you can't filter objects by several properties.
The sensor also allows setting alerts on performance counters and treats each object's numeric property as a counter. Additionally, it allows for tracking status properties.
A status alert tracks a change from one value to another. You can always track status values and set alerts on the appropriate values.
Finally, you can store the last object data in the program database. Widgets or the UI can later access this data.
Only instances with unique instance property values can be monitored.
wbem-wql-query$object
The WBEM Object sensor has some limitations. For example, you can filter instances by a single field only. Unfortunately, simple things tend to be limited, while flexible ones quickly become complicated.
It allows retrieving data of specific CIM Class instances by writing a WQL query. The query must point to only one instance.
Monitoring of specific process
root\cimv2
SELECT * FROM CIM_Process WHERE name = '-bash'
Now you can access counters of the given process or set the status alert on property change.
Tags: linux, object, query, unix, wbem, wql
TACACS+ provides access control for routers, network access servers, and networked computing devices via one or more centralized servers. TACACS+ provides separate authentication, authorization, and accounting services.
The TACACS+ sensor checks the server connectivity and tries to authenticate a specified user. The sensor requires:
- the TACACS+ server shared secret (used for data encryption)
- a user (with password) to authenticate
The sensor should be added to the TACACS+ server machine.
Tags: tacacs
NetCrunch's Cisco CBQoS (Class-Based Quality of Service) Sensors offer a sophisticated solution for monitoring Quality of Service statistics through SNMP. This tool is instrumental in analyzing key QoS metrics, providing network administrators with detailed insights to refine QoS policies effectively.
The Cisco CBQoS Sensors in NetCrunch enable comprehensive monitoring of QoS statistics, crucial for optimizing network performance and traffic management. The main areas of monitoring include:
ClassMap Statistics: Gain in-depth statistical information related to ClassMaps, which helps in understanding how different classes of network traffic are being managed and prioritized.
Match Statement Statistics: Access detailed statistics associated with Match Statements to analyze how specific traffic is identified and categorized within the network.
Queueing Action Statistics: Examine statistics related to Queueing Actions, offering insights into how various traffic classes are being queued and handled, essential for effective bandwidth management and traffic shaping.
NetCrunch automatically generates alerts to keep administrators informed about critical issues, including:
NetCrunch provides default reports for comprehensive analysis, such as:
With these tools, network administrators can make more informed decisions regarding QoS policies, leading to improved network performance and efficiency.
Tags: cbqos, cisco, monitoring, qos
The LDAP Authentication Sensor checks the user authentication process via LDAP, which is used by directory services such as MS Active Directory.
The sensor connects to LDAP (Lightweight Directory Access Protocol) server and tries to bind using a specified Distinguished Name and password.
LDAP Bind operation is used to authenticate clients (and the users or applications behind them) to the directory server.
The sensor alerts when the bind operation cannot be completed (e.g., because the specified credentials are invalid).
The sensor supports both plaintext and SSL/TLS connections, and both LDAP v2 and v3 protocols. The bind operation can be performed with a simple (plain text) authentication mechanism or using SASL (Simple Authentication and Security Layer); Digest (MD5), NTLM/Negotiate, and Kerberos can be used.
A Distinguished Name is commonly a string that uniquely identifies an entry in the Directory Information Tree, such as:
cn=John Doe,ou=people,dc=example,dc=com
For Windows Active Directory, use either:
Domain\user
or
user@domain
Tags: ad, ldap, openldap
NetCrunch allows monitoring files, folders, and text logs.
Node Settings > Monitoring > Add Monitoring Sensor
file
FTP/S, HTTP/S, SSH/Bash, SFTP, Windows/SMB, TFTP
NetCrunch contains a single sensor that allows monitoring files using one of several common protocols. You can monitor file size, file changes, accessibility, or when the file was last updated (file age).
text-file
FTP/S, HTTP/S, SSH/Bash, SFTP, Windows/SMB
The sensor allows the monitoring of file properties and file content. The content can be searched globally or per line.
The file content event monitor can search for text using plain text patterns or regular expressions. The incremental search monitors only new occurrences in the file.
Text File Line event monitor tries to match lines, and if a new line is found, an alert is triggered. A search pattern can be plain text or a regular expression. If multiple text lines match the search pattern, the program can group them into a single alert.
For example, suppose we want to know when at least one job was run on exit. Let's look at a sample cron log:
Nov 5 01:01:01 localhost CROND[22826]: (root) CMD (run-parts /etc/cron.hourly)
Nov 5 01:01:01 localhost run-parts(/etc/cron.hourly)[22826]: starting 0anacron
Nov 5 01:01:01 localhost anacron[22836]: Anacron started on 2015-11-05
Nov 5 01:01:01 localhost anacron[22836]: Normal exit (1 job run)
Nov 5 01:01:01 localhost run-parts(/etc/cron.hourly)[22838]: finished 0anacron
We can define the following expression to match a more specific line:
anacron.*Normal exit \([1-9]
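You can sanity-check such an expression with grep -E before using it in the sensor. The variant below requires at least one digit in the 1-9 range right after the parenthesis, so a zero-job exit would not match:

```shell
# Match the sample anacron line only when at least one job was run.
log='Nov 5 01:01:01 localhost anacron[22836]: Normal exit (1 job run)'
echo "$log" | grep -E 'anacron.*Normal exit \([1-9]'
```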
data-file
FTP/S, HTTP/S, SSH/Bash, SFTP, Windows/SMB
The sensor loads data from a file using one of the predefined file formats. It supports native NetCrunch XML, JSON, and CSV, and allows processing data through Data Parsers, which can handle any data format.
The sensor allows getting data from various sources and processes them to get metrics and status values.
Data Parsers transform and analyze external data, returning NetCrunch counter metrics and status objects.
Read how to send data to NetCrunch and create a custom monitor. You can easily turn any application or script into a NetCrunch agent.
folder
FTP/S, SSH/Bash, Windows/SMB
The folder sensor allows observing folder content. The sensor can trigger an alert when a file is deleted and on various other conditions.
You can check the list of files matching the given file mask.
Tags: file, file monitoring, file sensor, sensor, text, text log
NetCrunch allows monitoring text file content and has a special sensor for text logs.
Node Settings > Monitoring > Add Monitoring Sensor
NetCrunch allows two levels of log file monitoring. Simple monitoring can be configured with file sensors (using remote Windows, FTP, or HTTP), which look for a specific text pattern in log files.
More advanced log monitoring is possible with the Text Log sensor, which can parse the file so the program can collect and alert on parsed entries.
FTP/S, HTTP/S, SSH/Bash, SFTP, Windows/SMB, TFTP
This sensor parses a file and converts each entry into a list of properties, which later can be filtered like any other type of log (Windows Event Log, syslog). This gives you more control over how alerts are triggered and allows better analysis of collected log entries in the event log.
The Text Log sensor can remotely monitor large log files on Windows (tested on gigabyte files) and over an SSH/Bash connection. Unfortunately, FTP or HTTP requires the whole file to be downloaded.
NetCrunch contains sample text log formats and allows defining custom formats using text parsing expressions.
Settings > Resources > Text Parsing Expressions > Text Log Expressions
Best suited for simple log formats where each line contains fields separated by a single character.
For example, such a line can look like this:
11/19/15 7:20:38 am,Information,Monitor started
And we can define that the program should convert this to fields:
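As a quick illustration of the split (the field names time, level, and message are only examples, not NetCrunch defaults):

```shell
# Split the sample line on the comma separator into three fields.
echo '11/19/15 7:20:38 am,Information,Monitor started' \
  | awk -F',' '{ printf "time=%s level=%s message=%s\n", $1, $2, $3 }'
```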
In the case of logs where there is no separator between fields, we can use regular expressions. The expression must contain capture groups to identify each field.
In this example, our log can contain a number at the beginning and then a message until the line's end.
10345 : Error during loading module.
Expression:
([0-9]*) : (.*)
We can define two fields to handle such a log:
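You can verify the expression and its capture groups outside NetCrunch, for example with sed; the field labels code and message are illustrative:

```shell
# Apply the parsing expression to the sample line; each capture group
# becomes one field.
echo '10345 : Error during loading module.' \
  | sed -E 's/([0-9]*) : (.*)/code=\1 message=\2/'
```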
Parsing expression editor allows immediate testing of your expressions:
This is a widely used log format by Java programs. It's a good example of using regular expressions to monitor the log entries.
Apache logs can be formatted using special formatting strings. NetCrunch can reverse-engineer these formats to parse a given log. To parse the log, you only need to copy the log format string from the Apache configuration and put it into NetCrunch expressions.
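For example, the widely used Apache Common Log Format is defined by the following format string in the Apache configuration; the quoted part is what you would copy into a NetCrunch text parsing expression:

```
LogFormat "%h %l %u %t \"%r\" %>s %b" common
```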
You can write a simple code to parse input text.
For example, for our simple log, we can define an alert:
The program can also trigger an alert on the following performance counters:
Tags: log, monitor, sensor, text, text log
NetCrunch can monitor requests, pages, and data on the web.
Node Settings > Monitoring > Add Monitoring Sensor
NetCrunch includes two sensors for web monitoring.
web-page
The sensor renders the page like a web browser. It loads all resources and runs scripts. It's intended for monitoring modern pages or applications. It supports standard login and custom login forms.
You can test page content in two ways. First, you can search for text patterns using regular expressions within the HTML or text content. Second, you can define a DOM selector to locate an element on the page and then check the text or attributes of that element (or list of elements).
The default report collects: % Availability, Load Time, Total Size, Resource Count, and Resource Error Count. These parameters are also available in the @trend-viewer.
basic-http
This sensor sends a single request and can alert based on the response code or specific response data. It supports GET, HEAD, and POST requests. The sensor allows you to set custom headers and cookies for the request. Additionally, it can follow redirects (disabled by default) and alert if there are issues with the SSL certificate.
By default, the sensor adds a report containing % Availability, Response Time, and Content Length charts. These metrics are collected and available as a report or through the @trend-viewer.
rest-http
Similar to the Basic HTTP sensor, this sensor can send a single request and alert based on the response code or specific response data. In addition, it can send any type of HTTP request, including custom requests, and easily add URL query parameters. It also allows setting custom headers and cookies for the request, can follow redirects (the option is off by default), and can alert if there is a problem with the SSL certificate.
The sensor enables the processing of any received data and converts it to counters and status objects using Data Parsers.
Check remote file content and authentication parameters, monitor remote text logs, file size or change time, presence, and more. See Monitoring Files and Folders.
Tags: html, http, https, rest, sensor, web, web page, web sensor
See how to monitor virtual machines, datastores, hardware status, and the performance counters of host and guest VMs.
NetCrunch supports ESXi version 4 or newer. It can connect directly to the ESXi servers or through vSphere vCenter. In the case of vCenter failure or maintenance, it can switch to direct mode automatically.
NetCrunch comes with pre-configured Automatic Monitoring Packs to monitor ESX as soon as OS monitoring is set to ESX.
It requires adding a vCenter Sensor to a machine hosting a vCenter server. The sensor will automatically switch detected ESXi machines to vCenter mode. Features such as active alarms and configuration issues are available in vCenter mode only.
You should use this mode only if you do not run vCenter to manage your ESXi hosts, or as a fallback when vCenter is not available. It requires specifying credentials for each ESXi server, which can be different from the vCenter credentials.
The view gives you a comprehensive look at the state of monitored ESX hosts.
You can see all information about all monitored Virtual Machines per map and on the Atlas level.
Status > System Views > Virtual Machines
For each ESXi machine, NetCrunch lists all virtual machines running on it. You can add a virtual machine to the Atlas to be monitored; this is possible only if the machine is running and has an IP address assigned.
Status > System Views > Datastores
NetCrunch allows the monitoring of VMware datastores. You can also see their properties in the Node Status window.
Status > System Views > Hardware Status
NetCrunch tracks all hardware statuses provided by the ESX server.
Counters can be monitored at the ESX host level or per VM machine instance.
NetCrunch provides counters for both the host ESX system and guest VM.
You can set triggers on those values using Event Triggers for Counters.
The object provides performance counters for a given ESX server.
The object provides performance counters for a given guest system (VM).
If you want to add alerts to a single ESX server, go to Settings > Alerting & Notifications > Monitoring Packs and Policies. You can add new alert rules or overwrite those defined by the ESX Monitoring Pack.
The other option is to modify the ESX Monitoring Pack itself: go to Monitoring > Monitoring Packs & Policies and edit the selected monitoring pack located in the VMware section.
Node Settings > Monitoring > ESXi
You may decide to add each guest system to NetCrunch automatically. Each guest will be automatically monitored according to the defined Automatic Monitoring Packs.
The option is disabled by default.
When the option is enabled and you want to remove a VM guest from the Atlas, you need to set a specific exclusion in the Network View. Otherwise, the machine will be added again and again.
Guest status is taken directly from VMware (ESX); therefore, a machine that won't respond to any service check may be marked as 'down' while its guest status is 'running.'
Tags: datastore, esxi, hardware status, vcenter, virtual, vm, vmware, vsphere, vsphere-vcenter-server
Monitor most popular Apache web servers.
Node Settings > Add Monitoring Sensor > Apache Web Server
NetCrunch allows monitoring the performance of Apache web servers. The Apache sensor lets you monitor various performance metrics grouped into objects such as Country, Summary, and Virtual Host.
Tags: apache, monitor, sensor, web server
NetCrunch allows monitoring mail content and mailboxes, and checking basic mail server functionality using a round-trip email sensor.
Node Settings > Add Monitoring Sensor
NetCrunch supports various aspects of email monitoring.
All email sensors support IMAP4/S and POP3/S.
mailbox
This sensor allows monitoring of mailbox authentication, activity, performance, and size. You can check whether the mailbox is being properly processed by checking the oldest email in the mailbox or the receipt of the last message. The sensor operates on a mailbox owned by another user, so it does not change the mailbox's content.
data-email
The sensor allows triggering alerts based on email sender, subject, or body. It can match emails using simple text patterns or parsing expressions (i.e., regular expressions or scripts). The sensor can extract data from email content and turn it into performance metrics.
In this case, the sensor must own the mailbox, as it automatically deletes all processed emails.
email-round-trip
This sensor is intended to check the mail server functionality by sending and receiving test emails. It must use a dedicated mailbox, and it removes test emails from the mailbox automatically.
Because the SMTP server might be configured to use a different IP address than the POP3 server, you should add the sensor to the POP3 server and then set the SMTP connection IP address separately.
Tags: content, email, email data, imap4, mailbox, pop3, round trip, sensor, smtp, text
Read how to send data to NetCrunch and create a custom monitor. You can easily turn any application or script into a NetCrunch agent.
You can add the Data Receiver Sensor from the sensor list on any node. The configuration is minimal: the sensor needs a name, and it automatically creates an API key for the external agent expected to send data to NetCrunch. The API key consists of the sensor name (without spaces) and the node ID number. For example, the API key can be JMX@1034 if we name our agent "JMX."
You can add multiple sensors on a single node for each application you need to monitor.
The sensor can process data in various formats using specific data parsers. By default, it uses native NetCrunch JSON format, but you can easily create a custom parser.
Because NetCrunch does not know how often you will send data, you need to specify a "retention time," after which the data expires and is cleared from memory.
We wanted to keep the sensor's API straightforward. The simplest tool you can use to send requests to NetCrunch is cURL, an open-source project available for almost any platform. You can find it at curl.haxx.se.
The API consists of only 5 requests:
By default, the sensor expects the <NetCrunch JSON> data format, so the request payload should use the application/json content type.
{
  "retain": 1,
  "counters": {
    "PBX/line status.0": 1,
    "PBX/line status.1": 0
  },
  "statuses": {
    "AC": "On",
    "Power": "On"
  }
}
As you can see, you can send multiple statuses and counters in a single request.
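As a concrete sketch, the request above can be assembled and sent with a few lines of Python. The server address, API key, and the exact data endpoint path are hypothetical placeholders here; consult the REST examples below for the URL scheme your installation expects.

```python
import json
import urllib.request

NC_SERVER = "http://nc-server.example"  # hypothetical NetCrunch server address
API_KEY = "JMX@1034"                    # sensor name + node id, as described above

def build_payload(counters, statuses, retain=1):
    """Assemble a request body in the native NetCrunch JSON format."""
    return json.dumps({"retain": retain, "counters": counters, "statuses": statuses})

def send_data(payload):
    """POST the payload to the sensor (the /data endpoint path is an assumption)."""
    req = urllib.request.Request(
        f"{NC_SERVER}/api/rest/1/sensors/{API_KEY}/data",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # network call; needs a reachable server

payload = build_payload(
    counters={"PBX/line status.0": 1, "PBX/line status.1": 0},
    statuses={"AC": "On", "Power": "On"},
)
```

Sending the same payload with cURL amounts to a single POST of the JSON body with the application/json content type.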
Besides simple status values (key-value pairs), NetCrunch allows tracking status objects. A status object is described by a JSON object and can contain additional user data.
For example:
{
  "statuses": {
    "Disk C:": {
      "value": "ok",
      "message": "Working fine",
      "retain": 5,
      "critical": true,
      "data": {
        "type": "SDD",
        "upTimeSec": 123431
      }
    }
  }
}
The status object can contain fields such as:
value - a status value that can be any string; if one of the standard values is used, NetCrunch will use it for calculating alert conditions. The field is required for the object to be recognized as a status object; otherwise, the whole object is treated as a text string. Standard values are: ok, error, warning, disabled, unknown.
name - the name of the object, it does not have to be unique. (optional)
critical - when set to true, the sensor status will reflect the highest status of a given object. (optional)
When your object contains the "critical" field, it will influence the whole sensor status; otherwise, the sensor status is based on the active alerts state.
Any object set to "critical": true will set sensor status to the highest alert level (error or warning). If the object is non-critical ("critical": false), then the sensor is in error state only if all objects are in error state, and it is in warning state when any of its objects are in error or warning state.
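The rules above can be expressed as a small aggregation function. This is only a sketch of the described behavior, not NetCrunch's actual implementation; field names follow the status-object JSON, and the severity ordering is assumed.

```python
# Severity ordering assumed: ok < warning < error.
SEVERITY = {"ok": 0, "warning": 1, "error": 2}

def sensor_status(objects):
    """Derive the overall sensor status from a list of status objects."""
    critical = [o for o in objects if o.get("critical")]
    normal = [o for o in objects if not o.get("critical")]

    # Any critical object raises the sensor to its own alert level.
    worst = max((SEVERITY.get(o["value"], 0) for o in critical), default=0)

    if normal:
        levels = [SEVERITY.get(o["value"], 0) for o in normal]
        if all(l == SEVERITY["error"] for l in levels):
            worst = max(worst, SEVERITY["error"])    # error only if all are in error
        elif any(l >= SEVERITY["warning"] for l in levels):
            worst = max(worst, SEVERITY["warning"])  # warning if any is error/warning
    return {0: "ok", 1: "warning", 2: "error"}[worst]
```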
<nc-server-address>/api/rest/1/sensors/<api-key>/counter?Temp=65&Wind=4&@retain=5
As NetCrunch stores counter values in memory, the agent can increment a counter without knowing its actual value. The request will increase or decrease the counter by the given value.
<nc-server-addres>/api/rest/1/sensors/<api-key>/counter/inc?Door.Opened=1
<nc-server-address>/api/rest/1/sensors/<api-key>/status?Door=Opened&@retain=5
By default, the data URL is pointing to a single sensor on a single node, but if you want to supply data to multiple nodes, you can use a shared URL. To do this, check Use the shared URL option in the sensor configuration.
Now your data should be in the form of a JSON array, and each sensor object has to contain node identification, which is the node's DNS name or IP address, depending on the node Identification Type.
For example:
[
  {
    "node": "192.168.10.1",
    "statuses": {
      "AC": "On",
      "Power": "On"
    }
  },
  {
    "node": "test.lab",
    "statuses": {
      "AC": "Off",
      "Power": "Off"
    }
  }
]
You can easily send event messages to NetCrunch using an HTTP request. The program accepts POST and GET requests.
http://<nc-server>/api/rest/1/event/<node-identification>
Node identification is a node IP address or DNS name.
Find more information below.
agent, api, data, data receiver, data sensor, generic, json, monitor, object, rest, sensor, status
Read about how NetCrunch monitors itself.
NetCrunch Server is like any complex application with multiple processes, lots of data processing, and high demand for storage. In fact, NetCrunch has multiple logs and monitors thousands of parameters.
These parameters are essential when diagnosing potential performance problems. Be aware: the program is always limited by the capabilities of the hardware it runs on.
So, NetCrunch Self Monitor is the monitor that watches the NetCrunch Server.
Some programs merely hide problems, which is troublesome because it creates the illusion that something is working when it is not. A problem that is not visible is hard to detect and impossible to solve. Most problems are caused by overloading, either of the NetCrunch Server or of a monitored device. Many Cisco devices have protection against too many monitoring requests being sent to the device. In many cases, the solution is simple: increasing the monitoring time or timeouts.
The status of the NetCrunch server is available on the NetCrunch Server Status
dashboard.
Settings > Alerting & Notifications > Monitoring Packs and Policies > Global > NetCrunch Self Monitor
The global monitoring pack contains default alerting settings for this monitor, and they trigger Default action.
NetCrunch alerts on:
SQL sensors allow for measuring connectivity, query execution, and result data processing as metrics or statuses. Supported databases: Oracle, SQL Server, PostgreSQL, MySQL, MariaDB, ODBC.
NetCrunch supports ODBC sources and several native drivers. It requires x64 ODBC drivers to be installed. Please remember that you must use System DSN because NetCrunch runs as a service process and can't access your User DSNs.
To use the SQL Query Sensor in NetCrunch, install ODBC Driver 18 for SQL Server on the NetCrunch server. This ensures compatibility with SQL Server 2008 and later, including SQL Azure.
The sensor does not support the Numeric(18,0) data format and treats it as a string value. Use float, real, or int for numeric data.
The program supports Oracle 11 up to 23, depending on the drivers you install. OCI v19 is the latest driver supported, but it will work with newer databases.
The latest driver is installed with NetCrunch. No further installation is needed.
The latest driver is installed with NetCrunch. No further installation is needed.
Each sensor allows the creation of connection profiles. If you do not save the profile, it's named Custom and saved only for a given node.
Once you save a connection profile, you can reuse it for multiple SQL sensors, even on another node, if they use the same settings. You can edit the profile from the sensor settings, but to keep the changes, you must save it and override the previous profile settings; otherwise, the profile automatically becomes Custom.
You can also manage database profiles in the Credential Manager: Settings > Monitoring > Main Monitoring Credentials Manager > Database connection
Once you have set the credentials, it is time to connect. Click the database selection button, and you should see the list of databases or get an error message.
sql-query$object
It allows executing a query returning a single row. It can also be used with an empty query to check database authentication and connectivity. The row can represent an object, and the columns can represent object properties. The sensor allows setting an alert on the status of object properties.
You can keep your query empty. Then, the sensor will only connect to the given database, and its status will reflect database connectivity.
When you write your SQL query, ensure it returns a single data row. This row can then be used to monitor statuses and values.
You can test your sensor query by clicking the test icon at the window's top right corner.
Predefined alerts:
sql-query$data
Allows executing a query returning multiple rows. Columns can be used as a source for metrics. The metrics enable threshold alerts to be triggered and used for reporting.
Now, you can add counters and set thresholds.
Let's try grouping table elements by name and counting them. This way, we can create a single counter; the instance is the name.
SELECT COUNT(*) AS Count, Name FROM MyTable GROUP BY Name
In this case, we have a single metric, Count, and the Name column will be the instance column.
Some queries aren't directly related to any particular database. The 'No Database' option allows a single query to select data from multiple databases.
SELECT sys.databases.name,
CONVERT(int,SUM(size)*8/1024) AS [Total disk space]
FROM sys.databases
JOIN sys.master_files
ON sys.databases.database_id=sys.master_files.database_id
where sys.databases.database_id > 4
GROUP BY sys.databases.name
ORDER BY sys.databases.name
SQL sensors must be configured with an actual database connection, and the query needs to be executed to select counters and status objects properly.
- Only columns of numerical types can be used for counters. Use the CAST expression (or CONVERT in T-SQL) and an alias name to cast a text column into a numerical type.
- The maximum number of returned values is 1000.
- The default request timeout is 15000 ms (15 seconds).
Beware when editing the query. Your counters may no longer match the query results.
database, mariadb, microsoft, mysql, odbc, oracle, postgresql, sql, sql server
HTTP/S, ONVIF
NetCrunch's IP Camera Sensor efficiently monitors your camera systems, capturing snapshot images to verify connectivity. These images are displayed in the node status window and can also be integrated into network maps using the Snapshot Image widget for enhanced visual monitoring.
The sensor is equipped with essential alerts to maintain the security and functionality of your camera system:
These alerts are specifically designed to monitor changes or anomalies in camera images:
All image-related events from the IP Camera Sensor offer enhanced event detailing: - They display the involved image in event details. - They can include the image in email notifications (HTML format) for immediate reference.
Integrate real-time camera snapshots into your network maps by following these steps:
The sensor monitors the usage of inodes on Linux machines
The inode is a data structure in a Unix-style file system that describes a filesystem object such as a file or a directory. Each inode stores the attributes and disk block location(s) of the object's data.
On many file system implementations, the maximum number of inodes is fixed at file system creation, limiting the maximum number of files the file system can hold.
Performance Counters:
Default Alerts:
Monitoring
This sensor requires standard access. (used by Linux monitor)
inodes
The sensor monitors basic statistics of the Docker container
The sensor uses Docker Engine REST API to communicate with the container.
Performance Counters:
Statuses:
Default Alerts:
Monitoring
To monitor the Docker container, you have to enable remote API.
To enable remote API:
Open the /lib/systemd/system/docker.service file with any text editor
vi /lib/systemd/system/docker.service
Find the line which starts with ExecStart and add -H=tcp://0.0.0.0:2375
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
Save the Modified File
Reload the docker daemon
systemctl daemon-reload
Restart the Docker service
sudo service docker restart
Open a port on the firewall if necessary.
To test that the API is accessible, run a test on the Docker sensor; make sure you have added the sensor to the node you just configured.
Please note that enabling the remote API may look slightly different depending on the Docker version and the system where it is installed.
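Once the remote API is enabled, a quick reachability check is to query the Docker Engine version endpoint. A minimal Python sketch (host and port are placeholders for your configured node):

```python
import json
import urllib.request

def docker_version_url(host, port=2375):
    """Build the Docker Engine REST API version endpoint URL."""
    return f"http://{host}:{port}/version"

def check_remote_api(host, port=2375, timeout=5):
    """Return the Docker version info if the remote API is reachable."""
    # Network call; requires the remote API to be enabled and the port open.
    with urllib.request.urlopen(docker_version_url(host, port), timeout=timeout) as resp:
        return json.loads(resp.read())
```

Running `curl http://<host>:2375/version` performs the same check from a shell.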
container, docker, rest, sensor
The sensor checks connectivity and validates the response of SIP service
This Basic SIP sensor is configured to monitor connectivity or session initiation with SIP Server (SIP Phone System).
Only the error code and response time for an "OPTIONS" or "INVITE" request are monitored.
In addition to the error code, the sensor returns a description of each error.
The sensor supports SIP platform services such as FreeSWITCH, Asterisk, OnSIP, FreeSWITCH Legacy, and Asterisk Legacy.
sensor, sip, voip
The sensor checks connectivity and validates the response from a DICOM-capable system
The C-ECHO DICOM sensor monitors the availability of a DICOM-capable system by sending a C-ECHO request. This is very similar to what a regular ping does, so it is often called a DICOM ping. The sensor verifies the DICOM handshake protocol and checks whether the target system properly answers DICOM messages.
dicom, dicom-echo, echo
NetCrunch delivers hundreds of monitoring facilities, such as monitoring packs and sensors, but we do not want you to be limited to them. Script sensors are essential for filling any gaps in delivering monitoring data to NetCrunch.
Unlike other systems, we do not force you to deliver data in one native data format, because there are thousands of scripts written for open source systems already. We want these scripts to be adaptable to NetCrunch, and this is the role of Data Parsers that are responsible for translating data returned by an external script to NetCrunch native format.
remote-ssh-script
The Remote SSH Script Sensor in NetCrunch allows for the execution of scripts on remote machines via SSH, providing a powerful tool for remote system management and data collection.
This sensor is designed to remotely execute a script that is located on a target system. It utilizes an SSH connection to securely run the script, making it an ideal choice for managing and monitoring servers, network devices, and other systems in a secure and efficient manner.
Configuring the Remote SSH Script Sensor involves several key settings:
NetCrunch provides several pre-configured alerts for common issues that might arise during the sensor's operation:
Beyond the default alerts, NetCrunch offers additional alerting options to provide more granular monitoring:
The Remote SSH Script Sensor provides valuable data points, including:
By integrating the Remote SSH Script Sensor into your NetCrunch monitoring setup, you gain a versatile tool that extends the capabilities of your network monitoring, ensuring a more comprehensive and proactive management approach.
script
The Script Sensor in NetCrunch is a dynamic tool designed to execute scripts or programs on the local NetCrunch Server machine and accurately process their results.
This sensor is adaptable and can be configured on any node within your network. However, it's important to understand that the script will always execute on the NetCrunch Server machine. This feature allows it to effectively poll data from remote machines, with the option to include various parameters like address and credentials in the script.
NetCrunch supports a wide range of script types, enhancing its versatility:
NetCrunch can automatically detect script type by file extension, or you can specify the type in sensor configuration. If the program can't detect script type, it will execute it using Windows shell.
The output from the script (stdout stream) is fed into a data parser. This can be a custom parser designed to adapt to your script's output, subsequently returning counters and statuses to NetCrunch. The script must be accessible to NetCrunch and located on the NetCrunch Server's local disk. Remember, the NetCrunch Server operates under the Local System Account, which requires the correct disk access permissions.
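For illustration, a script run by the Script Sensor only needs to write its results to stdout. A minimal Python example emitting the native NetCrunch JSON format shown in the Data Receiver section; the metric and status names here are made up:

```python
import json
import shutil

# Collect a sample metric: free disk space on the root path.
usage = shutil.disk_usage("/")

# Counter and status names below are illustrative, not NetCrunch-defined.
payload = {
    "counters": {"Disk/Free GB": round(usage.free / 1024**3, 2)},
    "statuses": {"Disk": "ok" if usage.free > 0 else "error"},
}

# The Script Sensor feeds whatever appears on stdout into the data parser.
print(json.dumps(payload))
```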
Output File Saving
The script's output can be saved to a file for NetCrunch to read and parse. This is enabled by checking the 'Read result from file' option and specifying the output file path.
Script Parameters
Parameters can be included in the configuration window to map NetCrunch node properties to script parameters. This includes dynamically mapped parameters like node IP address ($address), node name ($name), and credentials ($credentials/...). For PowerShell, you have the option to pass credentials through a PSCredential object.
NetCrunch provides counters for: - Sensor/Script Execution Time - Counters returned by selected Data Parser
NetCrunch automatically sets up default alerts for script execution errors. Additionally, you can set up alerts for:
Arguments to the script can be passed using parameters in the configuration window. These parameters allow you to map NetCrunch node properties to script parameters, facilitating dynamic data collection and interaction with various network components.
The credentials parameter uses a profile path (you can use only one profile per sensor): $credentials/windows/<name>/<property>, where the profile name is optional.
You can pass credentials to Windows PowerShell via a PSCredential object by checking the option 'Pass credential through PSCredential object' and providing the parameter.
Prerequisites
Depending on the script type, specific prerequisites need to be fulfilled (e.g., installing NodeJS for JavaScript, Python for Python scripts, and setting PowerShell execution policies for PowerShell scripts).
Limitations
Due to the resource-intensive nature of external scripting sensors, the number of concurrent scripting processes running on the NetCrunch Server is limited to 5. This is to ensure optimal performance and stability of the system.
remote, remote script, script
windows-updates
The sensor monitors the status of Windows updates on a computer. Counts installed updates, updates available to install, and not installed updates. Allows triggering an alert if the installation of the update fails or when no update has been installed for a given time.
The minimum monitoring time for the sensor is 1 hour. Because the state of Windows updates changes rarely, the recommended monitoring time is one day. You can check the current status of updates in System Views.
Querying for a list of updates may take a significant amount of time.
Each object has instances
Categories of updates presented in NetCrunch may differ from their Windows Update counterparts. For example, Windows lists all Security and Critical as 'Important' while NetCrunch shows the severity of each update individually (Important, Critical, Low, etc.) Read more here Windows Update
pending-reboot
This sensor checks a Windows machine for pending reboots using WMI. To use it, you need valid credentials for the monitored Windows machine.
Any valid credentials to the monitored Windows machine, allowing for access to the registry.
pending reboot, sensor, windows, windows update
This view is available for all Windows machines. It contains extensive information about tasks on a given machine.
Tasks can be divided by multiple filtering options - state, when they are scheduled or in which folder they are located.
Accessing this view requires no additional configuration on the NetCrunch or server side.
Each task contains information such as:
Additionally - Failed tasks will be marked by a red color.
It's effortless to search for a particular task using the search box in the top right corner.
windows task scheduler
Palo Alto Firewall sensor monitors IPSec site-to-site VPN tunnels configured on the Palo Alto device. By default, the sensor alarms when the state of any tunnel becomes non-active. It also allows for the monitoring of specific tunnels.
The sensor gathers traffic metrics for each tunnel. Depending on the system configuration, system-provided metrics are also available for monitoring.
The sensor uses Palo Alto PAN-OS API and needs the correct credentials.
If the sensor is properly configured, you can see detailed information about configured IPSec Site-to-site tunnels on Node Status in node System Views.
This sensor allows monitoring of the overall health status and performance of NetApp storage using SANtricity REST API.
SANtricity is data management software that powers and administers the NetApp E-Series and EF-Series storage arrays. The sensor collects various performance metrics and checks the current status of Storage Pools, Volumes, Controllers, Drives, and Workloads. The sensor also checks the overall system health status.
SANtricity REST API is using HTTP basic authentication. You need to pass the username and password.
Default alerts are defined for each instance of a given subassembly. You may configure your own alerts depending on your storage system specifications.
The sensor provides a large number of metrics whose observation allows for an accurate diagnosis of the storage system. By default, the sensor adds reports containing the most important metrics about Volumes (IOPS, transfer speed statistics, cache utilization), Drives (temperature, endurance used), Hot spare drives usage, and more.
iops, netapp, storage
The sensor allows monitoring of the performance of HPE 3PAR StoreServ storage using Web Services API.
Using this sensor you can monitor the state and utilization statistics of the storage objects:
HPE 3PAR Web Services API (WSAPI) must be configured and enabled on the monitored device.
You need to pass your username and password. Use the same username and password that you use to access the 3PAR storage server through the 3PAR CLI or SSMC.
By default, alerts are defined for each Volume, CPG, etc. You may configure your own alerts depending on your storage system specifications.
By default, the sensor adds reports containing metrics for used space of monitored Volumes, CPGs, and storage capacity.
3par, hpe, storage
This sensor allows monitoring of the state of all Veeam jobs and the capacity of backup repositories.
This sensor will notify you when any backup or replication task fails or completes with warnings. The sensor monitors the capacity of your backup repositories and notifies you when the amount of available space in any repository is low.
The Veeam Backup Enterprise Manager with Enterprise Plus license must be installed on the monitored system.
By default, Veeam Backup Enterprise Manager REST API uses HTTPS on the 9398 port. You need to pass the username and password to Veeam Backup Enterprise Manager REST API.
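As a sketch, a basic-auth request to the Enterprise Manager REST API can be built as follows; the host and path are hypothetical placeholders, and only the port and authentication scheme come from the text above:

```python
import base64
import urllib.request

def veeam_request(host, path, user, password):
    """Build a basic-auth HTTPS request to the Enterprise Manager REST API.

    Port 9398 is the documented default; the path is a placeholder.
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(f"https://{host}:9398{path}")
    req.add_header("Authorization", f"Basic {token}")
    return req  # pass to urllib.request.urlopen() to execute the call
```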
backup, protect, replication
This sensor allows monitoring of the health and performance of NetApp storage components using ONTAP REST API.
ONTAP data management software offers unified storage for applications that read and write data over file-access protocols. ONTAP implementations run on NetApp-engineered FAS or AFF appliances, on commodity hardware (ONTAP Select). The sensor checks the current status of Storage components, Volumes, LUNs, Disks, Interfaces, Nodes, and Ports. The sensor alarms in case of degradation of any of the components. The sensor also collects metrics for IOPS, Throughput, Latency, and storage capacity.
The monitored device must be NetApp storage with ONTAP version 9.6 or later.
ONTAP REST API is using HTTP basic authentication. You need to pass your username and password.
By default, the sensor alerts when any of:
Depending on your needs, you may also define alerts on Ethernet ports, Fibre Channel ports, storage ports, and storage shelves.
By default, the sensor adds reports containing metrics for Cluster (IOPS, Throughput, Latency), Bytes transmitted over storage interfaces, LUNs and Volumes used space, and more.
disk, iops, lun, ports, san, storage, volumes
NetCrunch can collect information about Windows machines' hardware, installed software, and hotfixes.
hardware
The sensor allows monitoring of the hardware configuration of Windows machines using WMI.
Using this sensor, you can download and monitor for changes in the hardware configuration of Windows-based hosts. The hardware configuration includes the following:
The Windows monitor automatically adds the sensor. It requires access to WMI regardless of Windows monitoring type. It uses a Windows credential profile. No additional configuration is required.
You may change scheduling options.
Alert is added to the Basic Windows Monitoring
pack by default.
After choosing Hardware in the Nodes tab, the sensor presents the data for multiple nodes.
The last changes and the list of installed hardware can be found in Node Status > System Views > Hardware
software
The sensor collects information about installed software and can alert you when applications are installed, uninstalled, or updated. Thanks to the sensor, you can also easily see a summary of installed software and versions (per node group) and find where it is installed. The last changes and the list of installed software can be found in Node Status > System Views > Software
The sensor requires access to WMI on the destination computer.
hotfix
The sensor collects information about installed hotfixes and can alert you when hotfixes are installed or uninstalled. You can find the latest changes and the list of installed hotfixes in Node Status > System Views > Hotfixes. This sensor requires access to WMI on the destination computer.
It shows you a summary of recent changes for software and hotfixes.
config, hardware, hotfixes, inventory, software, windows
The sensor integrates with NAKIVO Backup & Replication by polling its REST API. It collects repository state, backup storage usage, and job/task statistics to monitor system health and performance.
alerts, backup, job statistics, monitoring, nakivo, replication, repository, rest api, sensor, storage
The NAKIVO Backup and Replication Sensor enables NetCrunch to monitor the operational state of a NAKIVO Backup & Replication system. It provides:
This sensor is critical for ensuring that backup operations remain healthy, reliable, and within capacity.
Backups are the last line of defense against data loss. Issues like failed jobs, corrupted repositories, or full storage can silently compromise recovery capabilities. This sensor helps to:
By actively polling the NAKIVO system, NetCrunch ensures consistent visibility without relying on external triggers or agents.
NetCrunch acts as a polling client, regularly sending REST API requests to the NAKIVO server. The sensor does not require any agents or NAKIVO-side configuration beyond API access.
Repository
Job Statistics
Backup Storage
Task Queue
The sensor includes key predefined alert triggers:
All alert thresholds are customizable and can be extended via standard performance/event conditions in NetCrunch.
To support historical reporting and analysis, the sensor enables:
A powerful module that enables comprehensive monitoring and storage of configuration changes across a diverse range of network devices, including switches, routers, firewalls, and other equipment.
The sensor tracks changes to device configurations and stores multiple backups of device configurations using the telnet or ssh protocol. It can detect configuration changes and store multiple configuration versions. The sensor provides over 140 predefined device profiles and supports a wide range of devices, including switches, routers, firewalls, and security devices.
It compares each version and stores the whole configuration in an encrypted database. When the sensor runs as part of the monitoring probe, it stores the latest configuration on the probe's hard drive, and history is kept in the central NetCrunch database.
You should provide proper credentials besides typical connection settings such as connection parameters (port and timeout). A credential profile for SSH allows establishing a connection using a private key or with a username and password.
The sensor can connect using ssh2 or telnet; some device models might support only one of them. Certain devices require entering enable mode to perform specific commands, and a password might be needed after issuing the enable command. Please refer to your device documentation. Keep the enable password empty if not required; it is stored as <secret hidden>.
config, device, firewall, management, router, ssh, switch, telnet, text
Browse 180+ ready-to-use config profiles for the most popular vendors and device types
NetCrunch allows creating custom profiles using simple YAML definitions that execute commands over ssh or telnet and process their output
The NetCrunch config management engine has been inspired by the Oxidized open source project and uses similar concepts to describe session parameters and the processing of command output. There are, however, important differences.
The only knowledge needed to describe a device configuration profile is a good understanding of regular expressions.
The prompt format is needed to recognize when the terminal is ready to read the next command and when the previous command output ends. You can set one or multiple prompt expressions.
prompt: "/^(\r*[\w\s.@()/:-]+[#>]\s?)$/m"
prompt:
  - /^(?:\x1b[..h)?[\w.-]+# $/m
  - /^\w+@\w+([-.]\w+)*>((?.+)?\s)?/m
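Since prompt detection is plain regular-expression matching, a profile's prompt pattern can be tested outside NetCrunch. A quick Python check of the single-prompt example above against made-up terminal output (re.MULTILINE mirrors the /m flag):

```python
import re

# The prompt expression from the example above, rewritten in Python syntax.
PROMPT = re.compile(r"^(\r*[\w\s.@()/:-]+[#>]\s?)$", re.MULTILINE)

# Hypothetical terminal output: a command echo, output, then the prompt.
terminal_output = "show version\nCisco IOS Software, ...\nswitch-01# "

# Check line by line which lines the engine would treat as a ready prompt.
is_prompt = [bool(PROMPT.match(line)) for line in terminal_output.splitlines()]
```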
This is a comment prefix that will be added to some command output such as show version
comment: "# "
In the above case, we have defined a two-character comment prefix: # followed by a single space.
After each command, we can describe how to process the output.
lines:
  range: 1..-1
  reject:
    - /^\r$/
    - /time/
  comment:
In the above example, the engine will:
- remove the first and last line (range: 1..-1)
- skip empty lines (reject) and the ones containing the time phrase
- comment out all output
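The steps above can be sketched as a small function; this is a simplified model of the engine's lines processing, not its actual implementation:

```python
import re

def process_lines(output, cut=(1, -1), reject=(), comment=None):
    """Apply the range / reject / comment steps to a command's output."""
    lines = output.splitlines()[cut[0]:cut[1]]  # range: 1..-1 cuts first/last line
    # Drop lines matching any reject pattern.
    lines = [l for l in lines if not any(re.search(p, l) for p in reject)]
    if comment is not None:
        lines = [comment + l for l in lines]    # comment out remaining output
    return "\n".join(lines)
```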
It starts with all:, and then you can describe the processing block for every command. It is common to remove the first and the last line.
all:
  lines: # always cut off first and last line
    range: 1..-1
We can download the config as is, or remove secrets such as passwords, security keys, and others by replacing them with some text.
The processing depends on the option state when configuring the Device Config sensor for a particular node.
secret:
  /^(create snmp community) \S+/gm: "$1 <removed>"
  /^(create snmp group) \S+/gm: "$1 <removed>"
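Each secret rule is an ordinary regex substitution; the first rule above is equivalent to the following Python snippet (the sample config line is made up):

```python
import re

# Hypothetical device configuration containing a secret.
config = "create snmp community s3cret\nhostname switch-01"

# Equivalent of: /^(create snmp community) \S+/gm: "$1 <removed>"
cleaned = re.sub(
    r"^(create snmp community) \S+",
    r"\1 <removed>",
    config,
    flags=re.MULTILINE,
)
```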
In the commands: block, add each command and describe its processing after it, if needed.
commands:
  show inventory:
    comment:
commands:
  show version:
    lines:
      reject:
        - /([Ss]ystem [Uu]p\s?[Tt]ime|[Uu]p\s?[Tt]ime is \d)/
      comment:
commands:
  "show running-config | nomore":
The configuration block begins with config:. The block is then described by the protocol list it refers to.
Telnet is sometimes the only option to get to the device. It requires setting regular expressions to detect the username and password prompts.
telnet:
  # set prompts for interactive telnet login
  login:
    username: /Username:/
    password: /Password:/
ssh:
  # terminal options
  pty:
    charsWide: 1000
Commands can be executed in an interactive session or in exec mode. To enable exec mode, add:
ssh:
  exec: true
ssh, telnet:
  # commands to be executed after login
  afterLogin:
    - "no page"
  # commands to execute before logout
  beforeLogout:
    - logout y n
Many devices allow disabling paging by calling a command after login or by adding a pipe to the command, but some lack this feature. For such occasions, you can define regular expressions that watch the terminal output and a reaction, which usually means sending one character.
For example:
expect:
  /Press any key to continue(\x1b[\??\d+(;\d+)[A-Za-z])$/m:
    send: " "
    replace: ""
In this case, if the message appears, the program will send a single space and then remove the prompt from the output.
Azure API Management sensor allows you to monitor the performance and behavior of the Azure API Management service. Azure API Management is a reliable, secure, and scalable way to publish, consume and manage APIs running on the Microsoft Azure platform.
This sensor provides information on API Management metrics, such as Requests and Utilization, which includes Backend and Gateway Requests Duration [ms] and Percentage of Metric Capacity, as well as EventHub metrics with visibility of Dropped, Rejected, Throttled, and Timed Out Events or Events Size [bytes], among others. Thanks to this sensor, you'll have insight into the utilization of APIs and can monitor the API Management Service, ensuring optimal performance.
This sensor requires credentials for Azure subscription (Application (Client) ID, Tenant ID, and Client Secret Value).
By default, the sensor adds a report containing Requests and Utilization\% Metric Capacity, Requests and Utilization\Gateway Requests Duration [ms], and Requests and Utilization\Gateway Requests charts. These metrics will be collected and available as a report or through @trend-viewer.
api, azure, cloud, event hub
This sensor monitors Azure Cosmos Database Service using Azure Monitor metrics.
Azure CosmosDB Sensor monitors service availability and alerts you when availability drops. The sensor collects service usage metrics such as the number of requests, RU consumed for requests, and more. You can set the RU consumption percentage alert based on your requirement.
This sensor requires credentials for Azure subscription (Application (Client) ID, Tenant ID, and Client Secret Value).
The sensor monitors only one specified Cosmos DB account.
The sensor collects metrics about service usage and performance, e.g.:
You can also detect configuration changes by defining alerts on a change of configuration metrics, e.g.:
Depending on the account data model (Cassandra, Azure Table, etc.) list of available sensor metrics may differ.
By default, the sensor adds a report containing Average Server Side Latency [ms], % Service Availability, Request Count, % Normalized RU Consumption, and Request Charges charts. These metrics will be collected and available as a report or through @trend-viewer.
azure, cloud, database
The sensor monitors the estimated cost of resources in Azure subscription, and the progress of expenses within budgets defined in Azure subscription.
The sensor reads all budgets defined in the Azure subscription and, by default, raises an alarm when any budget is more than 80% used. You can also define your own alerting rules for any budget.
The sensor calculates the estimated subscription costs for the current billing period. You can define an alert when the current cost exceeds a certain value. Alerts can also be defined for costs generated by selected resources or resource groups.
Both current cost and budget status can be represented with Object Status Widgets.
This sensor requires credentials for Azure subscription (Application (Client) ID, Tenant ID, and Client Secret Value) with a custom role with permissions:
You need to choose an Azure subscription and the starting day of the monthly billing period. By default, the sensor is run once a day as recommended in Microsoft's documentation.
billing, budget, cloud, cost
Azure Insights Components Sensor monitors Azure Insights Components resource, using Azure Monitor metrics. Application Insights is a feature of Azure Monitor and an extensible Application Performance Management (APM) service. You can use it to monitor your live applications. It will automatically detect performance anomalies and includes powerful analytics tools to help in diagnosing issues and understanding what users do with the app.
While a request is being sent to the monitored Azure resource, Insights Components helps track down timeouts caused by an unsuccessful network call attempt, a long time to receive the last byte of the document, or a long time between establishing the network connection and receiving the first byte. Azure Insights Components Sensor lets you maintain various statistics, e.g., what exceptions and failures have occurred, the number of page views, and the count of calls made by the application to external resources.
This sensor requires credentials for Azure subscription (Application (Client) ID, Tenant ID, and Client Secret Value).
By default, the sensor adds a report containing Page Views\Count, Page Views\Load Time [ms], Performance\Available Memory Bytes, Requests\Server Completed Requests, and Requests\Failed Requests charts. These metrics will be collected and available as a report or through @trend-viewer.
azure, cloud, components, insights
The sensor monitors Azure Load Balancer resource, using Azure Monitor metrics.
Azure Load Balancer Sensor helps track health probe status per time duration, which is very important in case of a scaling error. It also helps verify whether the number of SNAT ports is set properly or whether the data path is available.
This sensor requires credentials for Azure subscription (Application (Client) ID, Tenant ID, and Client Secret Value).
azure, balancer, load balance, microsoft
Azure Logic Apps sensor allows you to monitor the performance and behavior of the Microsoft Cloud technology called Azure Logic Apps. Azure Logic Apps is a service in Microsoft Cloud that allows you to schedule, automate, and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations.
The sensor allows monitoring of Azure Logic Apps usage in terms of performance and resources used. Thanks to this sensor, you'll be able to ensure the optimal performance of Logic Apps workflows and troubleshoot problems if they occur.
This sensor requires credentials for Azure subscription (Application (Client) ID, Tenant ID, and Client Secret Value).
By default, the sensor adds a report containing Action\Actions Completed, Action\Actions Failed, Action\Actions Succeeded, Run\Runs Completed [bytes], Run\Runs Started [bytes], Trigger\Triggers Completed, Trigger\Triggers Failed, Trigger\Triggers Started and Run\% Runs Failed charts. These metrics will be collected and available as a report or through @trend-viewer.
azure, cloud, logic apps
Azure Server Farm Sensor monitors Azure Server Farm resources, using Azure Monitor metrics. The Microsoft Server Farm simplifies the provisioning, scaling, and management of multiple servers for administrators and hosting providers.
The resource is automatically created for multiple services that require cloud computing. Azure Server Farm Sensor monitors CPU and memory usage along with HTTP and disk queue length automatically; it also allows you to monitor multiple other metrics depending on your needs.
This sensor requires credentials for Azure subscription (Application (Client) ID, Tenant ID, and Client Secret Value).
By default, the sensor adds a report containing System\Bytes Received, System\Bytes Sent, System\Bytes Received\sec, System\Bytes Sent\sec, Sockets\Inbound All Count, Sockets\Outbound All Count, Sockets\Outbound Established Count, Sockets\Outbound Time Wait Count, and Sockets\Loopback Count charts. These metrics will be collected and available as a report or through @trend-viewer.
azure, cloud, farms, server
Azure Service Bus sensor allows monitoring the metrics of the cloud apps connected to the Service Bus namespace. Azure Service Bus is a messaging service on the cloud used to connect any applications, devices, and services running in the cloud to any other applications or services. As a result, it acts as a messaging backbone for applications available in the cloud or across any device.
This sensor provides insight into the number of Active Connections, Requests, or Server Errors, as well as a count of Average Active Messages and Queue Size in Bytes. It also enables monitoring of Premium Tier metrics, such as % CPU Usage and % Memory Usage. Thanks to that you can control the data transferred between different applications and services on all tiers, as well as coordinate transactional work that requires a high degree of reliability.
You can monitor all available Service Bus Tiers but certain metrics, such as % CPU Usage and % Memory Usage are available for Premium Tier only.
This sensor requires credentials for Azure subscription (Application (Client) ID, Tenant ID, and Client Secret Value).
By default, the sensor adds a report containing Connection\Active Connections, Messages\Avg. Active, Messages\Queue Size in bytes, Request\Successful, Request\Server Errors, Request\Not Processed, Resource\% Memory Usage, and Resource\% CPU Usage charts. These metrics will be collected and available as a report or through @trend-viewer.
azure, cloud, namespace, service bus
Azure Storage Account Sensor monitors the Azure Storage Accounts service, using Azure Monitor metrics. The Microsoft Azure Storage Accounts service allows you to create a group of data management rules and apply them all at once to the data stored in the account: blobs, files, tables, and queues.
With this sensor, you can detect potential service problems (a decrease in service Availability) or monitor cost-related service usage (an increase in Capacity); you can also observe key parameters characterizing the operation of the service, including Success Server Latency, Egress Bytes, or Transactions. This sensor includes two default alerts: one verifies whether service availability drops below 99%, and the other detects a large change in capacity consumption.
This sensor requires credentials for Azure subscription (Application (Client) ID, Tenant ID, and Client Secret Value).
By default, the sensor adds a report containing Storage Account\Egress Bytes, Storage Account\Ingress Bytes, Storage Account\Success E2E Latency, Storage Account\Success Server Latency, and Storage Account\Transactions charts. These metrics will be collected and available as a report or through @trend-viewer.
account, azure, cloud, storage
With the Azure Web Site sensor, you can monitor the usage and performance of the web applications that are deployed to the cloud with Azure Web Apps. Azure Web Apps is a managed cloud service that allows the deployment of a web application and makes it available to customers on the Internet in a very short amount of time. The Azure Web Site sensor provides information on Application Domain metrics, such as CPU Time or Application Connections; Input/Output Bytes and Operations Per Second; as well as HTTP Response Time, Client or Server Errors, Redirects, and more. Thanks to this, you'll be able to keep a close eye on the monitored resources and run diagnostics when necessary.
This sensor requires credentials for Azure subscription (Application (Client) ID, Tenant ID, and Client Secret Value).
The sensor collects metrics about Application Domain usage and performance, e.g.:
The sensor also collects metrics related to HTTP Response statistics, e.g.:
By default, the sensor adds a report containing HTTP\HTTP Response Time, Application Domain\Total Application Domains, HTTP\HTTP 401 Responses, HTTP\HTTP 403 Responses, HTTP\HTTP 404 Responses, HTTP\HTTP 406 Responses, and HTTP\HTTP 4xx Responses charts. These metrics will be collected and available as a report or through @trend-viewer.
azure, cloud, web apps, website
The sensor monitors Azure Kubernetes Services resource, using Azure Monitor metrics.
Azure Kubernetes Cluster Sensor allows collecting essential metrics for the managed Cluster. With this sensor you can for example be notified when the allocable memory on the Cluster is running out.
To use this sensor, you will need the Client ID, Tenant ID, and Application Secret.
By default, the sensor adds a report containing Container\Used Memory Working Set Bytes, Container\RSS Memory Bytes, Nodes\Nodes Condition, Nodes\CPU Cores Allocatable, Nodes\Memory Bytes Allocatable, Nodes\Bytes Received, Nodes\Bytes Sent, Nodes\Disk Used Bytes, Pods\Phase, and Pods\Ready charts. These metrics will be collected and available as a report or through @trend-viewer.
azure, cluster, kubernetes, microsoft
The Alarm Sensor monitors the status of Amazon alarms with CloudWatch API.
With this feature, you can keep track of your most important AWS Alarms and extend monitoring to warn about other AWS Alarm states, like insufficient data.
The sensor also allows monitoring of all regions.
At least one configured CloudWatch Alarm is required.
To use the Alarm Sensor you will need to provide Access Key ID and Secret Access Key. The user whose credentials are entered during sensor configuration needs access rights for CloudWatch queries.
alarm, aws, aws alarm, cloud
AWS Auto Scaling Sensor monitors the parameters of AWS Auto Scaling Group. AWS Auto Scaling automatically adjusts capacity to maintain steady, predictable performance for your applications.
The sensor can track changes within the Auto Scaling group and observe multiple states of group instances. Thanks to it, you can find out about large variations in the number of instances that are running as part of the group, which can increase cost, and observe maximum group size which informs about maximal workload.
To use AWS Auto Scaling you will need to provide Access Key ID and Secret Access Key. The user whose credentials are entered during sensor configuration needs access rights for CloudWatch queries.
By default, the sensor adds a report containing Group\Pending Instances, Group\Standby Instances, Group\% Running Instances, and Group\Running Instances charts. These metrics will be collected and available as a report or through @trend-viewer.
aws, cloud computing, ec2, elastic
This sensor allows you to monitor the performance and behavior of the databases managed in Azure SQL DB. Azure SQL DB is Microsoft’s cloud database service, which enables organizations to store relational data in the cloud and quickly scale the size of their databases up or down as business needs change.
Azure SQL DB sensor allows you to monitor failed/successful connections or connections blocked by a firewall; to control data and CPU resources by monitoring their average values as well as the occupied memory space and usage percentage. It also makes it possible to verify the data space used, for example when the upper limit is reached. Thanks to this sensor you'll have insight into the utilization of the service and databases, ensuring optimal performance.
This sensor requires credentials for Azure subscription (Application (Client) ID, Tenant ID, and Client Secret Value).
By default, the sensor adds a report containing Process\% SQL Server Process Core, Process\% SQL Server Process Memory, and Resources\Avg. Data Space Allocated [bytes] charts. These metrics will be collected and available as a report or through @trend-viewer.
azure, cloud, database, sql
The sensor monitors the estimated cost of services in the AWS cloud and the progress of expenses within AWS Budgets.
The sensor calculates the estimated costs for the current month's billing period. You can define an alert when the current cost exceeds a certain value. Alerts can also be defined for costs generated by selected AWS services. As an option, the sensor can read AWS Budgets associated with the selected account. You can define alerting rules for any budget spending progress. Current cost and budget status can be represented with Object Status Widgets.
The sensor needs a user Access Key ID and Secret Access Key for AWS Cloud authorization. This sensor requires sufficient rights to query data from the AWS Cost Explorer API and AWS Budgets API. In IAM (Identity and Access Management), you need to create a policy with the statement (JSON format):
```json
{
  "Sid": "Stmt1613653218000",
  "Effect": "Allow",
  "Action": [
    "ce:GetCostAndUsage",
    "ce:GetCostForecast",
    "ce:getDimensionValues",
    "budgets:ViewBudget"
  ],
  "Resource": [ "*" ]
}
```
By default, the sensor is run once a day. Reading the AWS Budgets information is optional. You need to select the AWS account associated with a budget.
This sensor uses the AWS Cost Explorer API and AWS Budgets API. According to the AWS Cost Management Pricing document, each request incurs a cost. This sensor performs two API queries, plus one per each defined alert for AWS service cost, and one additional request if fetching information about budgets is enabled.
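The per-run request arithmetic above can be sketched as a small helper (illustrative only; the function name is ours, not part of NetCrunch):

```python
def requests_per_run(service_cost_alerts: int, budgets_enabled: bool) -> int:
    """Estimate billable API requests per sensor run, per the rule above:
    two base queries, plus one per defined AWS service cost alert,
    plus one more when budget fetching is enabled."""
    return 2 + service_cost_alerts + (1 if budgets_enabled else 0)

# Three service-cost alerts with budgets enabled: 2 + 3 + 1 requests.
print(requests_per_run(3, True))  # → 6
```

Since the sensor runs once a day by default, this is also the daily request count.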
aws, billing, budget, cloud, cost
The sensor allows monitoring of key metrics of AWS Elastic Block Store.
AWS EBS Sensor monitors Elastic Block Store in terms of performance and resources used by a single EBS instance. Amazon Elastic Block Store (EBS) is a high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2). The sensor helps to keep the Burst Bucket balance at a safe level and detect large changes in the number of write operations, which are cost-related and can be a sign of EC2 errors.
To monitor AWS Elastic Block Store you will need to provide Access Key ID and Secret Access Key. The user whose credentials are entered during sensor configuration needs access rights for CloudWatch queries.
By default, the sensor adds a report containing Volume\Queue Length, Volume\Bytes Read, and Volume\Write Bytes charts. These metrics will be collected and available as a report or through @trend-viewer.
aws, cloud, ebs, storage
AWS EC2 Sensor allows monitoring usage of Amazon Cloud service Elastic Compute Cloud (EC2) Instance resources.
The sensor monitors the Network load, Disk usage, and CPU utilization for each available EC2 Instance. CPU Credits usage metrics are also available. You can use these metrics to determine whether you should launch additional instances to handle the increased load or, conversely, the under-used instances should be stopped to save money.
This sensor requires sufficient rights to query data from the AWS API. In IAM (Identity and Access Management), you need to create a policy with the statements (JSON format):
```json
[
  {
    "Sid": "Stmt1338559359622",
    "Action": [
      "ec2:DescribeInstances",
      "ec2:DescribeVolumes"
    ],
    "Effect": "Allow",
    "Resource": "*"
  },
  {
    "Sid": "Stmt1338559372809",
    "Action": [
      "cloudwatch:GetMetricStatistics",
      "cloudwatch:GetMetricData",
      "cloudwatch:ListMetrics",
      "cloudwatch:DescribeAlarms"
    ],
    "Effect": "Allow",
    "Resource": "*"
  }
]
```
Assign the above policy to the user account used for monitoring.
This sensor requires credentials for AWS API. Select the region for which you want to monitor EC2 Instances. The sensor provides metrics for each EC2 Instance in the selected region.
By default, the sensor adds a report containing Credit\CPU Credit Usage, CPU\% Utilization, Disk\Read Bytes, Disk\Write Bytes, Disk\Read Operations, and Disk\Write Operations charts. These metrics will be collected and available as a report or through @trend-viewer.
amazon, aws, cloud, ec2
AWS ElastiCache Sensor allows monitoring of the performance of the AWS ElastiCache service.
This sensor offers good insight into ElastiCache performance using Amazon CloudWatch metrics. You can set triggers for these metrics so that you can take corrective action before performance issues occur.
This sensor requires sufficient rights to query data from the AWS API. In IAM (Identity and Access Management), you need to create a policy with the statements (JSON format):
```json
[
  {
    "Sid": "Stmt1338559399560",
    "Action": [
      "elasticache:DescribeCacheClusters"
    ],
    "Effect": "Allow",
    "Resource": "*"
  },
  {
    "Sid": "Stmt1338559372809",
    "Action": [
      "cloudwatch:GetMetricStatistics",
      "cloudwatch:GetMetricData",
      "cloudwatch:ListMetrics"
    ],
    "Effect": "Allow",
    "Resource": "*"
  }
]
```
This sensor requires credentials for AWS API. Select the region for which you want to monitor ElastiCache clusters.
Alerts are defined for all monitored ElastiCache clusters.
ElastiCache provides both host-level metrics and metrics that are specific to the cache engine software (Redis).
Selected host-level metrics:
Selected metrics for Redis:
By default, the sensor collects data for two reports:
These metrics will be collected for all monitored ElastiCache clusters and available as a report or through @trend-viewer.
amazon, cloud, redis
AWS ELB Sensor allows monitoring metrics of AWS Elastic Load Balancers.
Elastic Load Balancing allows you to monitor the health of your applications and their performance in real-time with Amazon CloudWatch metrics.
This sensor requires sufficient rights to query data from the AWS API. In IAM (Identity and Access Management), you need to create a policy with the statements (JSON format):
```json
[
  {
    "Sid": "Stmt1338559359622",
    "Action": [
      "elasticloadbalancing:DescribeLoadBalancers"
    ],
    "Effect": "Allow",
    "Resource": "*"
  },
  {
    "Sid": "Stmt1338559372809",
    "Action": [
      "cloudwatch:GetMetricStatistics",
      "cloudwatch:GetMetricData",
      "cloudwatch:ListMetrics"
    ],
    "Effect": "Allow",
    "Resource": "*"
  }
]
```
This sensor requires credentials for AWS API. Select the region for which you want to monitor Load Balancers instances. Metrics can be collected for each Load Balancer instance or by Availability Zones.
Alerts are defined for all monitored load balancers.
Data for some metrics are available depending on the Load Balancer type and selected dimension.
By default, the sensor collects data for two reports:
Load Balancer Report - containing base balancer metrics, such as Requests\Request Count, Load Balancer\Backend Connection Errors, Load Balancer\Healthy Host Count, and others.
Application Load Balancer Report - containing ALB metrics.
amazon, cloud, elastic, load balancer
AWS SQS Sensor allows monitoring of key metrics of AWS Simple Queue Service (SQS). SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
The sensor monitors AWS Simple Queue Service performance and workload parameters. With this sensor, you can avoid queue overload (indicated by a large number of delayed messages) and spot a large number of empty receives, which can indicate errors in the pushing service(s).
To use this sensor, you will need a user Access Key ID and Secret Access Key (the latter is visible only shortly after key pair generation). The user whose credentials are entered during sensor configuration needs proper access rights for CloudWatch queries (example IAM policy in JSON format below).
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1338559372809",
      "Action": [
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:GetMetricData",
        "cloudwatch:ListMetrics",
        "cloudwatch:DescribeAlarms"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "Stmt1338559548992",
      "Action": [
        "sqs:ListQueues"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
```
This sensor can only be used on the NetCrunch Cloud Service node. There is only one sensor allowed per node. During sensor configuration, you will have to choose one specific SQS instance that you want to monitor. You can also add your own ID to the sensor if you monitor multiple SQS instances.
By default, the sensor adds a report containing Queue\Oldest Non-deleted Message Age, Queue\Messages not Visible, Queue\Messages Visible, and Queue\Messages Added charts. These metrics will be collected and available as a report or through @trend-viewer.
aws, cloud, queue, service
onedrive
This sensor allows monitoring of Microsoft OneDrive storage usage.
This sensor uses OAuth 2.0 to get an access token to your OneDrive.
By default, the sensor adds a report containing Storage\Allocated Bytes, Storage\Used Bytes, Storage\Trash Space Bytes, and Storage\% Free Space charts. These metrics will be collected and available as a report or through @trend-viewer.
google-drive
It is necessary to enter at least one authentication profile and complete OAuth 2.0 authentication by clicking "Authenticate".
By default, the sensor adds a report containing Storage\Allocated Bytes, Storage\Used Bytes, Storage\Trash Space Bytes, Storage\% Free Space, Storage\Free Space Bytes, and Storage\% Trash Space charts. These metrics will be collected and available as a report or through @trend-viewer.
cloud, drive, google, microsoft, storage
NetCrunch includes dedicated sensors for monitoring cloud-hosted email services such as Gmail and Outlook. These sensors offer the same diagnostic functionality as traditional email sensors but use OAuth 2.0 authentication instead of basic credentials. They support Gmail, Microsoft 365/Outlook.com, and custom OAuth providers.
cloud authentication, cloud email, cloud email round-trip, cloud monitoring, email performance, gmail monitoring, imap oauth, microsoft 365 monitoring, netcrunch oauth2, oauth email sensor, outlook monitoring, smtp oauth
NetCrunch supports monitoring of modern cloud-based email platforms using a dedicated Cloud Email Service node. Once added, this node can host any of the three specialized cloud email sensors, which replicate the functionality of standard email monitors while using OAuth 2.0 authentication to access protected accounts.
These sensors allow testing of mail delivery time, mailbox contents, and even scanning message bodies for alert conditions. This provides comprehensive, agentless visibility into cloud-hosted mail systems without the use of insecure credentials.
The sensors support:
To use any of these sensors, you must create a Cloud Email Service Authentication Profile, which includes:
This ensures secure, standards-compliant access to mailbox data without storing passwords.
All sensors are added from the Monitoring → Sensors section of a Cloud Email Service node.
A powerful sensor that monitors inbound email content. It reads messages via IMAP and can:
Typical use cases:
Tests the entire email pipeline by sending a message to a monitored mailbox and reading it back via IMAP.
Ideal for:
A mailbox-level sensor that:
Useful for:
Feature | Traditional Email Sensor | Cloud Email Sensor |
---|---|---|
Authentication | Basic credentials | OAuth 2.0 |
Mailbox type | Local/SMTP/IMAP/POP3 | Gmail, Outlook, custom OAuth IMAP |
Security | Password stored (hashed) | Token-based (secure) |
Compatibility | Any mail server | OAuth-supported cloud services |
Integration | Node-based (Email Server) | Cloud Service Node |
Cloud Email sensors in NetCrunch allow secure, modern monitoring of cloud-hosted mail systems without compromising visibility or functionality. Whether you're tracking delivery time, scanning inboxes for alerts, or auditing mailbox usage, these sensors let you do so securely, efficiently, and without deploying agents.
Zoom sensors allow monitoring various aspects of Zoom services.
The sensor checks the status of Zoom services (like Zoom Meetings, Zoom Video Webinars, and others) by requesting the status API available at https://status.zoom.us/.
By default, the sensor alerts you when any of the services are in a state other than Operational. You can also set an alarm for the selected service.
This sensor reads information about the Zoom account's billing plans. Additionally, the sensor collects metrics of Cloud Recording Plan storage usage.
This sensor requires credentials (Account ID, Client ID, and Client Secret) for the Zoom API. To enable access to the API, you must build a Server-to-Server OAuth application in the Zoom App Marketplace.
Required scopes:
The Server-to-Server OAuth app must be enabled in User Management → Roles → [Admin] → Advanced Features (View only).
Due to Zoom API restrictions, your Zoom account must be a paid account.
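For context, the Server-to-Server OAuth exchange behind these credentials looks roughly like this; the snippet only builds the token request, and the credential values are placeholders (this is a sketch, not NetCrunch's actual code):

```python
import base64
from urllib.parse import urlencode

ACCOUNT_ID = "your-account-id"        # placeholders: use your app's values
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"

# Zoom's Server-to-Server OAuth flow: POST to the token endpoint with the
# Client ID/Secret sent as HTTP Basic auth to obtain a short-lived token.
token_url = "https://zoom.us/oauth/token?" + urlencode(
    {"grant_type": "account_credentials", "account_id": ACCOUNT_ID}
)
basic = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
headers = {"Authorization": f"Basic {basic}"}
```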
By default, the sensor collects data for reports containing Recording\Free Storage Used Bytes, Recording\% Free Storage Used charts, and Plan\% Usage charts for each available plan. These metrics will be collected and available as a report or through @trend-viewer.
This sensor allows you to audit administrator and user activity by reading new entries from the Zoom Operations Log.
The Zoom Operation Log contains entries that describe changes made by admins on the account, specifically changes in the sections under Account Management, User Management, and Advanced. This sensor checks the Operation Log and can generate alerts for new log entries.
This sensor requires credentials (Account ID, Client ID, and Client Secret) for the Zoom API. To enable access to the API, you must build a Server-to-Server OAuth application in the Zoom App Marketplace.
Required scope:
The Server-to-Server OAuth app must be enabled in User Management → Roles → [Admin] → Advanced Features (View only).
Due to Zoom API restrictions, your Zoom account must be a paid account.
By default, an alert is generated for each new log operation.
You can create your own alerting rule for operations matching selected criteria, like:
* Operation Action (e.g., Update or Add)
* Operation Category (e.g., User, Account, and others)
* Operation Operator
* The text that appears in the operation description
This sensor reads the actual configuration settings of the Zoom account.
Use this sensor if you want to monitor changes to the configuration of the Zoom account.
This sensor requires credentials (Account ID, Client ID, and Client Secret) for the Zoom API. To enable access to the API, you must build a Server-to-Server OAuth application in the Zoom App Marketplace.
Required scopes:
The Server-to-Server OAuth app must be enabled in User Management → Roles → [Admin] → Advanced Features (View only).
Due to Zoom API restrictions, your Zoom account must be a paid account.
You can set alerts on various configuration properties, according to your needs. For example: when End-to-end encryption for meetings is disabled or enabled, when cloud recording is enabled or disabled, and others.
account, logs, plan, recording, status, zoom
This sensor allows you to monitor the health of Microsoft 365 services.
The sensor monitors the state of Microsoft 365 services and alerts you if any service is in a degraded or interrupted state. You can also define an alert for a selected service. The sensor then reads detailed information about the service failure, e.g., which feature of the service has the problem and how the problem affects the service's users.
The Office 365 Management APIs use Azure AD to provide authentication services that you can use to grant rights for NetCrunch to access them. To allow NetCrunch access to the Office 365 Management APIs, you need to register your application in Azure AD.
Specify the API permissions for Microsoft Graph and select "ServiceHealth.Read.All".
Then you have to create a new application secret for authentication. Detailed instructions can be found in the Microsoft documentation at: https://docs.microsoft.com/en-us/office/office-365-management-api/get-started-with-office-365-management-apis
The sensor needs Application ID, Tenant ID, and Application Secret to authenticate with Azure.
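Under the hood this is the standard Azure AD client-credentials exchange; here is a sketch of the token request built with placeholder values (not NetCrunch's actual implementation):

```python
from urllib.parse import urlencode

TENANT_ID = "your-tenant-id"          # placeholders from your app registration
APP_ID = "your-application-id"
APP_SECRET = "your-application-secret"

# POSTing this body to the tenant's token endpoint returns an access token,
# which is then used for Microsoft Graph service health queries.
token_url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": APP_ID,
    "client_secret": APP_SECRET,
    "scope": "https://graph.microsoft.com/.default",
})
```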
By default, the sensor alerts you when any Microsoft 365 service is in a degraded or interrupted state. If you need detailed information about the status of a specific service, you must define an alert for the selected service.
By default, the sensor collects statistical data showing the number of services in different states, like Services\Operational, Services\Degraded, Services\Restoring. The metrics will be collected and available as a report or through @trend-viewer.
azure, cloud, health, microsoft, office, status
Google Analytics sensors allow monitoring of various metrics provided by Analytics Reporting API.
This sensor uses OAuth 2.0 to get access to your Google Analytics data.
An Analytics account is organized into several levels: accounts, properties, and reporting views. You must select Reporting View for the selected Property in sensor configuration.
This sensor allows monitoring metrics related to Users' activity.
By default, the sensor adds a report containing User Count, Number of Sessions per User, Session Count, Avg. Session Duration, Pageviews, and Pages per Session charts.
This sensor allows monitoring metrics of the selected reporting view.
You can collect any predefined metrics provided by Analytics Reporting API. Custom-defined metrics are also available.
You can choose Data Collection Mode. This option determines how the sensor processes the counter values.
analytics, google, performance, web
This sensor monitors the status of the selected Pingdom check and the response time of a monitored webpage.
To use this sensor, you need to configure at least one Pingdom basic check.
Last Check Status (contains: time of the latest check, descriptions of statuses, probe ID). The sensor alerts you when the check is in a down, unknown, or unconfirmed state.
The sensor requires an API token that you can create in the Pingdom application. NetCrunch requires the Read Access permission.
Copy the token and create a profile in NetCrunch (click on Edit next to the profile while adding the sensor and select the 'bearer' authentication scheme).
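The 'bearer' scheme amounts to sending the token in an Authorization header with each request; a minimal sketch against the Pingdom 3.1 API (the token value is a placeholder):

```python
import urllib.request

API_TOKEN = "your-pingdom-api-token"  # placeholder: token from the Pingdom app

# Every Pingdom API 3.1 call carries the token as a Bearer Authorization
# header; this request would list the configured checks.
req = urllib.request.Request(
    "https://api.pingdom.com/api/3.1/checks",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
# urllib.request.urlopen(req)  # uncomment to actually query the API
```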
The Pingdom sensor can be added to the Cloud Service node only.
This sensor monitors the latest build/job status on a specific project in the GitLab cloud repository. At least one project on GitLab, with configured jobs/builds, is necessary to use this sensor. GitLab documentation about jobs configuration can be found in GitLab CI/CD pipeline configuration reference
Access Token: GitLab documentation: https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html#doc-nav E.g. ATzt2bfFo_YqLXsVc-6y
GitLab Namespace and GitLab Project name: after entering any project, both can be found within the URL of your project. E.g., for https://gitlab.com/test1ws12/test_123.git, the GitLab Namespace is test1ws12 and the GitLab Project name is test_123.
gitlab, response, sensor, web-page
Alerting is the heart of the monitoring system. NetCrunch receives alert triggers, processes them, and then executes actions in response.
See how to create an alert for Syslog messages, SNMP traps, Web Messages, and Windows Event Log entries.
NetCrunch can act as a log server for external events. It can store them in the NetCrunch Event Log and perform defined alert actions (i.e., notifications) as the response. You can also correlate incoming events to track active alerts.
Settings → Monitoring → Syslog Server
You can change the port on which NetCrunch listens to Syslog messages (default 514) and set the option to forward each message to another Syslog server.
NetCrunch waits for a given time frame and groups identical messages. This helps avoid flooding with the same messages, which may happen in case of a device or service failure.
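The grouping idea can be sketched as follows. The class and parameter names are illustrative only, not NetCrunch internals:

```python
class MessageGrouper:
    """Sketch: suppress repeats of the same message within a time window."""

    def __init__(self, window_seconds: float = 60.0):
        self.window = window_seconds
        self._first_seen = {}  # message text -> timestamp starting the window
        self._suppressed = {}  # message text -> repeats grouped in the window

    def report(self, message: str, now: float) -> bool:
        """Return True if the message should be logged, False if grouped."""
        first = self._first_seen.get(message)
        if first is None or now - first >= self.window:
            self._first_seen[message] = now
            self._suppressed[message] = 0
            return True            # first occurrence in this window: log it
        self._suppressed[message] += 1
        return False               # duplicate within the window: group it

g = MessageGrouper(window_seconds=60)
print(g.report("link down on eth0", now=0))    # → True  (logged)
print(g.report("link down on eth0", now=10))   # → False (grouped)
print(g.report("link down on eth0", now=70))   # → True  (new window)
```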
Configuring the NetCrunch Syslog server is the first step, while the next one is creating alerts.
You can add an alert to a node sending Syslog messages and specify a filtering condition for the received message. As such, you can create different alerts for different messages.
Only messages matching defined alerts (filters) pass into NetCrunch, whereas others are discarded.
Go to Settings > Alerting & Notifications > Monitoring Packs and Policies. In the desired monitoring pack, click Add Alert and choose 'New Event for Received Syslog Message.'
You can also go to Node Settings > Monitoring, click on the Syslog tile in the Node section, and click Add Alert.
The window will allow you to declare expected message parameters.
Settings > Monitoring > Web Messages Receiver
You can easily send an event message to NetCrunch using an HTTP request. The program accepts POST and GET requests. In the examples below, we skip the first part of the URL, which is your NetCrunch Server Web Access URL. We strongly recommend configuring the server to use the HTTPS protocol.
You can use the cURL program (available on multiple platforms) to send requests to NetCrunch. You can download it from curl.haxx.se.
http://<nc-server>/api/rest/1/event/<node-identification>
Node identification is a node IP address or DNS name.
The simplest way is to attach the message to the URL as a search string.
api/rest/1/event/crm.acme.com?CRM%20must%20be%20restarted
Because URLs cannot contain spaces, the message must be properly URL-encoded.
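The encoding is standard URL percent-encoding, which can be produced with, for example, Python's standard library:

```python
from urllib.parse import quote

# Percent-encode the message text so it can travel in a URL.
message = "CRM must be restarted"
path = "api/rest/1/event/crm.acme.com?" + quote(message)
print(path)
# → api/rest/1/event/crm.acme.com?CRM%20must%20be%20restarted
```

This reproduces the example URL shown above: each space becomes `%20`.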
If you form a query string as a parameter list, it will be converted to a JSON object, and then you can create alerts based on these parameters.
Example:
api/rest/1/event/crm.acme.com?error=1
NetCrunch accepts data encoded as typical form encoding (application/x-www-form-urlencoded) and allows filtering events on parameters. Another accepted encoding is JSON (application/json).
Curl Example
curl -d "error=1" http://192.168.10.112:8008/api/rest/1/event/192.168.10.1
NetCrunch can receive messages by default. You can disable this feature and set the message grouping option.
NetCrunch waits for a given time frame and groups the same messages. This helps to avoid flooding with the same messages. It may happen in case of a device or service failure.
You can add an alert to a node that sends web messages and specify the filtering condition for the received message. This way, you can create different alerts for different messages.
Only messages matching defined alerts (filters) pass into NetCrunch, whereas others are discarded.
Go to Settings > Alerting & Notifications > Monitoring Packs and Policies. In the desired monitoring pack, click Add Alert and choose 'New Event for Received Web Message.'
You can also go to Node Settings > Monitoring and click on the Web Messages tile in the Node section.
The window will let you declare the expected message parameters and create a filter for the messages.
Let's define a simple message alert where any non-empty message will be logged.
NetCrunch receives SNMPv1, SNMPv2, and SNMPv3 traps. It can also forward all received traps to another SNMP manager. Forwarding can be set in Settings > Monitoring > SNMP Trap Receiver.
On the Settings page, you can check the current status of the SNMP trap receiver. If SNMP is not enabled, click the button, enable it, and set its options.
You can change the port on which NetCrunch listens for SNMP traps (default 162) and set the option of forwarding traps to another SNMP manager.
NetCrunch waits for a given time frame and groups the same SNMP trap messages. This helps to avoid flooding with the same messages.
You can add an alert to the node sending an SNMP trap by creating a trap alert. You can also receive traps in the External Events window and click on the desired trap to create an alert.
To add a trap alert, go to Node Settings > Monitoring and click on the SNMP Traps tile in the Node section.
To add an SNMP trap to the Monitoring Pack, the SNMP only option must be checked. Otherwise, SNMP events will not be visible.
Settings > Monitoring > Windows Event Log Collector

Monitoring the Windows Event Log is enabled by default. It's not just a passive receiver like Syslog or SNMP traps: the Event Log monitor connects to a remote machine and needs authentication to register to receive Windows Event Log events. It is rather an extension of the Windows Monitoring Engine.
When you click the Windows Event Log button on the Monitoring tab, you can change the Windows Event Log engine's global options.
NetCrunch waits for a given time frame and groups the Windows Event Log entries.
Windows Event Log monitoring has the same requirements as standard Windows monitoring. It requires node OS monitoring to be set to Windows. Then you can go to the Windows Event Log tile under the Windows section in Node Settings and add an alert.
Alternatively, you can create a new Monitoring Pack with a set of rules to monitor the Windows Event Log in Settings > Alerting & Notifications > Monitoring Packs and Policies.
You can also find the 'Security Audit' Monitoring Pack in the global section of defined monitoring packs. It contains alerts based on selected events from the Windows Event Log - Security category.
NetCrunch alerts trigger proactive actions, not just notifications. They detect missing events and complex conditions while correlating events across nodes to enable tailored responses and automated remediation.
NetCrunch allows you to define additional conditions for each defined alert, regardless of whether it is a node status alert, an event log alert, or an SNMP trap. These conditions allow you to trigger an action even if an event has not been triggered; for example, if there is no log entry confirming an operation (i.e., a backup). NetCrunch can also receive heartbeat events and notify you if one is missing. Other conditions allow you to suppress alert execution for some time (as the alert won't be triggered, the actions set to run on alert close won't be executed).
NetCrunch-triggered alerts are automatically correlated. This means that when the condition causing an event is no longer present, a closing event is triggered and the alert is closed. This helps manage the pending alert list and execute an action (notification, ticket-closing action) on alert close.
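The open/close pairing can be sketched as follows. The event names and data structure are illustrative, not NetCrunch's internal model:

```python
# Sketch of alert close correlation: an opening event adds a pending alert,
# and the matching closing event removes it.
pending = {}

def on_event(alert_id: str, kind: str):
    if kind == "open":
        pending[alert_id] = "pending"
    elif kind == "close" and alert_id in pending:
        del pending[alert_id]   # condition gone: close the alert
        # ...on-close actions (notification, close ticket) would run here

on_event("cpu-high@server1", "open")
on_event("link-down@router1", "open")
on_event("cpu-high@server1", "close")
print(sorted(pending))   # → ['link-down@router1']
```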
It is fairly simple when the external source (i.e., trap, syslog message) triggers an event confirming that the alerting condition is no longer present. In such a case, you can add closing correlation events to the given alert.
You can also decide to close the alert automatically after some time, or you can use the alert condition to define a closing alert.
The device sends an alarm message every minute. If there is no error for 2 minutes, we can assume the alert is closed.
Settings > Alerting & Notifications > Monitoring Packs and Policies > Global
You can find the Correlations Monitoring Pack in the global section of defined monitoring packs. It allows defining alerts triggered when alerts from multiple nodes happen within the same time range (window) or when all alerts in the group are pending (all have to be correlated).
These correlated alerts can be for any events previously defined on any node in the Atlas.
Active Alerts is one of the essential views showing current unresolved issues. NetCrunch tracks the state and correlates all internal events. When defining alerts for external events (SNMP, syslog, etc.), you can easily correlate by defining events that can close the alert.
This separate view shows only current alerts instead of forcing administrators to browse the event log history of all alerts. You can synchronize the Alerts view with the Atlas Tree window so that the Alerts window updates when you change the current view.
The Grid (Table) view offers enhanced capabilities for sorting and filtering alerts. This view provides a structured and detailed overview, allowing you to easily organize and pinpoint specific alerts based on various criteria. It's ideal for in-depth analysis and quick identification of important alerts.
The Active Alerts Tiles view provides a dynamic dashboard for monitoring the latest active alerts. It highlights the most recent and important alerts, with animations to help you notice new alerts as they appear and observe closed alerts fading away.
This view displays the number of alerts generated in three time ranges (24 hours, 7 days, and 30 days). Additionally, it includes information about nodes with the most alerts, top critical and warning alerts by type, and statistics related to the number of alerts in custom views.
Every chart element is clickable and automatically brings you to the appropriate history view to dig deeper and see all selected events.
NetCrunch offers many predefined event log views and allows you to create custom views using an intuitive query builder. Views can be saved and used for any node group in the Atlas.
You can select a view from the dropdown in the center of the header, or you can easily build a new one. Read about Managing Event Log Views.
The Event Details window offers much more than just additional details. It allows you to check why the alert happened and inspect its parameters, and you can change or disable it for a particular node.
It allows you to:
This window also shows all executed actions and links to the event that closed a given alert.
If a performance counter value has triggered the alert, the Details view displays a chart showing values at the time of the alert.
Each alert can have its own alert escalation script. This means that NetCrunch executes actions if the alert condition persists or can execute actions when the alert is closed.
As a response to an event, NetCrunch can execute a sequence of actions. Actions can also be executed when the alert ends (on close). NetCrunch contains various actions, including Notifications, Logging, Control Actions, and Remote Scripts.
Notifications are very flexible and can be controlled by user profiles and groups. Additionally, they can be combined with node group (atlas view) membership, so it's possible to send notifications to different groups based on network node location or other parameters.
See also Alerting Actions
Actions can be executed immediately or with a delay (if the alert is not finished), and the last action can be repeated. Additionally, you can specify actions to be executed automatically when an alert is closed.
For example, you can decide to send a notification to some person and then, after some time, execute a server restart operation.
The script above executes notifications only for critical alerts and restarts the node causing the event if it is a Windows Server node.
SNMPv3 uses a different notification model than previous SNMP versions. To receive SNMPv3 notifications, you need a proper authentication profile since they have to be decoded with a password and given encryption settings.
Because SNMPv3 allows authentication and encryption of received traps, you need to define an SNMPv3 Notification profile for them. Otherwise, the program won't be able to decode them.
Unlike v1 and v2 profiles, v3 profiles are global: you can define them via NetCrunch > Monitoring > SNMP Communities and Passwords..., and you do not have to assign them to nodes.
NetCrunch supports the following authentication and encryption protocols:
Learn what situations increase alerts' volume and how NetCrunch helps prevent false alarms.
False alarms can arise in many situations. Here are some typical examples of events which can generate false alarms:
NetCrunch helps you avoid alert overload by implementing the following functionalities:
The order and frequency of monitoring nodes depend on the priority; intermediate nodes have a higher priority than nodes connected through them.
Event Suppression is the technique of preventing false alarms caused by network intermediate connection failure.
When NetCrunch receives an event related to a node connected through an intermediate link, it first ensures the link is OK, so that a broken intermediate connection does not cause the event. You can define exceptions when you want to receive events from descendant nodes.
Additionally, NetCrunch can suppress alerts from various node services or sensors. Event suppression is enabled by default. You can disable it by clicking the icon next to network services or in a particular sensor settings window.
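The dependency check can be sketched as follows. This is a simplified model with illustrative names; NetCrunch's actual dependency logic is richer:

```python
# Sketch of event suppression: before alerting on a node reachable only
# through an intermediate link, check the intermediate node itself.
def should_alert(node: str, link_up: dict, parent_of: dict,
                 exceptions: frozenset = frozenset()) -> bool:
    parent = parent_of.get(node)
    if parent is None or node in exceptions:
        return True                   # no dependency, or explicitly exempt
    return link_up.get(parent, True)  # suppress if the parent link is down

parent_of = {"server1": "router1", "server2": "router1"}
link_up = {"router1": False}          # the intermediate router is down

print(should_alert("server1", link_up, parent_of))
# → False (suppressed: the router outage explains the event)
print(should_alert("server1", link_up, parent_of, exceptions=frozenset({"server1"})))
# → True (exception defined: the event passes through)
```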
When a counter value fluctuates, you should set a trigger on the average value instead of the actual one. You can also define hysteresis by adding a reset threshold to the trigger.
Read more about Event Triggers for Counters.
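A trigger with a reset threshold behaves as in this sketch (illustrative code, not NetCrunch's implementation): the alert opens above the trigger threshold and closes only below the lower reset threshold, so values oscillating between the two do not flap.

```python
class HysteresisTrigger:
    """Sketch: a threshold trigger with a lower reset threshold."""

    def __init__(self, trigger: float, reset: float):
        assert reset < trigger
        self.trigger, self.reset = trigger, reset
        self.active = False

    def update(self, value: float) -> bool:
        if not self.active and value > self.trigger:
            self.active = True        # crossed the trigger threshold: open
        elif self.active and value < self.reset:
            self.active = False       # fell below the reset threshold: close
        return self.active

t = HysteresisTrigger(trigger=90.0, reset=80.0)  # e.g. CPU usage %
print([t.update(v) for v in (85, 92, 88, 81, 79)])
# → [False, True, True, True, False]
```

Note that 88 and 81 keep the alert open even though they are below 90; only dropping under 80 closes it.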
Usually, NetCrunch sends multiple requests in a row to check network service response. To ensure the service responds, increase the service's timeout or set an additional repeat count.
SNMP works over the UDP protocol, which is not reliable, as packets can be lost. In such a case, the program waits a given time, and when it does not get a response, it repeats the request.
Because of the nature of UDP communication, the program can't recognize whether the packet was lost or the device is busy, which delays the response.
Go to Node Settings > SNMP and increase the SNMP timeout for busy devices or the SNMP retry count for unreliable connections.
NetCrunch can easily integrate alerting with the external service desk, productivity, and messaging systems.
Integration profiles allow you to configure integration actions that can be executed in response to alerts. NetCrunch can automatically send messages and manage the tickets in external systems. Several systems allow two-way integration, which means that NetCrunch can close a pending alert in response to a notification sent from an external system.
Several steps need to be completed before NetCrunch can send data to the external system. Each step requires authentication by the API Key created for the application.
Integration Profiles store connection settings for these systems. So before we add an action to the alerting script, we need to open NetCrunch > Alerting & Notifications > Integrations and add a profile for the system you want to integrate NetCrunch with.
We can group these external systems by the type of services provided.
Once you have a profile, you can add integration actions to the alerting script. Each action has different options depending on the integration. You can also start by adding actions and creating a profile from the action editor.
Several systems can send information back to NetCrunch when a ticket is closed, closing the alert on the NetCrunch side. To configure the back-links to NetCrunch, open Settings > Alerting & Notifications > Integration API Keys. The configuration is simple: you add a profile for a given system, and then you can copy part of the URL used to send data to NetCrunch from the external system.
NetCrunch Web Access must be accessible outside the firewall. We recommend using a reverse proxy from an edge server, as it gives you more flexibility and allows you to use already-issued wildcard certificates. The other option is to install an SSL certificate into the NetCrunch Web Server, which always runs on the latest OpenSSL version.
Supported Systems:
Learn how to handle alerts from devices or systems that keep sending them until the problem is resolved. This way, you get a single alert instead of hundreds of them.
There are situations when a device or service sends alerts repeatedly: while the problem persists, we receive the alert every minute; when nothing arrives anymore, it means everything works fine again.
By default, an external event does not correlate and triggers actions every time it is received, which is not what we need here. You can switch the pending state off (No pending state), but then the alert will require manual closing. That is better, because you get only one notification at the alert start, but you still won't get a notification when the problem is solved.
Instead, you should check the option to close the alert automatically after a given time. The time is measured since the last alert occurrence, which is exactly what we need. If the alert is repeated every minute, set the time to a value two or three times larger.
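The auto-close timing can be sketched as follows (illustrative names; the point is that the close timer restarts on every occurrence):

```python
class RepeatedAlert:
    """Sketch: auto-close a repeated alert after a period of silence."""

    def __init__(self, auto_close_after: float):
        self.auto_close_after = auto_close_after
        self.last_seen = None

    def occurrence(self, now: float):
        self.last_seen = now          # every repeat restarts the close timer

    def is_open(self, now: float) -> bool:
        if self.last_seen is None:
            return False
        return now - self.last_seen < self.auto_close_after

# Device repeats the alarm every 60 s; auto-close after 3 minutes of silence.
a = RepeatedAlert(auto_close_after=180)
a.occurrence(0); a.occurrence(60); a.occurrence(120)
print(a.is_open(200))   # → True  (last repeat was only 80 s ago)
print(a.is_open(330))   # → False (silent for 210 s: auto-closed)
```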
This chapter provides a comprehensive guide on creating and editing alerting rules within NetCrunch. It covers the key properties that define an alert, such as Severity, Description, and Target State, and explores advanced configurations, including additional alerting conditions and automatic alert correlation. By understanding these concepts, network administrators can optimize monitoring strategies, reduce false positives, and ensure prompt and accurate incident responses.
When setting up alerting rules in NetCrunch, it's essential to understand the meaning of the fields that define an alert. You'll encounter three critical fields: Severity, Description, and Target State.
The Severity field signifies the importance level of the alert. It helps prioritize issues and determine the urgency of the response needed. The severity levels usually range from informational to critical. Here's a brief overview:
Choosing the correct severity level is crucial for proper incident management and ensuring that the right team members prioritize their responses effectively.
The Description field should concisely explain the alert and its context, e.g., High Processor Utilization (> 90%).
The Operational State field is essential for understanding the impact of alerts in NetCrunch. It distinguishes between a service merely experiencing issues and one that is entirely down, thus providing critical insight that supports effective incident management and swift resolution.
Operational - The object/service is functional. This includes scenarios where performance might be degraded, but the service remains available.
Non-Operational - The object/service is not functioning as expected, typically indicating that it is not responding or is entirely offline.
The Event Condition is the core element of an alerting rule in NetCrunch, defining the primary reason for triggering an alert. This condition specifies the exact circumstances under which an alert is generated. For example, it could be set to detect a state change—such as a service transitioning from responding to down—or to monitor a threshold on a particular metric, like CPU usage exceeding a predefined limit.
The alert will be triggered only when the Event Condition is met in conjunction with any configured Additional Alerting Conditions. This dual-layer approach ensures that alerts are both precise and contextually relevant. By accurately defining the Event Condition, you help ensure that the alert system responds only to significant events, thereby reducing false positives and alert fatigue while enabling prompt and effective incident response.
NetCrunch allows you to define additional conditions for each alert, regardless of whether a node status change, an event log alert, or an SNMP trap triggers it. These additional conditions enable you to fine-tune the alerting process by triggering actions even when a primary event has not occurred. For example, you can specify conditions based on specific time intervals or the absence of an event, ensuring alerts are only activated under precise circumstances. The available additional conditions include:
NetCrunch's alert close correlation feature streamlines the management of alerts by automatically grouping related notifications. For internally triggered alerts—those generated by state changes or threshold breaches—NetCrunch is designed to close them automatically once the issue is resolved.
However, a closing correlation is required for external alerts originating from traps, syslog, or web messages. In these cases, external alerts are paired with corresponding resolution events to ensure they are appropriately dismissed. Alerts can be configured to close automatically after a set period, or they can be manually cleared by an operator once confirmed as resolved. This approach reduces alert fatigue and maintains clarity in the system, ensuring that the actual status of network issues is accurately represented.
Creating and editing alerting rules in NetCrunch requires a clear understanding of the Severity, Description, and Target State fields. Effectively using these fields, along with advanced configurations like additional alerting conditions and automatic alert close correlation, can enhance your monitoring strategy, streamline issue resolution, and ultimately ensure the reliability of your systems.
This chapter covers how NetCrunch visualizes monitored data in real time through dashboards, topology maps, node views, and Auto-Screens. It explains how to create dynamic views that present network state clearly and interactively—from high-level overviews to in-depth diagnostics.
NetCrunch offers multiple dashboard types to visualize real-time system and network status. Each is designed for different scenarios: overview status, performance leaders, or fully customizable layouts.
NetCrunch provides three powerful types of dashboards that adapt to different monitoring needs. Each type offers distinct features for visualizing, interacting with, and drilling into system health and performance data.
This built-in dashboard type is available for all node-based views:
- IP Networks
- Physical Segments
- Node Groups
- Dynamic Folders
The Status dashboard provides a structured summary of the monitored environment, with interactive tiles showing alert levels, counts, and categories.
Example: Overview Status Dashboard
Example: Alert Popup from Status Panel
For non-overview views, the dashboard layout includes six default tiles:
- Monitoring Summary
- Alerting (24h)
- Network Services
- Monitoring Packs
- Monitoring Engines
- Sensors
Tile order is customizable.
Available for any node-based view, this dashboard shows ranked performance charts, for example:
Example: Top Charts Dashboard with Selected Metrics
Each chart:
- Can be selected from built-in templates
- Can be fully customized with user-defined counters
- Supports click-through to open the trend viewer (for a single series or the entire tile)
Users can control:
- Number of items shown
- Sorting direction (highest/lowest)
- Alert thresholds (critical, warning)
- Filtering by node importance
To build entirely custom views, users can create Graphical Views with widget panels or boundless canvases. These dashboards support:
Graphical Dashboards let you build fully customizable panels composed of live data widgets and dynamic visuals. These dashboards adapt to themes, respond to data changes, and offer interactive features for real-time monitoring and user-friendly operation.
Graphical Dashboards offer a range of interactive and responsive features that go beyond basic layout. This section highlights what you can do with dashboards, especially things that are not immediately visible.
Each dashboard supports automatic theming, adapting to the user's interface theme (light or dark). You don't need to create two versions of the same dashboard—NetCrunch will automatically render colors, backgrounds, and widget styling.
By default, the dashboard is in Auto mode, adjusting to the viewer's application settings. You can also permanently set it to Light or Dark.
Dashboards are designed to display cleanly on screens of all sizes. Beyond scaling to aspect ratio, the Auto Zoom option adjusts the visible area dynamically:
While dashboards can include labels, icons, or shapes, their real power comes from data-bound widgets. These include:
Most widgets are inserted in a "Themed" style by default:
Each widget contains many powerful options grouped by function. These are not always visible unless the widget is selected and expanded in the editor:
These options make widgets more intelligent and self-explanatory when properly configured.
All dashboard widgets can be interactive:
- Clicking a value widget or bar opens a tooltip with a live preview (e.g., trend or node state)
- From the tooltip, users can open the Trend Viewer, Node Status, or perform actions
- Visuals respond to real-time changes: flashing, color transitions, or icon switching
The dashboard is not static—it's a live control panel reacting to data.
The editor supports professional design tools behind the scenes:
Though these features mimic tools like Figma or Illustrator, they're tailored to administrator workflows, not designers.
Graphical Dashboards support two main classes of widgets:
These display live data from counters or statuses:
They include a Data Source section where the user can bind:
Most support tooltips with live data previews and access to the trend viewer or node actions.
These are static by default but become dynamic when bound to a State Reference:
- Shapes (rectangle, triangle, blob, etc.)
- Icons (font-based)
- Symbols (vector-style images)
- Pictures
- Connection lines
- Text and labels
Users can choose from 11 object types, such as:
To make these elements state-aware:
bind them to a State Reference that maps system states (OK, Warning, Error, etc.) to visual styles. Default styles are provided, but users can override every visual aspect.
This lets you build shape- or icon-driven dashboards with precise reactions to live conditions.
You can add animations triggered by state changes:
These subtle effects help focus attention on critical elements like failed links or overheating nodes.
A shape or icon can:
NetCrunch lets you build SCADA-like behavior with far less effort.
Dashboards support rich visual libraries to enhance layout and clarity.
Pictures can be used for background diagrams, branding, or visual overlays.
They can be stored in:
- Document (local, view-only)
- Shared (across multiple views)
- Public (global use within the organization)
This system reduces duplication and makes it easy to reuse assets.
Symbols are vector-style images representing IT assets and services:
NetCrunch supports over 3,000 icons via Font Awesome.
These icons are ideal for compact state indicators.
A full set of vector shapes is included:
Shapes are fully editable and animatable. They can also reflect the system state.
All visual elements can be:
A simple icon or shape can become an alert source or a dynamic indicator.
Graphical Dashboards include a visual editor with streamlined selection, linking, and widget creation tools. The left-side toolbar contains three context-sensitive selectors:
When a new widget is dropped onto the canvas, the editor automatically opens a focused configuration panel. This avoids the need to browse through hundreds of settings initially.
For example:
This approach makes the experience intuitive, even for users unfamiliar with visual editors.
The editor supports intelligent chart creation. When adding a graph widget, you can configure it in two flexible ways:
You can also check the "Put on separate graphs" option to automatically insert multiple graph widgets at once, each with one configured series.
This makes building visual comparisons quick and avoids manual duplication.
NetCrunch's editor includes a right-click popup menu that reveals advanced object actions and selection tools, modeled after design applications like Adobe XD or Illustrator.
When you right-click a widget, you'll find:
Locking is essential for layered dashboards or background assets like images and shapes.
NetCrunch uses Adobe-style shortcuts for alignment operations:
- Ctrl + Shift + Left – Align left
- Ctrl + Shift + Right – Align right
- Ctrl + Shift + Up – Align top
- Ctrl + Shift + Down – Align bottom
- Ctrl + Shift + C – Align center horizontally
- Ctrl + Shift + M – Align middle vertically
- Ctrl + Shift + V – Distribute vertically
- Ctrl + Shift + H – Distribute horizontally

These shortcuts work with multiple selected widgets to polish the layout quickly.
From the context menu, you can also select all widgets of a specific type, such as:
This makes applying style or layout changes to multiple elements easy without manual selection.
Combined with locking and alignment, this allows users to work quickly across complex dashboards without misplacing key visual components.
The dashboard editor supports flexible multi-selection, layout duplication, and viewport control to speed up complex design tasks.
You can select multiple widgets using several intuitive methods:
These tools allow efficient bulk operations like simultaneously moving, aligning, or styling multiple items.
The canvas supports panning and zooming for large views:
This makes navigating large or boundless views fast and natural.
When an element has connections, any duplicated version will retain those connections. This is especially useful for replicating status-linked layouts or structured diagrams without manually reconnecting lines.
Widgets also preserve their configuration and references when duplicated or pasted, allowing rapid reuse of common visual elements or layouts.
These advanced selection and navigation capabilities make dashboard design fluid and powerful, ideal for constructing rich views quickly with precise control.
NetCrunch allows for secure sharing of individual graphical views, enabling targeted visibility for specific users, departments, or external audiences. Sharing supports iframe embedding and HTTPS-based access, making it ideal for dashboards, executive summaries, and MSP-customer overviews.
Sharing graphical views in NetCrunch enables focused, read-only visibility into parts of your infrastructure without exposing the entire system. Instead of giving users full access to the Web Console, you can share specific dashboards or maps, making them accessible only to the intended audience.
A shared view can be published in two ways:
- Via a https://ncconsole.net/rc/connect/... address

These links use secure HTTPS transport. Even for local access, we recommend using HTTPS with a proper or self-signed certificate. HTTP is discouraged and may be deprecated in future versions.
By isolating what gets shared and how it’s accessed, NetCrunch ensures data security, performance isolation, and user simplicity—no full console is needed.
Shared views in NetCrunch operate through a restricted console-like interface designed purely for viewing. It provides a lightweight, secure way to present dashboards or diagrams without exposing the full system.
Unlike full Web Console access, shared views:
When a user opens a shared view link:
This interface behaves like a minimal console, focused on visual clarity and simplicity. It's ideal for management, customers, or external stakeholders who need insights, not tools.
NetCrunch handles shared views by automatically creating a dedicated "sharing user". This user:
If you assign additional views to an existing link, they become instantly available in the dropdown—there is no need to redistribute URLs. This makes sharing maintainable and scalable.
A shared link can be:
This combination ensures that shared access can be temporary, restricted, or public, depending on your needs.
Unlike the restricted Web Console (which still loads authentication, user profiles, and management modules), the sharing console is ultra-lightweight and contains only the essentials for view rendering.
To share a graphical view in NetCrunch, click the Share button at the view's top-left corner. This opens a guided sharing wizard where you configure who can access the view, under what conditions, and how.
When creating a new user, you can decide:
These options affect how and where the view can be accessed. We recommend setting a password for sensitive dashboards and avoiding embedding unless required.
Sharing linked views enables navigation between views, creating a portal-like experience using buttons or clickable widgets.
NetCrunch generates two links for you:
You can copy and send the link manually. NetCrunch never sends links automatically to the user.
If using password protection, we recommend sharing the password through a separate channel from the link (e.g., email + text message).
These steps ensure that you maintain complete control over who sees what, for how long, and under what conditions—all while keeping sharing secure and self-managed.
NetCrunch allows shared views to be embedded into other websites, portals, or dashboards using an HTML <iframe>. This makes it easy to present live dashboards in intranets, customer portals, or on wall-mounted displays.
You can copy the generated code snippet after enabling the "Allow embedding" option during the sharing process.
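For orientation, an embed might look like the sketch below. The actual snippet, including the full link, is generated by NetCrunch in the sharing wizard, so the URL, dimensions, and attributes here are illustrative only:

```html
<!-- Illustrative only: copy the real snippet from the sharing wizard -->
<iframe
  src="https://ncconsole.net/rc/connect/..."
  width="1280" height="720"
  title="NetCrunch shared view">
</iframe>
```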
When enabling embedding, you can optionally restrict which server or domain can embed the view. By default, the field accepts * to allow embedding from any origin.
You can enter a specific hostname (e.g., intranet.company.com) to tighten control over where the view is displayed. This helps reduce the risk of unintended exposure.
Embedding requires proper configuration of the embedding site’s Content Security Policy (CSP). The CSP must include the NetCrunch sharing domain under frame-src or child-src.
For example:
Content-Security-Policy: frame-src https://ncconsole.net;
Additional notes:
Avoid http:// links: browsers may block insecure content from being embedded in secure (https://) pages.
Embedding is a secure and efficient way to deliver targeted insight from NetCrunch without exposing system internals or requiring user authentication.
NetCrunch gives users and administrators tools to manage sharing access effectively while keeping control centralized and secure.
To revoke access to a shared view:
This immediately removes the view from the user's dropdown menu and deactivates the link, so the view is no longer accessible; there is no need to delete the sharing user entirely.
Note: NetCrunch does not allow editing shared user settings after creation. To change access (e.g., password, expiration, embedding), you must remove and recreate the user with new settings.
Administrators can view the full list of shared users via:
Users & Access Rights Manager → Sharing
This panel shows:
However, this view does not allow editing—it’s for visibility and audit only. Modifications still need to be made directly from individual views.
NetCrunch logs all sharing-related operations in the Activity Log tile (Applications → Server).
You can track:
This provides administrators and security teams full visibility into the shared access lifecycle, including configuration changes and usage.
NetCrunch balances ease of sharing with clear boundaries: complete administrative control stays with the owner, while shared views remain lightweight and safe for external use.
One of the most powerful ways to enhance shared views is by linking them. NetCrunch allows you to create navigation between graphical views, turning individual dashboards into a complete, self-contained monitoring portal.
Shared links are great for single dashboards—but sometimes you want more:
By linking views, you allow users to move freely between views, without ever seeing the full NetCrunch interface.
When sharing a view, you can enable the option “Include linked views.” This means:
All transitions between views happen within the shared viewer. The user does not need a new link or reauthentication.
Linking is not just a design convenience—it’s a way to deliver structured navigation to users who shouldn’t see the entire system.
To create links between views:
This allows you to design clickable dashboards, just like a web application.
By combining links and embedding, you can deliver full navigable monitoring experiences to users without requiring logins or console access.
When someone opens a shared view—whether directly through a link or embedded on a webpage—they are presented with a simplified, focused interface that loads instantly and updates in real time.
If linked views are enabled, users can navigate between them using view links or from the dropdown selector—no reloads, re-logins, or new URLs required.
Shared view users cannot:
- Access the full console
- Perform any configuration or actions
- View node lists or edit views
- Access login-based features (sessions, roles, settings)
This ensures the view remains secure, read-only, and safe to expose to public or limited-audience environments.
Whether embedding a live KPI display in a customer portal or rotating dashboards on a wall monitor, the shared view interface provides a focused, elegant way to share NetCrunch insight without complexity.
NetCrunch automatically builds network topology maps to visually represent logical routing and physical Layer 2 connections between network devices. These maps aid in understanding network structure, monitoring connectivity, and troubleshooting problems.
Keywords: layer 2, network atlas, network topology, physical connections, port mapping, routing map, switch map, topology views
Network topology maps in NetCrunch provide a dynamic, visual representation of how devices are connected and communicate across your network. These maps fall into two categories:
To generate accurate topology maps, NetCrunch relies heavily on SNMP data. SNMP must be enabled on your network equipment for NetCrunch to access necessary connectivity and port mapping details. Fortunately, nearly all business-grade hardware supports SNMP, and NetCrunch includes a built-in MIB compiler for vendor-specific extensions.
By default, NetCrunch uses the following protocols to detect network structure:
Although SNMP forwarding tables can assist in building topology, maps based solely on them might not be fully accurate—especially when devices do not fully expose their topology or omit VLAN data.
If your generated topology maps do not match your real-world network layout, the most likely cause is that a device is not being monitored. Incomplete topology is usually a sign that NetCrunch lacks visibility into one or more infrastructure devices.
To ensure reliable topology mapping:
If NetCrunch can’t see it, it can’t map it. Visibility is the prerequisite for topology accuracy.
When NetCrunch detects isolated or unreachable switches, it will display a live warning above the topology map. This alert may indicate that intermediate devices are missing or that SNMP is not properly configured on one or more switches. These warnings help quickly pinpoint where visibility gaps are breaking topology continuity.
Topology maps are also organized by site. A site in NetCrunch represents an isolated network location, typically defined by a unique address space—such as a network behind NAT. Sites help prevent IP conflicts and allow topology to be visualized per-location. Each site can be monitored by one or multiple monitoring probes, which collect data and relay topology updates to the main server. This organization enables effective monitoring of distributed or multi-location environments without requiring direct routing between them.
Routing maps in NetCrunch show how routers and subnets are logically connected across your network. These maps provide a high-level view of inter-network communication paths, helping you quickly assess network structure, routing issues, and device reachability.
Routing maps are automatically created for each monitored address space. NetCrunch builds them by analyzing IP address assignments, router interfaces, and routing tables obtained via SNMP.
The layout is dynamically generated and updated as topology changes. Unlike static diagrams in many other tools, NetCrunch maps are live, data-driven, and can reflect real-time changes in routing.
Each routing map includes the following interactive features:
In routing maps, monitored subnets are shown using a tile grid icon, while unmonitored networks are marked with a distinct, lightweight network icon.
When you click an unmonitored network:
- NetCrunch offers to add the network to monitoring.
- Upon confirmation, the subnet is scanned, and discovered devices are added to the Atlas.
This feature makes it easy to incrementally expand monitoring coverage directly from the topology map, ensuring no reachable segment is left unmanaged.
NetCrunch supports multiple layout modes for routing maps:
The layout mode can be changed at any time using the layout selector above the map.
NetCrunch visualizes Layer 2 (physical) topology by mapping switch-to-switch and switch-to-node port connections. These maps are derived from SNMP data and discovery protocols like CDP, LLDP, STP, and MAC address forwarding tables.
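As an illustration of the general technique (not NetCrunch's actual implementation), switch-to-switch links can be inferred from MAC forwarding tables: a port that has learned another switch's own MAC address is likely an uplink toward that switch. A minimal sketch with made-up data:

```python
# Illustrative sketch of uplink inference from MAC forwarding tables.
# Data structures and names are assumptions, not NetCrunch internals.
def infer_uplinks(switch_macs, fdb):
    """switch_macs: {switch: its own MAC address}
    fdb: {switch: {port: set of MACs learned on that port}}
    Returns the set of inferred switch-to-switch links."""
    links = set()
    for sw, ports in fdb.items():
        for macs in ports.values():
            for other, mac in switch_macs.items():
                # Seeing another switch's own MAC suggests an uplink.
                if other != sw and mac in macs:
                    links.add(tuple(sorted((sw, other))))
    return links

switch_macs = {"sw1": "aa:00:00:00:00:01", "sw2": "aa:00:00:00:00:02"}
fdb = {
    "sw1": {"Gi0/24": {"aa:00:00:00:00:02", "00:11:22:33:44:55"}},
    "sw2": {"Gi0/1": {"aa:00:00:00:00:01"}},
}
print(infer_uplinks(switch_macs, fdb))  # {('sw1', 'sw2')}
```

Real implementations must also handle VLANs, aggregated links, and unmanaged switches, which is why richer sources such as LLDP/CDP/STP are preferred when available.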
Unlike many other tools, NetCrunch does not overload the top-level map with thousands of end nodes. Instead, it separates the global Layer 2 view from per-switch segments, offering clarity and scalability.
The top-level map shows only the switches and their interconnections. Endpoints (e.g., computers, printers, IP phones) are not placed directly on this map to maintain readability—especially in large networks with thousands of devices.
Instead, each switch has its own segment view, listed under the Physical Connections container in the Atlas tree. These views display:
Sometimes, a single port may appear as its own segment. This usually means multiple endpoints are connected through an unmanaged switch (i.e., a switch NetCrunch cannot see via SNMP).
In each segment view:
These insights allow for quick identification of overloaded ports, dropped packets, or faulty cabling.
Like routing maps, Layer 2 maps are fully interactive:
This separation between global and per-switch views—combined with layered navigation—makes NetCrunch uniquely capable of visualizing large-scale physical infrastructures without clutter or performance degradation.
For physical topology maps to function properly, several configuration steps must be completed to ensure full device visibility and data accuracy.
Physical connection monitoring must be explicitly enabled in NetCrunch. To activate it:
This allows NetCrunch to begin polling devices for port-level data and build the physical connection maps automatically.
Accurate SNMP access is essential for physical topology mapping. You must ensure that SNMP credentials are correctly configured and matched to devices.
To manage SNMP profiles:
Once a profile is defined:
There are multiple ways to assign SNMP access, and it’s crucial to verify this step for every switch that should appear on the physical map.
To accelerate and fine-tune the configuration:
The wizard ensures that each relevant device is properly included and the resulting maps are logically grouped and navigable.
If a switch is not automatically placed into a segment, you can:
This manual method is especially useful in cases where SNMP visibility is partial or when devices are using non-standard MIBs or discovery protocol implementations.
The most common issues with topology accuracy stem from missing or misconfigured SNMP access. To resolve:
If segments appear with nodes grouped under one port or if entire switches are missing, the cause is typically:
You can always re-run discovery, adjust switch assignments, or tune SNMP configuration to restore map completeness and clarity.
One of the most powerful aspects of NetCrunch’s physical topology maps is their ability to present live interface metrics directly within the map view. This gives administrators immediate visibility into the performance and health of individual ports and connections.
Each switch port displayed on a physical segment can be clicked to reveal a live interface popup. This panel shows:
This allows you to identify bottlenecks, faulty cables, misconfigured VLANs, or flapping ports directly from the topology view—without needing to switch to other tools or interfaces.
On the global physical topology map, each switch is linked to its physical segment. Clicking on this link opens the live interface popup for the corresponding port or port channel.
This feature provides a direct path from the high-level overview to low-level diagnostics, allowing for efficient root cause analysis of network issues.
When SNMP is correctly configured and monitored, NetCrunch collects and displays the following per-interface data:
These metrics are updated live and can be trended over time for capacity planning and anomaly detection.
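As a simple illustration of how SNMP-based tools in general derive such metrics, interface utilization is typically computed from two successive octet-counter readings. The formula is standard; the function below is a sketch, not NetCrunch code:

```python
# Sketch: percent utilization from two successive ifHCInOctets readings.
def utilization_pct(octets_prev, octets_now, interval_s, if_speed_bps):
    delta_bits = (octets_now - octets_prev) * 8  # octets -> bits
    return 100.0 * delta_bits / (interval_s * if_speed_bps)

# A 1 Gb/s port that moved 750 MB in 60 s is 10% utilized.
print(utilization_pct(0, 750_000_000, 60, 1_000_000_000))  # 10.0
```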
With this level of per-port granularity, NetCrunch transforms topology maps from passive diagrams into active diagnostic dashboards.
NetCrunch’s topology maps—both routing and physical—are designed to be more than static diagrams. They are live, interactive, and navigable interfaces that help users work with large-scale environments efficiently.
Each topology view is accessible directly from the Network Atlas tree, which organizes all views hierarchically. Topology views are grouped under:
You can expand these containers to browse all generated or user-added maps, including segments and special views. Selecting an item in the tree opens its live map view in the right pane.
Every topology map includes built-in tools to help users navigate large or complex networks:
The layout engine ensures visual clarity and responsiveness even with hundreds of objects.
Topology maps support full manual layout control:
This hybrid of automatic and manual layout makes it easy to tune visual structure while preserving clarity.
Devices often appear in multiple maps (e.g., a switch in physical segments and in routing maps). NetCrunch allows:
Topology views can also be linked to dashboards, node detail views, or embedded in Graphical Views for advanced use cases.
By combining live data, flexible layout, and intelligent navigation, NetCrunch makes its topology maps practical tools for daily operation—not just documentation artifacts.
NetCrunch topology maps are not just visual aids—they are functional tools that can be used daily for troubleshooting, planning, and operational awareness. Below are key practices and real-world scenarios where topology mapping delivers significant value.
Unmapped nodes or ports often indicate misconfiguration or limited SNMP access—critical issues that can be resolved before they affect operations.
Topology maps help uncover devices that are improperly added, assigned wrong VLANs, or physically miswired.
This is particularly useful in multi-site or NATed environments, where routing visibility is otherwise fragmented.
By seeing which ports and links carry critical traffic, teams can better prioritize upgrades or repairs.
In dynamic environments, topology maps reduce human error and accelerate safe rollouts.
Whether you're operating a small LAN or a multi-site distributed infrastructure, NetCrunch topology maps give you a live, graphical interface to manage, diagnose, and improve your network.
Topology maps in NetCrunch are tightly integrated with several other key platform features. Together, they create a comprehensive environment for monitoring, visualization, and diagnostics.
All topology views reflect live status and performance of network interfaces, powered by SNMP polling:
This interface-level visibility enables real-time health monitoring directly from topology diagrams.
Topology maps help visualize dependencies between nodes, such as:
NetCrunch uses this structure to automatically determine root causes of alerts. For example, if a switch goes down, alerts from connected endpoints are suppressed or deprioritized, avoiding alert storms.
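The suppression idea can be sketched as follows; the data model here is invented for illustration and is not NetCrunch's internal representation:

```python
# Sketch of dependency-based alert suppression: alerts from nodes
# behind a down parent are treated as symptoms and filtered out.
def suppress(alerts, parents, down):
    """alerts: nodes with active alerts; parents: {node: upstream node};
    down: set of nodes currently down."""
    kept = []
    for node in alerts:
        if parents.get(node) in down:
            continue  # symptom of the upstream outage, suppress it
        kept.append(node)
    return kept

parents = {"pc-1": "sw-1", "pc-2": "sw-1", "sw-1": "router-1"}
print(suppress(["sw-1", "pc-1", "pc-2"], parents, down={"sw-1"}))  # ['sw-1']
```

Only the switch's own alert survives; the endpoint alerts behind it are filtered, which is exactly what prevents alert storms.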
The SNMP Credentials Manager allows defining and managing all SNMP v1/v2c/v3 profiles:
Proper SNMP setup is critical for accurate topology and interface data collection.
By combining real-time interface data, smart alert correlation, flexible SNMP management, and customizable visuals, NetCrunch topology maps act as both operational tools and communication assets for IT teams.
NetCrunch Auto-Screens allow you to cycle through live views in full-screen mode across one or more displays. Designed for NOC and control room usage, Auto-Screens help keep teams informed in real time by rotating through non-scrollable dashboards, topology maps, and other visual summaries.
Keywords: auto-screens, desktop console, fullscreen views, live monitoring, map rotation, network dashboards, noc display, non-scrollable dashboards, scenario editor, screen rotation, screen scenarios, web console
The Auto-Screens application is available in the NetCrunch Console, under Applications → Auto-Screens. It allows you to play full-screen live presentations of selected network views, cycling through them automatically at a defined interval.
This feature is designed for:
Auto-Screens are available in both the desktop and web console. You can open:
This makes it easy to distribute different Auto-Screens across different monitors or browser tabs.
Users can create and configure screen scenarios, each containing a set of live views. A scenario defines:
Each scenario is shown as a tile in the Auto-Screens panel, with the option to start playback or edit the scenario.
Clicking on a scenario opens the Edit Screens dialog. Here you can:
Only non-scrollable views are supported—tables and long reports are excluded because they are not suitable for display in rotating dashboards. This is an intentional design decision to ensure each screen:
❗ Dashboards should always present live system state at a glance. Scrollable, dense tables are not dashboards—they are tools for filtering, sorting, or reporting. Many users confuse this with Grafana-style “dashboards” that act more like static reports. In Auto-Screens, the focus is always on presenting the current situation clearly and immediately.
Whether used in a data center, campus NOC, or remote support team, Auto-Screens provide a passive, real-time awareness layer that augments active alerting and troubleshooting.
The Node Status window in NetCrunch provides a comprehensive, real-time view of a monitored node’s condition. It includes monitoring status, performance metrics, activity logs, system properties, dependencies, alerts, and more. For Windows, Linux, hypervisors, and network devices, NetCrunch presents unmatched visibility in a unified interactive interface.
The Node Status window is the central panel for accessing complete, live information about any monitored node. The layout dynamically adapts to the type of node—Windows, Linux, ESXi, switches, routers, or others—and presents tabs that combine monitoring data with configuration and historical insights.
It provides:
The following tabs may appear depending on node type:
The Windows tab includes specialized subviews offering live data and change tracking—not just static status.
All of these views support historical snapshots, comparisons, and are part of NetCrunch’s monitoring layer—not just informational screens.
The System tab includes runtime diagnostics for:
These are pulled via SNMP and SSH, offering high-value operational insight without installing agents.
When connected to VMware APIs, NetCrunch displays:
This gives full visibility into virtualization infrastructure from a single screen.
For network gear, the Interfaces tab displays:
Each port includes a detailed popup with visual charts and counters. Interactive elements allow quick pinpointing of overloaded or misconfigured ports.
Many subviews in the Node Status window support change history, not just current values.
This makes NetCrunch function as a configuration monitoring solution, not just a performance monitor.
Network devices that support CLI access expose a Device Configuration tab, showing the output of commands such as show version, show configuration, etc.
NetCrunch’s Node Status window is a single-pane, deeply integrated monitoring console. It unifies status, alerts, trends, configuration, and change intelligence for all supported node types—live, historical, and interactive.
Keywords: alert context, configuration monitoring, dependencies, esxi, hyper-v, interactive monitoring, interfaces, inventory tracking, linux monitoring, netcrunch node screen, node overview, node properties, node status, process viewer, real-time metrics, service control, software changes, system configuration, virtual machines, windows monitoring
GrafCrunch is a legacy, standalone dashboarding tool based on Grafana 7, bundled by AdRem Software as a free add-on for NetCrunch users. It was originally created to offer Grafana-style dashboards using NetCrunch data, and it includes a custom plugin for seamless integration. While still supported, GrafCrunch is no longer actively developed and will be phased out over time as NetCrunch's native visualization capabilities continue to surpass it.
GrafCrunch is a customized fork of Grafana 7, bundled by AdRem Software as an optional, free add-on to NetCrunch. It includes a built-in plugin for accessing NetCrunch's internal metrics and time series data (TrendDB/StatusDB), allowing users to build familiar Grafana-style dashboards using the data NetCrunch already collects.
Originally developed to offer more flexible dashboards and allow merging metrics from multiple sources, GrafCrunch provided a useful bridge before NetCrunch introduced its modern dashboarding engine.
GrafCrunch is now considered a legacy tool for the following reasons:
As a result, AdRem Software does not plan to upgrade or expand GrafCrunch. Instead, NetCrunch’s native dashboards have already surpassed GrafCrunch in terms of usability, real-time status visibility, and alert integration.
Although development has stopped, GrafCrunch is still maintained to ensure compatibility with current NetCrunch versions.
For historical context and usage tips, visit:
https://www.adremsoft.com/blog/
Look for posts about GrafCrunch, Grafana, and dashboard strategies.
Users are encouraged to migrate to NetCrunch's built-in dashboards, which now offer:
GrafCrunch was an effective bridge at the time—but NetCrunch has since evolved far beyond it. If you still rely on GrafCrunch, it will continue to work for now, but the future is clearly native.
Keywords: adrem software, dashboard legacy, grafana 7, grafana alternative, grafana fork, grafana plugin, grafana store, grafana vs netcrunch, grafcrunch, legacy dashboards, metrics dashboard, netcrunch graphing, netcrunch integration, netcrunch plugin, network dashboards, time series plugin
The Event Details window in NetCrunch transforms alerts into actionable insights. It explains what triggered an alert, how often it happens, how critical it is, what response actions were taken, and what the system recommends next. Its purpose is to reduce alert overload and help you focus on what truly matters.
Keywords: ai explanation, alert context, alert diagnosis, alert frequency, alert noise, alert response, alert snapshot, counter threshold, event closure, event details, monitoring pack, monitoring parameters, netcrunch alert, root cause analysis, sensor source, triggered event
NetCrunch doesn’t just tell you that something went wrong—it shows you why, how often, and what to do about it. The Event Details window is where that happens.
It’s designed to:
While other systems show a title and a value, NetCrunch shows a timeline, a frequency model, live counter correlation, action results, and AI-powered explanations.
The Alert Frequency Chart shows how often this event has occurred in the last 24 hours, 7 days, or 30 days.
You can instantly spot:
This chart turns anecdotal "this alert keeps firing" into measurable, visual evidence.
The Counter Snapshot graph shows whether the alerting condition was a momentary deviation or part of a long trend.
With threshold and reset lines clearly drawn, you can assess:
This visual clarity prevents overreaction and encourages accurate prioritization.
The left-side panel always shows:
If it was closed by a parent node issue, that relationship is shown too—so you know not to chase symptoms of a higher-level problem.
Every alert includes its Monitoring Source:
This links policy to behavior—no more guessing how something was monitored.
The Action Log shows:
This provides transparency and confidence. You don’t have to wonder whether a restart script failed or an alert was never sent—you can see it right here.
With the Explain button, NetCrunch gives you a contextual explanation of:
Especially useful for junior staff or occasional operators, this turns raw metrics into understandable action paths.
For threshold-based alerts, the Parameters section shows the full logic:
This allows anyone to reproduce the logic, debug it, or fine-tune it confidently. It’s built for transparency, not magic.
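Threshold alerts of this kind commonly use hysteresis: raise when the value crosses the threshold, close only after it drops below a separate reset level. The sketch below shows the general pattern; parameter names and structure are illustrative, not NetCrunch's exact logic:

```python
# Sketch of threshold-with-reset (hysteresis) evaluation.
def evaluate(samples, threshold, reset):
    alerting, events = False, []
    for i, value in enumerate(samples):
        if not alerting and value > threshold:
            alerting = True
            events.append(("raise", i))
        elif alerting and value < reset:
            alerting = False
            events.append(("close", i))
    return events

# The dip to 70 does not close the alert; only dropping below 60 does.
print(evaluate([50, 85, 90, 70, 55, 88], threshold=80, reset=60))
# [('raise', 1), ('close', 4), ('raise', 5)]
```

The gap between the threshold and the reset level is what prevents a value oscillating around the threshold from generating a storm of raise/close events.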
The event window also supports human input:
No need to leave the alert context to document your work.
Most monitoring tools show alerts as fragments. NetCrunch shows alerts as fully contextual events—with history, cause, trend, resolution, and interpretation—all on one screen.
That means:
This is not just a window. It's your first defense against alert fatigue.
This topic compares NetCrunch’s visualization capabilities to typical monitoring tools and explains the philosophy behind our design. NetCrunch isn’t trying to mimic enterprise software complexity—it’s built to deliver clarity, reduce noise, and help users focus. While others bloat, we simplify. And we’re just getting started.
Keywords: alert visibility, dashboards comparison, event clarity, future roadmap, grafana vs netcrunch, monitoring clarity, monitoring evolution, monitoring maps, monitoring ui, netcrunch visualization, visual monitoring
Visualization in NetCrunch is not a UI layer added for decoration—it’s deeply connected to how monitoring is designed to work: real-time, contextual, and actionable. We believe monitoring should show what’s happening now, help you act quickly, and reduce uncertainty—not bury you in graphs or endless configuration panels.
That’s why our dashboards, maps, event views, and Auto-Screens all follow three principles:
We often get asked: “Why not just use Grafana?” or “How does this differ from XYZ?” Here's how NetCrunch compares to typical monitoring solutions:
Most tools either don’t offer Layer 2/3 maps or treat them as static diagrams.
NetCrunch:
- Auto-generated topology (Layer 2 & 3)
- Per-switch segment views to avoid clutter
- Live interface traffic, VLANs, errors
- Node isolation warnings and drilldowns
Others:
- Manual mapping or basic link tools
- Overloaded with endpoints, unreadable at scale
- Often missing interface-level insight
Dashboards should show real-time status, not act as scrollable spreadsheets.
NetCrunch:
- Non-scrollable, focused dashboards
- Real-time charts, live alerts, and contextual widgets
- Easy to build from existing views
- Designed for NOC, not reporting
Others:
- Scrollable “reports disguised as dashboards”
- Disconnected from monitoring logic
- Often require scripting or data wrangling
Most products give you a line of text. We give you understanding.
NetCrunch:
- Full condition breakdown
- Real-time value chart with thresholds
- Alert frequency over time
- Response action log
- AI explanation
- Parameter transparency
Others:
- Just the alert title and timestamp
- No response tracking or real-time context
- Tuning alerts is trial and error
We don’t need plugins, hacks, or browser extensions.
NetCrunch:
- Native NOC rotation scenarios
- Desktop and web console support
- No scrolling views allowed—focus by design
- Multi-screen, multi-scenario support
Others:
- Browser hacks, iframe loops
- Inconsistent fullscreen handling
- No native scheduling or layout discipline
While some tools are becoming more complex—turning into enterprise platforms with rising costs and overwhelming interfaces—we are heading in the opposite direction.
We’re focused on:
GrafCrunch, our legacy Grafana fork, served its purpose. But we’ve moved beyond it—because NetCrunch’s native visual layer now provides greater value without third-party complexity.
We believe that clear visualization is operational power. It’s how you see problems before they escalate. How you communicate status to your team. How you trust the system is doing its job.
That’s why visualization in NetCrunch isn’t a bolt-on—it’s a core part of how we think.
And we’re not done.
The best monitoring UI isn’t the one with the most options. It’s the one that helps you act fast, confidently, and without distraction. That’s the future we’re building—one focused screen at a time.
Read about how to keep NetCrunch running smoothly over time, how to upgrade it and keep your network information up to date.
Explaining aspects that are not seen but taken care of. Use these tips to optimize your NetCrunch resilience.
Our goal with NetCrunch is to provide a practical, reliable solution that works in real-world environments. We’ve built it to be resilient and self-reliant, designed to help IT professionals manage complex networks without unnecessary hassle or false promises.
NetCrunch is engineered to be trouble-free. It undergoes extensive testing and follows a development process aimed at minimizing defects. However, as industry experts know, even the best practices can only detect 75% to 90% of bugs before release - a reality confirmed by decades of research. Network monitoring software like NetCrunch must also handle countless edge cases caused by the wide variety of vendor devices and technology implementations it interacts with.
Networks are inherently complex, often relying on technologies defined by RFC documents that vendors don’t always fully implement. This is particularly evident with protocols like SNMP and NetFlow. On top of that, operating systems such as some versions of Windows Server sometimes contain unresolved bugs in components like WMI. NetCrunch is built to navigate these challenges, delivering dependable performance for IT teams working in demanding environments.
To address various issues and bugs in the software, we release NetCrunch minor releases several times a year. These releases can be installed over previous versions, and they usually do not change the data format. Read Update, Migrate and Backup.
Based on our experience, issues come from a limited number of sources. Let us explain where the problems originate, how you can correct them quickly, and how you can help us fix them for you.
Most monitoring issues are caused by invalid configuration. There is no way for NetCrunch to connect to servers with wrong or missing OS credentials or wrong SNMP profiles (communities or passwords).
There is a range of protocols based on NetFlow implemented by various vendors. They usually conform to the NetFlow v5 protocol. Each device has specific settings, and starting from NetFlow v9, data contained in the flows depends on the configuration. First, you need to set up the device to send data to NetCrunch – there is a range of articles about NetFlow configuration on the Internet.
If NetCrunch cannot decode the flow data, capture it with Wireshark and send it to us. Sometimes devices have bugs in their NetFlow implementations; although we can't fix the devices, we can often work around the problem and accept the invalid data.
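For context on why decoding can fail, the NetFlow v5 header has a fixed 24-byte layout, and a decoder must first validate the version field. The sketch below uses the standard layout with a synthetic packet; it is an illustration of the idea, not NetCrunch's decoder:

```python
import struct

# NetFlow v5 header: version, count, sysUptime, unix secs/nsecs,
# flow sequence, engine type/id, sampling interval (24 bytes total).
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(data):
    (version, count, uptime, secs, nsecs,
     seq, engine_type, engine_id, sampling) = V5_HEADER.unpack_from(data)
    if version != 5:
        raise ValueError(f"not NetFlow v5 (version={version})")
    return {"version": version, "count": count, "sequence": seq}

# Synthetic packet for illustration.
pkt = V5_HEADER.pack(5, 2, 123456, 1_700_000_000, 0, 42, 0, 0, 0)
print(parse_v5_header(pkt))  # {'version': 5, 'count': 2, 'sequence': 42}
```

A device with a buggy export (wrong version field, truncated records, or malformed template data in v9) fails exactly this kind of validation, which is why a raw packet capture is the most useful thing to send.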
More than 10,000 MIBs are circulating on the Internet. As there is no standard MIB compiler (standard MIBs were defined only in RFC documents), compiling them can be difficult. They were often written once and never checked, or compiled only with a specific compiler in a particular environment.
Usually, the source of MIB problems is the wrong syntax or a missing module that some MIBs may depend on. This kind of issue can often be fixed by specifying the module name alias.
If you are not familiar with MIBs and can’t solve these problems, contact our support – we will try to find a solution. We already compiled over 8700 MIBs, so there is a chance we can handle some more...
Windows is a complex system, built layer upon layer. The easiest way to manage Windows configuration is through Active Directory. But we know that in real life there are many unlinked systems and mixed Windows versions.
Problems with Windows (Windows 7/2008 or later) are always related to Windows settings. See Windows Monitoring Setup. Monitoring of a workgroup Windows 7 workstation is easy with a built-in local Administrator account.
Although we know it is possible to monitor remote systems with less than administrative rights, we can't give you a recipe that is universal across all Windows versions. It simply sometimes works and sometimes doesn't - systems that appear to be configured the same way behave differently.
Like any software, NetCrunch has some limitations.
NetCrunch is a multi-threaded system that scales well with many processors, especially because many tasks are dispatched to different processes and threads.
Slow SATA disks are very inefficient in reading large quantities of data. We recommend using SSD drives instead.
NCDiag is a program located in the NetCrunch server folder. It allows browsing various logs and bug reports.
Bug Reports are not crash dumps. Mostly, they are well-handled exceptions - situations the program did not expect.
It is always beneficial to let us receive them automatically. The reports contain information about your system, memory, processor, and program execution context.
We do not know your computer address except its Windows machine name. All reports come by email, directly to our secure internal database, and are fully confidential.
You can review those files (if there are any) using the NCDiag program located in the NetCrunch server directory.
As many NetCrunch components run as background processes, they use text logs to store their activity and potential issues. You can review them using NetCrunch Console → Application Server Logs.
NetCrunch Logs:
You can find this view in the top Atlas view. It contains essential statuses and reports on hundreds of statistics NetCrunch collects on itself.
The report contains information such as memory used, the number of consumed internal resources, and program queues. The report can be exported to an XML file or sent directly to AdRem Software from the console. It helps us determine how much load NetCrunch puts on your system.
As NetCrunch is the eyes and ears of an administrator, it should work without interruption. In case of an irrecoverable error (blue screen at a smaller scale), NetCrunch service is automatically restarted to bring it back to a healthy state and avoid data loss. This process is done in seconds, not minutes, so its impact on the monitoring process is minimal.
NetCrunch automatically creates data backups each day. You can change the settings in the Atlas properties window.
Settings → NetCrunch System → Automatic Backup
Sometimes we need to save configuration files without saving collected monitoring data (performance trends and event log data).
You can save the backup file to a selected folder to move it to another machine. This backup will contain all configurations, including program registry settings and monitoring credentials database.
You can create a partial backup that does not contain any monitoring credentials (for example, if you want to send it to AdRem for diagnostics). You can also decide to skip events and trends, which are the biggest part of the backup, by unticking these options in the backup window.
License installation options described.
Help → Install Program License
NetCrunch allows the license to be installed online straight from the program UI. When you purchase the product, the license is assigned to your account. If you have never logged into account.adremsoft.com, you have to reset your password first.
After selecting the license from the list of available licenses, the license is installed on the NetCrunch Server and automatically activated.
Installing from a license file takes an extra step through the customer portal. Select Install From License File; the window then shows the activation code you need to copy and use to generate the file on the portal.
You can download the file and put it on the NetCrunch Server machine that generated the activation code. This process must be repeated every time the license is updated, for example, when you add more nodes or features or a new maintenance expiration date.
Before you decide to move NetCrunch to another machine, you need to deactivate your license. You can do it on NetCrunch Server by running NCLicenseManager located in the program installation folder.
The program allows installing, deactivating, or updating the license.
You can deactivate the license online, which requires logging into the customer portal from the license manager program. You can also use the numeric deactivation code to deactivate the license on the customer portal.
After the license is deactivated, the server restarts automatically.
(hamburger menu) → Help → Refresh Program License
You can refresh the license from the console or NCLicenseManager program on the server. You need to refresh the license when its properties change (expiration date, new modules, or additional elements).
If a refreshed license contains new features or modules to be enabled, the NetCrunch Server will restart automatically.
NetCrunch uses several specialized databases to efficiently manage configuration, event history, real-time status, long-term metrics, and documentation. Each database is optimized for its specific role in the monitoring platform, allowing NetCrunch to scale from hundreds to thousands of nodes while ensuring high performance.
The architecture separates data by function:
This separation improves scalability, simplifies backup strategies, and enables faster access to different data types.
NetCrunch Atlas stores all network objects, their state, monitoring configurations, maps, and dependencies. It is maintained in-memory for performance and periodically saved to disk as multiple XML (and optionally JSON) files.
The event log stores all system events and action execution logs. It is stored in a SQL database accessible via an ODBC driver.
The Status DB is an in-memory hierarchical NoSQL database used to track the live state of monitored objects, triggers, and monitors.
The Trend DB is a specialized append-only NoSQL database optimized for storing and querying large volumes of performance metrics.
DocDB is a document-oriented database built on RocksDB. It is used for storing metadata and documentation linked to nodes.
Each NetCrunch database serves a distinct role. This architecture allows NetCrunch to balance real-time responsiveness with historical analysis and robust configuration management. It also enables efficient use of resources across environments of different sizes.
NetCrunch's primary goal is to keep all your network data up to date. This includes network nodes and virtual machines.
NetCrunch can periodically discover nodes in a given IP network. Go to IP Networks, select the IP network view, and open Properties → Monitoring to schedule a periodic network discovery. The minimum network discovery interval is 1 hour.
NetCrunch creates automatic folders with all known containers. To enable auto-scanning of an AD container, go to the given view, open Properties → Monitoring, and select the Add Automatically checkbox. Discovered nodes can be added to the Atlas automatically, or they will be kept in the Task notification window so you can filter them manually.
As NetCrunch monitors ESXi virtual machine hosts, it can add newly discovered machines to Network Atlas automatically. The option is disabled by default - you can also add virtual machines from the ESXi VM machine list manually.
Learn how to perform NetCrunch updates, migrate it to other machines, or set up a backup.
NetCrunch is updated several times per year; the procedure might differ slightly depending on the update type. It is recommended to perform a quick or full backup before upgrading NetCrunch.
Maintenance updates (e.g., 15.0.4) contain mostly bug fixes and small improvements that don't require configuration data changes. They can be safely performed by installing the new version over the previous one (the installer will automatically uninstall the previous version).
Check your maintenance license expiration date before initiating the upgrade. If you install the NetCrunch version released after your upgrade & support subscription expiration date, it will work only in the trial license mode (14-day trial).
Minor version updates (e.g., 14.2 or 15.1) contain new features and require some changes to the existing configuration. Thus, we recommend performing a quick configuration backup before starting the update.
You don’t need to save all trend and event data – just save the configuration. See Backup procedure.
Major versions usually introduce important changes to the data to expand program performance, enhance stability, and accommodate new features.
Upgrading from one version to another is fairly easy. All upgrades are performed in place without copying data.
Before upgrading, ensure that the disk usage is less than 50%.
The installer automatically uninstalls the previous version of the program (keeping all data) and asks you to import it. (Make sure you have the latest release of the old version installed - when in doubt, please confirm with the support team.)
Atlas backup is, by default, scheduled to run automatically every day at 1 am. The backup contains Performance Trends data (TrendDB), an Event Log database, and an encrypted password database. The procedure keeps the last 3 backups, but you can increase the number of backups to keep.
To modify the automatic backup schedule, go to Settings → NetCrunch System → Automatic Backup and select the Modify Atlas backup schedule option at the bottom.
Atlas backups store only network monitoring data. The program configuration, such as NetCrunch server options, is not stored.
You can run a backup on demand by going to (hamburger icon) → Maintenance → Backup.
Quick backup can be used before you start making configuration changes so that you can revert to the previous state at any time. It stores only Atlas configuration files such as nodes, alert configuration, maps, etc. It does not store the event log database or performance data.
Settings → NetCrunch System → Automatic Backup
Each Atlas backup is stored in a single file in the directory specified in the program options. The default location points to a subfolder of the program data directory.
It's recommended to store backup files on a separate disk or disk partition. For example "E:\NetCrunch\backup".
(hamburger icon) → Maintenance → Restore...
You can easily restore previous Atlas versions by choosing a backup date in the Restore window. You can decide to restore only configuration or full data.
If you need to migrate NetCrunch to new hardware, export all data to a single file, copy it to another machine, and import it to a freshly installed NetCrunch instance.
1. Select the Backup to separate folder... option.
2. Select Full Backup for migrating data to another machine in the Backup Type drop-down.
3. Upon the first NetCrunch console launch on the new machine, select the Import Atlas from another machine... option to load the NetCrunch backup created on the previous Windows Server.
Read restart recommendations.
We recommend restarting the NetCrunch Server service once a month to improve the system's performance.
We strongly recommend installing Microsoft updates and patches, so your whole system should also be restarted at least once per month.
NetCrunch includes an auto-restart feature that restarts its service on a given day and time. To change auto-restart parameters, go to: Settings NetCrunch System Maintenance. The shortest time between restarts is 1 day, and the longest is 100 days.
There are two main reasons why computers and systems need to be restarted.
Even if the operating system uses algorithms that minimize the risk of fragmentation, it still happens. It manifests as an inability to allocate a large block of RAM. For example, an application must read a 10MB file into memory; the system might refuse to allocate the block despite having gigabytes of free memory. In such cases, the whole operating system must be restarted.
The system has thousands of components, some of which leak resources, and system resources are not limitless. Leaks might be caused by a crashing process (process cleanup does not always reclaim 100% of resources) or a faulty DLL.
In this case, programs or systems restart after a crash (e.g., a blue screen, BSOD). A crash means that the program is in a state that does not allow further processing, which can lead to damage or unexpected behavior. In such cases, the code must be reinitialized into a proper state.
When such a problem occurs in any NetCrunch services (processes), they are automatically restarted, and the system continues running almost uninterrupted.
You can use NCCLI to execute commands serviced by the NetCrunch Server.
The nccli command-line utility is a powerful tool designed for administrators and advanced users to perform various tasks, such as exporting trend and event data from nodes in a network. This utility provides flexibility in terms of specifying time ranges, output formats, and other parameters to ensure that the exported data meets your specific needs. The following documentation outlines the usage, parameters, and examples of how to use the nccli utility effectively.
nccli.exe <command> [parameters]
run-task -task backup | auto-scan | reports | trend-compress
export-trend -node node -counter counter -from fromDate [-to toDate] [-file fileName] [-date-format format]
Example counter name: Processor(_Total)\% Processor Time
Default output folder: <Atlas data folder>\TrendExport
Date format examples: 2024-06-03T08:17:48.291Z, 6/3/2024 10:17:48 AM, 2024-06-03 10:17:48
export-events -node node -from fromDate [-to toDate] [-file fileName] [-output dataFormat] [-local-time] [-date-format format]
Default output folder: <Atlas data folder>\EventExport
Date format examples: 2024-06-03T08:17:48.291Z, 6/3/2024 10:17:48 AM, 2024-06-03 10:17:48
nccli export-trend -node Node01 -counter "Processor(_Total)\% Processor Time" -from 2024-06-01 -to 2024-06-02 -file C:\Export\TrendData.csv -local-date -date-format iso
nccli.exe export-events -node Node01 -from 2024-06-01 -to 2024-06-02 -file C:\Export\EventData.csv -output CSV -local-date -date-format iso
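When exports need to run on a schedule, nccli can be driven from a script. Below is a minimal Python sketch that assembles the export-trend command line shown above; the helper name and argument names of the wrapper are illustrative, not part of NetCrunch:

```python
def build_export_trend_cmd(node, counter, from_date, to_date=None,
                           out_file=None, date_format=None):
    # Assemble an "nccli export-trend" command line as a list of arguments.
    # Only -node, -counter, and -from are mandatory; the rest are optional.
    cmd = ["nccli.exe", "export-trend", "-node", node,
           "-counter", counter, "-from", from_date]
    if to_date:
        cmd += ["-to", to_date]
    if out_file:
        cmd += ["-file", out_file]
    if date_format:
        cmd += ["-date-format", date_format]
    return cmd

cmd = build_export_trend_cmd("Node01", r"Processor(_Total)\% Processor Time",
                             "2024-06-01", to_date="2024-06-02",
                             out_file=r"C:\Export\TrendData.csv",
                             date_format="iso")
# On the NetCrunch Server machine, pass cmd to subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

Keeping the arguments as a list avoids shell-quoting problems with counter names that contain spaces and backslashes.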
Keep in mind that the topics covered in this section are experimental workarounds.
This option enables status calculation based on sensors rather than network services. It may be especially useful if nodes aren't reachable by normal means (e.g., they are behind NAT) and the only way of monitoring them is through sensors.
C:\ProgramData\AdRem\NetCrunch\data\<id of the atlas>
where the id of the atlas is a digit (usually 2 if more than one atlas was created in NetCrunch; this number may differ). In that folder, edit the file ipnodeOptions.json and add an entry such as:
[{"id":1234,"noServices":true}]
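Editing the file can also be scripted. The following Python sketch appends such an entry; the helper name is illustrative, and it assumes (as good practice, not a documented requirement) that you stop the NetCrunch Server service before changing files under its data folder. The demonstration writes to a temporary file rather than the live atlas folder:

```python
import json
import tempfile
from pathlib import Path

def disable_service_status(options_file, node_id):
    # Append {"id": node_id, "noServices": true} to ipnodeOptions.json,
    # creating the file if it does not exist (illustrative helper).
    path = Path(options_file)
    entries = json.loads(path.read_text()) if path.exists() else []
    if not any(entry.get("id") == node_id for entry in entries):
        entries.append({"id": node_id, "noServices": True})
    path.write_text(json.dumps(entries))
    return entries

# Demonstrated against a temporary file, not the real atlas data folder:
options_path = Path(tempfile.mkdtemp()) / "ipnodeOptions.json"
entries = disable_service_status(options_path, 1234)
print(entries)
```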
NetCrunch REST API is a powerful tool that allows you to get information, add, remove and modify nodes, views, folders, and policies.
To see all possibilities with examples, please go directly to Nodes, Atlas Views, and Policies chapters.
generate-api-key
Before the NetCrunch API can be used, an API key is required. To create one, follow the steps below:
The NetCrunch REST API consists of four scopes that allow you to access and manipulate various data from NetCrunch.
Data returned by NetCrunch is in JSON format.
Every request has to specify the scope and use the generated API key.
https://<NetCrunch Server IP>/api/rest/2/<scope>?key=<user API key>
Most of the scopes accept either names, IPs, or IDs.
The ID of a node, view, or monitoring pack can be found in settings. To copy it, click on it.
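The request scheme can be wrapped in a small helper. This Python sketch builds such URLs; the server address is a placeholder, and it passes the key as the api_key query parameter, as the working cURL examples in this chapter do:

```python
def rest_url(server, scope, resource=None, key="", **params):
    # Build a NetCrunch REST API v2 URL (illustrative helper).
    # resource can be a name, IP, or ID, depending on the scope.
    url = f"https://{server}/api/rest/2/{scope}"
    if resource is not None:
        url += f"/{resource}"
    query = [("api_key", key)] + list(params.items())
    return url + "?" + "&".join(f"{name}={value}" for name, value in query)

url = rest_url("10.0.0.5", "nodes", resource="192.168.0.25",
               key="fb91cd1f6479db477276761a93e878d0b65e2039",
               properties="name,status")
print(url)
```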
cURL
curl --request GET --url "http://<netcrunch>/api/rest/2/nodes/192.168.0.25?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Result
{ "id": 1001, "name": "ncsrv.ac.acme", "dnsName": "ncsrv.ac.acme", "networkAddress": "10.20.16.94", "networkPrefixLength": 21, "snmpComputerName": "", "snmpOsDescription": "", "snmpSysObjId": "", "snmpLocation": "", "snmpAvailable": false, "snmpManaged": false, "snmpProfile": "", "snmpPort": 161, "snmpTimeout": 5000, "snmpRetryCount": 3, "displayName": "", "organization": "<Root>", "networkServices": [ { "name": "PING", "status": "OK" }, { "name": "HTTP", "status": "OK" }, { "name": "CIFS/SMB", "isLeading": true, "status": "OK" } ], "monitoringTime": 1, "netBiosName": "", "identification": "ipAddress", "deviceType": { "class": "Server/Workstation", "os": "Windows Server", "version": "Windows 2016 Server", "manufacturer": "VMware" }, "simplifiedMonitoring": false, "status": "OK", "avgResponseTime": 1, "maxResponseTime": 1, "alerts24h": { "count": 1, "critical": 0, "warning": 0, "unacknowledged": 0 }, "lastAlert": { "id": 187, "info": "PhysicalDisk(_Total)\% Disk Time 32.19 is back below reset value 60", "serverity": "Warning", "time": "2018-12-04T10:52:42.000Z" }, "macAddress": "005056AB9815", "enabled": true, "disabledFrom": null, "disabledUntil": null, "addTime": "2018-12-03T13:58:46.841Z", "lastStatusChange": "2018-12-04T10:56:52.944Z", "issueCount": 0, "virtualization": { "type": "VMware", "hostNodeId": 1061, "hostName": "esxi05.ac.acme", "dataCenter": "ha-datacenter" }, "monitoringEngines": [ { "name": "win", "enabled": true, "status": "OK" }, { "name": "ntsvc", "enabled": true, "status": "unknown" }, { "name": "ntlog", "enabled": true, "status": "OK" }, { "name": "inv", "enabled": true, "status": "OK" }, { "name": "sensors", "enabled": true, "status": "OK" } ], "pendingAlertsCount": 1, "customFields": { "Virtual Machine ID": "ncsrv.ac.acme" }, "interfacesMonitoringEnabled": false, "organizationalUnit": "For Running NC Machine", "snmpTrapCodePage": 4294967295, "sysLogCodePage": 4294967295, "lastNote": null, "nodeType": "IP Node", "osMonitorType": "windows", 
"hypervisorKind": "none", "probeType": "", "addressSpace": "" }
When an API key is used to execute a request, the name of the application (provided in configuration), the username (to which the API key is linked), and the IP address (origin of the request) are logged to the NetCrunch event log.
Nodes API gives many possibilities to access, modify, add, and remove node data from NetCrunch. It can be used to manage NetCrunch through scripts and external applications.
add-node
Add a node to the Atlas and enable monitoring. Either "name" or "networkAddress" is mandatory; the rest of the properties are optional.
POST /api/rest/2/nodes
cURL
curl --request POST --url "https://<netcrunch>/api/rest/2/nodes?api_key=<api key>" --header "Content-Type: application/json" --data "{\"name\": \"<node name or address to add>\"}"
curl --request POST --url "http://<netcrunch>/api/rest/2/nodes?api_key=<api-key>&name=<node name or address to add>"
Node.js
const http = require('http'),
  options = {
    method: 'POST',
    hostname: 'localhost',
    path: '/api/rest/2/nodes',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039'
    }
  },
  nodeData = { "name": "NewNode.ac.acme" };

const req = http.request(options, (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) });
});
req.write(JSON.stringify(nodeData));
req.end();
PowerShell
$url = "http://localhost/api/rest/2/nodes"
$viewData = @{ name='NewNode.ac.acme' }
$body = (ConvertTo-Json $viewData)
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Post -Body $body -Headers $hdrs
Python
import requests

url = 'http://localhost/api/rest/2/nodes'
data = { "name": "NewNode.ac.acme" }
headers = { 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
# json= serializes the payload and sets the Content-Type header automatically
requests.post(url=url, json=data, headers=headers)
Field Name | Description |
---|---|
name | DNS name |
networkAddress | IP Address |
networkPrefixLength | Network mask prefix length |
displayName | Display name for node |
organization | Name of organization. It must be added to NetCrunch prior to using it in API |
snmpManaged | true/false - enable SNMP monitoring on the node |
snmpProfile | Name of SNMP profile |
snmpPort | SNMP Port |
snmpTimeout | SNMP monitoring timeout in milliseconds |
snmpRetryCount | Retry count for SNMP monitoring |
networkServices | List of network services to monitor |
monitoringTime | monitoring time represented in minutes |
dependsOnNode | ID/name/address of node this node depends on |
identification | node identification for monitoring. Values: 'default', 'ipAddress', 'dnsName', 'macAddress' |
deviceType | object representing type of device |
parentNode | ID/name/address of parent node |
children | List of node ID/name/address representing children of this node |
enabled | enable/disable monitoring of this node (true/false) |
simplifiedMonitoring | Set simplified monitoring to true/false |
disabledFrom | Set to disable monitoring (this is in UTC time) |
disabledUntil | Set to disable monitoring (this is in UTC time) |
customFields | object representing list of custom field values |
interfacesMonitoringEnabled | enable/disable monitoring of interfaces on this node |
snmpTrapPage | snmp trap code page |
sysLogPage | syslog code page |
organization | name of organization for this node |
osMonitorType | Name of OS monitor to use. Values: 'auto', 'none', 'windows', 'macOS', 'linux', 'bsd', 'esx', 'solaris' |
monitoringProvider | Name of the monitoring probe (if the parameter is not used or the monitoring probe does not exist, NetCrunch Server is the default) |
Example
{ "name": "admin.ad.acme", "networkAddress": "192.168.0.25", "networkPrefixLength": 24, "displayName": "My New Node", "organization": "My organization", "identification": "ipAddress", "deviceType": { "class": "Switch", "manufacturer": "3Com", "model": "LinkSwitch 2200" }, "dependsOnNode": "10.10.10.2", "children": [ "10.10.2.34", "10.10.2.35" ], "snmpManaged": true, "snmpProfile": "Default (read-write)", "snmpPort": 161, "snmpTimeout": 5000, "snmpRetryCount": 3, "interfacesMonitoringEnabled": true, "osMonitorType": "linux", "networkServices": [ { "name": "SSH", "timeout": 5000, "repeat": 3, "additionalRepeat": 0, "monitoringTime": 5 }, { "name": "CIFS/SMB", "isLeading": true } ], "simplifiedMonitoring": false, "enabled": true }
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
Result
The result contains the id of the added node.
{ "id": 1001 }
get-node-properties
Get properties of the node based on the filter.
List of all available properties
GET /api/rest/2/nodes/<node>?properties=<filter>
cURL
curl --request GET --url "http://<netcrunch>/api/rest/2/nodes/192.168.0.25?api_key=fb91cd1f6479db477276761a93e878d0b65e2039&properties=name,status"
Node.js
const http = require('http');
http.get('http://localhost/api/rest/2/nodes/192.168.0.25?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6&properties=name,status', (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) });
});
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/nodes/192.168.0.25?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6&properties=name,status"
Python
import requests

requests.get('http://localhost/api/rest/2/nodes/192.168.0.25?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6&properties=name,status')
Parameters
Name | Description |
---|---|
node identifier | Node ID, DNS name or IP Address |
filter | List of properties. Returns all properties if omitted or if the value is "all" or "*" |
Status Codes
Status Code | Description |
---|---|
200 | OK |
412 | Missing or invalid parameter |
401 | Access denied |
404 | Node not found |
Example Result
{ "id": 1001, "name": "admin.ad.acme", "dnsName": "admin.ad.acme", "networkAddress": "192.168.0.25", "networkPrefixLength": 24, "snmpComputerName": "admin.ad.acme", "snmpOsDescription": "Hardware: Intel64 Family 6 Model 26 Stepping 5 AT/AT COMPATIBLE - Software: Windows Version 6.3 (Build 16299 Multiprocessor Free)", "snmpSysObjId": "1.3.6.1.4.1.311.1.1.3.1.1", "snmpLocation": "Development", "snmpAvailable": true, "snmpManaged": true, "snmpProfile": "Default (read-write)", "snmpPort": 161, "snmpTimeout": 5000, "snmpRetryCount": 3, "displayName": "My Node", "monitoringTime": "5", "networkServices": [ { "name": "SSH", "status": "Not Responding", "errorMessage": "Can't open connection" }, { "name": "CIFS/SMB", "isLeading": true, "status": "OK" } ], "dependsOnNode": { "nodeId": 1003, "name": "test.ad.acme", "networkAddress":"10.10.3.72", "dnsName":"test.ad.acme" }, "netBiosName": "ADMIN", "identification": "default", "deviceType":{ "class":"Server/Workstation", "os":"Windows Server", "version":"Windows 2012 R2 Server" }, "parentNode": { "nodeId": 1003, "name": "test.ad.acme", "networkAddress":"10.10.3.72", "dnsName":"test.ad.acme" }, "simplifiedMonitoring": false, "status": "OK", "avgResponseTime":10, "maxResponseTime":100, "alerts24h":{ "count":5, "critical":2, "warning":3, "unacknowledged":1 }, "lastAlert":{ "id":585266, "info":"Node Down", "serverity":"Critical", "time":"2018-10-09T06:15:15.000Z" }, "macAddress":"D89EF32B5ADD", "enabled":true, "disabledFrom":"2018-03-08T13:27:34.876Z", "disabledUntil":"2018-03-09T13:27:34.876Z", "addTime":"2018-02-08T13:27:34.876Z", "lastStatusChange":"2018-10-09T10:08:58.885Z", "issueCount": 2, "connectedToSwitch":{ "nodeId":1078, "interface":"Port-channel2", "port":464, "vlanId":999 }, "virtualization": { "type": "Hyper-V", "hostNodeId": 1023, "hostName": "virt-host.ad.acme", "dataCenter": "Master Data Center" }, "monitoringEngines":[ {"id":"win", "name": "Windows", "enabled":true, "status":"OK", credentials: 'MyPassword', 
monitoringTime: 4}, {"id":"ntsvc", "name": "Windows Services", "enabled":true, "status":"unknown"}, {"id":"ntlog", "name": "Windows Event Log", "enabled":true, "status":"OK"} ], "children":[1506,1507], "pendingAlertsCount":0, "customFields":{ "Info 1":"Wacław", "Info 2":"Misiek", "Pick":"3", "Date Field":"1990-10-04T23:00:00.000Z" }, "interfacesMonitoringEnabled": true, "organizationalUnit": "Testing", "snmpTrapCodePage":4294967295, "sysLogCodePage":4294967295, "lastNote":null, "templateNode": { "nodeId": 1234, "name": "template", "networkAddress":"", "dnsName":"" }, "addressSpace":"", "sensors":[{ "UId":"Pending Reboot#1546607522658", "Status":"Unknown", "Changed":"2019-01-04T08:37:08.057Z", "Object":"MyObject", "Instance":"MyObject", "Name":"My Sensor", "Alerts":0, "CfgGroup":"sensors", "Message":"", "Enabled":true, "MonitoringTime":1, "Credentials":"<Use from Windows monitor>" }], "tags": "A,B,C" }
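A response like the example result above can be reduced to just the fields a script needs. The following Python sketch pulls out a small health summary; the helper name and the chosen fields are illustrative, with field names taken from the example result:

```python
def node_summary(node):
    # Reduce a get-node-properties result to a few health-related fields.
    alerts = node.get("alerts24h", {})
    return {
        "name": node.get("name"),
        "status": node.get("status"),
        "critical24h": alerts.get("critical", 0),
        "downServices": [svc["name"]
                         for svc in node.get("networkServices", [])
                         if svc.get("status") != "OK"],
    }

# A trimmed sample shaped like the example result above:
sample = {
    "name": "admin.ad.acme",
    "status": "OK",
    "alerts24h": {"count": 5, "critical": 2, "warning": 3, "unacknowledged": 1},
    "networkServices": [
        {"name": "SSH", "status": "Not Responding"},
        {"name": "CIFS/SMB", "isLeading": True, "status": "OK"},
    ],
}
print(node_summary(sample))
```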
get-node-property
Get a single property value.
GET /api/rest/2/nodes/<node identifier>/<property>
List of all available properties
cURL
curl --request GET --url "http://<netcrunch>/api/rest/2/nodes/1001/name?api_key=<api key>"
Node.js
const http = require('http');
http.get('http://localhost/api/rest/2/nodes/1001/name?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6', (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) });
});
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/nodes/1001/name?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6"
Python
import requests

requests.get('http://localhost/api/rest/2/nodes/1001/name?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6')
Parameters
Name | Description |
---|---|
node identifier | Node ID, DNS name or IP Address |
property | Name of node property to get. The property name can contain a dot to query for sub-property. For example: lastAlert.id or customFields.Info 1 |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node not found |
Result
The result contains property and value.
{ "name": "email.ac.acme" }
get-list-of-maps-of-the-node
Get a list of views of the node. The result is returned as an array of JSON objects.
GET /api/rest/2/nodes/<node identifier>/maps
cURL
curl --request GET --url "http://<netcrunch>/api/rest/2/nodes/1001/maps?api_key=<api key>"
Node.js
const http = require('http');
http.get('http://localhost/api/rest/2/nodes/1001/maps?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6', (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) });
});
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/nodes/1001/maps?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6"
Python
import requests
requests.get('http://localhost/api/rest/2/nodes/1001/maps?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6')
Parameters
Name | Description |
---|---|
node identifier | ID, name or IP address of the node |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node not found |
Result
[ {"id":75,"name":"Locations"}, {"id":219,"name":"Operating System Monitoring"}, {"id":1005,"name":"Monitoring Dependencies"}, {"id":1006,"name":"10.10.8.13/26"}, {"id":1007,"name":"Windows 10"}, {"id":1008,"name":"Workstations"} ]
get-list-of-monitoring-packs-of-the-node
Get a list of Monitoring Packs of the node. The list is returned as an array of JSON objects.
GET/api/rest/2/nodes/<node identifier>/policies
cURL
curl --request GET --url "http://<netcrunch>/api/rest/2/nodes/1001/policies?api_key=<api key>"
Node.js
const http = require('http'); http.get('http://localhost/api/rest/2/nodes/1001/policies?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6', (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) });
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/nodes/1001/policies?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6"
Python
import requests
requests.get('http://localhost/api/rest/2/nodes/1001/policies?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6')
Parameters
Name | Description |
---|---|
node identifier | ID, name or IP address of the node |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node not found |
Result
[ {"id":64,"name":"Basic Windows Monitoring"}, {"id":78,"name":"Node Status"}, {"id":79,"name":"Service Status"} ]
set-node-properties
Use a single request to change or set multiple properties of the node. Send a JSON object as the payload to change multiple properties at once.
List of all available properties
PUT/api/rest/2/nodes/<node identifier>
cURL
curl --request PUT --url "http://<netcrunch>/api/rest/2/nodes/1001?api_key=<api key>" --header "Content-Type: application/json" --data "{\"name\": \"NewNodeName.ac.acme\"}"
Node.js
const http = require('http'),
  options = {
    method: 'PUT',
    hostname: 'localhost',
    path: '/api/rest/2/nodes/1001',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039'
    }
  },
  viewData = { "name": "NewNodeName.ac.acme" };
const req = http.request(options, (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) })
});
req.write(JSON.stringify(viewData));
req.end();
PowerShell
$url = "http://localhost/api/rest/2/nodes/1001" $viewData = @{ name='NewNodeName.ac.acme' } $body = (ConvertTo-Json $viewData) $hdrs = @{} $hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039") $hdrs.Add("Content-Type","application/json") Invoke-RestMethod -Uri $url -Method Put -Body $body -Headers $hdrs
Python
import requests

url = 'http://localhost/api/rest/2/nodes/1001'
data = { "name": "NewNodeName.ac.acme" }
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.put(url=url, json=data, headers=headers)
Parameters
Name | Description |
---|---|
node identifier | Node ID, DNS name or IP Address |
property | Name of node property to set. The property name can contain a dot to query a sub-property. For example, lastAlert.id or customFields.Info 1 |
value | Value to set for the field |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node not found |
Result
{ "status": "ok" }
set-node-property
Change or set a single property of the node.
List of all available properties
PUT/api/rest/2/nodes/<node identifier>/<property>
cURL
curl --request PUT --url "http://<netcrunch>/api/rest/2/nodes/1001/name?api_key=<api key>&value=<value>"
Node.js
const http = require('http'),
  options = {
    method: 'PUT',
    hostname: 'localhost',
    path: '/api/rest/2/nodes/1001',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039'
    }
  },
  viewData = { "name": "NewNodeName.ac.acme" };
const req = http.request(options, (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) })
});
req.write(JSON.stringify(viewData));
req.end();
PowerShell
$url = "http://localhost/api/rest/2/nodes/1001" $viewData = @{ name='NewNodeName.ac.acme' } $body = (ConvertTo-Json $viewData) $hdrs = @{} $hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039") $hdrs.Add("Content-Type","application/json") Invoke-RestMethod -Uri $url -Method Put -Body $body -Headers $hdrs
Python
import requests

url = 'http://localhost/api/rest/2/nodes/1001'
data = { "name": "NewNodeName.ac.acme" }
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.put(url=url, json=data, headers=headers)
Parameters
Name | Description |
---|---|
node identifier | Node ID, DNS name or IP Address |
property | Name of node property to set. The property name can contain a dot to query a sub-property. For example, lastAlert.id or customFields.Info 1 |
value | Value to set for the field |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node not found |
Result
{ "id": 1001 }
manage-node-monitoring-state
Enable or disable node monitoring by setting the value to on or off. You can additionally schedule a time when the node should be disabled and re-enabled.
PUT/api/rest/2/nodes/<node identifier>/monitoring/<value>
cURL
curl --request PUT --url "http://<netcrunch>/api/rest/2/nodes/1001/monitoring/off?api_key=fb91cd1f6479db477276761a93e878d0b65e2039" --header "Content-Type: application/json" --data "{\"disabledFrom\": \"2018-03-08T13:27:34.876Z\", \"disabledUntil\": \"2018-03-09T13:27:34.876Z\"}"
Node.js
const http = require('http'),
  options = {
    method: 'PUT',
    hostname: 'localhost',
    path: '/api/rest/2/nodes/1001/monitoring/off',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039'
    }
  },
  viewData = {
    "disabledFrom": "2018-03-08T13:27:34.876Z",
    "disabledUntil": "2018-03-09T13:27:34.876Z"
  };
const req = http.request(options, (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) })
});
req.write(JSON.stringify(viewData));
req.end();
PowerShell
$url = "http://localhost/api/rest/2/nodes/1001/monitoring/off" $viewData = @{ disabledFrom='2018-03-08T13:27:34.876Z' disabledUntil='2018-03-09T13:27:34.876Z' } $body = (ConvertTo-Json $viewData) $hdrs = @{} $hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039") $hdrs.Add("Content-Type","application/json") Invoke-RestMethod -Uri $url -Method Put -Body $body -Headers $hdrs
Python
import requests

url = 'http://localhost/api/rest/2/nodes/1001/monitoring/off'
data = { "disabledFrom": "2018-03-08T13:27:34.876Z", "disabledUntil": "2018-03-09T13:27:34.876Z" }
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.put(url=url, json=data, headers=headers)
Parameters
Name | Description |
---|---|
node identifier | Node ID, DNS name or IP Address |
value | Values "on" / "off" |
disabledFrom | Date to disable from. Ignored when value is "on" |
disabledUntil | Date to disable until. Ignored when value is "on" |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node not found |
Result
{ "status": "ok" }
add-network-service
Add a network service to the node. The node identifier and the network service name are mandatory; if no other parameters are provided, program defaults are used.
POST/api/rest/2/nodes/<node>/network-services
cURL
curl --request POST --url "https://localhost/api/rest/2/nodes/192.168.3.10/network-services?api_key=fb91cd1f6479db477276761a93e878d0b65e2039&name=SSH"
Node.js
const http = require('http'); const req = http.request('http://localhost/api/rest/2/nodes/1001/network-services?api_key=6e1190b27df8f5eb2bebeddea87426001e6f4b3d&name=SSH', { method: 'POST' }, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
Invoke-RestMethod -Method Post -Uri "http://localhost/api/rest/2/nodes/1001/network-services?api_key=6e1190b27df8f5eb2bebeddea87426001e6f4b3d&name=SSH"
Python
import requests
requests.post('http://localhost/api/rest/2/nodes/1001/network-services?api_key=6e1190b27df8f5eb2bebeddea87426001e6f4b3d&name=SSH')
Parameters
Name | Description |
---|---|
node identifier | ID or name or IP address of node |
name | Name of service to monitor |
timeout | Specifies the maximum time in milliseconds that NetCrunch should wait for a reply |
repeat | Number of requests to send in a single check |
additionalRepeat | Number of additional checks when service is not responding (Error, timeout, etc.) |
monitoringTime | Time set in minutes |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node not found |
Result
{ "status": "ok" }
set-network-services-parameters
Modify the parameters of a monitored network service on a node.
PUT/api/rest/2/nodes/<node identifier>/network-services/<service>
cURL
curl --request PUT --url "https://localhost/api/rest/2/nodes/192.168.3.10/network-services/SSH?api_key=fb91cd1f6479db477276761a93e878d0b65e2039&timeout=2000"
Node.js
const https = require('https'); const req = https.request('https://localhost/api/rest/2/nodes/192.168.3.10/network-services/SSH?api_key=fb91cd1f6479db477276761a93e878d0b65e2039&timeout=2000', { method: 'PUT' }, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
Invoke-RestMethod -Method put -Uri "https://localhost/api/rest/2/nodes/192.168.3.10/network-services/SSH?api_key=fb91cd1f6479db477276761a93e878d0b65e2039&timeout=2000"
Python
import requests
requests.put('https://localhost/api/rest/2/nodes/192.168.3.10/network-services/SSH?api_key=fb91cd1f6479db477276761a93e878d0b65e2039&timeout=2000')
Parameters
Name | Description |
---|---|
node identifier | ID or name or IP address of node |
service | Name of network service to modify |
timeout | Specifies the maximum time in milliseconds that NetCrunch should wait for a reply |
repeat | Number of requests to send in a single check |
additionalRepeat | Number of additional checks when service is not responding (Error, timeout, etc.) |
monitoringTime | Time set in minutes |
isLeading | true/false, Leading Service is a network service designed to be checked as the only service when the node is DOWN |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node not found |
Result
{ "status": "ok" }
remove-network-service
Remove network service monitoring from the node.
DELETE/api/rest/2/nodes/<node identifier>/network-services/<service>
cURL
curl --request DELETE --url "https://localhost/api/rest/2/nodes/192.168.3.10/network-services/SSH?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'); const req = http.request('http://localhost/api/rest/2/nodes/1001/network-services/SSH?api_key=6e1190b27df8f5eb2bebeddea87426001e6f4b3d', { method: 'DELETE' }, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
Invoke-RestMethod -Method Delete -Uri "http://localhost/api/rest/2/nodes/1001/network-services/SSH?api_key=6e1190b27df8f5eb2bebeddea87426001e6f4b3d"
Python
import requests
requests.delete('http://localhost/api/rest/2/nodes/1001/network-services/SSH?api_key=6e1190b27df8f5eb2bebeddea87426001e6f4b3d')
Parameters
Name | Description |
---|---|
node identifier | ID or name or IP address of node |
service | Name of network service to remove |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node not found |
Result
{ "status": "ok" }
set-custom-field-value
Set the value of a custom field on a node. The custom field must already be defined in NetCrunch before you can set its value.
PUT/api/rest/2/nodes/<node>/custom-fields/<field>
cURL
curl --request PUT --url "https://localhost/api/rest/2/nodes/192.168.3.10/custom-fields/Info%201?api_key=fb91cd1f6479db477276761a93e878d0b65e2039&value=My_Value"
Node.js
const https = require('https'); const req = https.request('https://localhost/api/rest/2/nodes/192.168.3.10/custom-fields/Info%201?api_key=fb91cd1f6479db477276761a93e878d0b65e2039&value=My_Value', { method: 'PUT' }, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
Invoke-RestMethod -Method put -Uri "https://localhost/api/rest/2/nodes/192.168.3.10/custom-fields/Info%201?api_key=fb91cd1f6479db477276761a93e878d0b65e2039&value=My_Value"
Python
import requests
requests.put('https://localhost/api/rest/2/nodes/192.168.3.10/custom-fields/Info%201?api_key=fb91cd1f6479db477276761a93e878d0b65e2039&value=My_Value')
Parameters
Name | Description |
---|---|
node identifier | ID or name or IP address of node |
field | Name of the custom field to set |
value | New field value |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node or field not found |
Result
{ "status": "ok" }
remove-custom-field-value
Remove custom field value from the node.
DELETE/api/rest/2/nodes/<node identifier>/custom-fields/<field>
cURL
curl --request DELETE --url "https://localhost/api/rest/2/nodes/192.168.3.10/custom-fields/Info%201?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const https = require('https'); const req = https.request('https://localhost/api/rest/2/nodes/192.168.3.10/custom-fields/Info%201?api_key=fb91cd1f6479db477276761a93e878d0b65e2039', { method: 'DELETE' }, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
Invoke-RestMethod -Method delete -Uri "https://localhost/api/rest/2/nodes/192.168.3.10/custom-fields/Info%201?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Python
import requests
requests.delete('https://localhost/api/rest/2/nodes/192.168.3.10/custom-fields/Info%201?api_key=fb91cd1f6479db477276761a93e878d0b65e2039')
Parameters
Name | Description |
---|---|
node identifier | ID or name or IP address of node |
field | Name of the custom field to remove |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node or field not found |
Result
{ "status": "ok" }
add-secondary-interface
Add a secondary interface to a node. Both primary and secondary interface nodes must be monitored by NetCrunch.
POST/api/rest/2/nodes/<node identifier>/children/<child>
cURL
curl --request POST --url "https://localhost/api/rest/2/nodes/192.168.3.10/children/192.168.3.20?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const https = require('https'); const req = https.request('https://localhost/api/rest/2/nodes/192.168.3.10/children/192.168.3.20?api_key=fb91cd1f6479db477276761a93e878d0b65e2039', { method: 'POST' }, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
Invoke-RestMethod -Method Post -Uri "https://localhost/api/rest/2/nodes/192.168.3.10/children/192.168.3.20?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Python
import requests
requests.post('https://localhost/api/rest/2/nodes/192.168.3.10/children/192.168.3.20?api_key=fb91cd1f6479db477276761a93e878d0b65e2039')
Parameters
Name | Description |
---|---|
node identifier | ID or name or IP address of node |
child | ID or name or IP address of a node to add |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node or child not found |
Result
{ "status": "ok" }
remove-secondary-interface
Remove the secondary interface node from the node.
DELETE/api/rest/2/nodes/<node identifier>/children/<child>
cURL
curl --request DELETE --url "https://localhost/api/rest/2/nodes/192.168.3.10/children/192.168.3.20?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const https = require('https'); const req = https.request('https://localhost/api/rest/2/nodes/192.168.3.10/children/192.168.3.20?api_key=fb91cd1f6479db477276761a93e878d0b65e2039', { method: 'DELETE' }, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
Invoke-RestMethod -Method delete -Uri "https://localhost/api/rest/2/nodes/192.168.3.10/children/192.168.3.20?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Python
import requests
requests.delete('https://localhost/api/rest/2/nodes/192.168.3.10/children/192.168.3.20?api_key=fb91cd1f6479db477276761a93e878d0b65e2039')
Parameters
Name | Description |
---|---|
node identifier | ID or name or IP address of node |
child | ID or name or IP address of a node to remove |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node or child not found |
Result
{ "status": "ok" }
set-sensor-parameters
Set parameters of the sensor
PUT/api/rest/2/nodes/<node>/sensors/<sensor>
cURL
curl --request PUT --url "https://localhost/api/rest/2/nodes/192.168.3.10/sensors/My%20Sensor?api_key=fb91cd1f6479db477276761a93e878d0b65e2039" --header "Content-Type: application/json" --data "{\"enabled\": true, \"credentials\": \"CredentialsProfile\", \"monitoringTime\": 5}"
Node.js
const http = require('http'),
  querystring = require('querystring'),
  options = {
    method: 'PUT',
    hostname: 'localhost',
    path: '/api/rest/2/nodes/192.168.3.10/sensors/' + querystring.escape('My Sensor'),
    headers: {
      'content-type': 'application/json',
      'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039'
    }
  },
  data = { "enabled": true, "credentials": "CredentialsProfile", "monitoringTime": 5 };
const req = http.request(options, (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) })
});
req.write(JSON.stringify(data));
req.end();
PowerShell
$url = "http://localhost/api/rest/2/nodes/192.168.3.10/sensors/My%20Sensor" $data = @{ enabled=true, credentials='CredentialsProfile', monitoringTime=5 } $body = (ConvertTo-Json $viewData) $hdrs = @{} $hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039") $hdrs.Add("Content-Type","application/json") Invoke-RestMethod -Uri $url -Method Put -Body $body -Headers $hdrs
Python
import requests

url = 'http://localhost/api/rest/2/nodes/192.168.3.10/sensors/My%20Sensor'
data = { "enabled": True, "credentials": "CredentialsProfile", "monitoringTime": 5 }
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
response = requests.put(url=url, json=data, headers=headers)
URL Parameters
Name | Description |
---|---|
node identifier | ID, name, or IP address of the node |
sensor | UID or name of the sensor to modify. It may be a list of values separated by commas |
Data Parameters
Name | Description |
---|---|
enabled | True/False |
credentials | Name of Credentials profile |
monitoringTime | Value in minutes |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node or sensor not found |
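Because the sensor path segment may carry several comma-separated names, each name must be percent-encoded individually while the separating commas stay literal. A small Python sketch (the sensor names are examples only):

```python
from urllib.parse import quote

def sensors_path(node, sensor_names):
    # Encode each sensor name on its own (spaces, '#', etc. become
    # percent-escapes), then join with literal commas so the server
    # still sees a comma-separated list.
    joined = ','.join(quote(name, safe='') for name in sensor_names)
    return f"/api/rest/2/nodes/{node}/sensors/{joined}"

print(sensors_path('192.168.3.10', ['My Sensor', 'Pending Reboot#1546607522658']))
# /api/rest/2/nodes/192.168.3.10/sensors/My%20Sensor,Pending%20Reboot%231546607522658
```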
set-monitoring-engine-parameters
Set parameters of a monitoring engine on the node.
PUT/api/rest/2/nodes/<node>/monitoring-engines/<engine>
cURL
curl --request PUT --url "https://localhost/api/rest/2/nodes/192.168.3.10/monitoring-engines/Windows?api_key=fb91cd1f6479db477276761a93e878d0b65e2039" --header "Content-Type: application/json" --data "{\"enabled\": true, \"credentials\": \"CredentialsProfile\", \"monitoringTime\": 5}"
Node.js
const http = require('http'),
  options = {
    method: 'PUT',
    hostname: 'localhost',
    path: '/api/rest/2/nodes/192.168.3.10/monitoring-engines/Windows',
    headers: {
      'content-type': 'application/json',
      'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039'
    }
  },
  data = { "enabled": true, "credentials": "CredentialsProfile", "monitoringTime": 5 };
const req = http.request(options, (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) })
});
req.write(JSON.stringify(data));
req.end();
PowerShell
$url = "http://localhost/api/rest/2/nodes/192.168.3.10/monitoring-engines/Windows" $data = @{ enabled=true, credentials='CredentialsProfile', monitoringTime=5 } $body = (ConvertTo-Json $viewData) $hdrs = @{} $hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039") $hdrs.Add("Content-Type","application/json") Invoke-RestMethod -Uri $url -Method Put -Body $body -Headers $hdrs
Python
import requests

url = 'http://localhost/api/rest/2/nodes/192.168.3.10/monitoring-engines/Windows'
data = { "enabled": True, "credentials": "CredentialsProfile", "monitoringTime": 5 }
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
response = requests.put(url=url, json=data, headers=headers)
URL Parameters
Name | Description |
---|---|
node identifier | ID, name or IP address of a node |
engine | ID or name of monitoring engine |
Data Parameters
Name | Description |
---|---|
enabled | true/false |
credentials | Name of credentials profile |
monitoringTime | Value in minutes. Set to null to use the default (from the node) |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node or monitoring engine not found |
delete-sensor
Remove a sensor from the node.
DELETE/api/rest/2/nodes/<node>/sensors/<sensor>
cURL
curl --request DELETE --url "https://localhost/api/rest/2/nodes/192.168.3.10/sensors/My%20Sensor?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'),
  querystring = require('querystring'),
  options = {
    method: 'DELETE',
    hostname: 'localhost',
    path: '/api/rest/2/nodes/192.168.3.10/sensors/' + querystring.escape('My Sensor'),
    headers: {
      'content-type': 'application/json',
      'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039'
    }
  };
const req = http.request(options, (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) })
});
req.end();
PowerShell
$url = "http://localhost/api/rest/2/nodes/192.168.3.10/sensors/My%20Sensor" $hdrs = @{} $hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039") $hdrs.Add("Content-Type","application/json") Invoke-RestMethod -Uri $url -Method Delete -Headers $hdrs
Python
import requests

url = 'http://localhost/api/rest/2/nodes/192.168.3.10/sensors/My%20Sensor'
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
response = requests.delete(url=url, headers=headers)
Parameters
Name | Description |
---|---|
node identifier | ID, name or IP address of the node |
sensor | UID or name of the sensor to delete. It may be a list of values separated by commas |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node or sensor not found |
add-node-tag
Add tag to a node.
PUT/api/rest/2/nodes/<node>/tags/<tag>
cURL
curl --request PUT --url "https://localhost/api/rest/2/nodes/192.168.3.10/tags/tagA?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const https = require('https'); const req = https.request('https://localhost/api/rest/2/nodes/192.168.3.10/tags/tagA?api_key=fb91cd1f6479db477276761a93e878d0b65e2039', { method: 'PUT' }, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
Invoke-RestMethod -Method put -Uri "https://localhost/api/rest/2/nodes/192.168.3.10/tags/tagA?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Python
import requests
requests.put('https://localhost/api/rest/2/nodes/192.168.3.10/tags/tagA?api_key=fb91cd1f6479db477276761a93e878d0b65e2039')
Parameters
Name | Description |
---|---|
node identifier | ID or name or IP address of node |
tag | Name of the tag to add |
Status Codes
Status Code | Description |
---|---|
200 | OK, tag already exists |
201 | Created |
401 | Access denied |
404 | Node not found |
Result
{ "status": "ok" }
remove-node-tag
Remove a tag from the node.
DELETE/api/rest/2/nodes/<node identifier>/tags/<tag>
cURL
curl --request DELETE --url "https://localhost/api/rest/2/nodes/192.168.3.10/tags/tagA?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const https = require('https'); const req = https.request('https://localhost/api/rest/2/nodes/192.168.3.10/tags/tagA?api_key=fb91cd1f6479db477276761a93e878d0b65e2039', { method: 'DELETE' }, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
Invoke-RestMethod -Method delete -Uri "https://localhost/api/rest/2/nodes/192.168.3.10/tags/tagA?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Python
import requests
requests.delete('https://localhost/api/rest/2/nodes/192.168.3.10/tags/tagA?api_key=fb91cd1f6479db477276761a93e878d0b65e2039')
Parameters
Name | Description |
---|---|
node identifier | ID or name or IP address of node |
tag | Name of the tag to remove |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node or tag not found |
Result
{ "status": "ok" }
delete-node
Remove the node from the Atlas.
DELETE/api/rest/2/nodes/<node identifier>
cURL
curl --request DELETE --url "https://localhost/api/rest/2/nodes/192.168.3.15?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const https = require('https'); const req = https.request('https://localhost/api/rest/2/nodes/192.168.3.15?api_key=fb91cd1f6479db477276761a93e878d0b65e2039', { method: 'DELETE' }, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
Invoke-RestMethod -Method delete -Uri "https://localhost/api/rest/2/nodes/192.168.3.15?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Python
import requests
requests.delete('https://localhost/api/rest/2/nodes/192.168.3.15?api_key=fb91cd1f6479db477276761a93e878d0b65e2039')
Parameters
Name | Description |
---|---|
node identifier | ID, name or IP address of a node to delete |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node not found |
The Views and Folders API allows you to add, remove, access, and retrieve Folder and View data in NetCrunch.
add-view
Add a static view to the Atlas. If no parent folder is provided, the view will be added at the "Custom Views" level.
POST/api/rest/2/views
cURL
curl --request POST --url "https://localhost/api/rest/2/views?api_key=fb91cd1f6479db477276761a93e878d0b65e2039" --header "Content-Type: application/json" --data "{\"name\": \"New view\"}"
Node.js
const http = require('http'),
  options = {
    method: 'POST',
    hostname: 'localhost',
    path: '/api/rest/2/views',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039'
    }
  },
  viewData = { "name": "New View" };
const req = http.request(options, (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) })
});
req.write(JSON.stringify(viewData));
req.end();
PowerShell
$url = "http://localhost/api/rest/2/views" $viewData = @{ name='New View' } $body = (ConvertTo-Json $viewData) $hdrs = @{} $hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039") $hdrs.Add("Content-Type","application/json") Invoke-RestMethod -Uri $url -Method Post -Body $body -Headers $hdrs
Python
import requests

url = 'http://localhost/api/rest/2/views'
data = { "name": "New View" }
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.post(url=url, json=data, headers=headers)
Parameters
Name | Description |
---|---|
name | Name of the new view |
parent | ID, name or path to parent map |
Status Codes
Status Code | Description |
---|---|
200 | OK |
400 | Name is missing |
400 | NetCrunch request error |
401 | Access Denied |
412 | No API Key |
Result
The result contains the id of the added (or found) view.
{ "id": 1001 }
add-folder
Add an empty folder to the Atlas. If no parent folder is provided, it will be added at the "Custom Views" level.
POST/api/rest/2/views/folder
cURL
curl --request POST --url "https://localhost/api/rest/2/views/folder?api_key=fb91cd1f6479db477276761a93e878d0b65e2039" --header "Content-Type: application/json" --data "{\"name\": \"New Folder\"}"
Node.js
const http = require('http'),
  options = {
    method: 'POST',
    hostname: 'localhost',
    path: '/api/rest/2/views/folder',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039'
    }
  },
  viewData = { "name": "New Folder" };
const req = http.request(options, (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) })
});
req.write(JSON.stringify(viewData));
req.end();
PowerShell
$url = "http://localhost/api/rest/2/views/folder" $viewData = @{ name='New Folder' } $body = (ConvertTo-Json $viewData) $hdrs = @{} $hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039") $hdrs.Add("Content-Type","application/json") Invoke-RestMethod -Uri $url -Method Post -Body $body -Headers $hdrs
Python
import requests

url = 'http://localhost/api/rest/2/views/folder'
data = { "name": "New Folder" }
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.post(url=url, json=data, headers=headers)
Parameters
Name | Description |
---|---|
name | Name of the new folder |
parent | ID, name or path to parent map |
Status Codes
Status Code | Description |
---|---|
200 | OK |
400 | Name is missing |
400 | NetCrunch request error |
401 | Access denied |
412 | No API key |
Result
{ "id": 1001 }
get-view-properties
Get properties of the view based on the filter.
List Of Additional IP Network View Properties
GET/api/rest/2/views/<view>?properties=<filter>
cURL
curl --url "http://localhost/api/rest/2/views/Custom%20Views?api_key=fb91cd1f6479db477276761a93e878d0b65e2039&properties=name,status"
Node.js
const http = require('http'); http.get('http://localhost/api/rest/2/views/Custom%20Views?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6&properties=name,status', (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) });
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/views/Custom%20Views?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6&properties=name,status"
Python
import requests
requests.get('http://localhost/api/rest/2/views/Custom%20Views?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6&properties=name,status')
Parameters
Name | Description |
---|---|
view | View ID, name or path |
filter | Comma-separated list of properties to return. All properties are returned when the filter is omitted or its value is "all" or "*" |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | View not found |
Result
{ "id": 1050, "name": "Parent", "status": "unknown", "stats": { "nodes": { "total": 0, "ok": 0, "unknown": 0, "warning": 0, "error": 0, "alerts": { "active": 0, "las24Hours": { "unacknowledged": 0, "total": 0, "critical": 0, "warning": 0 } } }, "maps": { "total": 2, "ok": 0, "unknown": 0, "warning": 2, "error": 0 } }, "path": "/Custom Views", "isDynamic": true, "isFolder": true, "readOnly": false }
get-view-property
Get a single property of the view.
GET/api/rest/2/views/<view>/<property>
List Of Additional IP Network View Properties
cURL
curl --url "http://localhost/api/rest/2/views/Custom%20Views/id?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'); http.get('http://localhost/api/rest/2/views/Custom%20Views/id?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6', (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) });
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/views/Custom%20Views/id?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6"
Python
import requests
requests.get('http://localhost/api/rest/2/views/Custom%20Views/id?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6')
Parameters
Name | Description |
---|---|
view | View ID, name or path |
property | Name of the property to get |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | View not found |
Result
{ "id": 1001 }
set-view-property
Set or change a view property.
PUT/api/rest/2/views/<view>/<property>
cURL
curl --request PUT --url "https://localhost/api/rest/2/views/1050?api_key=fb91cd1f6479db477276761a93e878d0b65e2039" --header "Content-Type: application/json" --data "{\"name\": \"New Name\"}"
Node.js
const http = require('http'), options = { method: 'PUT', hostname: 'localhost', path: '/api/rest/2/views/1050', headers: { 'Content-Type': 'application/json', "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }, viewData = { "name": "New Name" }; const req = http.request(options, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.write(JSON.stringify(viewData)); req.end();
PowerShell
$url = "http://localhost/api/rest/2/views/1050"
$viewData = @{ name='New Name' }
$body = (ConvertTo-Json $viewData)
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Put -Body $body -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/views/1050'
data = { "name": "New Name" }
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.put(url=url, json=data, headers=headers)
Parameters
Name | Description |
---|---|
view | View ID, name or path |
property | Name of view property to set |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | View not found |
Result
{ "status": "ok" }
get-list-of-nodes-in-the-view
Get the list of nodes present in the view.
GET/api/rest/2/views/<view>/nodes
cURL
curl --url "http://localhost/api/rest/2/views/Unresponding%20Nodes/nodes?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'); http.get('http://localhost/api/rest/2/views/Unresponding%20Nodes/nodes?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6', (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) });
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/views/Unresponding%20Nodes/nodes?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6"
Python
import requests
requests.get('http://localhost/api/rest/2/views/Unresponding%20Nodes/nodes?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6')
Parameters
Name | Description |
---|---|
view | View ID, name or path |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | View not found |
Result
[ {"id":1019,"name":"foo","networkAddress":"192.168.1.89","dnsName":"foo.test.com"}, {"id":1019,"name":"bar","networkAddress":"192.168.1.10","dnsName":"bar.test.com"} ]
add-node-to-the-view
Add the monitored node to a view.
POST/api/rest/2/views/<view>/nodes/<node>
cURL
curl --request POST --url "https://localhost/api/rest/2/views/New%20View/nodes/192.168.1.10?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'), querystring = require('querystring'), options = { method: 'POST', hostname: 'localhost', path: '/api/rest/2/views/' + querystring.escape('New View') + '/nodes/192.168.1.10' , headers: { "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }; const req = http.request(options, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
$url = "http://localhost/api/rest/2/views/New%20View/nodes/192.168.1.10"
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Post -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/views/New%20View/nodes/192.168.1.10'
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
response = requests.post(url=url, headers=headers)
Parameters
Name | Description |
---|---|
view | View ID, name or path |
node | ID, name or IP address of the node to add |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | View or node not found |
Result
{ "status":"ok" }
remove-node-from-the-view
Remove the node from a view.
DELETE/api/rest/2/views/<view>/nodes/<node>
cURL
curl --request DELETE --url "https://localhost/api/rest/2/views/New%20View/nodes/192.168.1.10?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'), querystring = require('querystring'), options = { method: 'DELETE', hostname: 'localhost', path: '/api/rest/2/views/' + querystring.escape('New View') + '/nodes/192.168.1.10', headers: { "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }; const req = http.request(options, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
$url = "http://localhost/api/rest/2/views/New%20View/nodes/192.168.1.10"
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Delete -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/views/New%20View/nodes/192.168.1.10'
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
response = requests.delete(url=url, headers=headers)
Parameters
Name | Description |
---|---|
view | View ID, name or path |
node | ID, name or IP address of the node to remove |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | View or node not found |
Result
{ "status":"ok" }
enable-ip-network-view-monitoring
Enable monitoring of all nodes in the view
PUT/api/rest/2/views/<view>/enable
cURL
curl --request PUT --url "https://localhost/api/rest/2/views/1050/enable?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'), options = { method: 'PUT', hostname: 'localhost', path: '/api/rest/2/views/1050/enable', headers: { 'Content-Type': 'application/json', "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }; const req = http.request(options, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
$url = "http://localhost/api/rest/2/views/1050/enable"
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Put -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/views/1050/enable'
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.put(url=url, headers=headers)
Parameters
Name | Description |
---|---|
view | View ID, name or path |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | View not found |
Result
{ "status":"ok" }
disable-ip-network-view-monitoring
Disable monitoring of all nodes in the view
PUT/api/rest/2/views/<view>/disable
cURL
curl --request PUT --url "https://localhost/api/rest/2/views/1050/disable?api_key=fb91cd1f6479db477276761a93e878d0b65e2039" --header "Content-Type: application/json" --data "{\"disabledFrom\": \"2019-01-01T13:27:34.876Z\", \"disabledUntil\": \"2019-02-01T13:27:34.876Z\"}"
Node.js
const http = require('http'), options = { method: 'PUT', hostname: 'localhost', path: '/api/rest/2/views/1050/disable', headers: { 'Content-Type': 'application/json', "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }, viewData = { "disabledFrom": "2019-01-01T13:27:34.876Z", "disabledUntil": "2019-02-01T13:27:34.876Z" }; const req = http.request(options, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.write(JSON.stringify(viewData)); req.end();
PowerShell
$url = "http://localhost/api/rest/2/views/1050/disable"
$viewData = @{ disabledFrom='2019-01-01T13:27:34.876Z'; disabledUntil='2019-02-01T13:27:34.876Z' }
$body = (ConvertTo-Json $viewData)
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Put -Body $body -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/views/1050/disable'
data = { "disabledFrom": "2019-01-01T13:27:34.876Z", "disabledUntil": "2019-02-01T13:27:34.876Z" }
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.put(url=url, json=data, headers=headers)
Parameters
Name | Description |
---|---|
view | View ID, name or path |
disabledFrom | Start of the disabled period (ISO-8601 UTC, e.g. 2018-03-08T13:27:34.876Z) |
disabledUntil | End of the disabled period (ISO-8601 UTC, e.g. 2018-03-09T13:27:34.876Z) |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | View not found |
Result
{ "status":"ok" }
delete-view
Remove view from the Atlas.
DELETE/api/rest/2/views/<view>
cURL
curl --request DELETE --url "https://localhost/api/rest/2/views/New%20View?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'), querystring = require('querystring'), options = { method: 'DELETE', hostname: 'localhost', path: '/api/rest/2/views/' + querystring.escape('New View'), headers: { "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }; const req = http.request(options, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
$url = "http://localhost/api/rest/2/views/New%20View"
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Delete -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/views/New%20View'
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
response = requests.delete(url=url, headers=headers)
Parameters
Name | Description |
---|---|
view | ID, name or path of the view to delete |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | View not found |
Result
{ "id": 1001 }
The Monitoring Packs and Map Policies API allows you to get and set various properties of Monitoring Packs and Map Policies. It can also retrieve the list of nodes that use a given monitoring pack or policy, and modify that list by adding or removing nodes.
All requests below can be applied to Monitoring Packs and Map Policies
get-monitoring-pack-properties
Get properties of the monitoring pack based on the filter.
GET/api/rest/2/policies/<policy>?properties=<filter>
List Of Additional Monitoring Pack Properties
cURL
curl --url "http://localhost/api/rest/2/policies/CPU?api_key=fb91cd1f6479db477276761a93e878d0b65e2039&properties=name,status"
Node.js
const http = require('http'); http.get('http://localhost/api/rest/2/policies/CPU?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6&properties=name,status', (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) });
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/policies/CPU?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6&properties=name,status"
Python
import requests
requests.get('http://localhost/api/rest/2/policies/CPU?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6&properties=name,status')
Parameters
Name | Description |
---|---|
policy | Policy ID, name or path |
filter | List of properties. Returns all properties if omitted, value is "all" or "*" |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Policy not found |
Result
{ "id": 1050, "name": "CPU", "status": "unknown", "stats": { "nodes": { "total": 0, "ok": 0, "unknown": 0, "warning": 0, "error": 0, "alerts": { "active": 0, "las24Hours": { "unacknowledged": 0, "total": 0, "critical": 0, "warning": 0 } } }, "maps": { "total": 2, "ok": 0, "unknown": 0, "warning": 2, "error": 0 } }, "path": "/Monitoring Packs/Operating Systems/Windows/CPU", "isDynamic": true, "isFolder": true, "readOnly": false }
get-monitoring-pack-property
Get a single property of the monitoring pack.
GET/api/rest/2/policies/<policy>/<property>
List Of Additional Monitoring Pack Properties
cURL
curl --url "http://localhost/api/rest/2/policies/CPU/id?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'); http.get('http://localhost/api/rest/2/policies/CPU/id?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6', (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) });
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/policies/CPU/id?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6"
Python
import requests
requests.get('http://localhost/api/rest/2/policies/CPU/id?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6')
Parameters
Name | Description |
---|---|
policy | Policy ID, name or path |
property | Name of the property to get |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Policy not found |
Result
{ "id": 78 }
set-monitoring-pack-property
Set a property of the monitoring pack
PUT/api/rest/2/policies/<policy>/<property>
cURL
curl --request PUT --url "https://localhost/api/rest/2/policies/1050?api_key=fb91cd1f6479db477276761a93e878d0b65e2039" --header "Content-Type: application/json" --data "{\"name\": \"New Name\"}"
Node.js
const http = require('http'), options = { method: 'PUT', hostname: 'localhost', path: '/api/rest/2/policies/1050', headers: { 'Content-Type': 'application/json', "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }, viewData = { "name": "New Name" }; const req = http.request(options, (res) => { res.setEncoding('utf8');
res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.write(JSON.stringify(viewData)); req.end();
PowerShell
$url = "http://localhost/api/rest/2/policies/1050"
$viewData = @{ name='New Name' }
$body = (ConvertTo-Json $viewData)
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Put -Body $body -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/policies/1050'
data = { "name": "New Name" }
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.put(url=url, json=data, headers=headers)
Parameters
Name | Description |
---|---|
policy | Policy ID, name or path |
property | Name of policy property to set |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Policy not found |
Result
{ "status":"ok" }
enable-monitoring-pack
Enable all alerting and data collection of the monitoring pack
PUT/api/rest/2/policies/<policy>/enable
cURL
curl --request PUT --url "https://localhost/api/rest/2/policies/1050/enable?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'), options = { method: 'PUT', hostname: 'localhost', path: '/api/rest/2/policies/1050/enable', headers: { 'Content-Type': 'application/json', "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }; const req = http.request(options, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
$url = "http://localhost/api/rest/2/policies/1050/enable"
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Put -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/policies/1050/enable'
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.put(url=url, headers=headers)
Parameters
Name | Description |
---|---|
policy | Policy ID, name or path |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Policy not found |
Result
{ "status":"ok" }
disable-monitoring-pack
Disable all alerting and data collection of the monitoring pack
PUT/api/rest/2/policies/<policy>/disable
cURL
curl --request PUT --url "https://localhost/api/rest/2/policies/1050/disable?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'), options = { method: 'PUT', hostname: 'localhost', path: '/api/rest/2/policies/1050/disable', headers: { 'Content-Type': 'application/json', "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }; const req = http.request(options, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
$url = "http://localhost/api/rest/2/policies/1050/disable"
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Put -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/policies/1050/disable'
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.put(url=url, headers=headers)
Parameters
Name | Description |
---|---|
policy | Policy ID, name or path |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Policy not found |
Result
{ "status":"ok" }
get-list-of-nodes-of-the-monitoring-pack
Get all nodes that belong to the monitoring pack.
GET/api/rest/2/policies/<policy>/nodes
cURL
curl --url "http://localhost/api/rest/2/policies/CPU/nodes?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'); http.get('http://localhost/api/rest/2/policies/CPU/nodes?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6', (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) });
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/policies/CPU/nodes?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6"
Python
import requests
requests.get('http://localhost/api/rest/2/policies/CPU/nodes?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6')
Parameters
Name | Description |
---|---|
policy | Policy ID, name or path |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Policy not found |
Result
[ {"id":1019,"name":"foo","networkAddress":"192.168.1.89","dnsName":"foo.test.com"}, {"id":1019,"name":"bar","networkAddress":"192.168.1.10","dnsName":"bar.test.com"} ]
add-node-to-the-monitoring-pack
Add node to the monitoring pack (applies to manual monitoring packs only)
POST/api/rest/2/policies/<policy>/nodes/<node>
cURL
curl --request POST --url "https://localhost/api/rest/2/policies/CPU/nodes/192.168.1.10?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'), querystring = require('querystring'), options = { method: 'POST', hostname: 'localhost', path: '/api/rest/2/policies/CPU/nodes/192.168.1.10' , headers: { "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }; const req = http.request(options, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
$url = "http://localhost/api/rest/2/policies/CPU/nodes/192.168.1.10"
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Post -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/policies/CPU/nodes/192.168.1.10'
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
response = requests.post(url=url, headers=headers)
Parameters
Name | Description |
---|---|
policy | Policy ID, name or path |
node | ID, name or IP address of the node to add |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Policy or node not found |
Result
{ "status":"ok" }
remove-node-from-the-monitoring-pack
Remove a node from the monitoring pack (applies to manual monitoring packs only)
DELETE/api/rest/2/policies/<policy>/nodes/<node>
cURL
curl --request DELETE --url "https://localhost/api/rest/2/policies/CPU/nodes/192.168.1.10?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'), querystring = require('querystring'), options = { method: 'DELETE', hostname: 'localhost', path: '/api/rest/2/policies/CPU/nodes/192.168.1.10' , headers: { "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }; const req = http.request(options, (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
$url = "http://localhost/api/rest/2/policies/CPU/nodes/192.168.1.10"
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Delete -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/policies/CPU/nodes/192.168.1.10'
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
response = requests.delete(url=url, headers=headers)
Parameters
Name | Description |
---|---|
policy | Policy ID, name or path |
node | ID, name or IP address of the node to remove |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Policy or node not found |
Result
{ "status":"ok" }
The Credentials API allows you to retrieve information about credentials defined in NetCrunch and to remove them.
get-credential-types
Get a list of credential types as an array.
GET/api/rest/2/credentials
cURL
curl --url "http://localhost/api/rest/2/credentials?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'); http.get('http://localhost/api/rest/2/credentials?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6', (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) });
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/credentials?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6"
Python
import requests
requests.get('http://localhost/api/rest/2/credentials?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6')
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
Result
["BSD","ESXi","Linux","macOS","Solaris","Windows","Database Connection","HTTP","Email incoming","IPMI","Email (SMTP)","Simple"]
get-names-of-defined-credentials
Get a list of defined credentials.
GET/api/rest/2/credentials/<type>
cURL
curl --url "http://localhost/api/rest/2/credentials/Windows?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'); http.get('http://localhost/api/rest/2/credentials/Windows?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6', (res) => { res.setEncoding('utf8'); res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) });
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/credentials/Windows?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6"
Python
import requests
requests.get('http://localhost/api/rest/2/credentials/Windows?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6')
Parameters
Name | Description |
---|---|
type | Type of credentials |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
Result
["<Default>","New Profile"]
remove-credentials
Remove stored credentials of the given type and name.
DELETE/api/rest/2/credentials/<type>/<name>
cURL
curl --request DELETE --url "http://localhost/api/rest/2/credentials/Windows/MyCredentials?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http'), options = { method: 'DELETE', hostname: 'localhost', path: '/api/rest/2/credentials/Windows/MyCredentials' , headers: { "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }; const req = http.request(options, (res) => { res.setEncoding('utf8');
res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.end();
PowerShell
$url = "http://localhost/api/rest/2/credentials/Windows/MyCredentials"
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Delete -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/credentials/Windows/MyCredentials'
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
response = requests.delete(url=url, headers=headers)
Parameters
Name | Description |
---|---|
type | Type of credentials |
name | Name of credentials to delete |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
Result
{"status":"success","message":"OK"}
The Notes API allows you to add, edit, retrieve, and archive notes.
add-note
Notes added this way can't later be changed, retrieved, or archived. For that functionality, see: Add or update note using reference id
POST /api/rest/2/notes/<node>
cURL
curl --request POST --url "http://localhost/api/rest/2/notes/1001?api_key=fb91cd1f6479db477276761a93e878d0b65e2039" --header "Content-Type: application/json" --data "{\"subject\": \"Note Subject\"}"
Node.js
const http = require('http'), options = { method: 'POST', hostname: 'localhost', path: '/api/rest/2/notes/1001', headers: { 'Content-Type': 'application/json', "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }, data = { "subject": "Note Subject", "text": "Note Text", "label": "red", "category": "Note Category" }; const req = http.request(options, (res) => { res.setEncoding('utf8');
res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.write(JSON.stringify(data)); req.end();
PowerShell
$url = "http://localhost/api/rest/2/notes/1001"
$data = @{ subject='Note Subject' }
$body = (ConvertTo-Json $data)
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Post -Body $body -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/notes/1001'
data = { "subject": "Note Subject" }
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.post(url=url, json=data, headers=headers)
Parameters
Name | Description |
---|---|
node | Node ID, DNS name or IP Address |
POST parameters
Name | Description |
---|---|
subject | Note subject |
text | Note text |
label | leave empty or use predefined labels: 'red', 'green', 'blue', 'yellow' |
due | Note due date |
category | Note category |
archived | true or false |
Parameters in JSON format
{ "subject": "Note Subject", "text": "Note Text", "label": "red", "due": "2020-01-10T18:02:36.017Z", "category": "My Category", "archived": false }
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node not found |
update-or-add-note-using-reference-id
This method adds a note to the node using the reference ID passed in the URL. The reference ID can be any string or numeric value; NetCrunch uses it to identify the note for later modification or archiving.
PUT /api/rest/2/notes/<node>/<refId>
cURL
curl --request PUT --url "http://localhost/api/rest/2/notes/1001/myNoteID?api_key=fb91cd1f6479db477276761a93e878d0b65e2039" --header "Content-Type: application/json" --data "{\"subject\": \"Note Subject\"}"
Node.js
const http = require('http'), options = { method: 'PUT', hostname: 'localhost', path: '/api/rest/2/notes/1001/myNoteID', headers: { 'Content-Type': 'application/json', "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }, data = { "subject": "Note Subject", "text": "Note Text", "label": "red", "category": "Note Category" };
const req = http.request(options, (res) => { res.setEncoding('utf8');
res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.write(JSON.stringify(data)); req.end();
PowerShell
$url = "http://localhost/api/rest/2/notes/1001/myNoteID"
$data = @{ subject='Note Subject' }
$body = (ConvertTo-Json $data)
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Put -Body $body -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/notes/1001/myNoteID'
data = { "subject": "Note Subject" }
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.put(url=url, json=data, headers=headers)
Parameters
Name | Description |
---|---|
node | Node ID, DNS name or IP Address |
refid | Note reference ID |
PUT parameters
Name | Description |
---|---|
subject | Note subject |
text | Note text |
label | leave empty or use predefined labels: 'red', 'green', 'blue', 'yellow' |
due | Note due date |
category | Note category |
archived | true or false |
Parameters in JSON format
{ "subject": "Note Subject", "text": "Note Text", "label": "red", "due": "2020-01-10T18:02:36.017Z", "category": "My Category", "archived": false }
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node not found |
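Because the note is keyed by its reference ID, issuing this PUT twice with the same refId updates the existing note rather than creating a second one. A Python sketch of that pattern — the helper name and the `deploy-42` refId are illustrative, and the actual HTTP calls are left commented out so the snippet runs without a server:

```python
from urllib.parse import quote

def note_request(node, ref_id, api_key, **fields):
    # Build the URL, headers and JSON body for PUT /api/rest/2/notes/<node>/<refId>.
    url = f'http://localhost/api/rest/2/notes/{node}/{quote(str(ref_id))}'
    headers = {'Content-Type': 'application/json', 'x-api-key': api_key}
    return url, headers, fields

# First PUT creates the note; a second PUT with the same refId updates it:
url1, headers, body1 = note_request(1001, 'deploy-42', 'KEY',
                                    subject='Deploy started', label='yellow')
url2, _, body2 = note_request(1001, 'deploy-42', 'KEY',
                              subject='Deploy finished', label='green', archived=True)
assert url1 == url2          # both requests address the same note
print(url1)
# → http://localhost/api/rest/2/notes/1001/deploy-42
# import requests
# requests.put(url1, json=body1, headers=headers)
# requests.put(url2, json=body2, headers=headers)
```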
update-or-add-note-property-using-reference-id
Set a single property of the note identified by the reference ID.
PUT /api/rest/2/notes/<node>/<refId>/<property>
cURL
curl --request PUT --url "http://localhost/api/rest/2/notes/1001/myNoteID/subject?api_key=fb91cd1f6479db477276761a93e878d0b65e2039" --header "Content-Type: application/json" --data "{\"value\": \"New Subject\"}"
Node.js
const http = require('http'), options = { method: 'PUT', hostname: 'localhost', path: '/api/rest/2/notes/1001/myNoteID/subject', headers: { 'Content-Type': 'application/json', "x-api-key": 'fb91cd1f6479db477276761a93e878d0b65e2039' } }, data = { "value": "New Subject" };
const req = http.request(options, (res) => { res.setEncoding('utf8');
res.on('error', (error) => { console.log(error) }); res.on('data', (chunk) => { console.log(chunk) }) }); req.write(JSON.stringify(data)); req.end();
PowerShell
$url = "http://localhost/api/rest/2/notes/1001/myNoteID/subject"
$data = @{ value='New Subject' }
$body = (ConvertTo-Json $data)
$hdrs = @{}
$hdrs.Add("X-API-KEY","fb91cd1f6479db477276761a93e878d0b65e2039")
$hdrs.Add("Content-Type","application/json")
Invoke-RestMethod -Uri $url -Method Put -Body $body -Headers $hdrs
Python
import requests
url = 'http://localhost/api/rest/2/notes/1001/myNoteID/subject'
data = { "value": "New Subject" }
headers = { 'Content-Type': 'application/json', 'x-api-key': 'fb91cd1f6479db477276761a93e878d0b65e2039' }
requests.put(url=url, json=data, headers=headers)
Parameters
Name | Description |
---|---|
node | Node ID, DNS name or IP Address |
refid | Note reference ID |
property | Name of property to change. Values: 'subject', 'text', 'label', 'due', 'category', 'archived' |
PUT parameters
Name | Description |
---|---|
subject | Note subject |
text | Note text |
label | leave empty or use predefined labels: 'red', 'green', 'blue', 'yellow' |
due | Note due date |
category | Note category |
archived | true or false |
Parameters in JSON format
{ "subject": "Note Subject", "text": "Note Text", "label": "red", "due": "2020-01-10T18:02:36.017Z", "category": "My Category", "archived": false }
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node not found |
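Putting the parameter tables above together, here is a hedged Node.js sketch that validates the property name and label value against the tables before building the PUT request path and body; the actual HTTP send is omitted, so this only illustrates request construction:

```javascript
// Sketch: validate inputs and build an update-note-property request.
// Allowed values come from the parameter tables in this reference.
const VALID_PROPS = ['subject', 'text', 'label', 'due', 'category', 'archived'];
const VALID_LABELS = ['', 'red', 'green', 'blue', 'yellow'];

function buildNotePropertyUpdate(node, refId, property, value) {
  if (!VALID_PROPS.includes(property)) {
    throw new Error('Unknown note property: ' + property);
  }
  if (property === 'label' && !VALID_LABELS.includes(value)) {
    throw new Error('Unknown label: ' + value);
  }
  return {
    path: `/api/rest/2/notes/${node}/${refId}/${encodeURIComponent(property)}`,
    body: JSON.stringify({ value: value })
  };
}

const req = buildNotePropertyUpdate(1001, 'myNoteID', 'subject', 'New Subject');
console.log(req.path); // /api/rest/2/notes/1001/myNoteID/subject
```

The returned `path` and `body` could then be passed to `http.request` as in the Node.js examples above.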
get-note-by-reference-id
GET /api/rest/2/notes/<node>/<refId>
cURL
curl --url "http://localhost/api/rest/2/notes/1001/myNoteID?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http');
http.get('http://localhost/api/rest/2/notes/1001/myNoteID?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6', (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) })
});
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/notes/1001/myNoteID?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6"
Python
import requests

requests.get('http://localhost/api/rest/2/notes/1001/myNoteID?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6')
Parameters
Name | Description |
---|---|
node | Node ID, DNS name or IP Address |
refid | Note reference ID |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node or note not found |
Result
{ "subject": "Note subject", "archived": false, "category": "Note Category", "due": "2020-04-17T15:53:19.773Z", "label": "red", "refId": "myNoteID", "text": "My Note Text" }
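As a hedged sketch, the returned note object can be post-processed client-side, for example to flag notes whose due date has already passed (field names as in the result above):

```javascript
// Sketch: flag overdue, non-archived notes from a get-note result.
function isOverdue(note, now = new Date()) {
  return !note.archived && note.due != null && new Date(note.due) < now;
}

const note = {
  subject: "Note subject", archived: false, category: 'Note Category',
  due: '2020-04-17T15:53:19.773Z', label: 'red', refId: 'myNoteID', text: 'My Note Text'
};
console.log(isOverdue(note, new Date('2020-05-01T00:00:00Z'))); // true
```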
get-note-property-by-reference-id
GET /api/rest/2/notes/<node>/<refId>/<property>
cURL
curl --url "http://localhost/api/rest/2/notes/1001/myNoteID/subject?api_key=fb91cd1f6479db477276761a93e878d0b65e2039"
Node.js
const http = require('http');
http.get('http://localhost/api/rest/2/notes/1001/myNoteID/subject?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6', (res) => {
  res.setEncoding('utf8');
  res.on('error', (error) => { console.log(error) });
  res.on('data', (chunk) => { console.log(chunk) })
});
PowerShell
Invoke-RestMethod -Uri "http://localhost/api/rest/2/notes/1001/myNoteID/subject?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6"
Python
import requests

requests.get('http://localhost/api/rest/2/notes/1001/myNoteID/subject?api_key=0406adfb5e41f58092a38483f61f702602d8a4b6')
Parameters
Name | Description |
---|---|
node | Node ID, DNS name or IP Address |
refid | Note reference ID |
property | Name of property to change. Values: 'subject', 'text', 'label', 'due', 'category', 'archived' |
Status Codes
Status Code | Description |
---|---|
200 | OK |
401 | Access denied |
404 | Node or note not found |
Result
{ "subject": "Note subject", "archived": false, "category": "Note Category", "due": "2020-04-17T15:53:19.773Z", "label": "red", "refId": "myNoteID", "text": "My Note Text" }
Find a reference to all NetCrunch object properties exposed through the API.
This section describes fields that can be accessed via the API. Please refer to the examples to see the results of querying individual fields.
node-property-list
The list below shows all node fields accessible via the API. Not every field can be modified; such fields are marked as Read-Only.
API Name | Data Type | Read-Only | Comments | |
---|---|---|---|---|
networkServices | string | |||
monitoringTime | number | minutes | ||
dependsOnNode | number | |||
netBiosName | string | x | ||
identification | string | values: 'default', 'ipAddress', 'dnsName', 'macAddress' | ||
deviceType | string | |||
parentNode | number | |||
children | number | |||
id | number | x | ||
name | string | |||
networkAddress | string | |||
snmpComputerName | string | x | ||
snmpSysObjId | string | x | ||
snmpOsDescription | string | x | ||
snmpLocation | string | x | ||
dnsName | string | x | ||
avgResponseTime | number | x | milliseconds (ms) | |
maxResponsetime | number | x | milliseconds (ms) | |
alerts24h | number | x | ||
lastAlert | number/string | x | result contains both number and string values | |
snmpManaged | bool | |||
snmpProfile | string | Name of SNMP profile | ||
snmpPort | number | |||
snmpTimeout | number | milliseconds (ms) | |
snmpRetryCount | number | |||
macAddress | string | x | ||
networkPrefixLength | number | e.g. 24 = 255.255.255.0 | |
connectedToSwitch | string | x | ||
enabled | bool | |||
status | string | x | values: 'unknown', 'down', 'warning', 'OK', 'disabled', 'disabled by dependency', 'disabled by time restrictions', 'disabled by user', 'waiting for response', 'disabled by network', 'disabled by unknown IP address', 'monitoring paused' | |
simplifiedMonitoring | bool | true/false | ||
disabledFrom | number | |||
disabledUntil | number | |||
addTime | number | x | ||
lastStatusChange | number | x | ||
parentNode | string | x | ||
issueCount | number | x | ||
virtualization | string | x | ||
monitoringEngines | string | See column Monitoring Engines in Nodes view | ||
children | object | It can be an array - depends on how many child nodes are provided, e.g. [1021,1022] | ||
pendingAlertsCount | number | x | ||
customFields | object | It's possible to set multiple custom fields with one object e.g. {"Info 1": "TestVal1", "Info 2": "TestVal2", "Pick": "3"} | ||
interfacesMonitoringEnabled | bool | |||
organizationalUnit | string | x | ||
snmpTrapCodePage | number | |||
sysLogCodePage | number | |||
lastNote | object | x | ||
displayName | string | |||
organization | number | |||
nodeType | string | x | values: 'IP Node', 'Business Status', 'Remote Sensor Node', 'Monitoring Probe', 'Node Monitoring Template' | |
probeType | string | x | ||
addressSpace | number | x | ||
templateNode | number | x | ||
osMonitorType | string | values: 'auto', 'none', 'windows', 'macOS', 'linux', 'bsd', 'esx', 'solaris' | ||
hypervisorKind | number | x | ||
sensors | object | |||
tags | string | values: A,B,C |
Example
{ "id": 1001, "name": "admin.ad.acme", "dnsName": "admin.ad.acme", "networkAddress": "192.168.0.25", "networkPrefixLength": 24, "snmpComputerName": "admin.ad.acme", "snmpOsDescription": "Hardware: Intel64 Family 6 Model 26 Stepping 5 AT/AT COMPATIBLE - Software: Windows Version 6.3 (Build 16299 Multiprocessor Free)", "snmpSysObjId": "1.3.6.1.4.1.311.1.1.3.1.1", "snmpLocation": "Development", "snmpAvailable": true, "snmpManaged": true, "snmpProfile": "Default (read-write)", "snmpPort": 161, "snmpTimeout": 5000, "snmpRetryCount": 3, "displayName": "My Node", "monitoringTime": "5", "networkServices": [ { "name": "SSH", "status": "Not Responding", "errorMessage": "Can't open connection" }, { "name": "CIFS/SMB", "isLeading": true, "status": "OK" } ], "dependsOnNode": { "nodeId": 1003, "name": "test.ad.acme", "networkAddress": "10.10.3.72", "dnsName": "test.ad.acme" }, "netBiosName": "ADMIN", "identification": "default", "deviceType": { "class": "Server/Workstation", "os": "Windows Server", "version": "Windows 2012 R2 Server" }, "parentNode": { "nodeId": 1003, "name": "test.ad.acme", "networkAddress": "10.10.3.72", "dnsName": "test.ad.acme" }, "simplifiedMonitoring": false, "status": "OK", "avgResponseTime": 10, "maxResponseTime": 100, "alerts24h": { "count": 5, "critical": 2, "warning": 3, "unacknowledged": 1 }, "lastAlert": { "id": 585266, "info": "Node Down", "serverity": "Critical", "time": "2018-10-09T06:15:15.000Z" }, "macAddress": "D89EF32B5ADD", "enabled": true, "disabledFrom": "2018-03-08T13:27:34.876Z", "disabledUntil": "2018-03-09T13:27:34.876Z", "addTime": "2018-02-08T13:27:34.876Z", "lastStatusChange": "2018-10-09T10:08:58.885Z", "issueCount": 2, "connectedToSwitch": { "nodeId": 1078, "interface": "Port-channel2", "port": 464, "vlanId": 999 }, "virtualization": { "type": "Hyper-V", "hostNodeId": 1023, "hostName": "virt-host.ad.acme", "dataCenter": "Master Data Center" }, "monitoringEngines": [ { "name": "win", "enabled": true, "status": "OK" }, { "name": 
"ntsvc", "enabled": true, "status": "unknown" }, { "name": "ntlog", "enabled": true, "status": "OK" } ], "children": [ 1506, 1507 ], "pendingAlertsCount": 0, "customFields": { "Info 1": "TestVal1", "Info 2": "TestVal2", "Pick": "3", "Date Field": "1990-10-04T23:00:00.000Z" }, "interfacesMonitoringEnabled": true, "organizationalUnit": "Testing", "snmpTrapPage": 4294967295, "sysLogPage": 4294967295, "lastNote": null, "templateNode": { "nodeId": 1234, "name": "template", "networkAddress": "", "dnsName": "" }, "addressSpace": "" }
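The networkPrefixLength value maps to a dotted subnet mask (24 = 255.255.255.0, as the property table notes). A quick hedged conversion sketch:

```javascript
// Sketch: convert an IPv4 prefix length (0-32) to a dotted subnet mask.
function prefixToMask(prefixLength) {
  const octets = [];
  for (let i = 0; i < 4; i++) {
    // number of mask bits that fall into this octet
    const n = Math.min(8, Math.max(0, prefixLength - i * 8));
    octets.push(256 - Math.pow(2, 8 - n));
  }
  return octets.join('.');
}

console.log(prefixToMask(24)); // 255.255.255.0
console.log(prefixToMask(26)); // 255.255.255.192
```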
view-property-list
The list below shows view fields accessible via the API.
Property | Read-Only | Comments |
---|---|---|
id | x | ID of the view |
name | Name of the View | |
status | x | values: 'unknown', 'down', 'OK', 'warning' |
stats | x | object representing view stats |
path | x | "/" separated path to view/policy |
isDynamic | x | |
isFolder | x | |
readOnly | x |
Example
{ "id": 50, "name": "CPU", "status": "unknown", "stats": { "nodes": { "total": 0, "ok": 0, "unknown": 0, "warning": 0, "error": 0, "alerts": { "active": 0, "las24Hours": { "unacknowledged": 0, "total": 0, "critical": 0, "warning": 0 } } }, "maps": { "total": 22, "ok": 0, "unknown": 16, "warning": 3, "error": 3 } }, "path": "/Monitoring Packs/Operating Systems/Windows/CPU", "isDynamic": true, "isFolder": false, "readOnly": false }
ip-network-property-list
In addition to the view property list, the table below shows additional fields available for IP network views.
Property | Read-Only | Comments |
---|---|---|
networkAddress | x | Network Address of the View |
networkPrefixLength | x | Prefix length of the view |
monitoringEnabled | Monitoring enabled true/false | |
addressSpace | x | Address Space of the view |
Example
{ "networkAddress": "10.10.4.64", "networkPrefixLength": 26, "monitoringEnabled": true, "addressSpace": "" }
monitoring-pack-property-list
In addition to the view property list, the table below shows additional fields available for Monitoring Packs (policies).
Property | Read-Only | Comments |
---|---|---|
policyType | x | values: '', 'windows', 'linux', 'macos', 'bsd', 'esx', 'solaris' |
enabled | true/false | |
snmpRequired | x | true/false |
Example
{ "policyType": "windows", "enabled": true, "snmpRequired": false }
This Python script puts a node into the 'monitoring disabled' state and sets the given message in the info1 field.
This can be utilized to avoid false-positive alarms on nodes under maintenance and inform other people that a disabled state is intentional.
python maintenance-mode.py 192.168.0.50 disable -msg Maintenance
The first step is to add an application in NetCrunch. Please visit the Getting Started topic to see how to add an application.
The script contains two lines of code that need to be updated: the NetCrunch Server address (nc_address) and the application API key (nc_api_key).
The required arguments are the node (ID, IP address, or DNS name) and the action (enable or disable).
Optionally, you can set the Info1 field message by providing the -msg argument. If the -msg argument is omitted, the Info1 field will be cleared.
maintenance-mode.py
import requests
import argparse

nc_address = "NetCrunch Address"
nc_api_key = "Application API-Key"

def send_request(url, request):
    headers = { "content-type": "application/json", 'x-api-key': nc_api_key }
    if request == "delete":
        r = requests.delete(url, headers=headers)
        print("Status Code:", r.status_code, ",", "NetCrunch response:", r.text)
        return r
    elif request == "put":
        r = requests.put(url, headers=headers)
        print("Status Code:", r.status_code, ",", "NetCrunch response:", r.text)
        return r

parser = argparse.ArgumentParser()
parser.add_argument("node", help="Node to change monitoring state (id, ip address or dns name)")
parser.add_argument("action", help="Monitoring state (enable, disable)")
parser.add_argument("-msg", nargs='?', help="Optional: Set message to Info1 field, omit to clear Info1 field")
args = parser.parse_args()

if args.action == "enable":
    action = "on"
else:
    action = "off"

url = "http://" + nc_address + "/api/rest/2/nodes/" + args.node + "/monitoring/" + action
print("Changing monitoring state:")
send_request(url, "put")

if args.msg is not None:
    url = "http://" + nc_address + "/api/rest/2/nodes/" + args.node + "/custom-fields/Info%201?api_key=" + nc_api_key + "&value=" + args.msg
    print("Adding Info1 field:")
    send_request(url, "put")
else:
    url = "http://" + nc_address + "/api/rest/2/nodes/" + args.node + "/custom-fields/Info%201?api_key=" + nc_api_key
    print("Removing data from Info1 field:")
    send_request(url, "delete")
This JavaScript file can be used to add nodes to NetCrunch from a CSV file. Additionally, this example adds extra data to two custom fields (Location and Info 2) for each node.
The script comes with two files - script file (add-nodes.js) and data (data.csv).
node add-nodes.js
Additional info
The script can be modified to include more custom fields: add additional 'else if' statements with the desired field name. Make sure that any custom field you want to set is defined in NetCrunch before executing the script.
For additional fields like Display Name, add the parameter's name to the first row of the CSV and the values to the data rows (see Node Property List for available parameters).
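To illustrate the mapping described above, here is a hedged, self-contained sketch of how one CSV data row becomes a request payload when an extra custom field column (a hypothetical 'Info 3', not part of the original example) is handled by an additional 'else if' branch:

```javascript
// Sketch: map one CSV data row to a node payload.
// "Info 3" is a hypothetical extra custom field added via an extra branch.
function rowToPayload(header, line) {
  const properties = line.split(',');
  const data = { customFields: {} };
  header.forEach((name, ix) => {
    if ((properties[ix] || "") !== "") {
      if (name === "Location") {
        data.customFields.Location = properties[ix];
      } else if (name === "Info 2") {
        data.customFields["Info 2"] = properties[ix];
      } else if (name === "Info 3") {          // the added branch
        data.customFields["Info 3"] = properties[ix];
      } else {
        data[name] = properties[ix];           // regular node property
      }
    }
  });
  return data;
}

const header = ['name', 'networkAddress', 'Location', 'Info 3'];
const payload = rowToPayload(header, 'vm-test-0066.ac.acme,10.20.16.66,Server-Rack-1,Lab');
console.log(JSON.stringify(payload));
```

The resulting object would be sent as the POST body, exactly as sendRequest does in the full script below.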
add-nodes.js
const http = require('http'),
  fs = require('fs'),
  readline = require('readline'),
  nodesFile = (__dirname + '/data.csv'),
  options = {
    method: 'POST',
    hostname: 'IP or Name of NC server',
    path: '/api/rest/2/nodes',
    headers: { 'Content-Type': 'application/json', 'x-api-key': 'api-key' }
  };

function sendRequest(payload) {
  const req = http.request(options, res => {
    res.setEncoding('utf8');
    if (res.statusCode != 200) {
      console.log("Error: Status code: " + res.statusCode + " Message: " + res.statusMessage);
    } else {
      res.on('data', chunk => {
        let response = JSON.parse(chunk);
        if (response.info === "Already monitored") {
          console.log("Warning: ID: " + response.id + " Node is already monitored");
        } else console.log("OK: ID:" + response.id + " Node is added");
      });
    }
  });
  req.write(JSON.stringify(payload));
  req.end();
}

function addNodesFromFile(fileName) {
  console.log("Reading file and adding nodes...");
  const rl = readline.createInterface({ input: fs.createReadStream(fileName), crlfDelay: Infinity });
  let header;
  rl.on('line', line => {
    if (header == null) {
      header = line.split(',');
    } else {
      const properties = line.split(','),
        data = { customFields: {} };
      header.forEach((name, ix) => {
        if ((properties[ix] || "") !== "") {
          if (name === "Location") {
            data.customFields.Location = properties[ix];
          } else if (name === "Info 2") {
            data.customFields["Info 2"] = properties[ix];
          } else {
            data[name] = properties[ix];
          }
        }
      });
      sendRequest(data);
    }
  });
}

addNodesFromFile(nodesFile);
data.csv
name,networkAddress,osMonitorType,Location,Info 2
vm-test-0066.ac.acme,10.20.16.66,Windows,Server-Rack-1,VM-For-Testing
vm-test-0084.ac.acme,10.20.16.84,Windows,Server-Rack-2,VM-For-Testing
cam-0015.ac.acme,10.20.16.15,none,Server-Room,Camera
cam-0057.ac.acme,10.20.16.57,none,Office,Camera
This is a slightly modified version of the 'add nodes from CSV' script that allows you to add multiple nodes monitored by a remote probe.
The script comes with two files - script file (add-nodes.js) and data (data.csv).
node add-nodes.js
add-nodes.js
const http = require('http'),
  fs = require('fs'),
  readline = require('readline'),
  nodesFile = (__dirname + '/data.csv'),
  options = {
    method: 'POST',
    hostname: 'IP Address or DNS name of NC Server',
    path: '/api/rest/2/nodes',
    headers: { 'Content-Type': 'application/json', 'x-api-key': 'apikey' }
  };

function sendRequest(payload) {
  const req = http.request(options, res => {
    res.setEncoding('utf8');
    if (res.statusCode != 200) {
      console.log("Error: Status code: " + res.statusCode + " Message: " + res.statusMessage);
    } else {
      res.on('data', chunk => {
        let response = JSON.parse(chunk);
        if (response.info === "Already monitored") {
          console.log("Warning: ID: " + response.id + " Node is already monitored");
        } else console.log("OK: ID:" + response.id + " Node is added");
      });
    }
  });
  req.write(JSON.stringify(payload));
  req.end();
}

function addNodesFromFile(fileName) {
  console.log("Reading file and adding nodes...");
  const rl = readline.createInterface({ input: fs.createReadStream(fileName), crlfDelay: Infinity });
  let header;
  rl.on('line', line => {
    if (header == null) {
      header = line.split(',');
    } else {
      const properties = line.split(','),
        data = { name: null, monitoringProvider: null };
      header.forEach((name, ix) => {
        if ((properties[ix] || "") !== "") {
          if (name === "monitoringProvider") {
            data.monitoringProvider = properties[ix];
          } else {
            data.name = properties[ix];
          }
        }
      });
      sendRequest(data);
    }
  });
}

addNodesFromFile(nodesFile);
data.csv
name,monitoringProvider
vm-test-0066.ac.acme,remoteProbe1
vm-test-0084.ac.acme,remoteProbe1
cam-0015.ac.acme,remoteProbe1
cam-0057.ac.acme,remoteProbe1
The list below contains all requests currently available in the REST API, divided into scopes.
Get List of Monitoring Packs of The Node
Set Network Services Parameters
Set Monitoring Engine Parameters
Enable IP Network View Monitoring
Disable IP Network View Monitoring
Get Monitoring Pack Properties
Get List of Nodes of the Monitoring Pack
Add Node to the Monitoring Pack
Remove Node From The Monitoring Pack
Get Names of Defined credentials
Update or add note using reference ID
The reference lists of NetCrunch resources, operations and definitions.
NetCrunch allows many alerting actions such as notifications, remote execution, creating tickets in help desk systems, sending messages to other systems, and NetCrunch configuration actions.
Actions are executed in response to an alert, and they are organized into sequences by Alert Action Lists.
Actions are grouped to make finding the desired action easier. Groups refer to their purpose and are gathered into tabs in the Add Action window.
These actions can extend alert information; for example, if the next action sends an email notification, the email will also contain the diagnostic action results.
To use the SMS notification via GSM phone, you need to connect and set up the GSM phone or modem to the NetCrunch Server.
NetCrunch can execute various control actions remotely. Event descriptions can be passed to action when needed in XML format. By default, actions can be executed on a node causing an alert or on any other node from the Atlas.
This section allows integration with the external service desk, productivity, and notification systems:
Actions execute on the NetCrunch Server machine. You can specify the desired message format for the action.
Please note that while selecting the Write to File and Write to Unique File actions on a remote Administration Console, the Filename or Directory fields require providing the path manually. The Select Directory or Open File icons are grayed out.
NetCrunch Monitoring Packs allow efficient management of monitoring settings. You can use them to create monitoring policies by setting node filters. They can also be assigned manually to the node (or multiple nodes using multiselection). Currently, the program includes more than 223 ready-to-use Monitoring Packs for monitoring devices, applications, and operating systems.
The monitoring pack can monitor all nodes, a single Atlas view (a group of nodes), or a single node.
A Monitoring Pack consists of two types of elements: alerts (alerting rules) and data collectors (used for reports).
Each alert is defined by the condition describing the event, and the action is taken when the alert occurs. By default, all alerts are written to the NetCrunch event log. Data collectors define performance metrics that should be collected to present data on dashboards and reports. Each data collector entry can have a schedule to generate reports automatically. If a schedule is not set, data are collected, and reports are available on demand.
NetCrunch includes many predefined Monitoring Packs with associated events and reports.
A monitoring pack created by the user (or duplicated from a predefined monitoring pack) can be freely converted between static and dynamic mode. Use the option in the top right corner of the Monitoring Pack settings to convert it.
There are several Monitoring Packs that you can use for monitoring different aspects of your Windows environment.
Monitor the most important Linux performance indicators such as processor and memory utilization, free disk space, and available swap, and create a Linux Server Report.
Monitor the most important macOS performance indicators, such as processor and memory utilization and free disk space, and then create a macOS Report.
Monitor the most important BSD performance indicators, such as processor and memory utilization, and free disk space, and create a BSD Report.
Monitor the most important Solaris performance indicators, such as processor and memory utilization and free disk space, and create a Solaris Report.
Settings for all nodes in the Atlas.
Alert is a condition being watched so that an action can react to potential danger or draw attention.
Site is a group of private networks usually behind NAT. When two locations use the same private network address, they create two distinct address spaces.
Atlas Node View shows various aspects of the group of nodes in the Network Atlas and consists of multiple pages such as nodes, maps, dashboards, and others.
Cloud Service represents a single cloud service status and metrics.
Composite Status is an atlas node representing an aggregated state of the group of other status objects. The status depends on group type, which can be critical, redundant, or influential.
Event is a thing that happens or takes place, especially one of importance.
Event Suppression is the technique of preventing false alarms caused by network intermediate connection failure.
Leading Service is a network service designed to be checked as the only service when the Node is DOWN.
Network Atlas is a central database containing all your network data. It's organized by the hierarchy of the Atlas Node Views.
Node is a single address network endpoint (interface).
Node Monitoring Template is a settings node. Its sole purpose is to provide settings to other nodes. When parameters change, they propagate automatically to associated nodes.
Monitoring Provider is a software component responsible for monitoring a part of the Atlas. NetCrunch Server and Monitoring Probe are examples of monitoring providers.
Monitoring Dependencies reflect network connections and allow for preventing false alarms and disabling monitoring of unreachable network components.
Monitoring Engine is a software component responsible for the specific type of monitoring.
Monitoring Issue is a problem related to the monitoring process, like missing credentials or improper response from the device.
Monitoring Pack is a group of performance parameters and events monitored and collected for the reports.
Monitoring Sensor is a software module focused on monitoring a single object, service, or device (web page, file, folder, query, etc.).
Remote Probe is the former name of the Monitoring Probe.
Monitoring Probe is a monitoring agent software installed on a separate machine to increase the monitoring capabilities of the server or monitor a remote location within isolated networks otherwise not accessible by the primary monitoring system. It connects to the parent system.
REST Receiver is a passive monitored node that only waits for data to be sent to it. It doesn't need to know the sender's IP address, so data can come from nodes directly connected to the Internet, such as mobile devices, kiosks, cameras, or hardware sensors.
Threshold is the limit or a boundary point that must be exceeded or dropped below to trigger some response action.
Performance Trigger generates an event upon the condition set on the performance counter value.
Response Processing Time is the estimate of the service's time spent generating the response. It's calculated by subtracting an average PING RTT from the service response time.
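The calculation described above can be sketched as follows (a hedged illustration of the subtraction, not the exact internal formula):

```javascript
// Sketch: response processing time = service response time - average PING RTT.
function responseProcessingTime(serviceResponseMs, avgPingRttMs) {
  return serviceResponseMs - avgPingRttMs;
}

// e.g. 120 ms service response with a 20 ms average RTT
console.log(responseProcessingTime(120, 20)); // 100
```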
Checks a Windows Domain Controller (DC) for replication errors
Monitors Apache Server connections and internal performance metrics
Sends basic HTTP requests (GET, HEAD, or POST) and validates response code or content.
This sensor monitors system temperatures, voltages, and fan rpm using IPMI.
This sensor monitors the connectivity to the SIP server and can test the session initiation.
This advanced sensor can trigger alerts by filtering emails and parsing email's subject and body for metrics and statuses.
Loads data from a file using one of the predefined file formats or by using Data Parsers. It also alerts on retrieved metrics and status values.
This sensor allows receiving data from an external source - a remote agent using REST API.
Monitors the overall health status, power consumption, and the filesystem, LUN, and pool parameters of a Dell EMC storage device using the Unisphere® Management REST API.
This sensor is preconfigured to monitor a Dell iDRAC device over IPMI.
Tracks changes to device configurations and stores multiple backups of device configurations using the telnet or SSH protocol.
The sensor monitors the availability of DICOM-capable devices by sending C-ECHO requests.
Sends a query to the DNS to check the name or address and validates the response. It allows checking any DNS record response.
Monitors CPU, Memory, Interface utilization, and status of a Docker container, using REST.
Tests email server by sending an email to the mailbox and reading it back. It can measure the time to send and receive the message, availability, and authentication.
Checks remote file presence, size, and modification.
Monitors Windows file share state.
Checks folder content, authentication settings, and other conditions.
This sensor monitors system temperatures, voltages, fan rpm, current, altitude, etc. using IPMI.
This sensor is preconfigured to monitor an HP iLO device over IPMI.
Allows monitoring of the performance of HPE 3PAR StoreServ storage using Web Services API.
This sensor is preconfigured to monitor an IBM IMM device over IPMI.
The sensor sends a series of ICMP packets and calculates Jitter based on responses.
This sensor monitors free inodes on Unix/Linux-based systems.
The sensor gets a snapshot image from the camera and checks the connectivity.
This sensor monitors the IPMI System Event Log (SEL).
Checks the user authentication process via LDAP which is used by directory services such as MS Active Directory.
The sensor monitors the email mailbox. It checks the authentication process and mailbox usage. It can track counters such as 'Number of Messages', 'Size of Largest Message', and 'Size of Messages'.
Allows monitoring of a specific SQL server instance without duplicating the monitoring pack. You can add a sensor for each SQL server instance with a single click.
Allows monitoring of NetApp storage components' health and performance using ONTAP REST API.
Allows monitoring of NetApp storage components' health and performance using SANtricity REST API.
Monitors IPsec site-to-site VPN tunnels configured on a Palo Alto device. By default, the sensor alarms when the state of any tunnel becomes non-active. It also allows for the monitoring of specific tunnels.
This sensor checks a Windows machine for pending reboots using WMI.
The sensor monitors printer status, alarms, and properties using SNMP. The device must support the Printer MIB.
Monitors process instances.
Monitors the summary values of multiple processes (process group).
The sensor checks the user authentication process and validates the response from the RADIUS server.
This sensor uses WMI to monitor Windows Registry objects specified by path. It alerts you when a subkey or value list is changed.
This sensor uses WMI to get and monitor numeric values from the Windows Registry. It can utilize thresholds in NetCrunch to monitor various scenarios.
Checks connectivity from a remote system by running Ping remotely.
The sensor executes the script on the remote system and processes the output through the selected Data Parser to get alerts, metrics, or status objects.
Sends HTTP request and validates response code or response content.
The sensor executes the script locally on the NetCrunch Server machine and processes the output through the selected Data Parser to get alerts, metrics, or status objects.
Allows executing a query returning multiple rows. Columns can be used as a source for metrics. These metrics enable the triggering of various threshold alerts.
Allows executing a query returning a single row. It can also be used with an empty query for checking database authentication and connectivity. The row can represent an object, and columns represent object properties. The sensor allows setting an alert on object properties status.
Checks connectivity from a remote system by running Ping remotely.
Checks the SSL certificate expiration date and the certificate properties.
This sensor has a customizable status.
Checks the device's uptime.
Checks the server connectivity and tries to authenticate a specified user. The sensor requires the TACACS server shared secret (used for data encryption) and a user with a password to authenticate.
Checks file content, authentication parameters, file size, change time, file presence, and more.
Monitors text log entries. For FTP or HTTP, the whole file is downloaded. Otherwise, it will be analyzed remotely and incrementally.
Checks the time difference between a remote machine and the reference machine (NetCrunch Server or NTP server).
The sensor traces the route to the node's IP address and measures the number of hops.
This sensor allows monitoring of the state of all Veeam jobs and the capacity of backup repositories.
The sensor will automatically switch detected ESXi machines to vCenter mode. Features such as active alarms and configuration issues are available in vCenter mode only.
The sensor allows selecting the WBEM class and instance key to retrieve object properties. It allows setting alerts on performance counters only.
The sensor monitors a specific WBEM object without writing a WQL query. In addition to the WBEM Data sensor, it requires you to set an instance value. The sensor also allows setting alerts on performance counters and treats each object's numeric property as a counter. Additionally, it allows for tracking status properties.
It allows retrieving data of specific CIM Class instances by writing a WQL query. The query must point to only a single instance.
Checks web page contents and the page loading process. The sensor loads the page just like a regular browser, with all resources (images and scripts). It alerts on page content, authentication errors, or resource load errors.
Monitors the internal battery or UPS connected to the computer system. Alerts when AC power is lost, the battery level is low or discharged, and on battery hardware error.
Provides a Perfmon interface through WMI. Allows selecting objects and counters as in regular Perfmon; no cryptic WMI classes needed.
Monitors the status of Windows Task Scheduler tasks. It allows triggering an alert if task configuration has changed or if a task has not run as scheduled.
The sensor monitors the status of Windows updates on a computer and triggers alerts when updates have not been installed.
Allows selecting WMI class and instance key to retrieve object properties (no query).
Monitors HDD Health with S.M.A.R.T technology.
Monitors specific WMI objects without the need to write a WQL query.
It allows querying WMI for an object and triggering alerts on the object state.
This policy explains how NetCrunch interacts with monitored systems and data within the end user’s environment, as well as optional internet-dependent features.
Last Updated: April 11, 2025
This privacy policy applies to NetCrunch, an on-premises or self-hosted network monitoring platform deployed and managed by end users (businesses or organizations) to monitor their infrastructure.
AdRem Software does not access, store, or process data monitored by NetCrunch.
NetCrunch accesses data solely for operational monitoring purposes, as configured by the end user (Art. 6 Sec. 1 Letter b GDPR).
No personal user data or sensitive content from the monitored environment is stored by NetCrunch or AdRem Software. The only exception is voluntarily transferred data files for technical support or program diagnostics, which constitutes consent for processing such data (Art. 6 Sec. 1 Letter a and Letter b GDPR).
The following services are disabled by default, except for bug reports:
NetCrunch employs several measures to safeguard data:
NetCrunch supports compliance with enterprise security standards through:
Monitored Data: Retained temporarily within the end user’s deployment; retention periods are user-configurable.
Bug Reports at AdRem Software: Retained temporarily for diagnostics and software improvement.
Acts as the data controller, determining what is monitored and managing retention/deletion policies.
Provides the NetCrunch software and optional services (NCC, AI alerts). Acts as a data processor only when optional services are enabled and uses bug reports solely to improve functionality.
AdRem Software may update this policy to reflect changes in NetCrunch functionality. End users will be notified via release notes or direct communication.
For questions about NetCrunch’s data handling, contact AdRem Software at sales@adremsoft.com.
Tools > IP Tools
NetCrunch IP Tools is an application containing a set of network diagnostic, subnet, and scanner tools.
Each tool can be run from the perspective of the machine where the console is installed or of the NetCrunch Server machine.
This tool tests the reachability of a host and measures the round-trip time between origin and destination.
This tool displays the route (path) and measures transit delays of packets across the IP Network.
This tool sends a packet that allows remotely turning on computers in the network.
Make sure that the computer to be turned on via this tool has Wake on LAN enabled in the BIOS.
The only requirement is the MAC address of the remote computer.
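For illustration, a Wake-on-LAN magic packet is simply 6 bytes of 0xFF followed by the target MAC address repeated 16 times, sent as a UDP broadcast. This is a generic sketch of the protocol, not NetCrunch's implementation; the MAC address below is an arbitrary example:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF followed by the
    target MAC address repeated 16 times (102 bytes in total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # Sent as a UDP broadcast; the target NIC must have WoL enabled in the BIOS.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

packet = magic_packet("00:11:22:33:44:55")
print(len(packet))  # 102 bytes: 6 + 16 * 6
```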
This tool queries the selected DNS server to get full information about the given domain or address.
This field specifies the DNS server that will be used in the test.
This tool retrieves WHOIS information about the given domain. The program automatically finds the WHOIS server and follows referrals.
This tool will scan the range of IP addresses, and it will perform a reverse DNS lookup for each address. When the tool receives a response from the address, it will perform a forward DNS lookup to ensure that the address matches the name.
This tool will scan a given range of addresses and resolve MAC addresses for each scanned IP Address. Additionally, the name of the network card vendor will be displayed for known MAC prefixes.
This tool takes an IP address and netmask as parameters to calculate subnets, their masks, sizes, ranges, and broadcast addresses.
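The same calculation can be reproduced with Python's standard `ipaddress` module; the network and target prefix below are arbitrary examples, not defaults of the tool:

```python
import ipaddress

# Split 192.168.0.0/24 into four /26 subnets, as a subnet calculator would,
# and derive each subnet's mask, usable host count, and broadcast address.
network = ipaddress.ip_network("192.168.0.0/24")
subnets = list(network.subnets(new_prefix=26))
for subnet in subnets:
    print(subnet,                    # e.g. 192.168.0.0/26
          subnet.netmask,            # 255.255.255.192
          subnet.num_addresses - 2,  # 62 usable hosts
          subnet.broadcast_address)  # e.g. 192.168.0.63
```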
This tool will scan a range of IP addresses periodically, so it's possible to see which IP addresses are in use. Additionally, it will perform a reverse DNS lookup for each IP address.
This tool can discover 70 well-known TCP and UDP network services running on a given range of machines.
This tool scans a given range of IP addresses to discover open TCP ports. Unlike the Network Service Scanner, it only checks whether a port is open.
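An open-port check of this kind can be sketched with a plain TCP connect attempt. This is a generic illustration, not the tool's implementation; it reports only open/closed and does not probe the service behind the port:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an errno otherwise.
        return s.connect_ex((host, port)) == 0

def scan_ports(host: str, ports: list) -> list:
    """Return the subset of ports that accept TCP connections."""
    return [p for p in ports if tcp_port_open(host, p)]
```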
This tool retrieves basic system information via SNMP from a range of IP addresses.
Settings > Monitoring > SNMP > SNMP MIB Data & Compiler
NetCrunch delivers a pre-compiled MIB library containing over 8700 MIBs, including Cisco, Nortel, 3Com, Alcatel, and others.
You can add new MIBs to the NetCrunch MIB database using the SNMP MIB Compiler program. The compiler is part of the server and should be run from the Administration Console.
The program allows you to do the following:
To find the newly compiled MIB, use the search window (CTRL+F) and type in the desired variable.
Settings > Resources > Device Type Manager
The NetCrunch Device Types List Editor displays the list of all currently defined network device types recognized by NetCrunch.
Each device definition contains the following information:
Read how to configure emails and text messages (SMS) with NetCrunch notifications.
The sensor enables tracking changes to device configurations and stores multiple backups of device configurations using the telnet or ssh protocol.
This topic describes how to add nodes to the NetCrunch monitoring setup. It covers accessing the Add Node popup and adding nodes via the Networking and Logical sections. Networking options include adding IP devices, scanning IPv4 subnets, searching AD containers, and importing from a file. Logical options include adding cloud services, composite status nodes, REST receivers, and monitoring probes.
NetCrunch allows you to efficiently add nodes to your monitoring setup through a straightforward interface. Here’s how to add a node to your monitoring:
Locate the Add Button: You will find a blue plus button at the top of the NetCrunch application window. Click on this button to open a pop-up window.
Popup Overview: The pop-up window contains several icons, each representing elements you can add to your monitoring setup. To add nodes, focus on the first sections, labeled "Device" and "Monitoring".
The Device section provides four options for adding nodes:
Monitoring Template
By following these steps, you can efficiently add various types of nodes to your NetCrunch monitoring setup, ensuring comprehensive network and service oversight.
This topic provides instructions for adding monitoring targets to NetCrunch, including nodes, sensors, and data collectors. It covers configuring network services, operating systems, SNMP, virtualization platforms, and setting up alerting rules and dependencies. Additionally, it guides users on adding new nodes and using monitoring templates for efficient management.
In NetCrunch, various elements that can be monitored are called monitoring targets. Monitoring involves setting up alerts or data collectors regarding these targets, which may require adding sensors or enabling monitors such as SNMP or operating system monitors.
Data collectors in NetCrunch are sets of performance counters that gather specific data from monitored targets. These collected metrics can be utilized in reports or for troubleshooting purposes. Data collectors are often added automatically when a specific report is enabled.
Open the Node Settings window. You can find the node using the search button at the top right of the console window. Then go to the Monitoring tab.
The Monitoring tab contains sections with small tiles. Each tile represents a monitoring element.
Dependencies ensure accurate and efficient monitoring by specifying relationships between nodes, avoiding false alerts, and understanding the impact of a node's failure.
Determines the criteria for when a node is considered DOWN or Critical:
- Customize based on specific monitored elements.
- The default policy marks a node as DOWN if no network service is responding.
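The default policy can be paraphrased as: DOWN only when no service responds at all. The sketch below is a hypothetical illustration of that logic; the intermediate WARNING state and the service names are assumptions, not NetCrunch's actual status model:

```python
# Hypothetical status policy: DOWN only when no monitored network
# service responds; WARNING when some (but not all) respond.
def node_status(service_states: dict) -> str:
    if not any(service_states.values()):
        return "DOWN"
    return "OK" if all(service_states.values()) else "WARNING"

print(node_status({"PING": True, "HTTP": False}))   # WARNING
print(node_status({"PING": False, "HTTP": False}))  # DOWN
```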
These are tiles in the section called Windows.
To configure the section, click on a specific tile or use the tile dropdown indicated by the vertical ellipsis button on the right side of the sensor tile.
Always present, but can be grayed out as not configured:
You need to add the following sensor using + on the right side of the section header:
To add a new node, click the plus button at the top right corner of the NetCrunch console. The interface is divided into three sections: Networking, Logical, and Monitoring.
NetCrunch supports monitoring network services, including PING, SNMP, SSH, HTTP, HTTPS, FTP, SMTP, DNS, and many more.
Following these guidelines, you can efficiently add and configure monitoring targets in NetCrunch, ensuring comprehensive network monitoring and alerting.
NetCrunch's Network Atlas offers four main view types: Graphical Data Views, Node Group Views, IP Network Views, and Folders. These views provide intuitive network visualization, dynamic node categorization, and automatic IP grouping. Adding new views is easy with a wizard or keyboard shortcut, enhancing efficient network monitoring and management.
NetCrunch offers a comprehensive Network Atlas, a complete database of monitored elements. The Network Atlas consists of various types of views organized into several sections, providing a detailed overview of the monitored elements. These views can be categorized into four main groups:
Graphical Data Views are designed to present live diagrams, free-form dashboards, image-based maps, and geographical maps. These views can include region contour maps with the ability to geo-locate elements. They visually represent the network's current state, offering an intuitive and immediate understanding of network performance and issues.
Node Group Views help organize nodes into categories based on location, criteria, or other classification. These views can be manually created or dynamically updated based on filtering expressions.
IP Network Views automatically group nodes by the network they belong to. This includes a separate automatic view for public nodes, providing a structured overview of the network's IP layout.
Folders are primarily used for management purposes. They help organize various views and extend the capabilities of filtered dynamic views by creating multiple sub-views. For example, a folder dedicated to servers can contain views categorized by the operating system.
To add a new view in NetCrunch, follow these steps:
Following these steps, you can efficiently add and manage various views in your Network Atlas, ensuring a well-organized and comprehensive monitoring setup.
Effective notification management is essential for maintaining a well-functioning network monitoring system. By implementing the strategies and best practices outlined in this documentation, you can optimize your notification processes, ensure timely responses to alerts, and continuously refine your approach to meet evolving needs.
Notification management is a critical component of effective network monitoring and incident response. Properly configured notifications ensure that the right people are informed at the right time, which helps swiftly address issues and minimize downtime. This documentation provides an in-depth look at effective notification management strategies within NetCrunch, common pitfalls to avoid, and practical steps to optimize your notification settings.
Notifications serve as alerts to inform users about important events and incidents within the network. Effective notification management ensures timely and relevant information delivery, enhancing team efficiency and response times. Poor management, however, can lead to missed alerts, overwhelming noise, and ultimately reduced operational effectiveness.
Notification groups streamline alerting multiple users based on their roles and responsibilities.
Notification groups are managed in the Notification Groups tab.
Each user has a notification profile and a list of definitions allowing customized notification settings. These definitions include:
The administrator manages user notification profiles in the User & Access Rights window. Each user can also manage their profile in the console. The Notification Profile section within the user profile window allows for adding, editing, or removing specific profiles.
Access User Profile: open the User & Access Rights window.
Notification Profile Section: go to the Notification Profile section.
Add a Profile:
Edit a Profile:
Remove a Profile:
Administrators and users can effectively manage notification profiles and ensure that notifications are received in the preferred format, at the right time, and through the appropriate channels.
NetCrunch manages alerting actions through action scripts, defined as an escalation list of actions to be executed after a specified time from when an alert occurs. These actions can include notifications and automation tasks such as executing remote programs, running scripts, or integrating with external systems. Actions are categorized into:
Integration actions include:
Users can also add actions to be executed at the end of the alert after the alert has been closed.
In addition to setting the time after which an action is executed, NetCrunch allows setting filters so actions are executed only if all conditions are met. The filter criteria include:
Scripts can be defined hierarchically, allowing them to inherit actions and extend the base script. Additionally, scripts can enable the repetition of the last action at a given interval.
A simple example script might include the following steps:
Immediately
After 15 minutes
On Closure
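The escalation steps above can be sketched as a list of (delay, action) pairs plus closure actions. This is a hypothetical model, not NetCrunch's actual action engine; all names and actions below are illustrative:

```python
from dataclasses import dataclass, field

# Hypothetical escalation list: each action fires once its delay from
# the moment the alert opened has elapsed; closure actions fire when
# the alert is closed.
@dataclass
class EscalationScript:
    actions: list = field(default_factory=list)   # (delay_seconds, action)
    on_close: list = field(default_factory=list)

    def run_pending(self, elapsed: float, done: set) -> None:
        for i, (delay, action) in enumerate(self.actions):
            if i not in done and elapsed >= delay:
                action()
                done.add(i)

    def close(self) -> None:
        for action in self.on_close:
            action()

log = []
script = EscalationScript(
    actions=[(0, lambda: log.append("notify admin")),       # Immediately
             (900, lambda: log.append("restart service"))], # After 15 minutes
    on_close=[lambda: log.append("send closure notice")],   # On Closure
)
done = set()
script.run_pending(elapsed=10, done=done)    # only the immediate action fires
script.run_pending(elapsed=1000, done=done)  # the 15-minute escalation fires
script.close()
print(log)  # ['notify admin', 'restart service', 'send closure notice']
```

Repetition of the last action at an interval, as mentioned above, would amount to re-running the final entry on a timer until closure.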
By utilizing these advanced notification strategies, NetCrunch ensures that critical alerts are addressed promptly and appropriately while providing flexibility for complex notification and automation requirements.