Run a single command across the entire fleet to diagnose incidents more quickly.
Shoreline arms your production ops team with real-time debug data and guides them with pre-approved repair actions.
Here's an example of a Shoreline debugging command in action:
This command has three parts: a resource query, a metric query, and a Linux command. The resource query, Pod | app="cc_processor", finds tagged pods running the credit card processing app. The metric query filters on CPU usage, narrowing the set down to pods running hot. Finally, Shoreline executes a "last deployment" script on each pod meeting the first two criteria to learn whether the last deployment happened less than 24 hours ago, which could indicate a bad build. This is how you run a precise action across your entire fleet in seconds.
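Pieced together, the full pipeline described above might look something like this. This is an illustrative sketch, not verbatim Shoreline syntax: the CPU threshold and the script name are placeholders, since the original example only names the resource query.

```
pod | app="cc_processor" | cpu_usage > 90 | `last_deployment.sh`
```

Each pipe stage narrows the target set: first by resource tag, then by live metric, and finally a Linux command runs only on the survivors.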
Precisely target resources such as hosts, pods, and containers.
Run expressive one-liners that select and filter resources based on the metadata you have about them, using both metrics and Linux commands. This makes it easy to surface "needle in a haystack" issues quickly.
Run commands simultaneously across the entire fleet.
Parallel execution eliminates the need to SSH into individual boxes one after another. By executing commands across the fleet simultaneously, SRE and DevOps teams can identify potential problems in seconds.
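The difference between serial SSH and fleet-wide fan-out can be sketched in a few lines. This is not Shoreline's implementation, just a conceptual illustration: the host names are made up, and a local echo stands in for "run this command on that host" so the sketch is runnable anywhere.

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

# Hypothetical fleet; in practice these would be real hostnames.
hosts = ["host-%02d" % i for i in range(1, 6)]

def run_check(host):
    # Placeholder for running a diagnostic over SSH on one host.
    # Here we just echo the host name locally.
    result = subprocess.run(["echo", host], capture_output=True, text=True)
    return host, result.stdout.strip()

# Fan the command out to every host at once instead of looping one by one;
# total wall-clock time is roughly one command, not one per host.
with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
    results = dict(pool.map(run_check, hosts))

print(results)
```

With a serial `for host in hosts: run_check(host)` loop, wall-clock time grows linearly with fleet size; the fan-out version stays roughly constant, which is the property the paragraph above describes.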
Gain faster insights with real-time data.
Shoreline gives on-call teams a wide variety of per-second metrics, easily accessible through a GUI, Notebook, or CLI. This real-time data delivers faster understanding and, ultimately, higher reliability.