My experience and walkthroughs with the GCP Skills Boost Challenge Labs.
See Challenge Lab
We’re told that we have a Cloud Function that is responsible for subscriber video file upload and subsequent transcoding. We need to make use of Google Cloud Operations to provide insight around use of this Function.
In this lab, we’re going to:

- Enable Cloud Monitoring in the project.
- Complete the startup script on a Compute Engine instance, so that a custom metric is written.
- Create a logs-based metric from Cloud Function log entries.
- Add the metric to a dashboard, and create an alert on it.
We’re given some tips. These definitely make life easier! We’re told that:

- The instance’s `startup_script` is what we need to complete.
- The custom metric name is `custom.googleapis.com/opencensus/my.videoservice.org/measure/input_queue_size`, associated with the `gce_instance` resource type.

This is a fairly quick lab, so I didn’t bother doing it with the Cloud CLI. All the steps described below are done using the Google Cloud Console.
A basic Cloud Monitoring dashboard, called `Media_Dashboard`, has already been made available to us, but we have to enable Cloud Monitoring in our project. This is super-easy to do! Just navigate to Monitoring from the Console.
We’re told we have a monitoring service which creates a custom metric called `opencensus/my.videoservice.org/measure/input_queue_size`. This service runs as a Go application on a Compute Engine instance called `video-queue-monitor`. The Go application uses OpenCensus to write custom metrics to GCO. The instance has already been deployed for us, and a startup script installs and runs the monitoring application. The startup script is incomplete.

From the Console, edit the instance, and navigate to Management -> Metadata. Look at the `startup-script`.
You’ll see that the script requires you to supply the project ID, instance ID, and zone. We’re given the project ID and zone already. We can get the instance ID from the Basic information view of the VM.
Add these values, save your changes, and now reset the instance.
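As a reminder of what goes where, here’s a sketch of the three values the script needs. The variable names and values below are placeholders for illustration, not necessarily the exact names the lab’s script uses:

```shell
# Illustrative only: the startup script needs these three values filled in.
# Variable names and values are placeholders.
export MY_PROJECT_ID="qwiklabs-gcp-xxxx"      # project ID: given by the lab
export MY_GCE_INSTANCE_ID="1234567890123456"  # from the VM's Basic information view
export MY_GCE_INSTANCE_ZONE="us-east1-b"      # zone: given by the lab

# (On a GCE VM, the same values are also available from the metadata
# server at http://metadata.google.internal/computeMetadata/v1/,
# with the "Metadata-Flavor: Google" request header.)
echo "project=${MY_PROJECT_ID} instance=${MY_GCE_INSTANCE_ID} zone=${MY_GCE_INSTANCE_ZONE}"
```

Resetting the instance matters because the startup script only runs at boot, so the completed values take effect on the next start.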
Then check that the metric `input_queue_size` is being written, in the Metrics Explorer. Note that it might take a few minutes before the metric is visible, so refresh the Metrics Explorer; then, from the Metrics dropdown, you can search for `input_queue`.
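If you’d rather express the query as text, Metrics Explorer’s query editor accepts a Monitoring filter string. This sketch just builds that string from the metric type and resource type the lab tips gave us; it doesn’t call any API itself:

```shell
# Build the Monitoring filter for the lab's custom metric, which is
# associated with the gce_instance resource type (per the lab tips).
# This only constructs the string; paste it into Metrics Explorer's
# query editor (or use it with the Monitoring API) yourself.
METRIC_TYPE="custom.googleapis.com/opencensus/my.videoservice.org/measure/input_queue_size"
FILTER="resource.type=\"gce_instance\" AND metric.type=\"${METRIC_TYPE}\""
echo "${FILTER}"
```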
Here we’re going to create a custom metric from logs. We’re told that a Cloud Function creates a log entry that includes metadata. We need to create a custom metric by reading the log entries, and using these to determine the rate at which high resolution video files are uploaded.
Navigate to Logging -> Logs Explorer.
Click on Create metric. Use the following values:
- Type = Counter (the default)
- Name = `large_video_upload_rate` (your metric name might be different)
- Filter = `textPayload=~"file_format\: ([4,8]K).*"`

(Note that we’ve already been given the advanced filter query.)
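To sanity-check what that filter matches, here’s a quick local experiment with `grep` standing in for Cloud Logging’s `=~` regex operator. The sample payloads are invented for illustration; the point is that the character class in the lab’s regex matches `4K` and `8K` payloads but not others:

```shell
# Invented sample payloads, in the shape the lab's filter expects.
# grep -E stands in for Cloud Logging's =~ regex operator. (The lab's
# regex escapes the colon as \: and uses the class [4,8]; with grep,
# a plain colon and [48] behave the same for these inputs.)
printf '%s\n' \
  'file_format: 4K size: 2GB' \
  'file_format: 8K size: 7GB' \
  'file_format: HD size: 500MB' \
| grep -E 'file_format: [48]K'
```

Only the 4K and 8K lines are printed, which is exactly the behaviour we want: the counter increments only for high-resolution uploads.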
Select Monitoring -> Metrics Scope -> Dashboard. Then select `Media_Dashboard` from the list of dashboards. (This dashboard has been created for us already.)

Edit the dashboard to add a chart, and in the metric dropdown search for `large` (or whatever your logs-based metric was called) to find your metric. Apply.

Here we need to trigger an alert if our upload rate exceeds the specified value. We can create an alert using the logs-based metric we created previously.
Navigate to Logging -> Logs-Based Metrics. Select the three dots next to the logs-based metric for video upload rate, and select Create alert from metric.
Select New condition, and use these parameters:
Select Configure trigger, and use these parameters:
Select Notifications, and use a notification channel.

Name the alert policy. You can choose whatever you like. I called it Uploads exceeded threshold.
Finally, click on Create policy.
And that’s it!