Welcome to the Nubis Morocco blog! Here you’ll find the latest articles, tutorials, and insights on AWS, cloud computing, DevOps, and more.
Browse our content by categories:
- AWS - Amazon Web Services tutorials and best practices
- DevOps - CI/CD, automation, and infrastructure management
- Kubernetes - Container orchestration and cloud-native development
- Cloud Security - Best practices for securing cloud infrastructure
- Tutorials - Step-by-step guides and hands-on learning
In this tutorial, we will learn how to set up Prometheus rules and configure Alertmanager to send alerts to a Slack channel. Prometheus is a popular monitoring and alerting solution in the Kubernetes ecosystem, while Alertmanager handles alert management and routing. By integrating with Slack, you can receive real-time notifications for any issues or anomalies in your Kubernetes cluster. Let’s get started 👨🏻💻!
Table of Contents:
- Prerequisites
- Setting Up Prometheus Rules
- Configuring Alertmanager
- Integrating with Slack
- Testing the Setup
- Conclusion

🚦 Prerequisites:
- Access to a Kubernetes cluster
- Prometheus and Alertmanager installed in the cluster
- Basic knowledge of Kubernetes and YAML syntax

1 — Setting Up Prometheus Rules:
Prometheus rules define conditions for generating alerts based on metrics collected by Prometheus. In this example, we will create a PrometheusRule resource named z4ck404-alerts in the monitoring namespace.
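As a sketch of what such a rule could look like (only the resource name z4ck404-alerts and the monitoring namespace come from the text above; the alert name, expression, and labels are illustrative assumptions):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: z4ck404-alerts
  namespace: monitoring
  labels:
    release: prometheus   # assumed label so the Prometheus Operator discovers the rule
spec:
  groups:
    - name: z4ck404.rules
      rules:
        - alert: HighPodMemoryUsage   # illustrative alert, not from the tutorial
          expr: container_memory_working_set_bytes{namespace="default"} > 1e9
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Pod {{ $labels.pod }} is using more than 1GiB of memory"
```

The `for: 5m` clause keeps the alert pending until the condition has held for five minutes, which avoids paging Slack on short-lived spikes.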
...
As defined in the Terraform documentation, provisioners can be used to model specific actions on the local machine running Terraform or on a remote machine, in order to prepare servers or other infrastructure objects for service. However, HashiCorp clearly states in its documentation that provisioners should be used only as a last resort, which I will explain in this article.
Provisioners are the feature to reach for when what you need is not directly addressed by Terraform's declarative model: you can copy files to newly created resources, run scripts, or perform specific tasks such as installing or upgrading software.
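A minimal sketch of those three uses attached to one resource (the AMI ID, SSH user, and script path are placeholders, not taken from the article):

```hcl
# Illustrative example: an EC2 instance with provisioners that copy a file,
# run it remotely, and record the result locally. All names are placeholders.
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  # How the file and remote-exec provisioners reach the new machine.
  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ubuntu" # assumed default user for the image
    private_key = file("~/.ssh/id_rsa")
  }

  # Copy a local script to the newly created instance.
  provisioner "file" {
    source      = "scripts/bootstrap.sh"
    destination = "/tmp/bootstrap.sh"
  }

  # Run the script on the remote machine once it is reachable.
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/bootstrap.sh",
      "/tmp/bootstrap.sh",
    ]
  }

  # Or run a command on the machine executing Terraform itself.
  provisioner "local-exec" {
    command = "echo ${self.public_ip} >> created_instances.txt"
  }
}
```

Note that if any provisioner fails, Terraform marks the resource as tainted and will recreate it on the next apply, which is one reason HashiCorp recommends avoiding them when a native resource or cloud-init can do the job.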
...
Exposed Elasticsearch clusters! For the last couple of months, I have been exploring Elasticsearch, and I have even shared some articles about how impressive the technology behind it is and how it can be combined with other projects such as Spark, pairing the search capabilities Elasticsearch offers with the real-time distributed analytics and machine learning Spark provides. According to a 2017 blog post by Elastic, the ELK (Elasticsearch, Logstash, and Kibana) Stack has exceeded 100 million downloads (and in early 2017 the stack wasn't even as mature as it is now, with the whole ecosystem that has grown around it to expand its capabilities and make it even more attractive).
...
In the previous article (Part 1), we installed the ELK stack along with the ES-Hadoop connector and Spark, then we built some visualizations in Kibana with the house price prediction dataset from Kaggle.
In this part we will start by adding Search Guard to the stack in order to define permissions and control access to our data and configuration, then we will implement our models with the help of Spark MLlib, and we will finish by deploying the models in a pipeline that predicts prices for new entries arriving in our Elasticsearch index.
...
Before digging into any technical details, I will start with brief descriptions of the tools that I will be using for the tutorials (this part and the coming ones).
1 — The E(Elasticsearch).L(Logstash).K(Kibana) Stack! ELK is an acronym for a combination of three widely used open-source projects: E = Elasticsearch (built on Lucene), L = Logstash and K = Kibana. Elasticsearch and Logstash run on the JVM, while Kibana is built on Node.js; all three are published as open source under the Apache license. The addition of Beats turned the stack into a four-legged project and led to its renaming as the "Elastic Stack", but in this article we will stick with the familiar name, ELK.
...