DevOps for Open Source
Many Enterprise customers adopt third-party libraries without vulnerability scanning or long-term support plans. That is where Solid Potential’s talent comes in - we can provide security, enterprise support and long-term peace of mind for your service. Our vast experience in the Open Source community can also leave a positive mark on your organisation's contribution culture.
With Solid Potential, you will never have to worry about implementing or maintaining open source technology in your company. We guarantee that your solutions will be maintained in accordance with all SRE standards.
Our Tools
-
Linux is an open-source operating system. Creating the right image for your enterprise service can be a challenge: it requires appropriate configuration of packages and network dependencies. For our customers we build tailored images based on distributions such as Debian, Ubuntu, Arch Linux, Fedora and Red Hat, and we host and deploy them with open source or cloud service tools.
-
Docker is a containerization technology that enables the creation and use of Linux containers, allowing you to package and isolate applications together with their entire runtime environment. We will help you use Docker in your CI pipelines at every stage, from infrastructure setup through deployment to testing.
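As an illustration, a CI stage can drive Docker directly from Python through the Docker SDK; the image name and command below are placeholders, so treat this as a minimal sketch rather than a production pipeline:

    # Minimal sketch using the Docker SDK for Python (docker package).
    # The image and command are illustrative placeholders.
    import docker

    client = docker.from_env()                       # connect to the local Docker daemon
    client.images.pull("ubuntu:22.04")               # fetch the base image
    container = client.containers.run(
        "ubuntu:22.04",
        command=["echo", "hello from a test stage"],
        detach=True,                                 # return a Container handle immediately
    )
    container.wait()                                 # block until the command finishes
    print(container.logs().decode())                 # surface stdout in the CI log
    container.remove()                               # clean up the stopped container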
-
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and managing containerised applications. Kubernetes is the most popular platform to manage and orchestrate solutions based on containers. Solid Potential will implement and support Kubernetes in your organisation to fit your infrastructure, in either self-hosted open source or cloud-managed versions (GCP, AWS, Azure).
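For a flavour of how such automation can look, here is a minimal sketch using the official Kubernetes Python client, assuming a kubeconfig for the target cluster is already in place:

    # Minimal sketch: list pods with the official Kubernetes Python client (kubernetes package).
    from kubernetes import client, config

    config.load_kube_config()            # use the local kubeconfig (in-cluster config also works)
    v1 = client.CoreV1Api()

    # A quick cluster health check, e.g. from a CI job or an operations script.
    for pod in v1.list_pod_for_all_namespaces().items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)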
-
Helm is an open-source solution that helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. We can integrate Helm with your existing and new Kubernetes-based solutions.
-
Argo is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo Workflows is implemented as a Kubernetes CRD (Custom Resource Definition). It can be used to define workflows where each step is a container, to model multi-step workflows as a sequence of tasks or as a directed acyclic graph (DAG) that captures the dependencies between tasks, and to run compute-intensive machine learning or data processing jobs in a fraction of the time.
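Because Argo Workflows is a CRD, a workflow can be submitted like any other Kubernetes object; the sketch below uses the Kubernetes Python client, with the "argo" namespace, image and command chosen purely for illustration:

    # Minimal sketch: submit an Argo Workflow custom resource via the Kubernetes Python client.
    # Namespace, image and command are illustrative assumptions.
    from kubernetes import client, config

    config.load_kube_config()
    workflow = {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Workflow",
        "metadata": {"generateName": "hello-"},
        "spec": {
            "entrypoint": "hello",
            "templates": [{
                "name": "hello",
                "container": {"image": "busybox", "command": ["echo", "hello argo"]},
            }],
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="argoproj.io", version="v1alpha1",
        namespace="argo", plural="workflows", body=workflow,
    )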
-
With HashiCorp Terraform you can automate infrastructure following the infrastructure-as-code approach or create multi-cloud deployments. Solid Potential can write reusable modules that can be shared between projects, provision project templates and create deployments for you. Check our success stories in the area of infrastructure automation; you will not be disappointed.
-
HashiCorp Vault was created to manage secrets and protect sensitive data. For our clients, we integrate HashiCorp Vault with their systems and write custom plugins. We can provide a centralised deployment that supports all your divisions in full isolation. Ask how we have done this for other customers.
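A typical integration point is the KV secrets engine; the sketch below uses the hvac Python client, with the URL, token and secret path as placeholders:

    # Minimal sketch using the hvac Python client for HashiCorp Vault.
    import hvac

    client = hvac.Client(url="https://vault.example.com:8200", token="s.placeholder")

    # Store a secret in the KV v2 engine...
    client.secrets.kv.v2.create_or_update_secret(
        path="ci/registry",
        secret={"username": "deployer", "password": "not-a-real-password"},
    )

    # ...and read it back from an application or pipeline.
    read = client.secrets.kv.v2.read_secret_version(path="ci/registry")
    print(read["data"]["data"]["username"])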
-
Pulumi’s universal infrastructure as code platform helps teams tame the cloud’s complexity using the world’s most popular programming languages (TypeScript, Go, .NET, Python, and Java) and markup languages (YAML, CUE). Terraform and Pulumi are similar in that both let you create, deploy and manage infrastructure as code on any cloud. As a client you can choose one or both solutions; we will be happy to advise you.
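To give a sense of the programming-language approach, here is a minimal Pulumi program in Python; the AWS provider, bucket name and export name are illustrative assumptions:

    # Minimal sketch of a Pulumi program (e.g. __main__.py in a Pulumi project).
    import pulumi
    from pulumi_aws import s3

    # Declare an S3 bucket as code; `pulumi up` computes and applies the required changes.
    artifacts = s3.Bucket("build-artifacts")

    # Export the generated bucket name so other stacks or scripts can consume it.
    pulumi.export("bucket_name", artifacts.id)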
-
Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more. Thanks to its powerful configuration and transparent proxy support, we use it as a tool of choice for filtering egress traffic.
-
A free, open source CI/CD server. This open source solution is used by many enterprise clients. If you are looking for experienced DevOps engineers who can help you maintain existing jobs, support your solution or deploy it from scratch, you are in the right place: we can do it for you.
-
GitLab is a self-hosted system for managing your code. It remains a popular, open-source Git hosting solution implemented by over 50,000 organisations. During the last few years, it has evolved with solid community support and growth, handling thousands of users on a single server and several such servers in an active cluster. You are in the right place if you need GitLab pipelines for your SDLC processes.
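GitLab also exposes an API that can be scripted; as one possible example, the python-gitlab library can trigger a pipeline, with the URL, token and project path below being placeholders:

    # Minimal sketch using the python-gitlab library.
    import gitlab

    gl = gitlab.Gitlab("https://gitlab.example.com", private_token="glpat-placeholder")
    project = gl.projects.get("devops/sample-service")

    # Trigger a pipeline on the main branch, e.g. from a nightly automation script.
    pipeline = project.pipelines.create({"ref": "main"})
    print(pipeline.id, pipeline.status)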
-
Cucumber is a software tool that supports Behaviour-Driven Development (BDD). Central to the Cucumber BDD approach is its plain-text specification language, Gherkin, which allows expected software behaviours to be specified in a logical language that customers can understand. We will help you implement Cucumber in your test pipelines. Book a meeting.
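Cucumber implementations exist for many languages; as one possible sketch, the Python flavour (behave) binds Gherkin steps to code like this, with the feature text and steps invented for illustration:

    # Minimal sketch of Gherkin-style steps with behave, a Python take on the Cucumber approach.
    #
    # features/login.feature (illustrative):
    #   Feature: Login
    #     Scenario: Valid user signs in
    #       Given a registered user "alice"
    #       When she signs in with a valid password
    #       Then she sees her dashboard

    from behave import given, when, then

    @given('a registered user "{name}"')
    def step_registered_user(context, name):
        context.user = name                                  # keep test state on the behave context

    @when("she signs in with a valid password")
    def step_sign_in(context):
        context.result = f"dashboard for {context.user}"     # stand-in for a real login call

    @then("she sees her dashboard")
    def step_sees_dashboard(context):
        assert "dashboard" in context.result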
-
Prometheus is an open source systems monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organisations have adopted Prometheus, and the project has a very active developer and user community. It is now a standalone open source project maintained independently of any company. To emphasise this and to clarify the project's governance structure, Prometheus joined the Cloud Native Computing Foundation in 2016 as the second hosted project after Kubernetes. We can help you deploy, integrate and maintain Prometheus or any other monitoring solution, open source or cloud-based.
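On the application side, instrumenting a service usually takes only a few lines; the sketch below uses the prometheus_client Python library, with the metric names and port chosen for illustration:

    # Minimal sketch: expose metrics for Prometheus with the prometheus_client library.
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("app_requests_total", "Total handled requests")
    LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

    if __name__ == "__main__":
        start_http_server(8000)          # Prometheus scrapes http://host:8000/metrics
        while True:
            with LATENCY.time():         # observe how long the "work" takes
                time.sleep(random.random() / 10)
            REQUESTS.inc()               # count each handled request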
-
Initially released in 2010, Elasticsearch (ES) is a modern search and analytics engine based on Apache Lucene. Completely open source and built with Java, Elasticsearch is a NoSQL database. We can help you deploy, integrate and maintain ES or any other modern search engine in open source or cloud-based version.
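For a quick feel of the API, here is a minimal sketch with the official Elasticsearch Python client (8.x keyword style); the index name, document and URL are illustrative:

    # Minimal sketch using the elasticsearch Python client.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Index a document, then run a full-text query against it.
    es.index(index="articles", id="1", document={"title": "DevOps for Open Source"})
    es.indices.refresh(index="articles")
    hits = es.search(index="articles", query={"match": {"title": "devops"}})
    print(hits["hits"]["total"])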
-
The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. The goal of its creators was not to recreate other services but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures. Would you like to know how Kubeflow can help your organization make ML deployments? Book a meeting with us.
-
Kafka is a distributed system consisting of servers and clients that communicate via a high-performance TCP network protocol. It can be deployed on bare-metal hardware, virtual machines, and containers in on-premise as well as cloud environments. Kafka is run as a cluster of one or more servers that can span multiple datacenters or cloud regions.
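A producer and consumer can be wired up in a few lines; the sketch below uses the kafka-python library, with the broker address, topic name and messages as placeholders:

    # Minimal sketch of a Kafka producer and consumer using kafka-python.
    from kafka import KafkaConsumer, KafkaProducer

    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("deployments", b"service-a rolled out v1.2.3")   # publish an event
    producer.flush()                                               # ensure it reaches the broker

    consumer = KafkaConsumer(
        "deployments",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",      # read from the beginning of the topic
        consumer_timeout_ms=5000,          # stop iterating when no new messages arrive
    )
    for message in consumer:
        print(message.value.decode())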
-
Apache Hadoop, or simply Hadoop, is an open source framework that is used to efficiently store and process large datasets ranging in size from gigabytes to petabytes. Instead of using one large computer to store and process the data, Hadoop allows clustering multiple computers to analyze massive datasets in parallel more quickly. It consists of four main modules: the Hadoop Distributed File System (HDFS), Yet Another Resource Negotiator (YARN), MapReduce and Hadoop Common.
-
CDAP is an integrated, open source application development platform for the Hadoop ecosystem that provides developers with data and application abstractions to simplify and accelerate application development, address a broader range of real-time and batch use cases, and deploy applications into production while satisfying enterprise requirements. CDAP exposes developer APIs (Application Programming Interfaces) for creating applications and accessing core CDAP services. CDAP defines and implements a diverse collection of services that land applications and data on existing Hadoop infrastructure such as HBase, HDFS, YARN, MapReduce, Hive, and Spark. If you need CDAP in your organisation, book free consultation to discuss your needs and deployment possibilities.
-
Apache Spark is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters, and a unified analytics engine for large-scale data processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, pandas API on Spark for pandas workloads, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.
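As a small taste of the DataFrame API, here is a PySpark sketch; the inline data stands in for a large dataset that would normally be read from HDFS, S3 or similar:

    # Minimal sketch of a PySpark job.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("log-summary").getOrCreate()

    # A tiny DataFrame standing in for real log data.
    df = spark.createDataFrame(
        [("api", 200), ("api", 500), ("web", 200)],
        ["service", "status"],
    )

    # Count requests per service and status with the DataFrame API.
    summary = df.groupBy("service", "status").agg(F.count("*").alias("requests"))
    summary.show()

    spark.stop()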
-
Apache Airflow (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows. When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative. Airflow can be used to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
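Workflows as code look roughly like the sketch below (Airflow 2.x style); the dag_id, schedule and commands are illustrative:

    # Minimal sketch of an Airflow DAG definition.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="nightly_report",
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = BashOperator(task_id="extract", bash_command="echo extracting data")
        load = BashOperator(task_id="load", bash_command="echo loading data")

        extract >> load   # dependency: run extract before load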
-
Apache Avro is a data serialization system that provides rich data structures; a compact, fast, binary data format; a container file to store persistent data; and remote procedure call (RPC). Avro allows simple integration with dynamic languages; code generation is not required to read or write data files nor to use or implement RPC protocols.
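As a small sketch of the container file format, the fastavro Python library can write and read Avro records; the schema and records below are invented for illustration:

    # Minimal sketch of Avro serialization with the fastavro library.
    from fastavro import parse_schema, reader, writer

    schema = parse_schema({
        "type": "record",
        "name": "Deployment",
        "fields": [
            {"name": "service", "type": "string"},
            {"name": "version", "type": "string"},
        ],
    })

    records = [{"service": "api", "version": "1.2.3"}]

    # Write records to an Avro container file, then read them back.
    with open("deployments.avro", "wb") as out:
        writer(out, schema, records)

    with open("deployments.avro", "rb") as fo:
        for rec in reader(fo):
            print(rec)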
Creating non-biased algorithms is a complicated matter and a goal that we’re still far from achieving. To do that, the data has to be bias-free, and the engineers creating these algorithms need to ensure they’re not leaking any of their own biases. Needless to say, AI tends to reflect human societal prejudices.