name,ring,quadrant,isNew,description
Four key metrics,Adopt,Techniques,FALSE,"<p>To measure software delivery performance, more and more organizations are defaulting to the <strong>four key metrics</strong> as defined by the <a href=""https://www.devops-research.com/"">DORA research</a> program: change lead time, deployment frequency, mean time to restore (MTTR) and change fail percentage. This research and its statistical analysis have shown a clear link between high delivery performance and these metrics; they provide a great leading indicator for how a delivery organization as a whole is doing.</p>
<p>We're still big proponents of these metrics, but we've also learned some lessons. We continue to observe misguided approaches with tools that help teams measure these metrics based purely on their continuous delivery (CD) pipelines. In particular, when it comes to the stability metrics (MTTR and change fail percentage), CD pipeline data alone doesn't provide enough information to determine what counts as a deployment failure with real user impact. Stability metrics only make sense if they include data about real incidents that degrade service for the users.</p>
<p>We recommend always keeping in mind the ultimate intention behind a measurement and using it to reflect and learn. For example, before spending weeks building up sophisticated dashboard tooling, consider just regularly taking the <a href=""https://www.devops-research.com/quickcheck.html"">DORA quick check</a> in team retrospectives. This gives the team the opportunity to reflect on which <a href=""https://www.devops-research.com/research.html#capabilities"">capabilities</a> they could work on to improve their metrics, which can be much more effective than overdetailed out-of-the-box tooling. Keep in mind that these four key metrics originated out of the organization-level research of high-performing teams, and the use of these metrics at a team level should be a way for teams to reflect on their own behaviors, not just another set of metrics to add to the dashboard.</p>
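<p>To make the arithmetic concrete, here is a minimal Python sketch of how the four metrics might be derived from a team's own records; the flat record shape is hypothetical, and real implementations would draw on CD pipeline and incident data:</p>
<pre><code>from datetime import timedelta

# Hypothetical record shapes:
#   deployment: {'committed_at': datetime, 'deployed_at': datetime, 'failed': bool}
#   incident:   {'started_at': datetime, 'restored_at': datetime}
def four_key_metrics(deployments, incidents, period_days):
    lead_times = [d['deployed_at'] - d['committed_at'] for d in deployments]
    change_lead_time = sum(lead_times, timedelta()) / len(lead_times)
    deployment_frequency = len(deployments) / period_days  # deploys per day
    restore_times = [i['restored_at'] - i['started_at'] for i in incidents]
    mttr = sum(restore_times, timedelta()) / len(restore_times)
    change_fail_percentage = 100 * sum(d['failed'] for d in deployments) / len(deployments)
    return change_lead_time, deployment_frequency, mttr, change_fail_percentage
</code></pre>"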
Single team remote wall,Adopt,Techniques,FALSE,"<p>A <strong>single team remote wall</strong> is a simple technique to reintroduce the team wall virtually. We recommend that distributed teams adopt this approach; one of the things we hear from teams that moved to remote working is that they miss having the physical team wall. This was a single place where all the various story cards, tasks, status and progress could be displayed, acting as an information radiator and hub for the team. The wall acted as an integration point, with the actual data being stored in different systems. As teams have become remote, they've had to revert to looking into the individual source systems, and getting an ""at-a-glance"" view of a project has become very difficult. While there might be some overhead in keeping this up to date, we feel the benefits to the team are worth it. For some teams, updating the physical wall formed part of the daily ""ceremonies"" the team did together, and the same can be done with a remote wall.</p>"
Data mesh,Trial,Techniques,FALSE,"<p><a href=""https://martinfowler.com/articles/data-monolith-to-mesh.html""><strong>Data mesh</strong></a> is a <em>decentralized</em> organizational and technical approach to sharing, accessing and managing data for analytics and ML. Its objective is to create a <em>sociotechnical</em> approach that scales out getting value from data as the organization's complexity grows, the use cases for data proliferate and the sources of data diversify. Essentially, it creates a <em>responsible</em> data-sharing model that is in step with organizational growth and continuous change. In our experience, interest in the application of data mesh has grown tremendously. The approach has inspired many organizations to embrace its adoption and technology providers to repurpose their existing technologies for a mesh deployment. Despite the great interest and growing experience in data mesh, its implementations face a high cost of integration. Moreover, its adoption remains limited to sections of larger organizations, and technology vendors are distracting organizations from the hard <em>socio</em> aspects of data mesh — decentralized data ownership and a federated governance operating model.</p>
<p>These ideas are explored in <em><a href=""https://www.amazon.com/Data-Mesh-Delivering-Data-Driven-Value/dp/1492092398"">Data Mesh, Delivering Data-Driven Value at Scale</a></em>, which guides practitioners, architects, technical leaders and decision makers on their journey from traditional big data architecture to data mesh. It provides a complete introduction to data mesh principles and its constituents; it covers how to design a data mesh architecture, guide and execute a data mesh strategy and navigate organizational design to a decentralized data ownership model. The goal of the book is to create a framework for deeper conversations and lead data mesh to the next phase of its maturity.</p>"
Definition of production readiness,Trial,Techniques,TRUE,"<p>In an organization that practices the ""you build it, you run it"" principle, a <strong>definition of production readiness</strong> (DPR) is a useful technique to support teams in assessing and preparing the operational readiness of new services. Implemented as a checklist or a template, a DPR gives teams guidance on what to think about and consider before they bring a new service into production. While DPRs do not define specific service-level objectives (SLOs) to fulfill (those would be hard to define in a one-size-fits-all way), they remind teams what categories of SLOs to think about, what organizational standards to comply with and what documentation is required. DPRs thus provide input that teams turn into product-specific requirements (around observability and reliability, for example) to feed into their product backlogs.</p>
<p>DPRs are closely related to Google's concept of a <a href=""https://sre.google/sre-book/evolving-sre-engagement-model/#:%7E:text=The%20most%20typical,of%20a%20service"">production readiness review (PRR)</a>. In organizations that are too small to have a dedicated site reliability engineering team, or that are concerned a review board process could negatively impact a team's flow when going live, a DPR can at least provide some guidance and document the agreed-upon criteria for the organization. For highly critical new services, extra scrutiny on fulfilling the DPR can be added via a PRR when needed.</p>"
Documentation quadrants,Trial,Techniques,TRUE,"<p>Writing good documentation is an overlooked aspect of software development that is often left to the last minute and done in a haphazard way. Some of our teams have found <strong><a href=""https://documentation.divio.com/"">documentation quadrants</a></strong> a handy way to ensure the right artifacts are being produced. This technique classifies artifacts along two axes: The first axis relates to the nature of the information, practical or theoretical; the second axis describes the context in which the artifact is used, studying or working. This defines four quadrants in which artifacts such as tutorials, how-to guides or reference pages can be placed and understood. This classification system not only ensures that critical artifacts aren't overlooked but also guides the presentation of the content. We've found this particularly useful for creating onboarding documentation that brings developers up to speed quickly when they join a new team.</p>"
Rethinking remote standups,Trial,Techniques,TRUE,"<p>The term <em>standup</em> originated from the idea of standing up during this daily sync meeting, with the goal of making it short. It's a common principle many teams try to abide by in their standups: keep it crisp and to the point. But we're now seeing teams challenge that principle and <strong>rethinking remote standups</strong>. When co-located, there are lots of opportunities during the rest of the day to sync up with each other spontaneously, as a complement to the short standup. Remotely, some of our teams are now experimenting with a longer meeting format, similar to what the folks at Honeycomb call a “<a href=""https://www.honeycomb.io/blog/standup-meetings-are-dead/"">meandering team sync</a>.”</p>
<p>It's not about getting rid of a daily sync altogether; we still find that very important and valuable, especially in a remote setup. Instead, it's about extending the time blocked in everybody's calendars for the daily sync to up to an hour and using it in a way that makes some of the other team meetings obsolete and brings the team closer together. Activities can still include the well-tried walkthrough of the team board but are then extended by more detailed clarification discussions, quick decisions and taking time to socialize. The technique is considered successful if it reduces the overall meeting load and improves team bonding.</p>"
Server-driven UI,Trial,Techniques,TRUE,"<p>When putting together a new volume of the Radar, we're often overcome by a sense of déjà vu, and the technique of <strong>server-driven UI</strong> sparks a particularly strong case of it with the advent of frameworks that allow mobile developers to take advantage of faster change cycles while not falling foul of an app store's policies around revalidation of the mobile app itself. We've blipped about this before from the perspective of enabling mobile development to <a href=""/radar/techniques/micro-frontends-for-mobile"">scale across teams</a>. Server-driven UI separates the rendering into a generic container in the mobile app while the structure and data for each view are provided by the server. This means that changes that once required a round trip to an app store can now be accomplished via simple changes to the responses the server sends. Note that we're not recommending this approach for all UI development; indeed, we've experienced some horrendous, overly configurable messes. But with the backing of behemoths such as Airbnb and Lyft, we suspect it's not only us at Thoughtworks getting tired of <a href=""/radar/techniques/spa-by-default"">everything being done client side</a>. Watch this space.</p>
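<p>As an illustration of the split, a server in this style returns a declarative description of a screen rather than data alone. The payload shape below is entirely hypothetical (each framework defines its own schema); it's sketched here in Python:</p>
<pre><code># Hypothetical payload for one screen; the generic client container maps each
# 'type' to a prebuilt native component and lays the sections out in order.
def home_screen_response():
    return {
        'screen': 'home',
        'sections': [
            {'type': 'hero_banner', 'title': 'Spring sale', 'image_url': 'https://example.com/sale.png'},
            {'type': 'product_carousel', 'item_ids': [101, 102, 103]},
            {'type': 'cta_button', 'label': 'Shop now', 'deep_link': 'app://catalog'},
        ],
    }

# Reordering or replacing sections here changes the app's UI on the next fetch,
# with no app-store release required.
</code></pre>"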
Software Bill of Materials,Trial,Techniques,FALSE,"<p>With continued pressure to keep systems secure and no reduction in the general threat landscape, a machine-readable <strong>Software Bill of Materials</strong> (SBOM) may help teams stay on top of security problems in the libraries that they rely on. The recent <a href=""https://en.wikipedia.org/wiki/Log4Shell"">Log4Shell</a> zero-day remote exploit was critical and widespread, and if teams had had an SBOM ready, they could have scanned for the vulnerable dependency and fixed it quickly. We've now had production experience using SBOMs on projects ranging from small companies to large multinationals and even government departments, and we're convinced they provide a benefit. Tools such as <a href=""/radar/tools/syft"">Syft</a> make it easy to use an SBOM for vulnerability detection.</p>"
Tactical forking,Trial,Techniques,TRUE,"<p><strong><a href=""https://faustodelatog.wordpress.com/2020/10/16/tactical-forking/"">Tactical forking</a></strong> is a technique that can assist with restructuring or migrating from monolithic codebases to microservices. Specifically, this technique offers one possible alternative to the more common approach of fully modularizing the codebase first, which in many circumstances can take a very long time or be very challenging to achieve. With tactical forking a team can create a new fork of the codebase and use that to address and extract one particular concern or area while deleting the unnecessary code. Use of this technique would likely be just one part of a longer-term plan for the overall monolith.</p>"
Team cognitive load,Trial,Techniques,FALSE,"<p>A system's architecture mimics an organizational structure and its communication. It's not big news that we should be intentional about how teams interact — see, for instance, the <a href=""/radar/techniques/inverse-conway-maneuver"">Inverse Conway Maneuver</a>. Team interaction is one of the variables for how fast and how easily teams can deliver value to their customers. We were happy to find a way to measure these interactions; we used the <a href=""https://teamtopologies.com/book"">Team Topologies</a> authors' <a href=""https://github.com/TeamTopologies/Team-Cognitive-Load-Assessment"">assessment</a>, which gives you an understanding of how easy or difficult the teams find it to build, test and maintain their services. By measuring <strong>team cognitive load</strong>, we could better advise our clients on how to change their teams' structure and evolve their interactions.</p>"
Transitional architecture,Trial,Techniques,TRUE,"<p>A <strong><a href=""https://martinfowler.com/articles/patterns-legacy-displacement/transitional-architecture.html"">transitional architecture</a></strong> is a useful practice used when replacing legacy systems. Much like scaffolding might be built, reconfigured and finally removed during construction or renovation of a building, you often need interim architectural steps during legacy displacement. Transitional architectures will be removed or replaced later on, but they're not just throwaway work, given the important role they play in reducing risk and allowing a difficult problem to be broken into smaller steps. They thus help teams avoid the trap of defaulting to a ""big bang"" legacy replacement approach when smaller interim steps don't seem to line up with the final architectural vision. Care is needed to make sure the architectural ""scaffolding"" is eventually removed, lest it just become technical debt later on.</p>"
CUPID,Assess,Techniques,TRUE,"<p>How do you approach writing good code? How do you judge if you've written good code? As software developers, we're always looking for catchy rules, principles and patterns that we can use to share a language and values with each other when it comes to writing simple, easy-to-change code.</p>
<p>Daniel Terhorst-North has recently made a new attempt at creating such a checklist for good code. He argues that instead of sticking to a set of rules like <a href=""https://en.wikipedia.org/wiki/SOLID"">SOLID</a>, using a set of properties to aim for is more generally applicable. He came up with what he calls the <strong><a href=""https://dannorth.net/2022/02/10/cupid-for-joyful-coding/"">CUPID</a></strong> properties to describe what we should strive for to achieve ""joyful"" code: Code should be composable, follow the Unix philosophy and be predictable, idiomatic and domain-based.</p>"
Inclusive design,Assess,Techniques,TRUE,"<p>We recommend organizations assess <a href=""https://www.microsoft.com/design/inclusive/""><strong>inclusive design</strong></a> as a way of making sure accessibility is treated as a first-class requirement. All too often requirements around accessibility and inclusivity are ignored until just before, if not just after, the release of software. The cheapest and simplest way to accommodate these requirements, while also providing early feedback to teams, is to incorporate them fully into the development process. In the past, we've highlighted techniques that perform a ""shift-left"" for security and cross-functional requirements; one perspective on this technique is that it achieves the same goal for accessibility.</p>"
Operator pattern for nonclustered resources,Assess,Techniques,FALSE,"<p>We're continuing to see increasing use of the <a href=""/radar/tools/kubernetes-operators"">Kubernetes Operator</a> pattern for purposes other than managing applications deployed on the cluster. Using the <strong>Operator pattern for nonclustered resources</strong> takes advantage of custom resource definitions and the event-driven scheduling mechanism implemented in the Kubernetes control plane to manage activities that are related to yet outside of the cluster. This technique builds on the idea of <a href=""/radar/techniques/kube-managed-cloud-services"">Kube-managed cloud services</a> and extends it to other activities, such as continuous deployment or reacting to changes in external repositories. One advantage of this technique over a purpose-built tool is that it opens up a wide range of tools that either come with Kubernetes or are part of the wider ecosystem. You can use commands such as diff, dry-run or apply to interact with the operator's custom resources. Kube's scheduling mechanism makes development easier by eliminating the need to orchestrate activities in the proper order. Open-source tools such as <a href=""/radar/tools/crossplane"">Crossplane</a>, <a href=""https://fluxcd.io/"">Flux</a> and <a href=""/radar/platforms/argo-cd"">Argo CD</a> take advantage of this technique, and we expect to see more of these emerge over time. Although these tools have their use cases, we're also starting to see the inevitable misuse and overuse of this technique and need to repeat some old advice: Just because you <em>can</em> do something with a tool doesn't mean you <em>should</em>. Be sure to rule out simpler, conventional approaches before creating a custom resource definition and taking on the complexity that comes with this approach.</p>"
Service mesh without sidecar,Assess,Techniques,TRUE,"<p><a href=""/radar/techniques/service-mesh"">Service mesh</a> is usually implemented as a reverse-proxy process, aka sidecar, deployed alongside each service instance. Although these sidecars are lightweight processes, the overall cost and operational complexity of adopting service mesh increases with every new instance of the service requiring another sidecar. However, with the advancements in <a href=""/radar/platforms/ebpf"">eBPF</a>, we're observing a new <a href=""https://isovalent.com/blog/post/2021-12-08-ebpf-servicemesh""><strong>service mesh without sidecar</strong></a> approach where the functionalities of the mesh are safely pushed down to the OS kernel, thereby enabling services in the same node to communicate transparently via sockets without the need for additional proxies. You can try this with <a href=""https://github.com/cilium/cilium-service-mesh-beta"">Cilium service mesh</a> and simplify the deployment from one proxy per service to one proxy per node. We're intrigued by the capabilities of eBPF and find this evolution of service mesh important to assess.</p>"
SLSA,Assess,Techniques,TRUE,"<p>As software continues to grow in complexity, the threat vector of software dependencies becomes increasingly challenging to guard against. The recent Log4J vulnerability showed how difficult it can be to even <em>know</em> those dependencies — many companies who didn't use Log4J directly were unknowingly vulnerable simply because other software in their ecosystem relied on it. Supply chain Levels for Software Artifacts, or <strong><a href=""https://slsa.dev"">SLSA</a></strong> (pronounced ""salsa""), is a consortium-curated set of guidance for organizations to protect against supply chain attacks, evolved from internal guidance Google has been using for years. We appreciate that SLSA doesn't promise a ""silver bullet,"" tools-only approach to securing the supply chain but instead provides a checklist of concrete threats and practices along a maturity model. The <a href=""https://slsa.dev/spec/v0.1/threats"">threat model</a> is easy to follow with real-world examples of attacks, and the <a href=""https://slsa.dev/spec/v0.1/requirements"">requirements</a> provide guidance to help organizations prioritize actions based on levels of increasing robustness to improve their supply chain security posture. We think SLSA provides applicable advice and look forward to more organizations learning from it.</p>"
The streaming data warehouse,Assess,Techniques,TRUE,"<p>The need to respond quickly to customer insights has driven increasing adoption of event-driven architectures and stream processing. Frameworks such as <a href=""/radar/platforms/apache-spark"">Spark</a>, <a href=""/radar/platforms/apache-flink"">Flink</a> or <a href=""/radar/platforms/kafka-streams"">Kafka Streams</a> offer a paradigm where simple event consumers and producers can cooperate in complex networks to deliver real-time insights. But this programming style takes time and effort to master, and when implemented as single-point applications, it lacks interoperability. Making stream processing work universally on a large scale can require a significant engineering investment. Now, a new crop of tools is emerging that offers the benefits of stream processing to a wider, established group of developers who are comfortable using SQL to implement analytics. Standardizing on SQL as the universal streaming language lowers the barrier to implementing streaming data applications. Tools like <a href=""/radar/languages-and-frameworks/ksqldb"">ksqlDB</a> and <a href=""/radar/platforms/materialize"">Materialize</a> help transform these separate applications into unified platforms. Taken together, a collection of SQL-based streaming applications across an enterprise might constitute a <strong>streaming data warehouse</strong>.</p>
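<p>As a sketch of the SQL-centric style, the statement below uses ksqlDB's SQL dialect, submitted through its REST API from Python; the stream, table and column names are our own invented examples:</p>
<pre><code>import json
from urllib.request import Request, urlopen

# A continuously maintained aggregate over a raw event stream (names are invented).
statement = '''
    CREATE TABLE orders_per_minute AS
      SELECT region, COUNT(*) AS order_count
      FROM orders_stream
      WINDOW TUMBLING (SIZE 1 MINUTE)
      GROUP BY region
      EMIT CHANGES;
'''

req = Request(
    'http://localhost:8088/ksql',  # default ksqlDB server address
    data=json.dumps({'ksql': statement, 'streamsProperties': {}}).encode(),
    headers={'Content-Type': 'application/vnd.ksql.v1+json; charset=utf-8'},
)
print(urlopen(req).read().decode())
</code></pre>"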
TinyML,Assess,Techniques,TRUE,"<p>Until recently, executing a machine-learning (ML) model was seen as computationally expensive and in some cases required special-purpose hardware. While creating the models still broadly sits within this classification, they can be created in a way that allows them to be run on small, low-cost, low-power devices. This technique, called <strong><a href=""https://towardsdatascience.com/an-introduction-to-tinyml-4617f314aa79"">TinyML</a></strong>, has opened up the possibility of running ML models in situations many might assume infeasible. For example, the model can be run locally, without prohibitive cost, on battery-powered devices or in disconnected environments with limited or patchy connectivity. If you've been considering using ML but thought it unrealistic because of compute or network constraints, then this technique is worth assessing.</p>
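<p>To give a feel for the workflow, a model converted to the compact TensorFlow Lite format can be exercised with a few lines of Python before being deployed to a constrained device; the model file name is a placeholder:</p>
<pre><code>import numpy as np
import tensorflow as tf

# Load a model previously converted to TFLite (placeholder path).
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one input tensor of the expected shape and run inference.
sample = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], sample)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))
</code></pre>"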
Azure Data Factory for orchestration,Hold,Techniques,FALSE,"<p>For organizations using Azure as their primary cloud provider, <a href=""https://azure.microsoft.com/en-us/services/data-factory/"">Azure Data Factory</a> is currently the default for orchestrating data-processing pipelines. It supports data ingestion, copying data from and to different storage types on prem or on Azure and executing transformation logic. Although we've had adequate experience with Azure Data Factory for simple migrations of data stores from on prem to the cloud, we discourage the use of <strong>Azure Data Factory for orchestration</strong> of complex data-processing pipelines and workflows. We've had some success with Azure Data Factory when it's used primarily to move data between systems. For more complex data pipelines, it still has its challenges, including poor debuggability and error reporting; limited observability, as Azure Data Factory's logging capabilities don't integrate with other products such as Azure Data Lake Storage or Databricks, making it difficult to get end-to-end observability in place; and data source-triggering mechanisms that are available in only certain regions. At this time, we encourage using other open-source orchestration tools (e.g., <a href=""/radar/tools/airflow"">Airflow</a>) for complex data pipelines and limiting Azure Data Factory to data copying or snapshotting. Our teams continue to use Data Factory to move and extract data, but for larger operations we recommend other, more well-rounded workflow tools.</p>"
Miscellaneous platform teams,Hold,Techniques,TRUE,"<p>We previously featured <a href=""/radar/techniques/platform-engineering-product-teams"">platform engineering product teams</a> in Adopt as a good way for internal platform teams to operate, thus enabling delivery teams to self-service deploy and operate systems with reduced lead time and stack complexity. Unfortunately we're seeing the ""platform team"" label applied to teams dedicated to projects that don't have clear outcomes or a well-defined set of customers. As a result, these <strong>miscellaneous platform teams</strong>, as we call them, struggle to deliver due to high cognitive loads and a lack of clearly aligned priorities as they're dealing with a miscellaneous collection of unrelated systems. They effectively become just another general support team for things that don't fit or that are unwanted elsewhere. We continue to believe platform engineering product teams focused around a clear and well-defined (internal) product offer a better set of outcomes.</p>"
Production data in test environments,Hold,Techniques,FALSE,"<p>We continue to perceive <strong>production data in test environments</strong> as an area for concern. Firstly, many examples of this have resulted in reputational damage, for example, where an incorrect alert has been sent from a test system to an entire client population. Secondly, the level of security, specifically around protection of private data, tends to be lower for test systems. There is little point in having elaborate controls around access to production data if that data is copied to a test database that can be accessed by every developer and QA. Although you <em>can</em> obfuscate the data, this tends to be applied only to specific fields, for example, credit card numbers. Finally, copying production data to test systems can break privacy laws, for example, where test systems are hosted or accessed from a different country or region. This last scenario is especially problematic with complex cloud deployments. Fake data is a safer approach, and tools exist to help in its creation. We do recognize there are reasons for <em>specific</em> elements of production data to be copied, for example, in the reproduction of bugs or for training of specific ML models. Here our advice is to proceed with caution.</p>
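<p>As one example of such tooling, the Python Faker library can seed a test database with realistic but entirely synthetic records; the field selection below is just an illustration:</p>
<pre><code>from faker import Faker  # pip install Faker

fake = Faker()
Faker.seed(42)  # deterministic output keeps tests repeatable

# Generate synthetic customer rows instead of copying production data.
test_customers = [
    {
        'name': fake.name(),
        'email': fake.email(),
        'address': fake.address(),
        'credit_card': fake.credit_card_number(),
    }
    for _ in range(100)
]
</code></pre>"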
SPA by default,Hold,Techniques,TRUE,"<p>We generally avoid putting blips in Hold when we consider that advice too obvious, including blindly following an architectural style without paying attention to trade-offs. However, the sheer prevalence of teams choosing a single-page application (SPA) by default when they need a website has us concerned that people aren't even recognizing SPAs as an architectural style to begin with, instead immediately jumping into framework selection. SPAs incur complexity that simply doesn't exist with traditional server-based websites: search engine optimization, browser history management, web analytics, first page load time, etc. That complexity is often warranted for user experience reasons, and tooling continues to evolve to make those concerns easier to address (although the churn in the React community around state management hints at how hard it can be to get a generally applicable solution). Too often, though, we don't see teams making that trade-off analysis, blindly accepting the complexity of <strong>SPAs by default</strong> even when the business needs don't justify it. Indeed, we've started to notice that many newer developers aren't even aware of an alternative approach, as they've spent their entire career in a framework like React. We believe that many websites will benefit from the simplicity of server-side logic, and we're encouraged by techniques like <a href=""/radar/techniques/hotwire"">Hotwire</a> that help close the gap on user experience.</p>"
Azure DevOps,Trial,Platforms,FALSE,"<p>As the <strong><a href=""https://azure.microsoft.com/en-us/services/devops/"">Azure DevOps</a></strong> ecosystem keeps growing, our teams are using it more with success. The ecosystem comprises a set of managed services, including hosted Git repos, build and deployment pipelines, automated testing tooling, backlog management tooling and an artifact repository. Our teams have been gaining experience with the platform with good results, a sign that Azure DevOps is maturing. We particularly like its flexibility; it allows you to use the services you want even if they're from different providers. For instance, you could use an external Git repository while still using the Azure DevOps pipeline services. Our teams are especially excited about <a href=""https://azure.microsoft.com/en-us/services/devops/pipelines/"">Azure DevOps Pipelines</a>. As the ecosystem matures, we're seeing an uptick in onboarding teams that are already on the Azure stack as it easily integrates with the rest of the Microsoft world.</p>"
Azure Pipeline templates,Trial,Platforms,TRUE,"<p><strong><a href=""https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops"">Azure Pipeline templates</a></strong> allow you to remove duplication in your Azure Pipeline definition through two mechanisms. With ""includes"" templates, you can reference a template such that it will expand inline like a parameterized C++ macro, allowing a simple way of factoring out common configuration across stages, jobs and steps. With ""extends"" templates, you can define an outer shell with common pipeline configuration, and with the <a href=""https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass#required-template"">required template approval</a>, you can fail the build if the pipeline doesn't extend certain templates, preventing malicious attacks against the pipeline configuration itself. Along with <a href=""/radar/platforms/circleci"">CircleCI</a> Orbs and the newer <a href=""/radar/platforms/reusable-workflows-in-github-actions"">GitHub Actions Reusable Workflows</a>, Azure Pipeline templates are part of the trend of creating modularity in pipeline design across multiple platforms, and several of our teams have been happy using them.</p>"
CircleCI,Trial,Platforms,FALSE,"<p>Many of our teams choose <strong><a href=""http://circleci.com/"">CircleCI</a></strong> for their continuous integration needs, and they appreciate its ability to run complex pipelines efficiently. CircleCI's developers continue to add new features, with the platform now in version 3.0. <a href=""https://circleci.com/docs/2.0/concepts/#orbs"">Orbs</a> and <a href=""https://circleci.com/docs/2.0/executor-types/"">executors</a> were called out by our teams as being particularly useful. Orbs are reusable snippets of code that automate repeated processes, speed up project setup and make it easy to integrate with third-party tools. The wide variety of executor types provides flexibility to set up jobs in Docker, Linux, macOS or Windows VMs.</p>"
Couchbase,Trial,Platforms,FALSE,"<p>When we originally blipped <strong><a href=""https://www.couchbase.com/"">Couchbase</a></strong> in 2013, it was seen primarily as a persistent cache that evolved from a merger of <a href=""https://github.com/membase"">Membase</a> and <a href=""https://couchdb.apache.org/"">CouchDB</a>. Since then, it has undergone steady improvement and an ecosystem of related tools and commercial offerings has grown up around it. Among the additions to the product suite are Couchbase Mobile and the Couchbase Sync Gateway. These features work together to keep persistent data on edge devices up-to-date even when the device is offline for periods of time due to intermittent connectivity. As these devices proliferate, we see increasing need for embedded persistence that continues to work whether or not the device happens to be connected. Recently, one of our teams evaluated Couchbase for its offline sync capability and found that this off-the-shelf capability saved them considerable effort that they otherwise would have had to invest themselves.</p>"
eBPF,Trial,Platforms,FALSE,"<p>For several years now, the Linux kernel has included the extended Berkeley Packet Filter (<strong><a href=""https://ebpf.io/"">eBPF</a></strong>), a virtual machine that provides the ability to attach filters to particular sockets. But eBPF goes far beyond packet filtering and allows custom scripts to be triggered at various points within the kernel with very little overhead. Although this technology isn't new, it's now coming into its own with the increasing use of microservices deployed as orchestrated containers. Kubernetes and service mesh technology such as <a href=""/radar/platforms/istio"">Istio</a> are commonly used, and they employ sidecars to implement control functionality. With new tools — <a href=""https://github.com/solo-io/bumblebee"">Bumblebee</a> in particular makes building, running and distributing eBPF programs much easier — eBPF can be seen as an alternative to the traditional sidecar. A maintainer of <a href=""/radar/tools/cilium"">Cilium</a>, a tool in this space, has even proclaimed the <a href=""https://isovalent.com/blog/post/2021-12-08-ebpf-servicemesh"">demise of the sidecar</a>. An approach based on eBPF reduces some of the performance and operational overhead that comes with sidecars, but it doesn't support common features such as SSL termination.</p>
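<p>To illustrate how approachable kernel-level hooks have become, this minimal sketch uses BCC's Python bindings to load a probe that prints a line whenever the clone syscall fires. It assumes the bcc package and root privileges, and the exact syscall symbol name varies across kernel versions:</p>
<pre><code>from bcc import BPF  # BCC's Python bindings; requires root privileges

# A tiny eBPF program, compiled and loaded into the kernel at runtime.
program = r'''
int kprobe__sys_clone(void *ctx) {
    bpf_trace_printk(""process forked\n"");
    return 0;
}
'''

b = BPF(text=program)  # the kprobe__ prefix auto-attaches to the named function
b.trace_print()        # stream kernel trace output until interrupted
</code></pre>"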
GitHub Actions,Trial,Platforms,FALSE,"<p><strong><a href=""https://docs.github.com/en/actions"">GitHub Actions</a></strong> has grown considerably over the last year. It has proven that it can take on more complex workflows and, among other things, call other actions from composite actions. It still has some shortcomings, though, such as its inability to re-trigger a single job of a workflow. Although the ecosystem in the <a href=""https://github.com/marketplace?type=actions"">GitHub Marketplace</a> has its obvious advantages, giving third-party GitHub Actions access to your build pipeline risks sharing secrets in insecure ways (we recommend following GitHub's advice on <a href=""https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions"">security hardening</a>). However, the convenience of creating your build workflow directly in GitHub next to your source code, combined with the ability to run GitHub Actions locally using open-source tools such as <a href=""https://github.com/nektos/act"">act</a>, is a compelling option that has facilitated setup and onboarding for our teams.</p>"
GitLab CI/CD,Trial,Platforms,TRUE,"<p>If you're using <a href=""https://gitlab.com/"">GitLab</a> to manage your software delivery, you should also look at <strong><a href=""https://docs.gitlab.com/ee/ci/"">GitLab CI/CD</a></strong> for your continuous integration and continuous delivery needs. We've found it especially useful when used with on-premise GitLab and self-hosted runners, as this combination gets around authorization headaches often caused by using a cloud-based solution. Self-hosted runners can be fully configured for your purposes with the right OS and dependencies installed, and as a result pipelines can run much faster than using a cloud-provisioned runner that needs to be configured each time.</p>
<p>Apart from the basic build, test and deploy pipeline, GitLab's product supports Services, Auto DevOps and ChatOps, among other advanced features. Services are useful for running Docker services such as Postgres or <a href=""/radar/languages-and-frameworks/testcontainers"">Testcontainers</a> linked to a job for integration and end-to-end testing. Auto DevOps creates pipelines with zero configuration, which is very useful for teams that are new to continuous delivery or for organizations with many repositories that would otherwise need to create many pipelines manually.</p>"
Google BigQuery ML,Trial,Platforms,FALSE,"<p>Since we last blipped about <strong><a href=""https://cloud.google.com/bigquery-ml/docs"">Google BigQuery ML</a></strong>, more sophisticated models such as deep neural networks and AutoML Tables have been added by connecting BigQuery ML with TensorFlow and Vertex AI as its backend. BigQuery has also introduced support for time series forecasting. One of our concerns previously was <a href=""/radar/techniques/explainability-as-a-first-class-model-selection-criterion"">explainability</a>. Earlier this year, <a href=""https://cloud.google.com/bigquery-ml/docs/reference/standard-sql/bigqueryml-syntax-xai-overview"">BigQuery Explainable AI</a> was announced for general availability, taking a step toward addressing this. We can also export BigQuery ML models to Cloud Storage as a TensorFlow SavedModel and use them for online prediction. There remain trade-offs, such as the ease of ""continuous delivery for machine learning,"" but with its low barrier to entry, BigQuery ML remains an attractive option, particularly when the data already resides in BigQuery.</p>
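<p>Because models are created with SQL statements, training can live alongside the rest of your analytics workflow. Here is a minimal sketch using the official BigQuery Python client; the dataset, table and column names are placeholders:</p>
<pre><code>from google.cloud import bigquery

client = bigquery.Client()  # assumes application default credentials

# Train a simple regression model where the data already lives, using SQL alone.
# Dataset, table and column names below are placeholders.
client.query('''
    CREATE OR REPLACE MODEL demo_dataset.sales_model
    OPTIONS (model_type = 'linear_reg', input_label_cols = ['units_sold']) AS
    SELECT price, promo_flag, units_sold
    FROM demo_dataset.sales_history
''').result()  # blocks until training completes
</code></pre>"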
Google Cloud Dataflow,Trial,Platforms,FALSE,"<p><strong><a href=""https://cloud.google.com/dataflow/"">Google Cloud Dataflow</a></strong> is a cloud-based data-processing service for both batch and real-time data-streaming applications. Our teams are using Dataflow to create processing pipelines for integrating, preparing and analyzing large data sets, with <a href=""https://beam.apache.org/"">Apache Beam</a>'s unified programming model on top to ease manageability. We first featured Dataflow in 2018, and its stability, performance and rich feature set make us confident to move it to Trial in this edition of the Radar.</p>"
Reusable workflows in GitHub Actions,Trial,Platforms,TRUE,"<p>We've seen increased interest in <a href=""/radar/platforms/github-actions"">GitHub Actions</a> since we first blipped it two Radars ago. With the release of <a href=""https://docs.github.com/en/actions/using-workflows/reusing-workflows"">reusable workflows</a>, GitHub continues to evolve the product in a way that addresses some of its early shortcomings. <strong>Reusable workflows in GitHub Actions</strong> bring modularity to pipeline design, allowing parameterized reuse even across repositories (as long as the workflow repository is public). They support explicit passing of confidential values as secrets and can pass outputs to the calling job. With a few lines of YAML, GitHub Actions now gives you the type of flexibility you see with <a href=""/radar/platforms/circleci"">CircleCI</a> Orbs or <a href=""/radar/platforms/azure-pipeline-templates"">Azure Pipeline templates</a>, but without having to leave GitHub as a platform.</p>"
Sealed Secrets,Trial,Platforms,FALSE,"<p><a href=""/radar/platforms/kubernetes"">Kubernetes</a> natively supports a key-value object known as a secret. However, by default, Kubernetes secrets aren't really secret. They're handled separately from other key-value data so that precautions or access control can be applied separately. There is support for encrypting secrets before they are stored in <a href=""https://etcd.io/"">etcd</a>, but the secrets start out as plain text fields in configuration files. <strong><a href=""https://github.com/bitnami-labs/sealed-secrets"">Sealed Secrets</a></strong> is a combination operator and command-line utility that uses asymmetric keys to encrypt secrets so that they can only be decrypted by the controller in the cluster. This process ensures that the secrets won't be compromised while they sit in the configuration files that define a Kubernetes deployment. Once encrypted, these files can be safely shared or stored alongside other deployment artifacts.</p>"
VerneMQ,Trial,Platforms,TRUE,"<p><strong><a href=""https://github.com/vernemq/vernemq"">VerneMQ</a></strong> is an open-source, high-performance, distributed MQTT broker. We've blipped other MQTT brokers in the past, such as <a href=""/radar/platforms/mosquitto"">Mosquitto</a> and <a href=""/radar/platforms/emq"">EMQ</a>. Like EMQ and RabbitMQ, VerneMQ is based on Erlang/OTP, which makes it highly scalable. It scales horizontally and vertically on commodity hardware to support a high number of concurrent publishers and consumers while maintaining low latency and fault tolerance. In our internal benchmarks, we've been able to achieve a few million concurrent connections in a single cluster. While it's not new, we've used it in production for some time now, and it has worked well for us.</p>"
actions-runner-controller,Assess,Platforms,TRUE,"<p><strong><a href=""https://github.com/actions-runner-controller/actions-runner-controller"">actions-runner-controller</a></strong> is a Kubernetes <a href=""https://kubernetes.io/docs/concepts/architecture/controller/"">controller</a> that operates <a href=""https://docs.github.com/en/actions/hosting-your-own-runners"">self-hosted runners</a> for <a href=""/radar/platforms/github-actions"">GitHub Actions</a> on your Kubernetes cluster. With this tool you create a runner resource on Kubernetes, and it will run and operate the self-hosted runner. Self-hosted runners are helpful in scenarios where the jobs your GitHub Actions workflows run need to access resources that aren't accessible to GitHub's cloud runners or have specific operating system and environmental requirements that differ from what GitHub provides. In those cases where you have a Kubernetes cluster, you can run your self-hosted runners as Kubernetes pods, scaling them up or down by hooking into GitHub webhook events. actions-runner-controller is lightweight and scalable.</p>"
Apache Iceberg,Assess,Platforms,TRUE,"<p><strong><a href=""https://iceberg.apache.org/"">Apache Iceberg</a></strong> is an open table format for very large analytic data sets. Iceberg supports modern analytical data operations such as record-level insert, update, delete, <a href=""https://iceberg.apache.org/docs/latest/spark-queries/#time-travel"">time-travel queries</a>, ACID transactions, <a href=""https://iceberg.apache.org/docs/latest/partitioning/#icebergs-hidden-partitioning"">hidden partitioning</a> and <a href=""https://iceberg.apache.org/docs/latest/evolution/"">full schema evolution</a>. It supports multiple underlying file storage formats such as <a href=""https://parquet.apache.org/"">Apache Parquet</a>, <a href=""https://orc.apache.org/"">Apache ORC</a> and <a href=""https://avro.apache.org/docs/1.2.0/"">Apache Avro</a>. Many data-processing engines support Apache Iceberg, including SQL engines such as <a href=""https://www.dremio.com/"">Dremio</a> and <a href=""https://trino.io/"">Trino</a> as well as (structured) streaming engines such as <a href=""https://spark.apache.org/"">Apache Spark</a> and <a href=""https://flink.apache.org/"">Apache Flink</a>.</p>
<p>Apache Iceberg falls in the same category as <a href=""https://delta.io/"">Delta Lake</a> and <a href=""https://hudi.apache.org/"">Apache Hudi</a>. All three support broadly similar features, but each differs in its underlying implementation and detailed feature list. Iceberg is an independent format and is not native to any specific processing engine, hence it's supported by an increasing number of platforms, including <a href=""https://docs.aws.amazon.com/athena/latest/ug/querying-iceberg.html"">AWS Athena</a> and <a href=""https://www.snowflake.com/"">Snowflake</a>. For the same reason, Apache Iceberg, unlike native formats such as Delta Lake, may not benefit from engine-specific optimizations when used with Spark.</p>
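<p>As a small illustration, once a Spark session is configured with the Iceberg runtime, time-travel reads are just a read option away. The option names follow Iceberg's Spark documentation; the table name and id values below are placeholders, and a configured SparkSession is assumed:</p>
<pre><code># Assumes a SparkSession ('spark') already configured with the Iceberg runtime.
current = spark.read.format('iceberg').load('db.events')

# Read the table as it was at an earlier point in time (epoch milliseconds)...
as_of = (spark.read.format('iceberg')
         .option('as-of-timestamp', 1648684800000)
         .load('db.events'))

# ...or as of a specific snapshot id from the table's history.
snapshot = (spark.read.format('iceberg')
            .option('snapshot-id', 10963874102873)
            .load('db.events'))
</code></pre>"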
Blueboat,Assess,Platforms,TRUE,"<p><strong><a href=""https://github.com/losfair/blueboat"">Blueboat</a></strong> is a multitenant platform for serverless web applications. It leverages the popular V8 JavaScript engine and implements commonly used web application libraries natively in <a href=""/radar/languages-and-frameworks/rust"">Rust</a> for security and performance. You can think of Blueboat as an alternative to <a href=""/radar/platforms/cloudflare-workers"">Cloudflare Workers</a> or <a href=""https://deno.com/deploy"">Deno Deploy</a> but with an important distinction — you have to operate and manage the underlying infrastructure. With that distinction in mind, we recommend you carefully assess Blueboat for your on-prem serverless needs.</p>"
Cloudflare Pages,Assess,Platforms,TRUE,"<p>When <a href=""/radar/platforms/cloudflare-workers"">Cloudflare Workers</a> was released, we highlighted it as an early function as a service (FaaS) for edge computing with an interesting implementation. The release of <strong><a href=""https://pages.cloudflare.com/"">Cloudflare Pages</a></strong> last April didn't feel as noteworthy, because Pages is just one in a class of many Git-backed site-hosting solutions. It did have continuous previews, a useful feature not found in most alternatives. Now, though, Cloudflare has more tightly <a href=""https://blog.cloudflare.com/cloudflare-pages-goes-full-stack/"">integrated Workers and Pages</a>, creating a fully integrated <a href=""/radar/techniques/jamstack"">Jamstack</a> solution running on the CDN. The inclusion of a key-value store and a strongly consistent coordination primitive further enhances the attractiveness of the new version of Cloudflare Pages.</p>"
Colima,Assess,Platforms,TRUE,"<p><strong><a href=""https://github.com/abiosoft/colima"">Colima</a></strong> is becoming a popular open alternative to Docker for Desktop. It provisions the Docker container runtime in a Lima VM, configures the Docker CLI on macOS and handles port forwarding and volume mounts. Colima uses <a href=""https://containerd.io/"">containerd</a> as its runtime, which is also the runtime on most managed Kubernetes services, thus improving dev-prod parity. With Colima you can easily use and test the latest features of containerd, such as lazy loading for container images. Given its good performance, we're watching Colima as a strong open-source alternative to Docker for Desktop.</p>"
Collibra,Assess,Platforms,TRUE,"<p>In the increasingly crowded enterprise data catalog market, our teams have enjoyed working with <strong><a href=""https://www.collibra.com/us/en"">Collibra</a></strong>. They liked the deployment flexibility of either a SaaS or self-hosted instance as well as the wide range of functionality included out of the box, such as data governance, lineage, quality and observability. Users also have the option to use only the smaller subset of capabilities required by a more decentralized approach such as a <a href=""/radar/techniques/data-mesh"">data mesh</a>. The real feather in its cap has been Collibra's often overlooked customer support, which our people have found to be collaborative and supportive. Of course, there's a tension between simple data catalogs and more full-featured enterprise platforms, but so far the teams using it are happy with how Collibra has supported their needs.</p>"
CycloneDX,Assess,Platforms,TRUE,"<p><strong><a href=""https://cyclonedx.org/"">CycloneDX</a></strong> is a standard for describing a machine-readable <a href=""/radar/techniques/software-bill-of-materials"">Software Bill of Materials</a> (SBOM). As software and compute fabrics increase in complexity, <em>software</em> becomes harder to define. Originating with OWASP, CycloneDX improves on the older SPDX standard with a broader definition that extends beyond the local machine dependencies to include runtime service dependencies. You'll also find implementations in several languages, an <a href=""https://cyclonedx.org/tool-center/"">ecosystem</a> of supporting integrations and a <a href=""https://github.com/CycloneDX/cyclonedx-cli"">CLI tool</a> that lets you analyze and change SBOMs with appropriate signing and verification.</p>"
Embeddinghub,Assess,Platforms,TRUE,"<p><strong><a href=""https://github.com/featureform/embeddinghub"">Embeddinghub</a></strong> is a vector database for machine-learning <a href=""https://www.featureform.com/post/the-definitive-guide-to-embeddings"">embeddings</a>, quite similar to <a href=""/radar/platforms/milvus-2-0"">Milvus</a>. However, it comes with out-of-the-box support for approximate nearest-neighbor operations, partitioning, versioning and access control, and we recommend you assess Embeddinghub for your embedding vector use cases.</p>"
Temporal,Assess,Platforms,TRUE,"<p><strong><a href=""https://temporal.io/"">Temporal</a></strong> is a platform for developing long-running workflows, particularly for microservice architectures. A fork of Uber's earlier OSS <a href=""https://github.com/uber/cadence"">Cadence</a> project, it uses an event-sourcing model for long-running workflows so they can survive process and machine crashes. Although we don't recommend using distributed transactions in microservice architectures, if you do need to implement them or long-running <a href=""https://microservices.io/patterns/data/saga.html"">Sagas</a>, you may want to look at Temporal.</p>"
tfsec,Adopt,Tools,FALSE,"<p>For our projects using <a href=""/radar/tools/terraform"">Terraform</a>, <strong><a href=""https://github.com/liamg/tfsec"">tfsec</a></strong> has quickly become a default static analysis tool to detect potential security risks. It's easy to integrate into a CI pipeline and has a growing library of checks against all of the major cloud providers and platforms like Kubernetes. Given its ease of use, we believe tfsec could be a good addition to any Terraform project.</p>"
AKHQ,Trial,Tools,TRUE,"<p><strong><a href=""https://akhq.io/docs/#installation"">AKHQ</a></strong> is a GUI for Apache Kafka that lets you manage topics, topic data, consumer groups and more. Some of our teams have found AKHQ to be an effective tool for watching the real-time status of a Kafka cluster. You can, for example, browse the topics on a cluster. For each topic, you can visualize the name, the number of messages stored, the disk size used, the time of the last record, the number of partitions, the replication factor with the in-sync quantity and the consumer group. With options for Avro and Protobuf deserialization, AKHQ can help you understand the flow of data in your Kafka environment.</p>"
cert-manager,Trial,Tools,FALSE,"<p><strong><a href=""https://cert-manager.io/"">cert-manager</a></strong> is a tool to manage your X.509 certificates within your <a href=""/radar/platforms/kubernetes"">Kubernetes</a> cluster. It models certificates and issuers as first-class resource types and provides certificates as a service securely to developers and applications working within the Kubernetes cluster. The obvious choice when using the Kubernetes default ingress controller, it's also recommended for others and much preferred over hand-rolling your own certificate management. Several of our teams have been using cert-manager extensively, and we've also found that its usability has much improved in the past few months.</p>"
Cloud Carbon Footprint,Trial,Tools,FALSE,"<p><strong><a href=""https://www.cloudcarbonfootprint.org/"">Cloud Carbon Footprint</a></strong> (CCF) is an open-source tool that uses cloud APIs to provide visualizations of estimated carbon emissions based on usage across AWS, GCP and Azure. The Thoughtworks team has <a href=""https://www.thoughtworks.com/clients/Bringing-green-cloud-optimization-to-a-green-energy-business"">successfully used the tool</a> with several organizations, including energy technology companies, retailers, digital service providers and companies that use AI. Cloud platform providers realize that it's important to help their customers understand the carbon impact of using their services, so they've begun to build similar functionality themselves. Because CCF is cloud agnostic, it allows users to view energy usage and carbon emissions for multiple cloud providers in one place, while translating carbon footprints into real-world impact such as flights or trees planted.</p>
<p>In recent releases, CCF has begun to include Google Cloud and AWS-sourced optimization recommendations alongside potential energy and CO2 savings, as well as to support more cloud instance types such as GPU instances. Given the traction the tool has received and the continued addition of new features, we feel confident moving it to Trial.</p>"
Conftest,Trial,Tools,FALSE,"<p><strong><a href=""https://github.com/open-policy-agent/conftest"">Conftest</a></strong> is a tool for writing tests against structured configuration data. It relies on the <a href=""https://www.openpolicyagent.org/docs/latest/policy-language/#what-is-rego"">Rego language</a> from <a href=""/radar/tools/open-policy-agent-opa"">Open Policy Agent</a> to write tests for <a href=""/radar/platforms/kubernetes"">Kubernetes</a> configurations, <a href=""/radar/platforms/tekton"">Tekton</a> pipeline definitions or even <a href=""/radar/tools/terraform"">Terraform</a> plans. We've had great experiences with Conftest — and its shallow learning curve. With fast feedback from tests, our teams iterate quickly and safely on configuration changes to Kubernetes.</p>"
kube-score,Trial,Tools,TRUE,"<p><strong><a href=""https://github.com/zegl/kube-score"">kube-score</a></strong> is a tool that does static code analysis of your Kubernetes object definitions. The output is a list of recommendations for what you can improve to make your application more secure and resilient. It has a list of <a href=""https://github.com/zegl/kube-score/blob/master/README_CHECKS.md"">predefined checks</a>, which includes best practices such as running containers with non-root privileges and correctly specifying resource limits. It's been around for some time, and we've used it in a few projects as part of a CD pipeline for Kubernetes manifests. A major drawback of kube-score is that you can't add custom policies, so we typically supplement it with tools like <a href=""/radar/tools/conftest"">Conftest</a> in these cases.</p>"
Lighthouse,Trial,Tools,FALSE,"<p><strong><a href=""https://developers.google.com/web/tools/lighthouse/"">Lighthouse</a></strong> is a tool written by Google to assess web applications and web pages, collecting performance metrics and insights on good development practices. We've long advocated for <a href=""/radar/techniques/performance-testing-as-a-first-class-citizen"">performance testing as a first-class citizen</a>, and the additions to Lighthouse that we mentioned five years ago certainly helped with that. Our thinking around <a href=""/radar/techniques/architectural-fitness-function"">architectural fitness functions</a> created strong motivation for tools such as Lighthouse to be run in build pipelines. With the introduction of <a href=""https://github.com/GoogleChrome/lighthouse-ci"">Lighthouse CI</a>, it has become easier than ever to include Lighthouse in pipelines managed by <a href=""https://github.com/GoogleChrome/lighthouse-ci/blob/main/docs/getting-started.md#configure-your-ci-provider"">various tools</a>.</p>"
Metaflow,Trial,Tools,TRUE,"<p><strong><a href=""https://github.com/Netflix/metaflow"">Metaflow</a></strong> is a user-friendly Python library and back-end service that helps data scientists and engineers build and manage production-ready data processing, ML training and inference workflows. Metaflow provides Python APIs that structure the code as a directed graph of steps. Each step can be decorated with flexible configurations such as the required compute and storage resources. Code and data artifacts for each step's run (aka task) are stored and can be retrieved either for future runs or the next steps in the flow, enabling you to recover from errors, repeat runs and track versions of models and their dependencies across multiple runs.</p>
<p>The value proposition of Metaflow is the simplicity of its idiomatic Python library: it fully integrates with the build and runtime infrastructure to enable running data engineering and science tasks in both local and scaled production environments. At the time of writing, Metaflow is heavily integrated with AWS services, such as S3 for its datastore and Step Functions for orchestration. Metaflow supports R in addition to Python. Its core features are open sourced.</p>
<p>If you're building and deploying your production ML and data-processing pipelines on AWS, Metaflow is a lightweight, full-stack alternative to more complex platforms such as <a href=""/radar/tools/mlflow"">MLflow</a>.</p>
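<p>A minimal flow shows the idiomatic style: plain Python methods become steps in a directed graph, and per-step compute hints are attached as decorators. The flow itself is a contrived example, and the resource numbers are arbitrary:</p>
<pre><code>from metaflow import FlowSpec, resources, step

class TrainingFlow(FlowSpec):

    @step
    def start(self):
        self.raw = [1.0, 2.0, 3.0]  # stand-in for real data loading
        self.next(self.train)

    @resources(memory=16000, cpu=4)  # per-step resource hints (arbitrary values)
    @step
    def train(self):
        # Artifacts assigned to self are stored and versioned for each run.
        self.model = sum(self.raw) / len(self.raw)  # stand-in for real training
        self.next(self.end)

    @step
    def end(self):
        print('model:', self.model)

if __name__ == '__main__':
    TrainingFlow()  # run with: python training_flow.py run
</code></pre>"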
Micrometer,Trial,Tools,TRUE,"<p><strong><a href=""https://micrometer.io/"">Micrometer</a></strong> is a platform-agnostic library for metrics instrumentation on the JVM that supports Graphite, New Relic, CloudWatch and many other integrations. We've found that Micrometer has benefited both library authors and teams: library authors can include metrics instrumentation code in their libraries without needing to support each and every metrics system their users rely on, and teams can send metrics to many different back-end registries, which enables organizations to collect metrics in a consistent way.</p>"
NUKE,Trial,Tools,TRUE,"<p><strong><a href=""https://nuke.build/"">NUKE</a></strong> is a build system for .NET and an alternative both to the traditional MSBuild and to <a href=""https://cakebuild.net/"">Cake</a> and <a href=""https://fake.build/"">Fake</a>, which we've featured previously in the Radar. NUKE represents build instructions as a C# DSL, making it easy to learn and giving it good IDE support. In our experience, NUKE made build automation for .NET projects really simple. We like the accurate static code checks and hints. We also like that we can use any NuGet package seamlessly and that the automation code can be compiled to avoid problems at runtime. NUKE isn't new, but its novel approach — using a C# DSL — and our positive overall experience prompted us to include it here.</p>"
Pactflow,Trial,Tools,FALSE,"<p>We've used <a href=""https://github.com/pact-foundation"">Pact</a> for contract testing long enough to see some of the complexity that comes with scale. Some of our teams have successfully used <strong><a href=""https://pactflow.io/"">Pactflow</a></strong> to reduce that friction. Pactflow runs both as software as a service and as an on-prem deployment with the same features as the SaaS offering, and it adds improved usability, security and auditing on top of the open-source Pact Broker offering. We've been pleased with our use so far and are happy to see continued effort to remove some of the overhead of managing contract testing at scale.</p>"
Podman,Trial,Tools,FALSE,"<p>As an alternative to <a href=""/radar/platforms/docker"">Docker</a>, <strong><a href=""https://github.com/containers/podman"">Podman</a></strong> has been validated by many of our teams. Podman introduces a daemonless engine for managing and running containers, an interesting departure from Docker's client-daemon architecture. Additionally, Podman can be easily run as a normal user <a href=""/radar/platforms/rootless-containers"">without requiring root privileges</a>, which reduces the attack surface. By using either <a href=""https://opencontainers.org/"">Open Container Initiative</a> (OCI) images built by <a href=""https://github.com/containers/buildah"">Buildah</a> or Docker images, Podman can be adapted to most container use cases. Apart from some compatibility issues with macOS, our teams have had generally good experiences with Podman on Linux distributions.</p>"
Sourcegraph,Trial,Tools,FALSE,"<p>In our previous Radar, we featured two tools that search and replace code using an abstract syntax tree (AST) representation, <a href=""/radar/tools/comby"">Comby</a> and <strong><a href=""https://about.sourcegraph.com/"">Sourcegraph</a></strong>. Although they share some similarities, they also differ in several ways. Sourcegraph is a commercial tool (with a 10-user free tier). It's particularly suited for searching, navigating or cross-referencing in large codebases, with an emphasis on an interactive developer experience. In contrast, Comby is a lightweight open-source command-line tool for automating repetitive tasks. Because Sourcegraph is a hosted service, it also has the ability to continuously monitor codebases and send alerts when a match occurs. Now that we've gained more experience with Sourcegraph, we decided to move it into the Trial ring to reflect our positive experience — which doesn't mean that Sourcegraph is better than Comby. Each tool focuses on a different niche.</p>"
Syft,Trial,Tools,TRUE,"<p>One of the key elements of improving ""supply chain security"" is using a <a href=""/radar/techniques/software-bill-of-materials"">Software Bill of Materials (SBOM)</a>, which is why publishing an SBOM along with the software artifact is increasingly important. <strong><a href=""https://github.com/anchore/syft"">Syft</a></strong> is a CLI tool and Go library for generating an SBOM from container images and file systems. It can generate the SBOM output in multiple formats, including JSON, <a href=""/radar/platforms/cyclonedx"">CycloneDX</a> and SPDX. The SBOM output of Syft can be used by <a href=""/radar/tools/grype"">Grype</a> for vulnerability scanning. One way to publish the generated SBOM along with the image is to add it as an attestation using <a href=""/radar/tools/cosign"">Cosign</a>. This allows consumers of the image to verify the SBOM and to use it for further analysis.</p>"
Volta,Trial,Tools,TRUE,"<p>When working on multiple JavaScript codebases at the same time, it's often necessary to use different versions of Node and other JavaScript tools. On developer machines, these tools are usually installed in the user account or the machine itself, which means a solution is needed to switch between multiple installations. For Node itself there's nvm, but we want to highlight <strong><a href=""https://volta.sh/"">Volta</a></strong> as an alternative that we're seeing in use with our teams. Volta has several advantages over using nvm: it can manage other JavaScript tools such as Yarn; it also has the notion of pinning a version of the toolchain on a project basis, which means that developers can simply use the tools in a given code directory without having to worry about manually switching between tool versions — Volta simply uses shims in the path to select the pinned version. Written in Rust, Volta is fast and ships as a single binary without dependencies.</p>"
Web Test Runner,Trial,Tools,TRUE,"<p><strong><a href=""https://modern-web.dev/docs/test-runner/overview/"">Web Test Runner</a></strong> is a package within the <a href=""https://modern-web.dev/"">Modern Web</a> project, which provides several high-quality tools for modern web development with support for web standards like ES Modules. Web Test Runner is a test runner for web applications. One of its advantages compared to existing test runners is that it runs tests in the browser (which can be headless). It supports multiple browser launchers — including <a href=""/radar/languages-and-frameworks/puppeteer"">Puppeteer</a>, <a href=""/radar/tools/playwright"">Playwright</a> and Selenium — and uses Mocha by default for the test framework. The tests run pretty fast, and we like that we can open a browser window with devtools when debugging. Web Test Runner internally uses <a href=""https://modern-web.dev/docs/dev-server/overview/"">Web Dev Server</a>, which allows us to leverage its great plugin API for adding customized plugins for our test suite. The Modern Web toolchain looks very promising, and we're already using it in a few projects.</p>"
CDKTF,Assess,Tools,TRUE,"<p>By now many organizations have created sprawling landscapes of services in the cloud. Of course, this is only possible when using <a href=""/radar/techniques/infrastructure-as-code"">infrastructure as code</a> and mature tooling. We still like <a href=""/radar/tools/terraform"">Terraform</a>, not least because of its rich and growing ecosystem. However, the lack of abstractions in HCL, Terraform's default configuration language, effectively creates a glass ceiling. Using <a href=""/radar/tools/terragrunt"">Terragrunt</a> pushes that up a bit further, but more and more often our teams find themselves longing for the abstractions afforded by modern programming languages. <a href=""https://www.terraform.io/cdktf""><strong>Cloud Development Kit for Terraform (CDKTF)</strong></a>, which resulted from a collaboration between AWS's <a href=""/radar/platforms/aws-cloud-development-kit"">CDK</a> team and HashiCorp, makes it possible for teams to use several programming languages, including TypeScript and Java, to define and provision infrastructure. With this approach it follows the lead of <a href=""/radar/platforms/pulumi"">Pulumi</a> while remaining in the Terraform ecosystem. We've had good experiences with CDKTF but have decided to keep it in the Assess ring until it moves out of beta.</p>"
Chrome Recorder panel,Assess,Tools,TRUE,"<p><strong><a href=""https://developer.chrome.com/docs/devtools/recorder/"">Chrome Recorder panel</a></strong> is a preview feature in Google Chrome 97 that allows for simple record and playback of user journeys. While this definitely isn't a new idea, the way in which it is integrated into Chrome allows for quick creation, editing and running of scripts. The panel also integrates nicely with the performance panel, which makes getting repeated consistent feedback on page performance easier. While record/playback style testing always needs to be used with care in order to avoid brittle tests, we think this preview feature is worth assessing, especially if you're already using the Chrome Performance panel to measure your pages.</p>"
Excalidraw,Assess,Tools,TRUE,"<p><strong><a href=""https://excalidraw.com/"">Excalidraw</a></strong> is a simple but powerful online drawing tool that our teams enjoy using. Sometimes teams just need a quick picture instead of a formal diagram; for remote teams, Excalidraw provides a quick way to create and share diagrams. Our teams also like the ""lo-fi"" look of the diagrams it can produce, which is reminiscent of the whiteboard diagrams they would have produced when co-located. One caveat: you need to pay attention to the default security — at the time of writing, anyone who has the link can see the diagram. A paid version adds authentication.</p>"
GitHub Codespaces,Assess,Tools,TRUE,"<p><strong><a href=""https://github.com/features/codespaces"">GitHub Codespaces</a></strong> allows developers to create <a href=""/radar/techniques/development-environments-in-the-cloud"">development environments in the cloud</a> and access them through an IDE as though the environment were local. GitHub isn't the first company to implement this idea; we previously blipped about <a href=""/radar/tools/gitpod"">Gitpod</a>. We like that Codespaces allows environments to be standardized by using dotfiles configuration, making it quicker to onboard new team members, and that they offer VMs with up to 32 cores and 64GB memory. These VMs can be spun up in under ten seconds, potentially offering environments more powerful than a developer laptop.</p>"
GoReleaser,Assess,Tools,TRUE,"<p><a href=""https://github.com/goreleaser/goreleaser""><strong>GoReleaser</strong></a> is a tool that automates the process of building and releasing a Go project for different architectures via multiple repositories and channels, a common need for Go projects targeting different platforms. You can run the tool either from your local machine or via CI; it's available through several CI services, which minimizes setup and maintenance. GoReleaser takes care of build, packaging, publishing and announcement of each release and supports different combinations of package format, package repository and source control. Although it's been around for a few years, we're surprised that more teams are not using it. If you're regularly releasing a Go codebase, this tool is worth assessing.</p>"
Grype,Assess,Tools,TRUE,"<p>Securing the software supply chain has become a commonplace concern among delivery teams, a concern that is reflected by the growing number of new tools in this space. <strong><a href=""https://github.com/anchore/grype"">Grype</a></strong> is a new lightweight vulnerability scanning tool for Docker and OCI images. It can be installed as a binary, scans images before they're pushed to a registry and doesn't require a Docker daemon to be running on your build agents. Grype comes from the same team that is behind <a href=""/radar/tools/syft"">Syft</a>, which generates <a href=""/radar/techniques/software-bill-of-materials"">SBOMs</a> in various formats from container images. Grype can consume the SBOM output of Syft to scan for vulnerabilities.</p>"
Infracost,Assess,Tools,TRUE,"<p>One often-cited advantage of moving to the cloud is transparency around infrastructure spend. In our experience, this is often not the case. Teams don't always think about the decisions they make around infrastructure in terms of financial cost, which is why we previously blipped about <a href=""/radar/techniques/run-cost-as-architecture-fitness-function"">run cost as architecture fitness function</a>. We're intrigued by the release of a new tool called <strong><a href=""https://infracost.io/"">Infracost</a></strong> which aims to make cost trade-offs visible in Terraform pull requests. It's open-source software, available for macOS, Linux, Windows and Docker, and it supports AWS, GCP and Microsoft Azure pricing out of the box. It also provides a public API that can be queried for current cost data. Our teams are excited by its potential, especially when it comes to gaining better cost visibility in the IDE.</p>"
jc,Assess,Tools,TRUE,"<p>In our previous Radar, we placed <a href=""/radar/tools/modern-unix-commands"">modern Unix commands</a> in Assess. One of the commands featured in that collection of tools was jq, effectively a sed for JSON. <strong><a href=""https://kellyjonbrazil.github.io/jc/docs/"">jc</a></strong> performs a related task: it takes the output of common Unix commands and parses it into JSON. The two commands together provide a bridge between the Unix CLI world and the raft of libraries and tools that operate on JSON. When writing <em>simple</em> scripts, for example, for software deployment or gathering troubleshooting information, having the myriad different Unix command output formats mapped into well-defined JSON can save a lot of time and effort. As with jq, you need to make sure the command is available. It can be installed from many of the well-known package repositories.</p>
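<p>jc is itself written in Python, so in addition to the CLI it can be used as a library. A minimal sketch; the uptime parser and the field name are examples taken from jc's documented parser schemas:</p>
<pre><code># pip install jc -- using jc as a Python library rather than via the CLI
import subprocess
import jc

output = subprocess.run(['uptime'], capture_output=True, text=True).stdout
data = jc.parse('uptime', output)  # a dict of typed, well-named fields
print(data['load_1m'])             # per the uptime parser's schema
</code></pre>
<p>On the command line, the equivalent is <code>uptime | jc --uptime</code>, the output of which can be piped into jq.</p>"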
skopeo,Assess,Tools,TRUE,"<p><strong><a href=""https://github.com/containers/skopeo"">skopeo</a></strong> is a command line utility that performs various operations on container images and image repositories. It doesn't require a user to be root to do most of its operations, nor does it require a daemon to be running. It's a useful part of a CI pipeline; we've used it to copy images from one registry to another as we promote the images. It's better than doing a pull and a push as we don't need to store the images locally. It's not a new tool, but it's useful and underutilized enough that we felt it worth calling out.</p>"
SQLFluff,Assess,Tools,TRUE,"<p>While linting is an ancient practice in the software world, it's had slower adoption in the data world. <strong><a href=""https://docs.sqlfluff.com/en/stable/"">SQLFluff</a></strong> is a cross-dialect SQL linter written in Python that ships with a simple command line interface (CLI), making it easy to incorporate into a CI/CD pipeline. If you're comfortable with the default conventions, SQLFluff works out of the box after installation and enforces a strongly opinionated set of formatting standards; setting your own conventions involves adding a configuration dotfile. The CLI can automatically fix certain classes of violations that involve formatting concerns like whitespace or uppercasing of keywords. SQLFluff is still new, but we're excited to see SQL getting some attention in the linting world.</p>
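<p>Because SQLFluff is a Python package, it can also be called in-process. A small sketch using its documented lint and fix entry points; the SQL string is invented for this example:</p>
<pre><code># pip install sqlfluff -- calling the Python API instead of the CLI
import sqlfluff

violations = sqlfluff.lint('SELECT  a,b from tbl', dialect='ansi')
for violation in violations:
    print(violation['code'], violation['description'])

# returns the SQL with auto-fixable violations corrected
print(sqlfluff.fix('SELECT  a,b from tbl', dialect='ansi'))
</code></pre>
<p>In a pipeline, the same checks run via <code>sqlfluff lint</code> and <code>sqlfluff fix</code> on the command line.</p>"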
Terraform Validator,Assess,Tools,TRUE,"<p>Organizations that have adopted <a href=""/radar/techniques/infrastructure-as-code"">infrastructure as code</a> and self-service infrastructure platforms are looking for ways to give teams a maximum of autonomy while still enforcing good security practices and organizational policies. We've highlighted <a href=""/radar/tools/tfsec"">tfsec</a> before and are moving it into the Adopt ring in this Radar. For teams working on GCP, <a href=""https://github.com/GoogleCloudPlatform/terraform-validator""><strong>Terraform Validator</strong></a> could be an option when creating a policy library, a set of constraints that are checked against Terraform configurations.</p>"
Typesense,Assess,Tools,TRUE,"<p><strong><a href=""https://github.com/typesense/typesense"">Typesense</a></strong> is a fast, typo-tolerant text search engine. For use cases with large volumes of data, Elasticsearch might still be a good option as it provides a horizontally scalable disk-based search solution. However, if you're building a latency-sensitive search application with a search index that fits in memory, Typesense is a powerful alternative and another option to evaluate alongside tools such as <a href=""/radar/platforms/meilisearch"">Meilisearch</a>.</p>
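<p>A short sketch with the Python client; the server address, API key and the tiny books collection are placeholders for this example:</p>
<pre><code># pip install typesense -- a minimal usage sketch; values are placeholders
import typesense

client = typesense.Client({
    'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
    'api_key': 'xyz',
    'connection_timeout_seconds': 2,
})

client.collections.create({
    'name': 'books',
    'fields': [{'name': 'title', 'type': 'string'}],
})
client.collections['books'].documents.create({'title': 'The Name of the Wind'})

# typo-tolerant search: the misspelled query should still find the document
hits = client.collections['books'].documents.search({'q': 'wnd', 'query_by': 'title'})
print(hits['found'])
</code></pre>"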
SwiftUI,Adopt,languages-and-frameworks,FALSE,"<p>When Apple introduced <strong><a href=""https://developer.apple.com/xcode/swiftui/"">SwiftUI</a></strong> a few years ago, it was a big step forward for implementing user interfaces on all kinds of devices made by Apple. From the beginning, we liked the declarative, code-centric approach and the reactive programming model provided by <a href=""/radar/languages-and-frameworks/combine"">Combine</a>. We did notice, though, that writing a lot of view tests, which you still need with a model-view-viewmodel (MVVM) pattern, was not really sensible with the XCUITest automation framework provided by Apple. This gap has been closed by <a href=""/radar/languages-and-frameworks/viewinspector"">ViewInspector</a>. A final hurdle was the minimum OS version required. At the time of release, only the very latest versions of iOS and macOS could run applications written with SwiftUI, but because of Apple's regular cadence of updates, SwiftUI apps can now run on practically all versions of macOS and iOS that receive security updates.</p>"
Testcontainers,Adopt,languages-and-frameworks,FALSE,"<p>We've had enough experience with <strong><a href=""https://www.testcontainers.org/"">Testcontainers</a></strong> that we think it's a useful default option for creating a reliable environment for running tests. It's a library, ported to <a href=""https://github.com/testcontainers"">multiple languages</a>, that Dockerizes common test dependencies — including various types of databases, queuing technologies, cloud services and UI testing dependencies like web browsers — with the ability to run custom Dockerfiles when needed. It works well with test frameworks like JUnit, is flexible enough to let users manage the container lifecycle and advanced networking, and it quickly sets up an integrated test environment. Our teams have consistently found this library of programmable, lightweight and disposable containers to make functional tests more reliable.</p>
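<p>A minimal sketch using the Python port; the Postgres image tag is an example, and the Java original follows the same pattern:</p>
<pre><code># pip install testcontainers sqlalchemy psycopg2-binary
import sqlalchemy
from testcontainers.postgres import PostgresContainer

# starts a throwaway Postgres container in Docker just for this test
with PostgresContainer('postgres:14') as postgres:
    engine = sqlalchemy.create_engine(postgres.get_connection_url())
    with engine.connect() as conn:
        print(conn.execute(sqlalchemy.text('select version()')).fetchone())
# the container is stopped and removed when the block exits
</code></pre>"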
Bob,Trial,languages-and-frameworks,TRUE,"<p>When building an app with React Native you sometimes find yourself having to create your own modules. For example, we've encountered this need when building a UI component library for a React Native app. Creating such a module project isn't straightforward, and our teams report success using <strong><a href=""https://github.com/callstack/react-native-builder-bob"">Bob</a></strong> to automate this task. Bob provides a CLI to create the scaffolding for different targets. The scaffolding is not limited to core functionality but, optionally, can include example code, linters, build pipeline configuration and other features.</p>"
Flutter-Unity widget,Trial,languages-and-frameworks,TRUE,"<p>Flutter is increasingly popular for building cross-platform mobile apps, and Unity is great for building AR/VR experiences. A key piece in the puzzle for integrating Unity and Flutter is the <strong><a href=""https://github.com/juicycleff/flutter-unity-view-widget"">Flutter-Unity widget</a></strong>, which allows embedding Unity apps inside Flutter widgets. One of the key capabilities the widget offers is bi-directional communication between Flutter and Unity. We've found its performance to be pretty good as well, and we're looking forward to leveraging Unity in more Flutter apps.</p>"
Kotest,Trial,languages-and-frameworks,FALSE,"<p><strong><a href=""https://kotest.io/"">Kotest</a></strong> (previously KotlinTest) is a stand-alone testing tool for the <a href=""/radar/languages-and-frameworks/kotlin"">Kotlin</a> ecosystem that is continuing to gain traction within our teams across various Kotlin implementations — native, JVM or JavaScript. Key advantages are that it offers a variety of testing styles in order to structure the test suites and that it comes with a comprehensive set of matchers, which allow for expressive tests in an elegant internal DSL. In addition to its support for <a href=""/radar/techniques/property-based-unit-testing"">property-based testing</a> — a technique we've highlighted previously in the Radar — our teams like the solid IntelliJ plugin and the growing community of support.</p>"
Swift Package Manager,Trial,languages-and-frameworks,TRUE,"<p>Some programming languages, especially newer ones, have a package and dependency management solution built in. When it was introduced in 2014, Swift didn't come with a package manager, and so the macOS and iOS developer community simply kept using CocoaPods and <a href=""/radar/tools/carthage"">Carthage</a>, the third-party solutions that had been created for Objective-C. A couple of years later <strong><a href=""https://github.com/apple/swift-package-manager"">Swift Package Manager</a></strong> (SwiftPM) was started as an official Apple open-source project, and it then took another few years before Apple added support for it to Xcode. Even at that point, though, many development teams continued to use CocoaPods and Carthage, mostly because many packages were simply not available via SwiftPM. Now that most packages can be included via SwiftPM and processes have been further streamlined for both creators and consumers of packages, our teams are increasingly relying on SwiftPM.</p>"
Vowpal Wabbit,Trial,languages-and-frameworks,FALSE,"<p><strong><a href=""https://vowpalwabbit.org/"">Vowpal Wabbit</a></strong> is a general-purpose machine-learning library. Originally created at Yahoo! Research over a decade ago, Vowpal Wabbit continues to implement new algorithms in reinforcement learning. We want to highlight <a href=""https://vowpalwabbit.org/blog/vowpalwabbit-9.0.0.html"">Vowpal Wabbit 9.0</a>, a major release after six years, and encourage you to plan the <a href=""https://vowpalwabbit.org/docs/vowpal_wabbit/python/latest/reference/python_8110_900_migration_guide.html"">migration</a>, as it includes several usability improvements, new reductions and bug fixes.</p>
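<p>Among the usability improvements is a cleaner Python entry point: in 9.x, <code>vowpalwabbit.Workspace</code> replaces the older <code>pyvw.vw</code> class. A minimal sketch; the example data is invented:</p>
<pre><code># pip install vowpalwabbit -- the 9.x Python API; the data is invented
import vowpalwabbit

model = vowpalwabbit.Workspace('--quiet')
# examples in VW's native text format: label | feature:value ...
model.learn('1 | price:.23 sqft:.25 age:.05 2006')
model.learn('0 | price:.18 sqft:.15 age:.35 1976')
print(model.predict('| price:.46 sqft:.4 age:.10 1924'))
model.finish()
</code></pre>"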
Android Gradle plugin - Kotlin DSL,Assess,languages-and-frameworks,TRUE,"<p><strong>Android Gradle plugin Kotlin DSL</strong> added support for Kotlin Script as an alternative to Groovy for Gradle build scripts. The goal of replacing Groovy with Kotlin is to provide better support for refactoring and simpler editing in IDEs as well as ultimately to produce code that is easier to read and maintain. For teams already using Kotlin it also means working on the build in a familiar language. One of our teams <a href=""https://developer.android.com/studio/build/migrate-to-kts"">migrated</a> a 450-line build script, at least seven years old, within a few days. If you have large or complex Gradle build scripts, then it's worth assessing whether Kotlin Script will produce better outcomes for your teams.</p>"
Azure Bicep,Assess,languages-and-frameworks,TRUE,"<p>For those who prefer a more natural language than JSON for infrastructure code, <strong><a href=""https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/overview?tabs=bicep"">Azure Bicep</a></strong> is a domain-specific language (DSL) that uses a declarative syntax. It supports reusable parameterized templates for modular resource definitions. A <a href=""https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep"">Visual Studio Code extension</a> provides instant type-safety, intellisense and syntax checking, and the compiler allows bidirectional transpilation to and from ARM templates. Bicep's resource-oriented DSL and native integration with the Azure ecosystem make it a compelling choice for Azure infrastructure development.</p>"
Capacitor,Assess,languages-and-frameworks,TRUE,"<p>We've been debating the merits of cross-platform mobile development tools for nearly as long as we've been publishing the Technology Radar. We first noted a new generation of tools in 2011 when blipping about <a href=""/radar/tools/cross-mobile-platforms"">cross-mobile platforms</a>. Although we were skeptical of them at first, these tools have been perfected and widely adopted over the years. And nobody can debate the enduring popularity and usefulness of <a href=""/radar/languages-and-frameworks/react-native"">React Native</a>. <strong><a href=""https://capacitorjs.com/"">Capacitor</a></strong> is the latest generation of a line of tools starting with PhoneGap, then renamed to <a href=""/radar/platforms/phonegap-apache-cordova"">Apache Cordova</a>. Capacitor is a complete rewrite from the Ionic team that embraces the <a href=""/radar/techniques/progressive-web-applications"">progressive web app</a> style for stand-alone applications. So far, our developers like that they can address web, iOS and Android applications with a single code base and that they can manage the native platforms separately with access to the native APIs when necessary. Capacitor offers an alternative to React Native, which has many years of cross-platform experience behind it.</p>"
Java 17,Assess,languages-and-frameworks,TRUE,"<p>We don't routinely feature new versions of languages, but we wanted to highlight the new long-term support (LTS) version of Java, version 17. While there are promising new features, such as the preview of <a href=""https://openjdk.java.net/jeps/406"">pattern matching</a>, it's the switch to the new LTS process that should interest many organizations. We recommend organizations assess new releases of Java as and when they become available, making sure they adopt new features and versions as appropriate. Surprisingly, many organizations do not routinely adopt newer versions of languages even though regular updates help keep things small and manageable. Hopefully the new LTS process, alongside organizations moving to regular updates, will help avoid the ""too expensive to update"" trap that ends with production software running on an end-of-life version of Java.</p>"
Jetpack Glance,Assess,languages-and-frameworks,TRUE,"<p>Android 12 brought significant changes to app widgets that have improved the user and developer experience. For writing regular Android apps, we've expressed our preference for <a href=""/radar/languages-and-frameworks/jetpack-compose"">Jetpack Compose</a> as a modern way of building native user interfaces. Now, with <strong><a href=""https://developer.android.com/jetpack/androidx/releases/glance"">Jetpack Glance</a></strong>, which is built on top of the Compose runtime, developers can use similar declarative Kotlin APIs for writing widgets. Recently, Glance has been <a href=""https://android-developers.googleblog.com/2022/01/announcing-glance-tiles-for-wear-os.html"">extended</a> to support Tiles for Wear OS.</p>"
Jetpack Media3,Assess,languages-and-frameworks,TRUE,"<p>Android today has several media APIs: Jetpack Media, also known as MediaCompat, Jetpack Media2 and ExoPlayer. Unfortunately, these libraries were developed independently, with different goals but overlapping functionality. Android developers not only had to choose which library to use, they also had to contend with writing adaptors or other connecting code when features from multiple APIs were needed. <a href=""https://developer.android.com/jetpack/androidx/releases/media3""><strong>Jetpack Media3</strong></a> is an effort, currently in early access, to create a new API that takes common areas of functionality from the existing APIs — including UI, playback and media session handling — combining them into a merged and refined API. The player interface from ExoPlayer has also been updated, enhanced and streamlined to act as the common player interface for Media3.</p>"
MistQL,Assess,languages-and-frameworks,TRUE,"<p><strong><a href=""https://github.com/evinism/mistql"">MistQL</a></strong> is a small domain-specific language for performing computations on JSON-like structures. Originally built for handcrafted feature extraction of machine-learning models on the frontend, MistQL currently has a JavaScript implementation for browsers and a Python implementation for server-side use cases. We quite like its clean composable functional syntax, and we encourage you to assess it based on your needs.</p>
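<p>A small sketch of the Python implementation; the event data is invented for this example:</p>
<pre><code># pip install mistql -- querying JSON-like data with the Python package
import mistql

data = {'events': [
    {'type': 'click', 'page': 'home'},
    {'type': 'click', 'page': 'search'},
    {'type': 'purchase', 'page': 'checkout'},
]}
# pipe the events through a filter and count the matches
print(mistql.query('events | filter type == ""click"" | count', data))  # 2
</code></pre>"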
npm workspaces,Assess,languages-and-frameworks,TRUE,"<p>While many tools support multipackage development in the Node.js world, npm 7 adds direct support with the addition of <strong><a href=""https://docs.npmjs.com/cli/v8/using-npm/workspaces"">npm workspaces</a></strong>. Managing related packages together facilitates development, allowing you, for example, to store multiple related libraries in a single repo. With npm workspaces, once you add a configuration in a top-level package.json file to refer to one or more nested package.json files, commands like <code>npm install</code> work across multiple packages, symlinking the dependent source packages into the root node_modules directory. Other npm commands are also now workspace aware, so you can, for example, execute <code>npm run</code> and <code>npm test</code> across multiple packages with a single command. Having that flexibility out of the box decreases the need for some teams to reach for another package manager.</p>"
Remix,Assess,languages-and-frameworks,TRUE,"<p>We've witnessed the migration from server-side rendered websites to single-page applications in the browser, and now the pendulum of web development seems to be swinging back to the middle. <strong><a href=""https://remix.run/"">Remix</a></strong> is one such example. It's a full-stack JavaScript framework. It provides fast page loads by leveraging distributed systems and native browsers instead of clumsy static builds. It optimizes nested routing and page loading, which makes page rendering feel especially fast. Many people will compare Remix with <a href=""/radar/languages-and-frameworks/next-js"">Next.js</a>, which is similarly positioned. We're glad to see such frameworks cleverly combining the browser run time with the server run time to provide a better user experience.</p>"
ShedLock,Assess,languages-and-frameworks,TRUE,"<p>Executing a scheduled task once and only once in a cluster of distributed processors is a relatively common requirement. For example, the situation might arise when ingesting a batch of data, sending a notification or performing some regular cleanup activity. But this is a notoriously difficult problem. How does a group of processes cooperate reliably over laggy and less reliable networks? Some kind of locking mechanism is required to coordinate actions across the cluster. Fortunately, a variety of distributed stores can implement a lock. Systems like <a href=""https://zookeeper.apache.org/"">ZooKeeper</a> and <a href=""/radar/tools/consul"">Consul</a> as well as databases such as DynamoDB or <a href=""/radar/platforms/couchbase"">Couchbase</a> have the necessary underlying mechanisms to manage consensus across the cluster. <strong><a href=""https://github.com/lukas-krecan/ShedLock"">ShedLock</a></strong> is a small library for taking advantage of these providers in your own Java code, if you're looking to implement your own scheduled tasks. It provides an API for acquiring and releasing locks as well as connectors to a wide variety of lock providers. If you're writing your own distributed tasks but don't want to take on the complexity of an entire orchestration platform like <a href=""/radar/platforms/kubernetes"">Kubernetes</a>, ShedLock is worth a look.</p>"
SpiceDB,Assess,languages-and-frameworks,TRUE,"<p><strong><a href=""https://github.com/authzed/spicedb"">SpiceDB</a></strong> is a database system, inspired by Google's <a href=""https://research.google/pubs/pub48190"">Zanzibar</a>, for managing application permissions. With SpiceDB, you create a schema to model the permissions requirements and use the <a href=""https://docs.authzed.com/reference/api#client-libraries"">client library</a> to apply the schema to one of the <a href=""https://docs.authzed.com/spicedb/selecting-a-datastore"">supported databases</a>, insert data and query to efficiently answer questions like ""Does this user have access to this resource?"" or even the inverse ""What are all the resources this user has access to?"" We usually advocate separating the authorization policies from code, but SpiceDB takes it a step further by separating data from the policy and storing it as a graph to efficiently answer authorization queries. Because of this separation, you have to ensure that the changes in your application's primary data store are reflected in SpiceDB. Among other Zanzibar-inspired implementations, we find SpiceDB to be an interesting framework to assess for your authorization needs.</p>"
sqlc,Assess,languages-and-frameworks,TRUE,"<p><strong><a href=""https://github.com/kyleconroy/sqlc"">sqlc</a></strong> is a compiler that generates type-safe idiomatic Go code from SQL. Unlike other approaches based on object-relational mapping (ORM), you continue to write plain SQL for your needs. Once invoked, sqlc checks the correctness of the SQL and generates performant Go code, which can be directly called from the rest of the application. With stable support for both PostgreSQL and MySQL, sqlc is worth a look, and we encourage you to assess it.</p>"
The Composable Architecture,Assess,languages-and-frameworks,TRUE,"<p>Developing apps for iOS has become more streamlined over time, and <a href=""/radar/languages-and-frameworks/swiftui"">SwiftUI</a> moving into Adopt is a sign of that. Going beyond the general nature of SwiftUI and other common frameworks, <a href=""https://github.com/pointfreeco/swift-composable-architecture#the-composable-architecture""><strong>The Composable Architecture</strong></a> (TCA) is both a library and an architectural style for building apps. It was designed over the course of a series of videos, and the authors state that they had composition, testing and ergonomics in mind, building on a foundation of ideas from The Elm Architecture and Redux. As expected, the narrow scope and opinionatedness are both a strength and a weakness of TCA. We feel that teams who don't have a lot of expertise in writing iOS apps, which are often teams who may be looking after multiple related codebases with different tech stacks, stand to benefit the most from using an opinionated framework like TCA, and we like the opinions expressed in TCA.</p>"
WebAssembly,Assess,languages-and-frameworks,FALSE,"<p><strong><a href=""http://webassembly.org/"">WebAssembly</a></strong> (WASM) is the W3C standard for executing code in the browser. Supported by all major browsers and backward compatible, it's a binary compilation format designed to run in the browser at near-native speeds. It opens up the range of languages you can use to write front-end functionality, with an early focus on C, C++ and Rust, and it's also an <a href=""https://llvm.org/"">LLVM compilation</a> target. It runs in a sandbox where it can interact with JavaScript while sharing the same permissions and security model. Portability and security are key capabilities that should enable WebAssembly to spread to most platforms, including mobile and IoT.</p>"
Zig,Assess,languages-and-frameworks,TRUE,"<p><strong><a href=""https://ziglang.org/"">Zig</a></strong> is a new language that shares many attributes with C but with stronger typing, easier memory allocation, support for namespacing and a host of other features. Its syntax, however, is reminiscent of JavaScript rather than C, which some may hold against it. Zig's aim is to provide a very simple language with straightforward compilation that minimizes side effects and delivers predictable, easy-to-trace execution. Zig also provides simplified access to LLVM's <a href=""https://llvm.org/"">cross-compilation capability</a>. Some of our developers have found this feature so valuable that they're using Zig as a cross-compiler even though they aren't writing Zig code. Zig is a novel language and worth looking into for applications where C is being considered or already in use as well as for low-level systems applications that require explicit memory manipulation.</p>"