
Tech Transforms podcast: Energy department CIO talks national cybersecurity strategy

Dynatrace

“They’re really focusing on hardware and software systems together,” Dunkin said. “How do you make hardware and software both secure by design?” The DOE supports the national cybersecurity strategy’s collective defense initiatives. Tune in to the full episode for more insights from Ann Dunkin.


What is serverless computing? Driving efficiency without sacrificing observability

Dynatrace

VMware commercialized the idea of virtual machines, and cloud providers embraced the same concept with services like Amazon EC2, Google Compute Engine, and Azure Virtual Machines. But performing updates, installing software, and resolving hardware issues can still consume up to 17 hours of developer time every week.
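Serverless removes that operational burden entirely: you deploy individual functions and the provider handles provisioning, patching, and scaling. A minimal sketch of what that model looks like, written as an AWS Lambda-style Python handler (the event field "name" is a hypothetical example, not from the article):

```python
import json

def handler(event, context):
    # The cloud provider provisions, patches, and scales the hardware
    # and OS underneath; you ship only this function.
    name = event.get("name", "world")  # "name" is a hypothetical payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```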


Trending Sources


Kubernetes vs Docker: What’s the difference?

Dynatrace

Container technology is powerful because small teams can develop and package an application on their laptops and then deploy it anywhere, into staging or production environments, without having to worry about dependencies, configurations, OS, hardware, and so on. The time and effort saved in testing and deployment are a game-changer for DevOps.
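As a rough illustration of the "package once, deploy anywhere" step on the Kubernetes side, the sketch below uses the official kubernetes Python client to create a two-replica Deployment from a container image; the image name demo-app:1.0, the labels, and the namespace are hypothetical placeholders:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g., ~/.kube/config).
config.load_kube_config()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # Kubernetes keeps two copies of the container running
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="demo",
                        image="demo-app:1.0",  # hypothetical image built with Docker
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

# The same image runs unchanged whether this targets staging or production.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```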


Generative AI in the Enterprise

O'Reilly

Even with cloud-based foundation models like GPT-4, which eliminate the need to develop your own model or provide your own infrastructure, fine-tuning a model for any particular use case is still a major undertaking. That pricing won’t be sustainable, particularly as hardware shortages drive up the cost of building infrastructure.
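To make the scale of that undertaking concrete: the mechanics of launching a hosted fine-tuning job are the easy part, while assembling and curating the training data is where most of the effort goes. A minimal sketch using the OpenAI Python SDK (the file train.jsonl and the base model name are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of training examples. Preparing this file
# (collecting, cleaning, and formatting examples) is usually the bulk
# of the undertaking.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),  # placeholder path
    purpose="fine-tune",
)

# Launch the fine-tuning job against a base model (name illustrative).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```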


An analysis of performance evolution of Linux’s core operations

The Morning Paper

Perhaps the most interesting lesson/reminder is this: it takes a lot of effort to tune a Linux kernel. Google’s data center kernel is carefully performance tuned for their workloads. On the exact same hardware, the benchmark suite is then used to test 36 Linux release versions, from 3.0 onwards, measuring the kernel’s core operations.
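The methodology is worth internalizing: hold the hardware fixed and microbenchmark the kernel’s core operations across versions. A toy version of that idea in Python (far coarser than the paper’s harness, and the per-call numbers include interpreter overhead, so only relative changes across runs on the same machine are meaningful):

```python
import os
import time

N = 100_000

# /dev/zero gives each os.read() a real read(2) syscall with a trivial,
# predictable in-kernel workload.
fd = os.open("/dev/zero", os.O_RDONLY)

start = time.perf_counter_ns()
for _ in range(N):
    os.read(fd, 1)  # read a single byte per syscall
end = time.perf_counter_ns()
os.close(fd)

print(f"read(2) of 1 byte: {(end - start) / N:.0f} ns/call")
```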


Software-defined far memory in warehouse scale computers

The Morning Paper

This makes memory a critical factor in the total cost of ownership (TCO) of large compute clusters, or as Google like to call them, “warehouse-scale computers” (WSCs). This paper describes a “far memory” system that has been in production deployment at Google since 2016. Enter zswap!
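zswap works by compressing cold pages and keeping them in a RAM-backed pool instead of sending them to disk. A back-of-the-envelope illustration of why this pays off, compressing a synthetic 4 KiB “page” with zlib (the page contents and the resulting ratio are illustrative, not Google’s production numbers):

```python
import zlib

# A synthetic cold page: mostly zeros plus a little repeated structure,
# which is typical of idle application memory and compresses well.
page = b"\x00" * 3584 + (b"cold-data-" * 52)[:512]
assert len(page) == 4096

compressed = zlib.compress(page)
ratio = len(page) / len(compressed)

# If a cold page shrinks 3-4x, the same DRAM holds several times as many
# cold pages: that is the TCO argument for far memory.
print(f"{len(page)} B -> {len(compressed)} B ({ratio:.1f}x smaller)")
```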


Structural Evolutions in Data

O'Reilly

Doubly so as hardware improved, eating away at the lower end of Hadoop-worthy work. Between Google (Vertex AI and Colab) and Amazon (SageMaker), you can now get all of the GPU power your credit card can handle. Google goes a step further in offering compute instances with its specialized TPU hardware.
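If you want to check which accelerator a rented instance actually exposes, JAX (which runs on CPU, GPU, and TPU backends alike) can report it; this is a generic sketch, not tied to any one provider’s console:

```python
import jax

# Lists the devices JAX can see on this machine: CPU cores, CUDA GPUs,
# or TPU chips, depending on the instance type you rented.
print(jax.devices())
print(jax.default_backend())  # "cpu", "gpu", or "tpu"
```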