Kubernetes v1.30: Uwubernetes
Editors: Amit Dsouza, Frederick Kautz, Kristin Martin, Abigail McCarthy, Natali Vlatko
Announcing the release of Kubernetes v1.30: Uwubernetes, the cutest release!
Similar to previous releases, the release of Kubernetes v1.30 introduces new stable, beta, and alpha features. The consistent delivery of top-notch releases underscores the strength of our development cycle and the vibrant support from our community.
This release consists of 45 enhancements. Of those enhancements, 17 have graduated to Stable, 18 are entering Beta, and 10 are entering Alpha.
Release theme and logo
Kubernetes v1.30: Uwubernetes
Kubernetes v1.30 makes your clusters cuter!
Kubernetes is built and released by thousands of people from all over the world and all walks of life. Most contributors are not being paid to do this; we build it for fun, to solve a problem, to learn something, or for the simple love of the community. Many of us found our homes, our friends, and our careers here. The Release Team is honored to be a part of the continued growth of Kubernetes.
For the people who built it, for the people who release it, and for the furries who keep all of our clusters online, we present to you Kubernetes v1.30: Uwubernetes, the cutest release to date. The name is a portmanteau of “kubernetes” and “UwU,” an emoticon used to indicate happiness or cuteness. We’ve found joy here, but we’ve also brought joy from our outside lives that helps to make this community as weird and wonderful and welcoming as it is. We’re so happy to share our work with you.
UwU ♥️
Improvements that graduated to stable in Kubernetes v1.30
This is a selection of some of the improvements that are now stable following the v1.30 release.
Robust VolumeManager reconstruction after kubelet restart (SIG Storage)
This is a volume manager refactoring that allows the kubelet to populate additional information about how existing volumes are mounted during kubelet startup. In general, this makes volume cleanup after a kubelet restart or machine reboot more robust.
This does not bring any changes for users or cluster administrators. We used the feature process and the `NewVolumeManagerReconstruction` feature gate to be able to fall back to the previous behavior in case something went wrong. Now that the feature is stable, the feature gate is locked and cannot be disabled.
Prevent unauthorized volume mode conversion during volume restore (SIG Storage)
For Kubernetes v1.30, the control plane always prevents unauthorized changes to volume modes when restoring a snapshot into a PersistentVolume. As a cluster administrator, you'll need to grant permissions to the appropriate identity principals (for example: ServiceAccounts representing a storage integration) if you need to allow that kind of change at restore time.
The `prevent-volume-mode-conversion` feature flag is enabled by default in external-provisioner v4.0.0 and external-snapshotter v7.0.0. A volume mode change will be rejected when creating a PVC from a VolumeSnapshot unless you perform the steps described in the "Urgent Upgrade Notes" sections for external-provisioner v4.0.0 and external-snapshotter v7.0.0.

For more information on this feature, also read converting the volume mode of a Snapshot.
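If you do need to permit a volume mode change at restore time, the documented opt-out is an annotation on the VolumeSnapshotContent, applied by a trusted user or controller. A minimal sketch follows; the object name, CSI driver, and snapshot handle are illustrative:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: example-snapshot-content          # illustrative name
  annotations:
    # Opt-out documented for this feature: allows the volume mode to change
    # when a PVC is created from the bound VolumeSnapshot.
    snapshot.storage.kubernetes.io/allow-volume-mode-change: "true"
spec:
  deletionPolicy: Delete
  driver: hostpath.csi.k8s.io             # illustrative CSI driver
  source:
    snapshotHandle: snap-0123456789abcdef # illustrative pre-existing snapshot handle
  volumeSnapshotRef:
    name: example-snapshot
    namespace: default
```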
Pod Scheduling Readiness (SIG Scheduling)
Pod scheduling readiness graduates to stable this release, after being promoted to beta in Kubernetes v1.27.
This now-stable feature lets Kubernetes avoid trying to schedule a Pod that has been defined, when the cluster doesn't yet have the resources provisioned to allow actually binding that Pod to a node. That's not the only use case; the custom control on whether a Pod can be allowed to schedule also lets you implement quota mechanisms, security controls, and more.
Crucially, marking these Pods as exempt from scheduling cuts the work that the scheduler would otherwise do, churning through Pods that can't or won't schedule onto the nodes your cluster currently has. If you have cluster autoscaling active, using scheduling gates doesn't just cut the load on the scheduler, it can also save money. Without scheduling gates, the autoscaler might otherwise launch a node that doesn't need to be started.
In Kubernetes v1.30, by specifying (or removing) a Pod's `.spec.schedulingGates`, you can control when a Pod is ready to be considered for scheduling. This is a stable feature and is now formally part of the Kubernetes API definition for Pod.
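For illustration, here is a minimal Pod created with a scheduling gate; the gate name is hypothetical, and the scheduler will not consider the Pod until some controller removes the gate:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-gated-pod
spec:
  schedulingGates:
  - name: example.com/quota-check   # hypothetical gate set at creation time
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

While the gate is present, the Pod is reported as `SchedulingGated`; removing the entry from `.spec.schedulingGates` makes the Pod eligible for scheduling. Gates can only be removed, not added, after the Pod is created.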
Min domains in PodTopologySpread (SIG Scheduling)
The `minDomains` parameter for PodTopologySpread constraints graduates to stable this release, allowing you to define the minimum number of domains. This feature is designed to be used with Cluster Autoscaler.

If you previously attempted to use it and there weren't enough domains already present, Pods would be marked as unschedulable. The Cluster Autoscaler would then provision node(s) in new domain(s), and you'd eventually get Pods spreading over enough domains.
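As a minimal sketch (labels and the domain count are illustrative), `minDomains` is set alongside the other topology spread fields and only takes effect with `whenUnsatisfiable: DoNotSchedule`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-spread-pod
  labels:
    app: example
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    minDomains: 3                       # treat fewer than 3 zones as unsatisfiable
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule    # required for minDomains to apply
    labelSelector:
      matchLabels:
        app: example
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```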
Go workspaces for k/k (SIG Architecture)
The Kubernetes repo now uses Go workspaces. This should not impact end users at all, but it does have an impact for developers of downstream projects. Switching to workspaces caused some breaking changes in the flags to the various `k8s.io/code-generator` tools. Downstream consumers should look at `staging/src/k8s.io/code-generator/kube_codegen.sh` to see the changes.
For full details on the changes and reasons why Go workspaces was introduced, read Using Go workspaces in Kubernetes.
Improvements that graduated to beta in Kubernetes v1.30
This is a selection of some of the improvements that are now beta following the v1.30 release.
Node log query (SIG Windows)
To help with debugging issues on nodes, Kubernetes v1.27 introduced a feature that allows fetching logs of services running on the node. To use the feature, ensure that the `NodeLogQuery` feature gate is enabled for that node, and that the kubelet configuration options `enableSystemLogHandler` and `enableSystemLogQuery` are both set to true.
Following the v1.30 release, this is now beta (you still need to enable the feature to use it, though).
On Linux the assumption is that service logs are available via journald. On Windows the assumption is that service logs are available in the application log provider. Logs are also available by reading files within `/var/log/` (Linux) or `C:\var\log\` (Windows). For more information, see the log query documentation.
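A minimal sketch of the kubelet configuration involved (these are real kubelet configuration fields; the feature gate remains off by default in v1.30):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeLogQuery: true          # beta in v1.30, still disabled by default
enableSystemLogHandler: true  # serves logs over the kubelet API
enableSystemLogQuery: true    # enables querying service logs
```

You can then query a node's service logs through the API server proxy, for example `kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"` (node name illustrative).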
CRD validation ratcheting (SIG API Machinery)
You need to enable the `CRDValidationRatcheting` feature gate to use this behavior, which then applies to all CustomResourceDefinitions in your cluster.

Provided you enabled the feature gate, Kubernetes implements validation ratcheting for CustomResourceDefinitions. The API server is willing to accept updates to resources that are not valid after the update, provided that each part of the resource that failed to validate was not changed by the update operation. In other words, any invalid part of the resource that remains invalid must have already been wrong. You cannot use this mechanism to update a valid resource so that it becomes invalid.
This feature allows authors of CRDs to confidently add new validations to the OpenAPIV3 schema under certain conditions. Users can update to the new schema safely without bumping the version of the object or breaking workflows.
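As an illustrative sketch (the field and pattern are hypothetical), suppose a CRD author tightens validation of an existing field:

```yaml
# Hypothetical fragment of an updated CRD schema: a stricter pattern is
# added to a field that previously had no pattern.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      properties:
        hostname:
          type: string
          pattern: '^[a-z0-9.-]+$'   # newly added validation
```

With ratcheting enabled, updates to an existing object whose stored `hostname` predates (and violates) the pattern are still accepted, as long as `hostname` itself is left unchanged; any update that modifies `hostname` must satisfy the new pattern.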
Contextual logging (SIG Instrumentation)
Contextual Logging advances to beta in this release, empowering developers and operators to inject customizable, correlatable contextual details like service names and transaction IDs into logs through `WithValues` and `WithName`. This enhancement simplifies the correlation and analysis of log data across distributed systems, significantly improving the efficiency of troubleshooting efforts. By offering clearer insight into the workings of your Kubernetes environments, Contextual Logging ensures that operational challenges are more manageable, marking a notable step forward in Kubernetes observability.
Make Kubernetes aware of the LoadBalancer behaviour (SIG Network)
The `LoadBalancerIPMode` feature gate graduated to beta and is now enabled by default. This feature allows you to set the `.status.loadBalancer.ingress.ipMode` for a Service with `type` set to `LoadBalancer`. The `.status.loadBalancer.ingress.ipMode` specifies how the load-balancer IP behaves. It may be specified only when the `.status.loadBalancer.ingress.ip` field is also specified. See more details about specifying IPMode of load balancer status.
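As a sketch, here is what the status stanza can look like; it is normally written by the cloud provider's controller rather than by hand, and the address below is a documentation IP:

```yaml
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10   # documentation IP, set by the cloud provider
      ipMode: Proxy      # "VIP" (the default) or "Proxy"
```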
Structured Authentication Configuration (SIG Auth)
Structured Authentication Configuration graduates to beta in this release.
Kubernetes has had a long-standing need for a more flexible and extensible authentication system. The current system, while powerful, has some limitations that make it difficult to use in certain scenarios. For example, it is not possible to use multiple authenticators of the same type (e.g., multiple JWT authenticators) or to change the configuration without restarting the API server. The Structured Authentication Configuration feature is the first step towards addressing these limitations and providing a more flexible and extensible way to configure authentication in Kubernetes. See more details about structured authentication configuration.
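A minimal sketch of an authentication configuration file with one JWT authenticator; the issuer URL, audience, and prefix are illustrative, and with this feature you can list multiple `jwt` entries:

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://issuer.example.com   # illustrative OIDC issuer
    audiences:
    - my-cluster                      # illustrative audience
  claimMappings:
    username:
      claim: sub
      prefix: "oidc:"                 # illustrative username prefix
```

The file is passed to the kube-apiserver via the `--authentication-config` flag.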
Structured Authorization Configuration (SIG Auth)
Structured Authorization Configuration graduates to beta in this release.
Kubernetes continues to evolve to meet the intricate requirements of system administrators and developers alike. A critical aspect of Kubernetes that ensures the security and integrity of the cluster is the API server authorization. Until recently, the configuration of the authorization chain in kube-apiserver was somewhat rigid, limited to a set of command-line flags and allowing only a single webhook in the authorization chain. This approach, while functional, restricted the flexibility needed by cluster administrators to define complex, fine-grained authorization policies. The latest Structured Authorization Configuration feature aims to revolutionize this aspect by introducing a more structured and versatile way to configure the authorization chain, focusing on enabling multiple webhooks and providing explicit control mechanisms. See more details about structured authorization configuration.
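A hedged sketch of an authorization configuration file chaining a webhook before RBAC; the webhook name, kubeconfig path, and TTLs are illustrative:

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
- type: Webhook
  name: example-policy-webhook        # illustrative name
  webhook:
    authorizedTTL: 30s
    unauthorizedTTL: 30s
    timeout: 3s
    subjectAccessReviewVersion: v1
    matchConditionSubjectAccessReviewVersion: v1
    failurePolicy: Deny
    connectionInfo:
      type: KubeConfigFile
      kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig  # illustrative path
- type: RBAC
  name: rbac
```

The file is passed to the kube-apiserver via the `--authorization-config` flag.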
New alpha features
Speed up recursive SELinux label change (SIG Storage)
Since the v1.27 release, Kubernetes has included an optimization that sets SELinux labels on the contents of volumes in constant time, using a mount option. The slower legacy behavior requires the container runtime to recursively walk the whole volume and apply SELinux labelling individually to each file and directory; this is especially noticeable for volumes with a large number of files and directories.
Kubernetes v1.27 graduated this feature as beta, but limited it to ReadWriteOncePod volumes. The corresponding feature gate is `SELinuxMountReadWriteOncePod`. It's still enabled by default and remains beta in v1.30.

Kubernetes v1.30 extends support for the SELinux mount option to all volumes as alpha, with a separate feature gate: `SELinuxMount`. This feature gate introduces a behavioral change when multiple Pods with different SELinux labels share the same volume. See the KEP for details.
We strongly encourage users that run Kubernetes with SELinux enabled to test this feature and provide any feedback on the KEP issue.
| Feature gate | Stage in v1.30 | Behavior change |
|---|---|---|
| `SELinuxMountReadWriteOncePod` | Beta | No |
| `SELinuxMount` | Alpha | Yes |
Both feature gates `SELinuxMountReadWriteOncePod` and `SELinuxMount` must be enabled to test this feature on all volumes.
This feature has no effect on Windows nodes or on Linux nodes without SELinux support.
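If you want to test this, one place to enable both gates is the kubelet configuration (a sketch; depending on your setup you may also enable them via component `--feature-gates` flags):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SELinuxMountReadWriteOncePod: true  # beta, already on by default
  SELinuxMount: true                  # alpha, off by default
```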
Recursive Read-only (RRO) mounts (SIG Node)
Introducing Recursive Read-Only (RRO) Mounts in alpha this release, you'll find a new layer of security for your data. This feature lets you set volumes and their submounts as read-only, preventing accidental modifications. Imagine deploying a critical application where data integrity is key—RRO Mounts ensure that your data stays untouched, reinforcing your cluster's security with an extra safeguard. This is especially crucial in tightly controlled environments, where even the slightest change can have significant implications.
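A minimal sketch of the new alpha field (requires the `RecursiveReadOnlyMounts` feature gate; the image and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-rro-pod
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true               # recursiveReadOnly requires readOnly: true
      recursiveReadOnly: Enabled   # also accepts Disabled or IfPossible
  volumes:
  - name: data
    hostPath:
      path: /mnt/data              # illustrative host path that contains submounts
```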
Job success/completion policy (SIG Apps)
From Kubernetes v1.30, indexed Jobs support `.spec.successPolicy` to define when a Job can be declared succeeded based on succeeded Pods. This allows you to define two types of criteria:

- `succeededIndexes` indicates that the Job can be declared succeeded when these indexes succeeded, even if other indexes failed.
- `succeededCount` indicates that the Job can be declared succeeded when the number of succeeded indexes reaches this criterion.
After the Job meets the success policy, the Job controller terminates the lingering Pods.
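A minimal sketch of an Indexed Job using the new field (requires the alpha `JobSuccessPolicy` feature gate; the rule values and worker image are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-indexed-job
spec:
  completions: 10
  parallelism: 10
  completionMode: Indexed        # successPolicy applies to indexed Jobs
  successPolicy:
    rules:
    - succeededIndexes: "0,2-4"  # consider only these indexes...
      succeededCount: 2          # ...and succeed once 2 of them have succeeded
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36      # illustrative worker
        command: ["sh", "-c", "exit 0"]
```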
Traffic distribution for services (SIG Network)
Kubernetes v1.30 introduces the `spec.trafficDistribution` field within a Kubernetes Service as alpha. This allows you to express preferences for how traffic should be routed to Service endpoints. While traffic policies focus on strict semantic guarantees, traffic distribution allows you to express preferences (such as routing to topologically closer endpoints). This can help optimize for performance, cost, or reliability. You can use this field by enabling the `ServiceTrafficDistribution` feature gate for your cluster and all of its nodes. In Kubernetes v1.30, the following field value is supported:

- `PreferClose`: Indicates a preference for routing traffic to endpoints that are topologically proximate to the client. The interpretation of "topologically proximate" may vary across implementations and could encompass endpoints within the same node, rack, zone, or even region. Setting this value gives implementations permission to make different tradeoffs, for example optimizing for proximity rather than equal distribution of load. You should not set this value if such tradeoffs are not acceptable.

If the field is not set, the implementation (like kube-proxy) will apply its default routing strategy.
See Traffic Distribution for more details.
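A minimal sketch of a Service using the new field (selector and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc
spec:
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
  trafficDistribution: PreferClose   # alpha; needs the ServiceTrafficDistribution gate
```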
Storage Version Migration (SIG API Machinery)
Kubernetes v1.30 introduces a new built-in API for StorageVersionMigration. Kubernetes relies on API data being actively re-written, to support some maintenance activities related to at rest storage. Two prominent examples are the versioned schema of stored resources (that is, the preferred storage schema changing from v1 to v2 for a given resource) and encryption at rest (that is, rewriting stale data based on a change in how the data should be encrypted).
StorageVersionMigration is an alpha API that was previously available out-of-tree.
See storage version migration for more details.
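A hedged sketch of the new API; the migration below targets core-group Secrets, which is illustrative:

```yaml
apiVersion: storagemigration.k8s.io/v1alpha1
kind: StorageVersionMigration
metadata:
  name: example-secrets-migration   # illustrative name
spec:
  resource:
    group: ""       # core API group
    version: v1
    resource: secrets
```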
Graduations, deprecations and removals for Kubernetes v1.30
Graduated to stable
This lists all the features that graduated to stable (also known as general availability). For a full list of updates including new features and graduations from alpha to beta, see the release notes.
This release includes a total of 17 enhancements promoted to Stable:
- Container Resource based Pod Autoscaling
- Remove transient node predicates from KCCM's service controller
- Go workspaces for k/k
- Reduction of Secret-based Service Account Tokens
- CEL for Admission Control
- CEL-based admission webhook match conditions
- Pod Scheduling Readiness
- Min domains in PodTopologySpread
- Prevent unauthorized volume mode conversion during volume restore
- API Server Tracing
- Cloud Dual-Stack --node-ip Handling
- AppArmor support
- Robust VolumeManager reconstruction after kubelet restart
- kubectl delete: Add interactive(-i) flag
- Metric cardinality enforcement
- Field `status.hostIPs` added for Pod
- Aggregated Discovery
Deprecations and removals
Removed the SecurityContextDeny admission plugin, deprecated since v1.27 (SIG Auth, SIG Security, and SIG Testing)
With the removal of the SecurityContextDeny admission plugin, the Pod Security Admission plugin, available since v1.25, is recommended instead.
Release notes
Check out the full details of the Kubernetes v1.30 release in our release notes.
Availability
Kubernetes v1.30 is available for download on GitHub. To get started with Kubernetes, check out these interactive tutorials or run local Kubernetes clusters using minikube. You can also easily install v1.30 using kubeadm.
Release team
Kubernetes is only possible with the support, commitment, and hard work of its community. Each release team is made up of dedicated community volunteers who work together to build the many pieces that make up the Kubernetes releases you rely on. This requires the specialized skills of people from all corners of our community, from the code itself to its documentation and project management.
We would like to thank the entire release team for the hours spent hard at work to deliver the Kubernetes v1.30 release to our community. The Release Team's membership ranges from first-time shadows to returning team leads with experience forged over several release cycles. A very special thanks goes out to our release lead, Kat Cosgrove, for supporting us through a successful release cycle, advocating for us, making sure that we could all contribute in the best way possible, and challenging us to improve the release process.
Project velocity
The CNCF K8s DevStats project aggregates a number of interesting data points related to the velocity of Kubernetes and various sub-projects. This includes everything from individual contributions to the number of companies that are contributing and is an illustration of the depth and breadth of effort that goes into evolving this ecosystem.
In the v1.30 release cycle, which ran for 14 weeks (January 8 to April 17), we saw contributions from 863 companies and 1391 individuals.
Event update
- KubeCon + CloudNativeCon China 2024 will take place in Hong Kong, from 21 – 23 August 2024! You can find more information about the conference and registration on the event site.
- KubeCon + CloudNativeCon North America 2024 will take place in Salt Lake City, Utah, the United States of America, from 12 – 15 November 2024! You can find more information about the conference and registration on the event site.
Upcoming release webinar
Join members of the Kubernetes v1.30 release team on Thursday, May 23rd, 2024, at 9 A.M. PT to learn about the major features of this release, as well as deprecations and removals to help plan for upgrades. For more information and registration, visit the event page on the CNCF Online Programs site.
Get involved
The simplest way to get involved with Kubernetes is by joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through the channels below. Thank you for your continued feedback and support.
- Follow us on 𝕏 @Kubernetesio for the latest updates
- Join the community discussion on Discuss
- Join the community on Slack
- Post questions (or answer questions) on Stack Overflow
- Share your Kubernetes story
- Read more about what’s happening with Kubernetes on the blog
- Learn more about the Kubernetes Release Team
This blog was updated on April 19th, 2024 to highlight two additional changes not originally included in the release blog.