Kubernetes v1.35 introduces an alpha feature gate (CloudControllerManagerWatchBasedRoutesReconciliation) in k8s.io/cloud-provider that changes the CCM route controller from fixed-interval reconciliation to a watch/informer-based trigger model. The change reduces unnecessary cloud API calls by reconciling routes only when relevant node or CIDR/address fields change, with an additional randomized periodic reconcile every 12–24 hours. The reconciliation logic itself is unchanged; the feature is enabled via the --feature-gates flag.
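As an illustration, enabling an alpha gate like this on the cloud-controller-manager is a single flag. A minimal sketch of a CCM Deployment (provider name and image are placeholders; only the --feature-gates line is specific to this feature):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloud-controller-manager
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloud-controller-manager
  template:
    metadata:
      labels:
        app: cloud-controller-manager
    spec:
      containers:
        - name: cloud-controller-manager
          image: example.com/my-cloud/ccm:v1.35.0   # placeholder image
          args:
            - --cloud-provider=my-cloud             # placeholder provider name
            - --configure-cloud-routes=true
            - --feature-gates=CloudControllerManagerWatchBasedRoutesReconciliation=true
```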
Kubernetes v1.35 introduces workload-aware scheduling with a new Workload API (scheduling.k8s.io/v1alpha1) to describe multi-Pod scheduling requirements, an initial gang scheduling implementation for all-or-nothing placement, and opportunistic batching (Beta) to speed scheduling of identical Pods. Gang scheduling uses podGroups, a workloadRef on Pods, and a Permit gate with a 5-minute timeout; opportunistic batching reuses feasibility checks for identical Pods but has strict identical-field restrictions. The release also outlines a broader roadmap (workload-level preemption, autoscaling integration, topology-aware scheduling) and explains required feature gates and how to test and give feedback.
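As a rough sketch only (the alpha schema may well differ), a Workload with one gang-scheduled pod group and a Pod referencing it could look like the following; podGroups and workloadRef come from the summary above, while the policy/gang/minCount shape is an assumption:

```yaml
# Hypothetical sketch of the alpha Workload API; exact field names unverified.
apiVersion: scheduling.k8s.io/v1alpha1
kind: Workload
metadata:
  name: training-run
spec:
  podGroups:
    - name: workers
      policy:
        gang:
          minCount: 4          # assumed field: place all 4 Pods or none
---
apiVersion: v1
kind: Pod
metadata:
  name: worker-0
spec:
  workloadRef:                 # ties this Pod to the Workload's pod group
    name: training-run
    podGroup: workers          # assumed field
  containers:
    - name: trainer
      image: registry.example/trainer:1.0   # placeholder image
```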
Kubernetes v1.35 graduates fine-grained supplemental groups control to GA via a new Pod field, supplementalGroupsPolicy, letting clusters choose whether to merge group memberships from the container image (/etc/group) or enforce only groups declared in the Pod (Strict) to reduce implicit, hard-to-audit GIDs and improve security (especially for volume access).
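The switch is a single securityContext field; with Strict, only the GIDs declared in the manifest (plus the primary group) are attached, and /etc/group entries from the image are ignored:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: strict-supplemental-groups
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    supplementalGroups: [4000]
    supplementalGroupsPolicy: Strict   # Merge (the default) would also add groups from /etc/group
  containers:
    - name: shell
      image: busybox:1.36
      command: ["sh", "-c", "id && sleep 3600"]   # `id` shows exactly the declared groups
```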
Kubernetes v1.35 graduates the kubelet configuration drop-in directory to GA: the --config-dir kubelet flag is now production-ready and automatically merges drop-in files with the main kubelet configuration. This simplifies managing different kubelet settings across heterogeneous node pools and supports staged rollouts and targeted overrides without complex tooling.
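Drop-in files are partial KubeletConfiguration documents merged over the base config in sorted order. A minimal sketch, assuming the kubelet is started with --config-dir=/etc/kubernetes/kubelet.conf.d:

```yaml
# /etc/kubernetes/kubelet.conf.d/90-gpu-pool.conf
# Overrides only two fields for this node pool; everything else
# comes from the main kubelet configuration file.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 64
systemReserved:
  memory: 2Gi
```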
Upgrade to etcd v3.5.26 or later before moving to v3.6 to let etcd auto-sync membership data (v3store) from the legacy v2store and prevent removed nodes from reappearing as “zombie” members that can make the cluster inoperable.
Kubernetes 1.35 graduates In-Place Pod Resize (In-Place Pod Vertical Scaling) to stable (GA). The feature makes CPU and memory requests and limits mutable on a running Pod (via a resize subresource), often without restarting containers. This enables non-disruptive resource adjustments, more powerful autoscaling (e.g., VPA InPlaceOrRecreate), transient boosts like CPU startup boost, and better resource efficiency. The release also adds prioritized retries, allows memory limit decreases (best-effort OOM protection), alpha support for Pod Level Resources, and new kubelet metrics/events. Work remains on runtime support, scheduler/kubelet races, expanded feature support, and integrations with autoscalers and other projects.
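Resizes go through the Pod's resize subresource rather than a plain edit. A sketch of a patch that raises CPU on a running container (pod and container names are placeholders):

```yaml
# resize-patch.yaml
# Apply with a recent kubectl:
#   kubectl patch pod web --subresource resize --patch-file resize-patch.yaml
spec:
  containers:
    - name: app          # must match the existing container's name
      resources:
        requests:
          cpu: "1"
        limits:
          cpu: "2"
```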
Kubernetes v1.35 promotes .spec.managedBy for Jobs to GA, enabling external controllers (e.g., MultiKueue) to take full responsibility for Job reconciliation and enabling multi-cluster batch scheduling patterns while leaving built-in Job controller behavior intact for other Jobs.
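The field is set at creation time and simply tells the built-in controller to skip the Job; the value below is the one MultiKueue registers:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: multicluster-job
spec:
  managedBy: kueue.x-k8s.io/multikueue   # built-in Job controller leaves this Job alone
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: busybox:1.36
          command: ["sh", "-c", "echo done"]
```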
Kubernetes v1.35 (“Timbernetes / World Tree” release) ships 60 enhancements (17 stable, 19 beta, 22 alpha), focused on smoother nondisruptive scaling, native pod certificate-based workload identity, improved scheduling correctness, storage and topology improvements, and several important deprecations/removals. The release emphasizes stability, performance, and security hardening (notable GA features, new betas/alphas, and migration guidance for cgroup and containerd).
Kubernetes v1.35 (planned 2025-12-17) introduces several deprecations and new/advanced features. Major removals/deprecations include cgroup v1 support removal, deprecation of kube-proxy ipvs mode, and final support for containerd v1.x (users must move to containerd 2.0+). Notable enhancements likely in v1.35 include node declared features (alpha), in-place Pod resource updates (GA), pod certificates (beta), numeric taint comparisons, continued progress on user namespaces, and image volumes becoming default.
This article collects practical Kubernetes configuration best practices: prefer current stable APIs, keep manifests minimal and version-controlled, write YAML carefully (watch boolean values), group related objects, favor controllers (Deployments/Jobs) over naked Pods, use Services and DNS properly, avoid hostNetwork/hostPort unless necessary, apply semantic/common labels, add helpful annotations, and use kubectl features like applying directories, label selectors and server-side apply to simplify management.
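A small Deployment illustrating several of these at once: a controller rather than a naked Pod, the common app.kubernetes.io labels, and a pinned image version:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app.kubernetes.io/name: web
    app.kubernetes.io/part-of: shop
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: web
  template:
    metadata:
      labels:
        app.kubernetes.io/name: web
        app.kubernetes.io/part-of: shop
    spec:
      containers:
        - name: web
          image: nginx:1.27   # pin a version instead of relying on :latest
          ports:
            - containerPort: 80
```

With related manifests grouped in one directory, kubectl apply -f ./manifests/ then manages them as a unit.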
Kubernetes SIG Network and the Security Response Committee announced the retirement of Ingress NGINX. Best-effort maintenance will continue until March 2026; after that there will be no further releases, bug fixes, or security updates. Users are advised to migrate to Gateway API or another Ingress controller.
The 2025 Kubernetes Steering Committee election concluded. Four of seven seats were up for election; the incoming members begin two-year terms immediately. The post announces the newly elected members, continuing members, election officers, emeritus members, and gives ways for the community to follow and participate in Steering Committee work.
Gateway API v1.4.0 (released Oct 6, 2025) advances Kubernetes service networking with three features promoted to Standard (BackendTLSPolicy, supportedFeatures in GatewayClass status, and named rule fields for Routes) and several experimental additions (Mesh/XMesh, default Gateways, and an ExternalAuth filter for HTTPRoute). The release also adds per-port TLS/client-certificate configuration to address connection coalescing security, introduces a couple of breaking validation changes, improves CI/docs, and is usable on Kubernetes >=1.26.
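Of the newly Standard features, BackendTLSPolicy tells a Gateway how to originate and verify TLS toward a backend Service. A sketch based on the pre-v1.4.0 shape of the API (apiVersion and field names may differ slightly in the release):

```yaml
apiVersion: gateway.networking.k8s.io/v1   # assumed version for the Standard API
kind: BackendTLSPolicy
metadata:
  name: payments-backend-tls
spec:
  targetRefs:
    - group: ""
      kind: Service
      name: payments
  validation:
    hostname: payments.internal.example    # identity to verify on the backend cert
    caCertificateRefs:
      - group: ""
        kind: ConfigMap
        name: payments-ca                  # CA bundle used for verification
```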
The article outlines seven common Kubernetes pitfalls the author encountered—missing resource requests/limits, inadequate liveness/readiness probes, overreliance on kubectl logs, treating dev and prod identically, leaving stale resources, jumping into advanced networking too early, and weak security/RBAC—and gives pragmatic, experience-based advice to avoid each one.
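The first two pitfalls have a one-screen fix: declare resource requests/limits and probes on anything that matters. A minimal sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: registry.example/api:1.2.3   # placeholder image
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
        limits:
          memory: 512Mi
      readinessProbe:                     # gate traffic until the app is ready
        httpGet:
          path: /readyz
          port: 8080
        periodSeconds: 5
      livenessProbe:                      # restart only on genuine deadlock
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 10
```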
The Policy Working Group (now completed) worked to standardize and simplify policy management across the Kubernetes ecosystem. Led by co-chairs Jim Bugwadia, Poonam Lamba, and Andy Suderman, the group produced documentation, whitepapers, a CNCF survey, and the Policy Reports API, and helped educate the community about built-in policy APIs and CNCF policy tools. The WG coordinated with SIG Auth and SIG Security, faced contributor-time and consensus challenges, and encouraged newcomers to join meetings and review materials to get involved.
The Headlamp Karpenter Plugin integrates Karpenter autoscaling visibility into the Headlamp Kubernetes UI, providing real-time maps, metrics, scaling decisions, pending-pod diagnostics, and an editable, validated config editor to help users understand, debug, and tune node provisioning and autoscaling behavior.
Kubernetes announces alpha support for a Changed Block Tracking API that lets CSI drivers report allocated and changed blocks between snapshots to enable faster, incremental backups for block volumes. The feature requires a SnapshotMetadataService CRD, an external-snapshot-metadata sidecar, and CSI SnapshotMetadata gRPC RPCs (GetMetadataAllocated and GetMetadataDelta) with streaming responses.
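The SnapshotMetadataService object advertises where a driver's metadata sidecar listens so backup tooling can call the gRPC RPCs. A sketch with assumed field names (check the actual CRD for the exact schema):

```yaml
# Sketch only: spec field names are assumptions based on the feature's design.
apiVersion: cbt.storage.k8s.io/v1alpha1
kind: SnapshotMetadataService
metadata:
  name: example.csi.driver               # conventionally named after the CSI driver
spec:
  address: snapshot-metadata.csi-system.svc:6443   # gRPC endpoint of the sidecar
  audience: snapshot-metadata-client               # token audience for client auth
  caCert: <base64-encoded CA bundle>               # TLS trust anchor for the endpoint
```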
Kubernetes v1.34 graduates Pod Level Resources to Beta and enables it by default. The feature lets you declare CPU, memory and hugepages for an entire Pod (in addition to per-container settings), with pod-level requests used for scheduling and pod-level limits acting as an absolute runtime cap; it improves intra-pod sharing and QoS handling but has several platform and feature limitations.
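Pod-level values sit directly under spec.resources; containers without their own limits share the pod-wide budget:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-level-resources
spec:
  resources:            # pod-wide: requests drive scheduling,
    requests:           # limits cap all containers combined
      cpu: "1"
      memory: 1Gi
    limits:
      cpu: "2"
      memory: 2Gi
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
    - name: helper      # no per-container limits: shares the pod-level cap
      image: busybox:1.36
      command: ["sleep", "3600"]
```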
Kubernetes v1.34 graduates automated recovery from failed persistent volume expansions to GA. Users can correct an over-sized PersistentVolumeClaim (PVC) by reducing the requested size (as long as the new request is still larger than the PV’s original capacity) without cluster-admin intervention; consumed quota from a failed expansion is returned automatically. The release also introduces improved observability and error reporting for expansion operations, retries with reduced request rates, and several resizing workflow bug fixes.
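Recovery is just lowering spec.resources.requests.storage on the same PVC. A sketch assuming a 10Gi volume whose expansion to 100Gi failed:

```yaml
# Original size 10Gi; the requested expansion to 100Gi failed.
# Re-requesting any size above 10Gi recovers the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi   # was 100Gi; quota held by the failed expansion is returned
```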
Kubernetes v1.34 introduces the alpha feature DRAConsumableCapacity for Dynamic Resource Allocation (DRA). It enables finer-grained, consumable device capacity so multiple ResourceClaims or DeviceRequests (even across namespaces) can share portions of a device. The scheduler enforces total consumable capacity, drivers can opt into multiple allocations and define request policies, and a ShareID in allocation status distinguishes individual shared slices. The feature requires enabling a feature gate on control plane and kubelet components and complements other DRA improvements like partitionable devices and device status.
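A rough sketch of a claim for a slice of one device's capacity; the shape of the capacity stanza here is hypothetical, not quoted from the release:

```yaml
# Hypothetical sketch: the fields under `capacity` are assumptions.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: gpu-slice
spec:
  devices:
    requests:
      - name: gpu
        deviceClassName: example.com-gpu   # placeholder DeviceClass
        capacity:
          requests:
            memory: 10Gi                   # consume 10Gi of the device's total memory
```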
Kubernetes v1.34 introduces an alpha feature (ResourceHealthStatus) that lets Dynamic Resource Allocation (DRA) drivers report device health into a Pod’s status via a new dra-health/v1alpha1 gRPC service, enabling the Kubelet to surface per-device health in v1.ContainerStatus. This improves debuggability for hardware-backed workloads by showing device health (Healthy/Unhealthy/Unknown) in allocatedResourcesStatus and requires enabling the feature gate and compatible DRA drivers.
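When enabled, the health surfaces in Pod status rather than spec; an excerpt of what kubectl get pod -o yaml might show (claim and device IDs are placeholders):

```yaml
status:
  containerStatuses:
    - name: main
      allocatedResourcesStatus:
        - name: claim:gpu-claim       # the ResourceClaim backing the device
          resources:
            - resourceID: gpu-0       # placeholder device ID
              health: Unhealthy       # Healthy | Unhealthy | Unknown
```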
Kubernetes v1.34 advances Volume Group Snapshots to v1beta2. The update adds a VolumeSnapshotInfo type and replaces VolumeSnapshotHandlePairList with VolumeSnapshotInfoList to fix restoreSize handling when CSI drivers do not implement ListSnapshots; objects are migrated from v1beta1 via a conversion webhook and the feature remains CSI-only with GA planned later.
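At the user level the workflow is unchanged: a VolumeGroupSnapshot selects PVCs by label. A sketch using the v1beta2 group (class name and labels are placeholders):

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1beta2
kind: VolumeGroupSnapshot
metadata:
  name: app-group-snap
  namespace: demo
spec:
  volumeGroupSnapshotClassName: csi-group-snap-class   # placeholder class
  source:
    selector:
      matchLabels:
        app: database    # snapshots every matching PVC in the namespace together
```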
Kubernetes v1.34 promotes the SeparateTaintEvictionController feature to GA, splitting responsibilities so the node lifecycle controller only applies NoExecute taints while a dedicated taint eviction controller handles pod eviction. This improves code separation, enables easier improvements or custom eviction implementations, and can be disabled via a kube-controller-manager flag. Documentation and a KEP/beta announcement provide more details.
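Because eviction now lives in its own controller, it can be switched off through the standard --controllers list; an abridged static Pod manifest for kube-controller-manager:

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (abridged)
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
    - name: kube-controller-manager
      image: registry.k8s.io/kube-controller-manager:v1.34.0
      command:
        - kube-controller-manager
        - --controllers=*,-taint-eviction-controller   # run defaults, minus taint eviction
```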