Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete |
Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.
4. List any affected packages or components.
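For reference, a minimal sketch of an alerting rule that carries a runbook_url (names and URL are illustrative; whether runbook_url arrives as a label or an annotation depends on how the rule is defined):
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-runbook-alert   # illustrative name
spec:
  groups:
    - name: example.rules
      rules:
        - alert: ExampleAlert
          expr: vector(1)
          labels:
            severity: warning
          annotations:
            runbook_url: https://example.com/runbooks/example-alert.md
```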
As a user, I should be able to configure the CSI driver to have a storage topology.
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as tech preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | Yes | |
Drivers should upgrade from release to release without any impact | Yes | |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include
Background and strategic fit
In a future Kubernetes release (currently 1.21) the in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we continue to support the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage; this is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | No | |
Certification | No | |
API metrics | No | |
Out of Scope
n/a
Background and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets/BZs low.
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). We are trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
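For context, the default class is selected via the standard Kubernetes annotation on the StorageClass; a minimal sketch (the StorageClass name is illustrative):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-csi   # illustrative name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
```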
Exit criteria:
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Rebase openshift-controller-manager to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS compliant clusters, we need to make sure that installconfig + agentconfig based deployments take into account the FIPS config in installconfig.
This task is about passing the config to agentclusterinstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are often requested by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but also a transition into single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how agent-based can deploy IPv6: IPv6 deploy with agent based installer
For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork, or assisted-service will throw an error. This field is in InstallConfig, but it must be added to agent-cluster-install in its Generate().
For single-stack IPv4 and IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it's set, so it should be fine to set it at all times.
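For illustration, a dual-stack agent-cluster-install.yaml would then carry both subnets in the machine network, roughly like this (the resource name and CIDR values are placeholders; apiVersion shown is the hive extensions group used by AgentClusterInstall):
```yaml
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: example-agent-cluster-install
spec:
  networking:
    machineNetwork:
      - cidr: 192.168.111.0/24   # IPv4 subnet
      - cidr: 2001:db8:1111::/64 # IPv6 subnet
```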
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
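Once the patch is dropped, the CAO would instead pass the upstream flag on the cluster-autoscaler container, roughly as below (the flag name is taken from the upstream PR and should be verified against the vendored autoscaler version):
```yaml
# Snippet of the cluster-autoscaler container spec managed by the CAO (sketch only).
containers:
  - name: cluster-autoscaler
    args:
      - --record-duplicated-events=true
      # ... other existing args ...
```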
Add GA support for deploying OpenShift to IBM Public Cloud
Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available.
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help understand how broadly this feature is getting used and improve it accordingly.
Acceptance Criteria:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OpenShift developer, I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OCP support engineer, I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents
hypershift collects none of these; the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
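For comparison, the objects must-gather collects can be listed from the guest cluster with a single oc call, e.g.:
```shell
oc get storageclasses,persistentvolumes,volumeattachments,csidrivers,csinodes,volumesnapshotclasses,volumesnapshotcontents
```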
Exit criteria:
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
Much like core OpenShift operators, a standardized flow exists for OLM-managed operators to interact with the cluster in a specific way to leverage AWS STS authorization when using AWS APIs as opposed to insecure static, long-lived credentials. OLM-managed operators can implement integration with the CloudCredentialOperator in well-defined way to support this flow.
Enable customers to easily leverage OpenShift's capabilities around AWS STS with layered products, for increased security posture. Enable OLM-managed operators to implement support for this in well-defined pattern.
See Operators & STS slide deck.
The CloudCredentialOperator already provides a powerful API for OpenShift's core cluster operators to request credentials and acquire them via short-lived tokens. This capability should be expanded to OLM-managed operators, specifically to Red Hat layered products that interact with AWS APIs. The process today is cumbersome to non-existent, depending on the operator in question, and is seen as an adoption blocker of OpenShift on AWS.
This is particularly important for ROSA customers. Customers are expected to be asked to pre-create the required IAM roles outside of OpenShift, which is deemed acceptable.
This Section: High-Level description of the Market Problem ie: Executive Summary
This Section: Articulates and defines the value proposition from a users point of view
This Section: Effect is the expected outcome within the market. There are two dimensions of outcomes; growth or retention. This represents part of the “why” statement for a feature.
As an engineer, I want the capability to implement CI test cases that run at different intervals, be it daily or weekly, so as to ensure that downstream operators that depend on certain capabilities are not negatively impacted if the systems CCO interacts with change behavior.
Acceptance Criteria:
Create a stubbed-out e2e test path in CCO and matching e2e calling code in release, such that there exists a path to tests that verify correct operation in an AWS STS workflow.
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single simplified User Experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of managing the fleet as well as deep diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the unified Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE: 9/20/22 : we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated
Acceptance criteria:
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Adding OKD to run on SCOS (CentOS Stream CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS build is produced and made available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that is using SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
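A sketch of the kind of change needed, assuming the cnibincopy script branches on the host's /etc/os-release ID; the paths and exact structure of the real script in multus.yaml may differ:
```shell
# Sketch only: the real cnibincopy logic determines the host OS from /etc/os-release
# and rejects unknown IDs; the fix is to treat "scos" like CentOS/RHEL.
. /host/etc/os-release
case "${ID}" in
  rhel|centos|scos)  # treat SCOS like CentOS/RHEL
    cp -f /usr/src/multus-cni/rhel/bin/* /host/opt/cni/bin/ ;;
  fedora)
    cp -f /usr/src/multus-cni/fedora/bin/* /host/opt/cni/bin/ ;;
  *)
    echo "FATAL ERROR: Unsupported OS ID=${ID}"; exit 1 ;;
esac
```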
HyperShift came to life to serve multiple goals, some are main near-term, some are secondary that serve well long-term.
HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]
To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted control planes:
As you may have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona: for self-managed it's the customer SRE, for managed it's the RH SRE.
For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA:
Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc.
Multi-cluster is becoming an industry need today not because this is where the trend is going but because it's the only viable path today to solve many of our customers' use-cases. Below is some reasoning why multi-cluster is a NEED:
As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters.
HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.
Thus the following stories are important for HyperShift:
Refs:
HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed.
Main user story: When life cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward compatible API that is less susceptible to future changes so I can provide availability guarantees.
Ref: What are we missing in Core HyperShift for GA Readiness?
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumptions:
HyperShift - proposed cuts from data plane
When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH managed SRE, or self-manage SRE/admin) I want to be able to migrate CPs from one hosting service cluster to another:
More information:
To understand usage patterns and inform our decision making for the product, we need to be able to measure adoption and assess usage.
See Hosted Control Planes (aka HyperShift) Strategy [Live Document]
Whether it's managed or self-managed, it's pertinent to report health metrics to be able to create meaningful Service Level Objectives (SLOs) and alert on failure to meet our availability guarantees. This is especially important for our managed services path.
https://issues.redhat.com/browse/OCPPLAN-8901
HyperShift for managed services is a strategic company goal as it improves usability, feature, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services is where we see the market growing (customers are looking to delegate platform overhead).
We should make sure our SD milestones are unblocked by the core team.
This feature reflects HyperShift core readiness to be consumed. When all related EPICs and stories in this EPIC are complete, HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form, NOT the GA itself.
- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)
Epic Goal*
The goal is to split client certificate trust chains from the global Hypershift root CA.
Why is this important? (mandatory)
This is important to:
Scenarios (mandatory)
Provide details for user scenarios including actions to be performed, platform specifications, and user personas.
Dependencies (internal and external) (mandatory)
Hypershift team needs to provide us with code reviews and merge the changes we are to deliver
Contributing Teams(and contacts) (mandatory)
Acceptance Criteria (optional)
The serviceaccount CA bundle automatically injected into all pods cannot be used to authenticate any client certificate generated by the control plane.
Drawbacks or Risk (optional)
Risk: there is significant time pressure, as this should be delivered before the first stable Hypershift release.
Done - Checklist (mandatory)
AUTH-311 introduced an enhancement. Implement the signer separation described there.
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
OLM would have to support a mechanism like podAffinity that allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of the matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We have a set of images that should become multiarch images. This should be done both upstream and downstream.
As a reference, we have built those images internally as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following env:
```yaml
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
```
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
The command could then be used in a manner similar to many k8s examples, like:
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
Jira Description
As an OPM maintainer, I want to downstream the PR for OCP 4.12 and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version. If they bump the OPM version to the next/future (v1.25.0) release with this change before having the downstream images updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different than investing in maintainability and debugability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debugability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Epic Template descriptions and documentation.
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
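For illustration, this is roughly what the Corefile stanza could look like (a sketch; the server block shown is illustrative, not our full configuration):
```
.:5353 {
    chaos
    # ... existing plugins ...
}
```
A query like `dig @<dns-pod-or-service-ip> CH TXT hostname.bind +short` should then return the hostname (i.e. the pod name) of the CoreDNS instance that answered.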
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to the users:
AC:
Note: We need to decide if we want to distinguish this particular notification by a different color. cc'ing Ali Mobrem
Created from: https://issues.redhat.com/browse/RFE-3024
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
As a console user I want to have the option to:
For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block, by adding 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment, by creating a new ReplicaSet.
For DeploymentConfig we will add 'Retry rollout' action button. This action will PATCH the latest revision of ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
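For reference, the console actions would be roughly equivalent to the following CLI patches (a sketch only: names in angle brackets are placeholders, and the exact annotation keys should match whatever the final implementation uses):
```shell
# Restart rollout for a Deployment: add/refresh a restartedAt annotation on the pod
# template, which creates a new ReplicaSet.
oc patch deployment/<deployment-name> --type=merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"openshift.io/restartedAt":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'"}}}}}'

# Retry rollout for a DeploymentConfig: reset the deployment phase annotations on the
# latest ReplicationController (a JSON merge patch with null removes an annotation).
oc patch rc/<deploymentconfig-name>-<latest-revision> --type=merge -p \
  '{"metadata":{"annotations":{"openshift.io/deployment.phase":"New","openshift.io/deployment.cancelled":null,"openshift.io/deployment.status-reason":null}}}'
```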
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource “deploymentconfigs” we can only start and pause the rollout, and for the resource “deployment” we can only resume the rollout. Neither resource (deployment & deployment config) has an option to restart the rollout. That is why the customer wants this functionality, so the same action can be performed from the OpenShift console as well as from the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just as they would through the CLI with the command “oc rollout restart deploy/<deployment-name>“.
Usually when developers change the config map that a deployment uses, they have to restart its pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants a button/menu to perform the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
Telecommunications providers continue to deploy OpenShift at the Far Edge. The acceleration of this adoption and the nature of existing Telecommunication infrastructure and processes drive the need to improve OpenShift provisioning speed at the Far Edge site and the simplicity of preparation and deployment of Far Edge clusters, at scale.
A list of specific needs or objectives that a Feature must deliver to satisfy the Feature. Some requirements will be flagged as MVP. If an MVP gets shifted, the feature shifts. If a non MVP requirement slips, it does not shift the feature.
Requirement | Notes | isMvp? |
Telecommunications Service Provider Technicians will be rolling out OCP w/ a vDU configuration to new Far Edge sites, at scale. They will be working from a service depot where they will pre-install/pre-image a set of Far Edge servers to be deployed at a later date. When ready for deployment, a technician will take one of these generic-OCP servers to a Far Edge site, enter the site specific information, wait for confirmation that the vDU is in-service/online, and then move on to deploy another server to a different Far Edge site.
Retail employees in brick-and-mortar stores will install SNO servers and it needs to be as simple as possible. The servers will likely be shipped to the retail store, cabled and powered by a retail employee and the site-specific information needs to be provided to the system in the simplest way possible, ideally without any action from the retail employee.
Q: how challenging will it be to support multi-node clusters with this feature?
< What does the person writing code, testing, documenting need to know? >
< Are there assumptions being made regarding prerequisites and dependencies?>
< Are there assumptions about hardware, software or people resources?>
< Are there specific customer environments that need to be considered (such as working with existing h/w and software)?>
< Are there Upgrade considerations that customers need to account for or that the feature should address on behalf of the customer?>
<Does the Feature introduce data that could be gathered and used for Insights purposes?>
< What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)? >
< What does success look like?>
< Does this feature have doc impact? Possible values are: New Content, Updates to existing content, Release Note, or No Doc Impact>
< If unsure and no Technical Writer is available, please contact Content Strategy. If yes, complete the following.>
< Which other products and versions in our portfolio does this feature impact?>
< What interoperability test scenarios should be factored by the layered product(s)?>
Question | Outcome |
This is a clone of issue OCPBUGS-14416. The following is the description of the original issue:
—
Description of problem:
When installing SNO with bootstrap-in-place, the cluster-policy-controller hangs for 6 minutes waiting for the lease to be acquired.
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
1. Run the PoC using the makefile here: https://github.com/eranco74/bootstrap-in-place-poc
2. Observe the cluster-policy-controller logs post reboot
Actual results:
```
I0530 16:01:18.011988 1 leaderelection.go:352] lock is held by leaderelection.k8s.io/unknown and has not yet expired
I0530 16:01:18.012002 1 leaderelection.go:253] failed to acquire lease kube-system/cluster-policy-controller-lock
I0530 16:07:31.176649 1 leaderelection.go:258] successfully acquired lease kube-system/cluster-policy-controller-lock
```
Expected results:
Expected the bootstrap cluster-policy-controller to release the lease so that the cluster-policy-controller running post reboot won't have to wait for the lease to expire.
Additional info:
Suggested resolution for bootstrap in place: https://github.com/openshift/installer/pull/7219/files#diff-f12fbadd10845e6dab2999e8a3828ba57176db10240695c62d8d177a077c7161R44-R59
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This epic tracks "business as usual" requirements / enhancements / bug fixing of the Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using
yarn generate-doc
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
Acceptance Criteria: Add missing API docs for *Icon and *Status components in the API docs.
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
The console has good error boundary components that are useful for dynamic plugin.
Exposing them will enable the plugins to get the same look and feel of handling react errors as console
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
To align with https://github.com/openshift/dynamic-plugin-sdk, the plugin metadata field `dependencies`, as well as the @console/pluginAPI entry contained within, should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
When defining two proxy endpoints,
```yaml
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    service:
      basePath: /
```
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
This enhancement Introduces support for provisioning and upgrading heterogenous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node in the cluster has a label for its architecture, e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64`, etc. Based on the set of supported architectures, the console will need to surface only those operators in the Operator Hub which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
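For illustration, the filtering would key off labels like these on each PackageManifest (the set of architectures shown is an example; only the s390x key is quoted from the description above, the others follow the same pattern):
```yaml
metadata:
  labels:
    operatorframework.io/arch.amd64: supported
    operatorframework.io/arch.arm64: supported
    operatorframework.io/arch.s390x: supported
```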
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
This enhancement Introduces support for provisioning and upgrading heterogenous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture: e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64 etc. Based on the set of supported architectures console will need to surface only those operators in the Operator Hub, which are supported on our Nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repositories managers (Nexus OSS, Artifactory, etc.)
Helm CLI also supports them with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
```yaml
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
```
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
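For example, the referenced secret could be created in openshift-config along these lines (the exact keys the controller will expect are part of this story's design, so treat this as a sketch):
```shell
oc create secret generic secret-name \
  --from-literal=password=<repository-password> \
  -n openshift-config
```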
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull Helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user, I would like to be able to install Helm charts from repos added to ODC with basic authentication fields populated.
We need to support helm installs for Repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD is already done in a different story.
Supporting the HelmChartRepository CR: this feature will be scoped first to project/namespace-scoped repos.
<Defines what is included in this story>
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to Helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is a follow-up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work focused on bringing CoreOS/OCP layering into Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
We plan to build Ironic Container Images using RHEL9 as base image in OCP 4.12
This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
update ironic software to pick up latest bug fixes
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the
withSecretHashAnnotation
call from library-go like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, I would like to be informed in an intuitive way when quotas have been reached in a namespace.
Refer below for more details
As a user, in the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits.
Refer below for more details
Provide a form-driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To help the cluster admin configure the perspectives correctly, the developer console should provide a code snippet for the customization of the YAML resource (Console CRD).
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
Previous work:
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I want to hide user perspective(s) based on the customization.
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form-driven experience to allow cluster admins to easily disable the Developer Catalog, or one or more of the sub-catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To help the cluster admin configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization of the YAML resource (Console CRD).
Previous work:
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continued functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help ensure smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, OpenShift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-"; for "openshift-" namespaces, the explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for sync.
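For reference, the label the OLM operator would apply is the same one an admin could set manually (namespace name below is illustrative):
```shell
oc label namespace openshift-example security.openshift.io/scc.podSecurityLabelSync=true
```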
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
As an SRE, I want the hypershift operator to expose a metric when the hosted control plane is ready.
This should allow SRE to tune (or silence) alerts occurring while the hosted control plane is spinning up.
The Kube APIServer has a sidecar to output audit logs. We need similar sidecars for other APIServers that run on the control plane side. We also need to pass the same audit log policy that we pass to the KAS to these other API servers.
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before OVN-K goes default, some tools that we got from customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we now get thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important because it will help reduce the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the ovn-controllers has been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
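A minimal sketch of such a PrometheusRule, assuming the metric reports 1 when connected and 0 when disconnected (name, namespace, label, and the exact expression are illustrative):
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-sbdb-connection   # illustrative name
  namespace: openshift-ovn-kubernetes    # assumed namespace
spec:
  groups:
    - name: ovn-controller.rules
      rules:
        - alert: OVNControllerDisconnectedSouthboundDatabase
          # max_over_time smooths over the ~2 minute metric update interval
          expr: max_over_time(ovn_controller_southbound_database_connected[5m]) == 0
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: ovn-controller on {{ $labels.instance }} has been disconnected from the southbound database for more than 10 minutes.
```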
DoD: Merged to CNO and tested by QE
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Add a SOCKS proxy to cluster-network-operator so EgressIP can use gRPC to reach worker nodes.
With the introduction of gRPC as the means for determining the state of a given egress node, HyperShift should be able to leverage the SOCKS proxy and know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. At the time of writing, GA is planned for August 23.
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Placeholder epic to track spontaneous tasks which do not deserve their own epic.
AWS has a hard limit of 100 OIDC providers globally.
Currently each HostedCluster created by e2e creates its own OIDC provider, which results in hitting the quota limit frequently and causes the tests to fail.
DOD:
Only a single OIDC provider should be created and shared between all e2e HostedClusters.
Once the HostedCluster and NodePool are paused using the PausedUntil field, the awsprivatelink controller continues reconciling.
How to test this:
AC:
We have a connectDirectlyToCloudAPIs flag in the konnectivity SOCKS5 proxy to dial cloud providers directly without going through konnectivity.
This introduces another path for exceptions: https://github.com/openshift/hypershift/pull/1722
We should consolidate both by keeping connectDirectlyToCloudAPIs until there's a reason not to.
DoD:
At the moment, if the input etcd KMS encryption (key and role) is invalid, we fail without surfacing the problem.
We should check that both the key and the role are compatible/operational for a given cluster, and report a failure condition otherwise.
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and to errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices:
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
Now run the assisted installer and try to install an SNO node on this machine; the installation will fail with a message indicating that it could not exclusively access /dev/sda.
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
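For illustration only, a minimal shell sketch of the kind of cleanup needed (the device name and the wipe-everything policy are assumptions, not the agent's actual code path): every volume group that lives on the target disk has to be removed, not just the first one, before the physical volumes are wiped:
DISK=/dev/sda   # hypothetical installation disk
# remove every VG that has a PV on this disk
for vg in $(pvs --noheadings -o pv_name,vg_name | awk -v d="$DISK" '$1 ~ "^"d && $2 != "" {print $2}' | sort -u); do
  vgremove -y "$vg"
done
# now the PVs can be wiped without the "Can't open ... exclusively" error
for pv in $(pvs --noheadings -o pv_name | awk -v d="$DISK" '$1 ~ "^"d {print $1}'); do
  pvremove -y -ff "$pv"
done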
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while master nodes has disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Same thing as we've had in assisted-service. We sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (the CI build cluster) accesses GitHub at a high rate, leading to 429 (Too Many Requests) errors.
The way we fixed it for assisted-service was to change the installation to use a quay.io image that is already built with the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
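A sketch of the container-based approach (the image path is a placeholder, not the actual quay.io image used by assisted-service):
# run golangci-lint from a prebuilt image instead of downloading the release
# tarball from GitHub, which gets rate-limited (429) from the CI cluster's IP
LINT_IMAGE=quay.io/EXAMPLE/golangci-lint:latest   # placeholder image reference
podman run --rm -v "$(pwd)":/src:Z -w /src "$LINT_IMAGE" golangci-lint run ./...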
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273, but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run with elevated privileges (as root from the host's perspective)
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post.
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see whether all my clusters are using the right number of subscriptions
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
1. Proposed title of this feature request
2. What is the nature and description of the request?
3. Why does the customer need this? (List the business requirements here)
4. List any affected packages or components.
_____________________
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most components as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for the in-tree modal launcher
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
Description of problem:
Image registry pods panic while deploying OCP in me-central-1 AWS region
Version-Release number of selected component (if applicable):
4.11.2
How reproducible:
Deploy OCP in AWS me-central-1 region
Steps to Reproduce:
Deploy OCP in AWS me-central-1 region
Actual results:
panic: Invalid region provided: me-central-1
Expected results:
Image registry pods should come up with no errors
Additional info:
This is a clone of issue OCPBUGS-3114. The following is the description of the original issue:
—
Description of problem:
When running a Hosted Cluster on Hypershift the cluster-networking-operator never progressed to Available despite all the components being up and running
Version-Release number of selected component (if applicable):
quay.io/openshift-release-dev/ocp-release:4.11.11-x86_64 for the hosted clusters hypershift operator is quay.io/hypershift/hypershift-operator:4.11 4.11.9 management cluster
How reproducible:
Happened once
Steps to Reproduce:
1. 2. 3.
Actual results:
oc get co network reports False availability
Expected results:
oc get co network reports True availability
Additional info:
Description of problem:
The Pod and PDB list pages just report "Not found" when no resources are found
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-15-094115
How reproducible:
Always
Steps to Reproduce:
1. A normal user has a new empty project
2. The normal user visits the PDB list page via Workloads -> PodDisruptionBudgets
Actual results:
2. it just reports 'Not found'
Expected results:
2. For other workloads it reports "No <resource> found", for example "No HorizontalPodAutoscalers found", "No StatefulSets found", "No Deployments found". For the Pods and PodDisruptionBudgets list pages, when no resource can be found, it's better that we also report "No Pods found" and "No PodDisruptionBudgets found".
Additional info:
Description of problem:
There were 4 ingress-controllers and 15 routes in total. On the web console, try to query "route_metrics_controller_routes_per_shard" in the Observe >> Metrics page. The stat for 3 of the ingress-controllers is 15, and it is 1 for the last ingress-controller.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-23-154914
How reproducible:
Create pods, services, ingress-controllers, routes, then check "route_metrics_controller_routes_per_shard" on web console
Steps to Reproduce:
1. Get the cluster's base domain:
% oc get dnses.config/cluster -oyaml | grep -i domain
baseDomain: shudi-412gcpop36.qe.gcp.devcluster.openshift.com
2. Create 3 more ingress-controllers:
% oc -n openshift-ingress-operator get ingresscontroller
NAME AGE
default 7h5m
extertest3 120m
internal1 120m
internal2 120m
3. Check the spec of the 4 ingress-controllers:
a, default
b, extertest3
spec:
  domain: extertest3.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
  endpointPublishingStrategy:
    loadBalancer:
      dnsManagementPolicy: Managed
      scope: External
    type: LoadBalancerService
c, internal1
spec:
  domain: internal1.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
  endpointPublishingStrategy:
    loadBalancer:
      dnsManagementPolicy: Managed
      scope: Internal
    type: LoadBalancerService
d, internal2
spec:
  domain: internal2.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
  endpointPublishingStrategy:
    loadBalancer:
      dnsManagementPolicy: Managed
      scope: Internal
    type: LoadBalancerService
  routeSelector:
    matchLabels:
      shard: alpha
4. Check the routes; there are 15 routes:
% oc get route -A | awk '{print $3}'
HOST/PORT
oauth-openshift.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
console-openshift-console.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
downloads-openshift-console.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
canary-openshift-ingress-canary.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
alertmanager-main-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
prometheus-k8s-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
prometheus-k8s-federate-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
thanos-querier-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
edge1-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
int1reen2-test.internal1.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
pass1-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
reen1-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
service-unsecure-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
int1edge2-test.internal1.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
test.shudi.com
% oc get route -A | awk '{print $3}' | grep apps.shudi
oauth-openshift.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
console-openshift-console.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
downloads-openshift-console.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
canary-openshift-ingress-canary.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
alertmanager-main-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
prometheus-k8s-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
prometheus-k8s-federate-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
thanos-querier-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
edge1-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
pass1-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
reen1-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
service-unsecure-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
% oc get route -A | awk '{print $3}' | grep apps.shudi | wc -l
12
% oc get route -A | awk '{print $3}' | grep internal1 | wc -l
2
% oc get route -A | awk '{print $3}' | grep shudi.com | wc -l
1
5. Only route unsvc5 had the shard=alpha label:
% oc get route unsvc5 -oyaml | grep labels: -A2
labels:
  name: unsvc5
  shard: alpha
% oc get route unsvc5 -oyaml | grep spec: -A1
spec:
  host: test.shudi.com
6. Log in to the web console (https://console-openshift-console.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com/monitoring/query-browser), then navigate to Observe >> Metrics
7. Input "route_metrics_controller_routes_per_shard", then click the "Run queries" button. As the attached picture shows:
name value
default 15
extertest3 15
internal1 15
internal2 1
8. Also there was a minor issue: as the attached picture shows, there were two "name" columns in the header line:
Name name value
route_metrics_controller_routes_per_shard default 15
route_metrics_controller_routes_per_shard extertest3 15
route_metrics_controller_routes_per_shard internal1 15
route_metrics_controller_routes_per_shard internal2 1
Actual results:
name value
default 15
extertest3 15
internal1 15
internal2 1
Expected results:
name value
default 12
extertest3 0
internal1 2
internal2 1
Additional info:
Our Prometheus alerts are inconsistent with both upstream and sometimes our own vendor folder. Let's do a clean update run before the next release is branched off.
This is a clone of issue OCPBUGS-3278. The following is the description of the original issue:
—
Description of problem:
When doing openshift-install agent create image, one should not need to provide platform specific data like boot MAC addresses.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Create an install-config with only VIPs in the baremetal platform section:
apiVersion: v1
metadata:
  name: foo
baseDomain: test.metalkube.org
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.122.0/23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 0
controlPlane:
  name: master
  replicas: 3
  hyperthreading: Enabled
  architecture: amd64
platform:
  baremetal:
    apiVIPs:
    - 192.168.122.10
    ingressVIPs:
    - 192.168.122.11
---
apiVersion: v1beta1
metadata:
  name: foo
rendezvousIP: 192.168.122.14
2. openshift-install agent create image
Actual results:
ERROR failed to write asset (Agent Installer ISO) to disk: cannot generate ISO image due to configuration errors ERROR failed to fetch Agent Installer ISO: failed to load asset "Install Config": failed to create install config: invalid "install-config.yaml" file: [platform.baremetal.hosts: Invalid value: []*baremetal.Host(nil): bare metal hosts are missing, platform.baremetal.Hosts: Required value: not enough hosts found (0) to support all the configured ControlPlane replicas (3)]
Expected results:
Image gets generated
Additional info:
We should go into the install-config validation code, detect whether we are doing an agent-based installation, and skip the host checks.
This is a clone of issue OCPBUGS-6663. The following is the description of the original issue:
—
Description of problem:
When running openshift-install agent create image, and the install-config.yaml does not contain platform baremetal settings (except for VIPs), warnings are still generated as below:
DEBUG Loading Install Config...
WARNING Platform.Baremetal.ClusterProvisioningIP: 172.22.0.3 is ignored
DEBUG Platform.Baremetal.BootstrapProvisioningIP: 172.22.0.2 is ignored
WARNING Platform.Baremetal.ExternalBridge: baremetal is ignored
WARNING Platform.Baremetal.ExternalMACAddress: 52:54:00:12:e1:68 is ignored
WARNING Platform.Baremetal.ProvisioningBridge: provisioning is ignored
WARNING Platform.Baremetal.ProvisioningMACAddress: 52:54:00:82:91:8d is ignored
WARNING Platform.Baremetal.ProvisioningNetworkCIDR: 172.22.0.0/24 is ignored
WARNING Platform.Baremetal.ProvisioningDHCPRange: 172.22.0.10,172.22.0.254 is ignored
WARNING Capabilities: %!!(MISSING)s(*types.Capabilities=<nil>) is ignored
It looks like these fields are populated with values from libvirt as shown in .openshift_install_state.json:
"platform": {
  "baremetal": {
    "libvirtURI": "qemu:///system",
    "clusterProvisioningIP": "172.22.0.3",
    "bootstrapProvisioningIP": "172.22.0.2",
    "externalBridge": "baremetal",
    "externalMACAddress": "52:54:00:12:e1:68",
    "provisioningNetwork": "Managed",
    "provisioningBridge": "provisioning",
    "provisioningMACAddress": "52:54:00:82:91:8d",
    "provisioningNetworkInterface": "",
    "provisioningNetworkCIDR": "172.22.0.0/24",
    "provisioningDHCPRange": "172.22.0.10,172.22.0.254",
    "hosts": null,
    "apiVIPs": [
      "10.1.101.7",
      "2620:52:0:165::7"
    ],
    "ingressVIPs": [
      "10.1.101.9",
      "2620:52:0:165::9"
    ]
The install-config.yaml used to generate this has the following snippet:
platform:
  baremetal:
    apiVIPs:
    - 10.1.101.7
    - 2620:52:0:165::7
    ingressVIPs:
    - 10.1.101.9
    - 2620:52:0:165::9
additionalTrustBundle: |
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
Happens every time
Steps to Reproduce:
1. Use install-config.yaml with no platform baremetal fields except for the VIPs 2. run openshift-install agent create image
Actual results:
Warning messages are output
Expected results:
No warning messages
Additional info:
Note: This issue is a duplicate of OCPBUGS-10238 intended to target the 4.12 version.
Description of problem:
Updates to the `.spec.updateStrategy.registryPoll.interval` fields for a default CatalogSource are reverted.
Version-Release number of selected component (if applicable):
4.12.5
How reproducible:
100%
Steps to Reproduce:
$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.12.5 True False 86m Cluster version is 4.12.5
$ oc get catalogsource -n openshift-marketplace redhat-operators -o jsonpath='{.spec.updateStrategy.registryPoll.interval}'
10m
$ oc patch -n openshift-marketplace catalogsource/redhat-operators --type=merge -p '{"spec":{"updateStrategy":{"registryPoll":{"interval":"30m0s"}}}}'
catalogsource.operators.coreos.com/redhat-operators patched
Actual results:
$ oc get catalogsource -n openshift-marketplace redhat-operators -o jsonpath='{.spec.updateStrategy.registryPoll.interval}'
10m
$ oc logs -n openshift-marketplace deployment/marketplace-operator
time="2023-03-14T09:43:58Z" level=info msg="[defaults] Restoring CatalogSource redhat-operators"
time="2023-03-14T09:43:58Z" level=info msg="[defaults] CatalogSource redhat-operators is annotated and its spec is the same as the default spec"
Expected results:
In 4.12.3 the updated value remains:
$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.12.3 True False 15d Cluster version is 4.12.3
$ oc get catalogsource -n openshift-marketplace redhat-operators -o jsonpath='{.spec.updateStrategy.registryPoll.interval}'
10m
$ oc patch -n openshift-marketplace catalogsource/redhat-operators --type=merge -p '{"spec":{"updateStrategy":{"registryPoll":{"interval":"30m0s"}}}}'
catalogsource.operators.coreos.com/redhat-operators patched
$ oc get catalogsource -n openshift-marketplace redhat-operators -o jsonpath='{.spec.updateStrategy.registryPoll.interval}'
30m0s
Additional info:
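Not from this bug report, but the generally documented way to keep a custom poll interval while the marketplace operator keeps restoring the default spec is to disable the managed default source and recreate it as a custom CatalogSource (the index image and interval below are illustrative):
oc patch operatorhub cluster --type merge \
  -p '{"spec":{"sources":[{"name":"redhat-operators","disabled":true}]}}'
Then create a replacement source whose spec the operator will not reconcile:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operators-custom
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.redhat.io/redhat/redhat-operator-index:v4.12
  updateStrategy:
    registryPoll:
      interval: 30m0s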
This is a clone of issue OCPBUGS-6647. The following is the description of the original issue:
—
Description of problem:
Resource type drop-down menu item 'Last used' is in English
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Steps to Reproduce:
1. Navigate to kube:admin -> User Preferences -> Applications
2. Click on the Resource type drop-down
Actual results:
Content is in English
Expected results:
Content should be in target language
Additional info:
Screenshot reference provided
This is a clone of issue OCPBUGS-6651. The following is the description of the original issue:
—
Description of problem:
When running a hypershift HostedCluster with a publicAndPrivate / private setup behind a proxy, Nodes never go ready. ovn-kubernetes pods fail to run because the init container fails. [root@ip-10-0-129-223 core]# crictl logs cf142bb9f427d + [[ -f /env/ ]] ++ date -Iseconds 2023-01-25T12:18:46+00:00 - checking sbdb + echo '2023-01-25T12:18:46+00:00 - checking sbdb' + echo 'hosts: dns files' + proxypid=15343 + ovndb_ctl_ssl_opts='-p /ovn-cert/tls.key -c /ovn-cert/tls.crt -C /ovn-ca/ca-bundle.crt' + sbdb_ip=ssl:ovnkube-sbdb.apps.agl-proxy.hypershift.local:9645 + retries=0 + ovn-sbctl --no-leader-only --timeout=5 --db=ssl:ovnkube-sbdb.apps.agl-proxy.hypershift.local:9645 -p /ovn-cert/tls.key -c /ovn-cert/tls.crt -C /ovn-ca/ca-bundle.crt get-connection + exec socat TCP-LISTEN:9645,reuseaddr,fork PROXY:10.0.140.167:ovnkube-sbdb.apps.agl-proxy.hypershift.local:443,proxyport=3128 ovn-sbctl: ssl:ovnkube-sbdb.apps.agl-proxy.hypershift.local:9645: database connection failed () + (( retries += 1 ))
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always.
Steps to Reproduce:
1. Create a publicAndPrivate hypershift HostedCluster behind a proxy. E.g" ➜ hypershift git:(main) ✗ ./bin/hypershift create cluster \ aws --pull-secret ~/www/pull-secret-ci.txt \ --ssh-key ~/.ssh/id_ed25519.pub \ --name agl-proxy \ --aws-creds ~/www/config/aws-osd-hypershift-creds \ --node-pool-replicas=3 \ --region=us-east-1 \ --base-domain=agl.hypershift.devcluster.openshift.com \ --zones=us-east-1a \ --endpoint-access=PublicAndPrivate \ --external-dns-domain=agl-services.hypershift.devcluster.openshift.com --enable-proxy=true 2. Get the kubeconfig for the guest cluster. E.g kubectl get secret -nclusters agl-proxy-admin-kubeconfig -oyaml 3. Get pods in the guest cluster. See ovnkube-node pods init container failing with [root@ip-10-0-129-223 core]# crictl logs cf142bb9f427d + [[ -f /env/ ]] ++ date -Iseconds 2023-01-25T12:18:46+00:00 - checking sbdb + echo '2023-01-25T12:18:46+00:00 - checking sbdb' + echo 'hosts: dns files' + proxypid=15343 + ovndb_ctl_ssl_opts='-p /ovn-cert/tls.key -c /ovn-cert/tls.crt -C /ovn-ca/ca-bundle.crt' + sbdb_ip=ssl:ovnkube-sbdb.apps.agl-proxy.hypershift.local:9645 + retries=0 + ovn-sbctl --no-leader-only --timeout=5 --db=ssl:ovnkube-sbdb.apps.agl-proxy.hypershift.local:9645 -p /ovn-cert/tls.key -c /ovn-cert/tls.crt -C /ovn-ca/ca-bundle.crt get-connection + exec socat TCP-LISTEN:9645,reuseaddr,fork PROXY:10.0.140.167:ovnkube-sbdb.apps.agl-proxy.hypershift.local:443,proxyport=3128 ovn-sbctl: ssl:ovnkube-sbdb.apps.agl-proxy.hypershift.local:9645: database connection failed () + (( retries += 1 ))
To create a bastion an ssh into the Nodes See https://hypershift-docs.netlify.app/how-to/debug-nodes/
Actual results:
Nodes unready
Expected results:
Nodes go ready
Additional info:
This is a clone of issue OCPBUGS-881. The following is the description of the original issue:
—
Description of problem:
Create install-config file for vsphere IPI against 4.12.0-0.nightly-2022-09-02-194931, fail as apiVIP and ingressVIP are not in machine CIDR. $ ./openshift-install create install-config --dir ipi ? Platform vsphere ? vCenter xxxxxxxx ? Username xxxxxxxx ? Password [? for help] ******************** INFO Connecting to xxxxxxxx INFO Defaulting to only available datacenter: SDDC-Datacenter INFO Defaulting to only available cluster: Cluster-1 INFO Defaulting to only available datastore: WorkloadDatastore ? Network qe-segment ? Virtual IP Address for API 172.31.248.137 ? Virtual IP Address for Ingress 172.31.248.141 ? Base Domain qe.devcluster.openshift.com ? Cluster Name jimavmc ? Pull Secret [? for help] **************************************************************************************************************************************************************************************** FATAL failed to fetch Install Config: failed to generate asset "Install Config": invalid install config: [platform.vsphere.apiVIPs: Invalid value: "172.31.248.137": IP expected to be in one of the machine networks: 10.0.0.0/16, platform.vsphere.ingressVIPs: Invalid value: "172.31.248.141": IP expected to be in one of the machine networks: 10.0.0.0/16] As user could not define cidr for machineNetwork when creating install-config file interactively, it will use default value 10.0.0.0/16, so fail to create install-config when inputting apiVIP and ingressVIP outside of default machinenNetwork. Error is thrown from https://github.com/openshift/installer/blob/master/pkg/types/validation/installconfig.go#L655-L666, seems new function introduced from PR https://github.com/openshift/installer/pull/5798 The issue should also impact Nutanix platform.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-02-194931
How reproducible:
Always
Steps to Reproduce:
1. create install-config.yaml file by running command "./openshift-install create install-config --dir ipi" 2. failed with above error 3.
Actual results:
fail to create install-config.yaml file
Expected results:
succeed to create install-config.yaml file
Additional info:
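Not part of the original report, but until the interactive flow can prompt for the machine network, one workaround is to generate (or hand-write) the install-config.yaml and edit machineNetwork so the VIPs fall inside it; the CIDR below is an assumption based on the VIPs used above:
networking:
  machineNetwork:
  - cidr: 172.31.248.0/24
platform:
  vsphere:
    apiVIPs:
    - 172.31.248.137
    ingressVIPs:
    - 172.31.248.141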
When multi-cluster is enabled, it is possible to get into a situation where you can't cancel login. If you select a cluster you don't know the credentials for, the console will remember the last cluster and repeatedly send you to the login page with no way to cancel or go back. If we decide to set the last cluster in the user's preferences, it might be possible to get stuck even if you clear cookies and localStorage.
There are similar issues logging into clusters that are hibernating. See the attached video.
cc Scott Berens
The relevant code in ironic-image was not updated to support TLS, so it still uses the old port and explicit http://
This is a clone of issue OCPBUGS-6270. The following is the description of the original issue:
—
Similar to the baremetal platform, which due to install-config validation previously required a bunch of fields that are actually ignored (OCPBUGS-3278), we require values for the following fields in the platform.vsphere section:
None of these values are actually used in the agent-based installer at present, and they should not be required.
Users can work around this by specifying dummy values in the platform config (note that the VIP values are required and must be genuine):
platform:
  vsphere:
    apiVIP: 192.168.111.1
    ingressVIP: 192.168.111.2
    vCenter: a
    username: b
    password: c
    datacenter: d
    defaultDatastore: e
Description of problem:
OCP cluster installation (SNO) using the assisted installer running on an ACM hub cluster. The hub cluster is OCP 4.10.33 and ACM is 2.5.4. When a cluster fails to install, we remove the installation CRs and the cluster namespace from the hub cluster (to eventually redeploy). The termination of the namespace hangs indefinitely (14+ hours) with finalizers remaining. To resolve the hang we can remove the finalizers by editing both the secret pointed to by BareMetalHost.spec.bmc.credentialsName and the BareMetalHost CR. When these finalizers are removed, the namespace termination completes within a few seconds.
Version-Release number of selected component (if applicable):
OCP 4.10.33 ACM 2.5.4
How reproducible:
Always
Steps to Reproduce:
1. Generate installation CRs (AgentClusterInstall, BMH, ClusterDeployment, InfraEnv, NMStateConfig, ...) with an invalid configuration parameter. Two scenarios validated to hit this issue: a. Invalid rootDeviceHint in the BareMetalHost CR b. Invalid credentials in the secret referenced by BareMetalHost.spec.bmc.credentialsName
2. Apply installation CRs to the hub cluster
3. Wait for cluster installation to fail
4. Remove cluster installation CRs and namespace
Actual results:
Cluster namespace remains in terminating state indefinitely: $ oc get ns cnfocto1 NAME STATUS AGE cnfocto1 Terminating 17h
Expected results:
Cluster namespace (and all installation CRs in it) are successfully removed.
Additional info:
The installation CRs are applied to and removed from the hub cluster using Argo CD. The CRs have the following waves applied to them, which affects the creation order (lowest to highest) and removal order (highest to lowest):
Namespace: 0
AgentClusterInstall: 1
ClusterDeployment: 1
NMStateConfig: 1
InfraEnv: 1
BareMetalHost: 1
HostFirmwareSettings: 1
ConfigMap: 1 (extra manifests)
ManagedCluster: 2
KlusterletAddonConfig: 2
Description of problem:
The error message of "opm alpha render-veneer semver" is not correct: "semver &{%!q(*os.file=&{{{0 0 0} 3 {0} 0 1 true true true}" is meaningless and should not be printed.
Version-Release number of selected component (if applicable):
zhaoxia@xzha-mac operator-framework-olm % opm version Version: version.Version{OpmVersion:"2149aebcc", GitCommit:"2149aebcc71367e6fba8f5416374917dae1e6a1c", BuildDate:"2022-09-08T04:31:47Z", GoOs:"darwin", GoArch:"amd64"}
How reproducible:
always
Steps to Reproduce:
1. Create the file:
zhaoxia@xzha-mac OCP-53915 % cat catalog-semver-veneer-1.yaml
Schema: olm.semver
Candidate:
  Bundles:
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v0.0.1
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1-alpha
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1-beta
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1-alpha20220829
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1-alpha20220830
Stable:
  Bundles:
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1-beta
Fast:
  Bundles:
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v0.0.1
  - Image: quay.io/olmqe/nginxolm-operator-bundle:v1.0.1-beta
2. Run "opm alpha render-veneer semver":
zhaoxia@xzha-mac operator-framework-olm % opm alpha render-veneer semver catalog-semver-veneer-1.yaml
2022/09/08 12:35:05 semver &{%!q(*os.file=&{{{0 0 0} 3 {0} <nil> 0 1 true true true} catalog-semver-veneer-1.yaml <nil> false false false})}: semver-render: unable to post-process bundle info: encountered bundle versions which differ only by build metadata, which cannot be ordered: [bundle version "1.0.1-alpha" cannot be compared to "1.0.1-alpha", bundle version "1.0.1-alpha+20220829" cannot be compared to "1.0.1-alpha"]
Actual results:
"semver &{%!q(*os.file=&{{{0 0 0} 3 {0} 0 1 true true true}" is meaningless, should not be printed.
Expected results:
no error message "semver &{%!q(*os.file=&{{{0 0 0} 3 {0} 0 1 true true true}"
Additional info:
Description of problem:
This bug is a clone of https://bugzilla.redhat.com/show_bug.cgi?id=2109140 on the odf-console side. The corresponding PR needs to be merged in console as well. Please verify this console Jira bug and https://bugzilla.redhat.com/show_bug.cgi?id=2109140 simultaneously. The steps are exactly the same, no difference.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-2841. The following is the description of the original issue:
—
Currently the agent installer supports only the x86_64 architecture. The image creation command must fail if an architecture other than x86_64 is configured.
We want to have an allowed list of architectures.
allowed = ['x86_64', 'amd64']
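A minimal shell sketch of that allow-list check (illustrative only; the agent installer's actual validation code and variable names are not shown here):
ARCH="${ARCH:-$(uname -m)}"   # architecture configured for the image (assumed variable)
case "$ARCH" in
  x86_64|amd64)
    ;;                        # supported, continue image creation
  *)
    echo "unsupported architecture for agent installer image: $ARCH" >&2
    exit 1
    ;;
esac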
Description of problem:
Git icon shown in the repository details page should be based on the git provider.
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Always
Steps to Reproduce:
1. Create a Repository with gitlab repo url
2. Navigate to the detail page.
Actual results:
github icon is displayed for the gitlab url.
Expected results:
gitlab icon should be displayed for the gitlab url.
Additional info:
use `GitLabIcon` and `BitBucketIcon` from patternfly react-icons.
This is a clone of issue OCPBUGS-5542. The following is the description of the original issue:
—
Description of problem:
The project list orders projects by its name and is smart enough to keep a "numerical order" like:
The more prominent project dropdown is not so smart and shows just a simple "ascii ordered" list:
Version-Release number of selected component (if applicable):
4.8-4.13 (master)
How reproducible:
Always
Steps to Reproduce:
1. Create some new projects called test-1, test-11, test-2
2. Check the project list page (in admin perspective)
3. Check the project dropdown (in dev perspective)
Actual results:
Order is
Expected results:
Order should be
Additional info:
none
Catastrophic job runs where high numbers of tests fail are common. There are likely many root causes, but let's try to find one. This is a hard task because it's not "this one test failed, figure out why."
Clusters of failures are more common on certain platforms, it may be fruitful to start with the worst.
NURP's that average > 5 openshift-tests or openshift-tests-upgrade failures:
variants | avg
{azure,amd64,ovn,upgrade,upgrade-micro,single-node} | 124.5294117647058824
{azure,amd64,ovn,upgrade,upgrade-minor,single-node} | 92.9090909090909091
{openstack,amd64,ovn,ha} | 49.2105263157894737
{azure,amd64,sdn,ha,fips} | 25.6666666666666667
{metal-ipi,amd64,ovn,ha} | 24.6000000000000000
{openstack,amd64,ovn,ha,fips} | 23.5000000000000000
{azure,amd64,ovn,ha,hypershift} | 22.6666666666666667
{s390x,sdn,ha} | 22.5454545454545455
{gcp,amd64,ovn,ha} | 21.5714285714285714
{ppc64le,sdn,ha} | 17.9545454545454545
{metal-ipi,amd64,sdn,ha} | 17.6000000000000000
{openstack,amd64,ovn,ha,serial} | 15.3333333333333333
{azure,amd64,ovn,ha} | 15.1627906976744186
{promote} | 15.0000000000000000
{aws,amd64,ovn,ha} | 14.2558139534883721
{metal-ipi,amd64,ovn,upgrade,upgrade-minor,ha} | 13.9375000000000000
{gcp,amd64,ovn,upgrade,upgrade-minor,ha,realtime} | 11.2000000000000000
{azure,amd64,sdn,upgrade,upgrade-minor,ha} | 9.6842105263157895
{never-stable} | 9.0740740740740741
{aws,amd64,ovn,single-node} | 8.8666666666666667
{metal-ipi,amd64,sdn,upgrade,upgrade-micro,ha} | 7.9090909090909091
{azure,amd64,sdn,upgrade,upgrade-micro,ha} | 6.4000000000000000
{aws,amd64,sdn,ha} | 5.7800000000000000
{vsphere-ipi,amd64,ovn,ha} | 5.6458333333333333
{openstack,amd64,ovn,upgrade,upgrade-minor,ha} | 5.6250000000000000
{metal-ipi,amd64,ovn,upgrade,upgrade-micro,ha} | 5.5882352941176471
{aws,amd64,sdn,upgrade,upgrade-micro,ha} | 5.5789473684210526
Here's a sippy link for 4.12 job runs with > 50 failures: https://sippy.dptools.openshift.org/sippy-ng/jobs/4.12/runs?filters=%257B%2522items%2522%253A%255B%257B%2522columnField%2522%253A%2522test_failures%2522%252C%2522operatorValue%2522%253A%2522%253E%2522%252C%2522value%2522%253A%252250%2522%257D%252C%257B%2522columnField%2522%253A%2522overall_result%2522%252C%2522operatorValue%2522%253A%2522equals%2522%252C%2522value%2522%253A%2522F%2522%257D%255D%252C%2522linkOperator%2522%253A%2522and%2522%257D&sort=desc&sortField=timestamp
This is a clone of issue OCPBUGS-11054. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-11038. The following is the description of the original issue:
—
Description of problem:
Backport support starting in 4.12.z to a new GCP region europe-west12
Version-Release number of selected component (if applicable):
4.12.z and 4.13.z
How reproducible:
Always
Steps to Reproduce:
1. Use openhift-install to deploy OCP in europe-west12
Actual results:
europe-west12 is not available as a supported region in the user survey
Expected results:
europe-west12 to be available as a supported region in the user survey
Additional info:
Description of problem:
By creating network policies with a namespace that has maximum length, it can end up causing this error: 2023-06-22T17:34:40.804880959Z I0622 17:34:40.804851 1 obj_retry.go:318] Retry add failed for *v1.NetworkPolicy ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident/kas, will try again later: failed to create Network Policy ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident/kas: failed to create default deny port groups: error in transact with ops [ {Op:update Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[default-deny-policy-type:Ingress]} log:false match:outport == @a7686019953911959437_ingressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident_]} priority:1000] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {08cc8026-4c22-4c52-99cd-e8cd1469c8bd}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[default-deny-policy-type:Ingress]} log:false match:outport == @a7686019953911959437_ingressDefaultDeny && (arp || nd) meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident_]} priority:1001] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {08cc8026-4c22-4c52-99cd-e8cd1469c8bd}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:update Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false match:inport == @a7686019953911959437_egressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident_]} options:{GoMap:map[apply-after-lb:true]} priority:1000] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {f324353c-a47b-4044-9cd9-dbeef058ada3}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}{Op:update Table:ACL Row:map[action:allow direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false match:inport == @a7686019953911959437_egressDefaultDeny && (arp || nd) meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-production-24gfm4t0rtdsg01bcqgihdrceh3t59na-mshen-incident_]} options:{GoMap:map[apply-after-lb:true]} priority:1001] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {f324353c-a47b-4044-9cd9-dbeef058ada3}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}{Op:update Table:Port_Group Row:map[acls:{GoSet:[{GoUUID:08cc8026-4c22-4c52-99cd-e8cd1469c8bd} {GoUUID:08cc8026-4c22-4c52-99cd-e8cd1469c8bd}]} external_ids:{GoMap:map[name:a7686019953911959437_ingressDefaultDeny]} ports:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {d3b52500-963a-4f7b-8928-d869f298d2e8}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}{Op:update Table:Port_Group Row:map[acls:{GoSet:[{GoUUID:f324353c-a47b-4044-9cd9-dbeef058ada3} {GoUUID:f324353c-a47b-4044-9cd9-dbeef058ada3}]} external_ids:{GoMap:map[name:a7686019953911959437_egressDefaultDeny]} ports:{GoSet:[]}] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {b128baec-6acd-4683-8c12-5b968bf73bd8}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]results [{Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:1 Error: Details: UUID:{GoUUID:} Rows:[]} {Count:0 Error:ovsdb error 
Details:set contains duplicate UUID:{GoUUID:} Rows:[]} {Count:0 Error: Details: UUID:{GoUUID:} Rows:[]}] and errors [ovsdb error: set contains duplicate]: 1 ovsdb operations failed
This is not a problem in 4.14 as we moved to ACL indexes, but in 4.13 and before we compare the ACL name and the external ids. For default deny ACLs we simply store the direction in the external id, and the name of the ACL is limited to 63 characters in OVN. When we create default deny acls, we create one that denies everything, then we also create some allow acls to permit arp and neighbor discovery traffic. These 2 ACLs may be recognized as duplicate because their truncated name (namespace only) and their directions in external ids match.
This is a clone of issue OCPBUGS-8035. The following is the description of the original issue:
—
Description of problem:
Installing a disconnected private cluster, ssh to the master/bootstrap nodes from the bastion on the VPC failed.
Version-Release number of selected component (if applicable):
Pre-merge build https://github.com/openshift/installer/pull/6836 registry.build05.ci.openshift.org/ci-ln-5g4sj02/release:latest Tag: 4.13.0-0.ci.test-2023-02-27-033047-ci-ln-5g4sj02-latest
How reproducible:
always
Steps to Reproduce:
1. Create bastion instance maxu-ibmj-p1-int-svc
2. Create a VPC on the bastion host
3. Install a private disconnected cluster on the bastion host with a mirror registry
4. ssh to the bastion
5. ssh to the master/bootstrap nodes from the bastion
Actual results:
[core@maxu-ibmj-p1-int-svc ~]$ ssh -i ~/openshift-qe.pem core@10.241.0.5 -v OpenSSH_8.8p1, OpenSSL 3.0.5 5 Jul 2022 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf debug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config debug1: configuration requests final Match pass debug1: re-parsing configuration debug1: Reading configuration data /etc/ssh/ssh_config debug1: Reading configuration data /etc/ssh/ssh_config.d/50-redhat.conf debug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config debug1: Connecting to 10.241.0.5 [10.241.0.5] port 22. debug1: connect to address 10.241.0.5 port 22: Connection timed out ssh: connect to host 10.241.0.5 port 22: Connection timed out
Expected results:
ssh succeed.
Additional info:
$ibmcloud is sg-rules r014-5a6c16f4-8a4c-4c02-ab2d-626c14f72a77 --vpc maxu-ibmj-p1-vpc Listing rules of security group r014-5a6c16f4-8a4c-4c02-ab2d-626c14f72a77 under account OpenShift-QE as user ServiceId-dff277a9-b608-410a-ad24-c544e59e3778... ID Direction IP version Protocol Remote r014-6739d68f-6827-41f4-b51a-5da742c353b2 outbound ipv4 all 0.0.0.0/0 r014-06d44c15-d3fd-4a14-96c4-13e96aa6769c inbound ipv4 all shakiness-perfectly-rundown-take r014-25b86956-5370-4925-adaf-89dfca9fb44b inbound ipv4 tcp Ports:Min=22,Max=22 0.0.0.0/0 r014-e18f0f5e-c4e5-44a5-b180-7a84aa59fa97 inbound ipv4 tcp Ports:Min=3128,Max=3129 0.0.0.0/0 r014-7e79c4b7-d0bb-4fab-9f5d-d03f6b427d89 inbound ipv4 icmp Type=8,Code=0 0.0.0.0/0 r014-03f23b04-c67a-463d-9754-895b8e474e75 inbound ipv4 tcp Ports:Min=5000,Max=5000 0.0.0.0/0 r014-8febe8c8-c937-42b6-b352-8ae471749321 inbound ipv4 tcp Ports:Min=6001,Max=6002 0.0.0.0/0
This is a clone of issue OCPBUGS-3277. The following is the description of the original issue:
—
I saw this occur one time when running installs in a continuous loop. This was with COMPaCT_IPV4 in a non-disconnected setup.
WaitForBootstrapComplete shows it can't access the API
level=info msg=Unable to retrieve cluster metadata from Agent Rest API: no clusterID known for the cluster
level=debug msg=cluster is not registered in rest API
level=debug msg=infraenv is not registered in rest API
This is because create-cluster-and-infraenv.service failed
Failed Units: 2
create-cluster-and-infraenv.service
NetworkManager-wait-online.service
The agentbasedinstaller register command wasn't able to retrieve the image to get the version
Nov 03 23:03:24 master-0 create-cluster-and-infraenv[2702]: time="2022-11-03T23:03:24Z" level=error msg="command 'oc adm release info -o template --template '\{{.metadata.version}}' --insecure=false registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451 --registry-config=/tmp/registry-config3852044519' exited with non-zero exit code 1: \nerror: unable to read image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451: Get \"https://registry.ci.openshift.org/v2/\": dial tcp: lookup registry.ci.openshift.org on 192.168.111.1:53: read udp 192.168.111.80:51315->192.168.111.1:53: i/o timeout\n" Nov 03 23:03:24 master-0 create-cluster-and-infraenv[2702]: time="2022-11-03T23:03:24Z" level=error msg="failed to get image openshift version from release image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451" error="command 'oc adm release info -o template --template '\{{.metadata.version}}' --insecure=false registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451 --registry-config=/tmp/registry-config3852044519' exited with non-zero exit code 1: \nerror: unable to read image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451: Get \"https://registry.ci.openshift.org/v2/\": dial tcp: lookup registry.ci.openshift.org on 192.168.111.1:53: read udp 192.168.111.80:51315->192.168.111.1:53: i/o timeout\n"
This occurs when attempting to get the release here:
https://github.com/openshift/assisted-service/blob/master/cmd/agentbasedinstaller/register.go#L58
We should add a retry mechanism or restart the service to handle spurious network failures like this.
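A sketch of the retry idea around the failing call (the retry count, sleep, and registry-config path are arbitrary; the real fix may instead restart create-cluster-and-infraenv.service via systemd):
RELEASE_IMAGE=registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451
version=""
for attempt in 1 2 3 4 5; do
  # transient DNS/network failures are what we are guarding against here
  if version=$(oc adm release info -o template --template '{{.metadata.version}}' \
        --registry-config=/tmp/registry-config "$RELEASE_IMAGE"); then
    echo "release version: $version"
    break
  fi
  echo "attempt $attempt failed, retrying in 10s" >&2
  sleep 10
done
[ -n "$version" ] || exit 1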
This is a clone of issue OCPBUGS-95. The following is the description of the original issue:
—
In an OpenShift cluster with the OpenShiftSDN network plugin, with egressIP and the NMState operator configured, there are some conditions under which the egressIP is deconfigured from the network interface.
The bug is 100% reproducible.
Steps for reproducing the issue are:
1. Install a cluster with OpenShiftSDN network plugin.
2. Configure egressip for a project.
3. Install NMstate operator.
4. Create a NodeNetworkConfigurationPolicy.
5. Identify on which node the egressIP is present.
6. Restart the nmstate-handler pod running on the identified node.
7. Verify that the egressIP is no more present.
Restarting the sdn pod related to the identified node will reconfigure the egressIP in the node.
This issue has a high impact since any changes triggered for the NMstate operator will prevent application traffic. For example, in the customer environment, the issue is triggered any time a new node is added to the cluster.
The expectation is that NMstate operator should not interfere with SDN configuration.
This is a clone of issue OCPBUGS-3767. The following is the description of the original issue:
—
Description of problem:
Start maintenance action moved from Nodes tab to Bare Metal Hosts tab
Version-Release number of selected component (if applicable):
Cluster version is 4.12.0-0.nightly-2022-11-15-024309
How reproducible:
100%
Steps to Reproduce:
1. Install the Node Maintenance operator
2. Go to Compute -> Nodes
3. Start maintenance from the 3-dots menu of worker-0-0, see https://docs.openshift.com/container-platform/4.11/nodes/nodes/eco-node-maintenance-operator.html#eco-setting-node-maintenance-actions-web-console_node-maintenance-operator
Actual results:
No 'Start maintenance' option
Expected results:
Maintenance started successfully
Additional info:
worked for 4.11
This is a clone of issue OCPBUGS-7374. The following is the description of the original issue:
—
Originally reported by lance5890 in issue https://github.com/openshift/cluster-etcd-operator/issues/1000
The controllers sometimes get stuck on listing members in failure scenarios; this is known and can be mitigated by simply restarting the CEO.
similar BZ 2093819 with stuck controllers was fixed slightly different in https://github.com/openshift/cluster-etcd-operator/commit/4816fab709e11e0681b760003be3f1de12c9c103
This fix was contributed by lance5890, thanks a lot!
This is a clone of issue OCPBUGS-10807. The following is the description of the original issue:
—
Description of problem:
The Cluster Network Operator-managed component multus-admission-controller does not conform to HyperShift control plane expectations. When CNO is managed by HyperShift, multus-admission-controller and other CNO-managed deployments should run with a non-root security context. If HyperShift runs the control plane on a Kubernetes (as opposed to OpenShift) management cluster, it adds a pod security context, with a runAsUser element inside, to its managed deployments, including CNO. In such a case CNO should do the same: set the security context for its managed deployments, like multus-admission-controller, to meet HyperShift security rules.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1.Create OCP cluster using Hypershift using Kube management cluster 2.Check pod security context of multus-admission-controller
Actual results:
no pod security context is set on multus-admission-controller
Expected results:
pod security context is set with runAsUser: xxxx
Additional info:
Corresponding CNO change
Description of problem:
This bug's purpose is to enable a feature backport of https://issues.redhat.com/browse/MON-1949.
This is a clone of issue OCPBUGS-3186. The following is the description of the original issue:
—
Description of problem:
Fail to get a clear error message when the zones do not match the subnets in BYON.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. install-config.yaml:
yq '.controlPlane.platform.ibmcloud.zones,.platform.ibmcloud.controlPlaneSubnets' install-config.yaml
["ca-tor-1", "ca-tor-2", "ca-tor-3"]
- ca-tor-existing-network-1-cp-ca-tor-2
- ca-tor-existing-network-1-cp-ca-tor-3
2. openshift-install create manifests --dir byon-az-test-1
Actual results:
FATAL failed to fetch Master Machines: failed to generate asset "Master Machines": failed to create master machine objects: failed to create provider: no subnet found for ca-tor-1
Expected results:
A clearer error message about the zone/subnet mismatch in install-config.yaml
Additional info:
Currently, for summary logs, if there is a kube-api issue the controller will not upload logs, but it should, since it has a file to read them from.
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
1. The debugging endpoint /debug/pprof is exposed over the unauthenticated 10251 port
2. This debugging endpoint can potentially leak sensitive information
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-6799. The following is the description of the original issue:
—
Description of problem:
The pipelines -> repositories list view in Dev Console does not show the running PipelineRun as the last PipelineRun in the table.
Original BugZilla Link: https://bugzilla.redhat.com/show_bug.cgi?id=2016006
OCPBUGSM: https://issues.redhat.com/browse/OCPBUGSM-36408
This is a clone of issue OCPBUGS-4350. The following is the description of the original issue:
—
Steps to reproduce:
Release: 4.13.0-0.nightly-2022-11-30-183109 (latest 4.12 nightly as well)
Create a HyperShift cluster on AWS and wait until it has completed rolling out.
Upgrade the HostedCluster by updating its release image to a newer one
Observe the 'network' clusteroperator resource in the guest cluster as well as the 'version' clusterversion resource in the guest cluster.
When the clusteroperator resource reports the upgraded release and the clusterversion resource reports the new release as applied, take a look at the ovnkube-master statefulset in the control plane namespace of the management cluster. It is still not finished rolling out.
Expected: that the network clusteroperator reports the new version only when all components have finished rolling out.
This is a clone of issue OCPBUGS-3316. The following is the description of the original issue:
—
Description of problem:
Branch name in repository pipelineruns list view should match the actual github branch name.
Version-Release number of selected component (if applicable):
4.11.z
How reproducible:
Always
Steps to Reproduce:
1. Create a repository 2. Trigger the pipelineruns by push or pull request event on the github
Actual results:
Branch name contains a "refs-heads-" prefix in front of the actual branch name, e.g. "refs-heads-cicd-demo" (cicd-demo is the branch name)
Expected results:
Branch name should be the actual GitHub branch name; just `cicd-demo` should be shown in the branch column.
Additional info:
Ref: https://coreos.slack.com/archives/CHG0KRB7G/p1667564311865459
This is a clone of issue OCPBUGS-15722. The following is the description of the original issue:
—
In Helm Charts we define a values.schema.json file - a JSON schema for all the possible values the user can set in a chart. This schema needs to follow the JSON schema standard. The standard includes something called $ref - a reference to either a local or remote definition. If we use a schema with remote references in OCP, it causes various troubles. Different OCP versions give different results, and even on the same OCP version you can get different results based on how tightly locked down the cluster networking is.
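For illustration only (this snippet is not taken from the linked chart; the referenced URL is a placeholder), a values.schema.json with a remote $ref looks roughly like this:
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "image": {
      "$ref": "https://example.com/schemas/image.schema.json"
    },
    "replicas": {
      "type": "integer",
      "minimum": 1
    }
  }
}
Resolving the "image" property requires the console to fetch the remote document, which is where the behaviour starts to depend on cluster networking.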
Tried in Developer Sandbox, OpenShift Local, Baremetal Public Cluster in Operate First, OCP provisioned through clusterbot. It behaves differently in each instance. Individual cases are described below.
1. Go to the "Helm" tab in Developer Perspective
2. Click "Create" in top right and select "Repository"
3. Use following ProjectHelmChartRepository resource and click "Create" (this repo contains single chart, this chart has values.schema.json with content linked below):
apiVersion: helm.openshift.io/v1beta1
kind: ProjectHelmChartRepository
metadata:
name: reproducer
spec:
connectionConfig:
url: https://raw.githubusercontent.com/tumido/helm-backstage/reproducer
4. Go back to the "Helm" tab in Developer Perspective
5. Click "Create" in top right and select "Helm Release"
6. In filters section of the catalog in the "Chart repositories" select "Reproducer"
7. Click on the single tile available (Backstage)
8. Click "Install Helm Chart"
9. Either you will be greeted with various error screens, or you see the "YAML view" tab (this tab selection is not the default and is, I suppose, remembered only for the user session)
10. Select "Form view"
Various error screens, depending on OCP version and network restrictions. I've attached screen captures of how it behaves in different settings.
Either render the form view (resolve remote references) or make it obvious that remote references are not supported. Optionally, fall back to the "YAML view", noting that the user doesn't have the full schema available but the chart is still deployable.
Depends on the environment
Always in OpenShift Local, Developer Sandbox, cluster bot clusters
1. Select any other chart to install, click "Install Helm Chart"
2. Change the view to "YAML view"
3. Go back to the Helm catalog without actually deploying anything
4. Select the faulty chart and click "Install Helm Chart"
5. Proceed with installation
Description of problem:
If using ingresscontroller.spec.routeSelector.matchExpressions or ingresscontroller.spec.namespaceSelector.matchExpressions, the route will not be counted in the new route_metrics_controller_routes_per_shard prometheus metric. This is due to the logic only using "matchLabels". The logic needs to be updated to also use "matchExpressions".
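A minimal sketch (this is not the actual ingress-operator code; the helper name routeMatchesShard is made up here) of how a controller can honor both matchLabels and matchExpressions by converting the whole LabelSelector with the standard apimachinery helper:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// routeMatchesShard reports whether the route's labels satisfy the shard's
// routeSelector, including matchExpressions, not just matchLabels.
func routeMatchesShard(routeLabels map[string]string, routeSelector *metav1.LabelSelector) (bool, error) {
	if routeSelector == nil {
		return true, nil // no selector means the shard admits every route
	}
	// LabelSelectorAsSelector handles both matchLabels and matchExpressions.
	sel, err := metav1.LabelSelectorAsSelector(routeSelector)
	if err != nil {
		return false, err
	}
	return sel.Matches(labels.Set(routeLabels)), nil
}

func main() {
	selector := &metav1.LabelSelector{
		MatchExpressions: []metav1.LabelSelectorRequirement{
			{Key: "type", Operator: metav1.LabelSelectorOpIn, Values: []string{"shard"}},
		},
	}
	ok, _ := routeMatchesShard(map[string]string{"type": "shard"}, selector)
	fmt.Println(ok) // true
}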
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Create IC with matchExpressions:
oc apply -f - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded
  namespace: openshift-ingress-operator
spec:
  domain: reproducer.$domain
  routeSelector:
    matchExpressions:
    - key: type
      operator: In
      values:
      - shard
  replicas: 1
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/worker: ""
EOF
2. Create the route:
oc apply -f - <<EOF
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: route-shard
  labels:
    type: shard
spec:
  to:
    kind: Service
    name: router-shard
EOF
3. Check route_metrics_controller_routes_per_shard{name="sharded"} in prometheus, it's 0
Actual results:
route_metrics_controller_routes_per_shard{name="sharded"} has 0 routes
Expected results:
route_metrics_controller_routes_per_shard{name="sharded"} should have 1 route
Additional info:
Description of problem:
Customer has noticed that object count quotas ("count/*") do not work for certain objects in ClusterResourceQuotas. For example, the following ResourceQuota works as expected: ~~~ apiVersion: v1 kind: ResourceQuota metadata: [..] spec: hard: count/routes.route.openshift.io: "900" count/servicemonitors.monitoring.coreos.com: "100" pods: "100" status: hard: count/routes.route.openshift.io: "900" count/servicemonitors.monitoring.coreos.com: "100" pods: "100" used: count/routes.route.openshift.io: "0" count/servicemonitors.monitoring.coreos.com: "1" pods: "4" ~~~ However when using "count/servicemonitors.monitoring.coreos.com" in ClusterResourceQuotas, this does not work (note the missing "used"): ~~~ apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: [..] spec: quota: hard: count/routes.route.openshift.io: "900" count/servicemonitors.monitoring.coreos.com: "100" count/simon.krenger.ch: "100" pods: "100" selector: annotations: openshift.io/requester: kube:admin status: namespaces: [..] total: hard: count/routes.route.openshift.io: "900" count/servicemonitors.monitoring.coreos.com: "100" count/simon.krenger.ch: "100" pods: "100" used: count/routes.route.openshift.io: "0" pods: "4" ~~~ This behaviour does not only apply to "servicemonitors.monitoring.coreos.com" objects, but also to other objects, such as: - count/kafkas.kafka.strimzi.io: '0' - count/prometheusrules.monitoring.coreos.com: '100' - count/servicemonitors.monitoring.coreos.com: '100' The debug output for kube-controller-manager shows the following entries, which may or may not be related: ~~~ $ oc logs kube-controller-manager-ip-10-0-132-228.eu-west-1.compute.internal | grep "servicemonitor" I0511 15:07:17.297620 1 patch_informers_openshift.go:90] Couldn't find informer for monitoring.coreos.com/v1, Resource=servicemonitors I0511 15:07:17.297630 1 resource_quota_monitor.go:181] QuotaMonitor using a shared informer for resource "monitoring.coreos.com/v1, Resource=servicemonitors" I0511 15:07:17.297642 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for servicemonitors.monitoring.coreos.com [..] I0511 15:07:17.486279 1 patch_informers_openshift.go:90] Couldn't find informer for monitoring.coreos.com/v1, Resource=servicemonitors I0511 15:07:17.486297 1 graph_builder.go:176] using a shared informer for resource "monitoring.coreos.com/v1, Resource=servicemonitors", kind "monitoring.coreos.com/v1, Kind=ServiceMonitor" ~~~
Version-Release number of selected component (if applicable):
OpenShift Container Platform 4.12.15
How reproducible:
Always
Steps to Reproduce:
1. On an OCP 4.12 cluster, create the following ClusterResourceQuota:
~~~
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: case-03509174
spec:
  quota:
    hard:
      count/servicemonitors.monitoring.coreos.com: "100"
      pods: "100"
  selector:
    annotations:
      openshift.io/requester: "kube:admin"
~~~
2. As "kubeadmin", create a new project and deploy one new ServiceMonitor, for example:
~~~
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: simon-servicemon-2
  namespace: simon-1
spec:
  endpoints:
  - path: /metrics
    port: http
    scheme: http
  jobLabel: component
  selector:
    matchLabels:
      deployment: echoenv-1
~~~
Actual results:
The "used" field for ServiceMonitors is not populated in the ClusterResourceQuota for certain objects. It is unclear if these quotas are enforced or not
Expected results:
ClusterResourceQuota for ServiceMonitors is updated and enforced
Additional info:
* Must-gather for a cluster showing this behaviour (added debug for kube-controller-manager) is available here: https://drive.google.com/file/d/1ioEEHZQVHG46vIzDdNm6pwiTjkL9QQRE/view?usp=share_link * Slack discussion: https://redhat-internal.slack.com/archives/CKJR6200N/p1683876047243989
This is a clone of issue OCPBUGS-16160. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-16135. The following is the description of the original issue:
—
Description of problem:
The control-plane-operator pod gets stuck deleting an awsendpointservice if its hostedzone is already gone:
Logs:
{"level":"error","ts":"2023-07-13T03:06:58Z","msg":"Reconciler error","controller":"awsendpointservice","controllerGroup":"hypershift.openshift.io","controllerKind":"AWSEndpointService","aWSEndpointService":{"name":"private-router","namespace":"ocm-staging-24u87gg3qromrf8mg2r2531m41m0c1ji-diegohcp-west2"},"namespace":"ocm-staging-24u87gg3qromrf8mg2r2531m41m0c1ji-diegohcp-west2","name":"private-router","reconcileID":"59eea7b7-1649-4101-8686-78113f27567d","error":"failed to delete resource: NoSuchHostedZone: No hosted zone found with ID: Z05483711XJV23K8E97HK\n\tstatus code: 404, request id: f8686dd6-a906-4a5e-ba4a-3dd52ad50ec3","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/hypershift/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/hypershift/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:234"}
Version-Release number of selected component (if applicable):
4.12.24
How reproducible:
Have not tried to reproduce yet, but should be fairly reproducible
Steps to Reproduce:
1. Install a PublicAndPrivate or Private HCP
2. Delete the Route53 Hosted Zone defined in its awsendpointservice's .status.dnsZoneID field
3. Observe the control-plane-operator looping on the above logs and the uninstall hanging
Actual results:
Uninstall hangs due to CPO being unable to delete the awsendpointservice
Expected results:
The awsendpointservice cleans up; if the hosted zone is already gone, the CPO shouldn't care that it can't list hosted zones.
Additional info:
This is a clone of issue OCPBUGS-2384. The following is the description of the original issue:
—
Version:
$ openshift-install version
openshift-install 4.10.0-0.nightly-2021-12-23-153012
built from commit 94a3ed9cbe4db66dc50dab8b85d2abf40fb56426
release image registry.ci.openshift.org/ocp/release@sha256:39cacdae6214efce10005054fb492f02d26b59fe9d23686dc17ec8a42f428534
release architecture amd64
Platform: alibabacloud
Please specify:
What happened?
Unexpected error 'Internal publish strategy is not supported on "alibabacloud" platform'. The Internal publish strategy should be supported for "alibabacloud"; please clarify otherwise, thanks!
$ openshift-install create install-config --dir work
? SSH Public Key /home/jiwei/.ssh/openshift-qe.pub
? Platform alibabacloud
? Region us-east-1
? Base Domain alicloud-qe.devcluster.openshift.com
? Cluster Name jiwei-uu
? Pull Secret [? for help] *********
INFO Install-Config created in: work
$
$ vim work/install-config.yaml
$ yq e '.publish' work/install-config.yaml
Internal
$ openshift-install create cluster --dir work --log-level info
FATAL failed to fetch Metadata: failed to load asset "Install Config": invalid "install-config.yaml" file: publish: Invalid value: "Internal": Internal publish strategy is not supported on "alibabacloud" platform
$
What did you expect to happen?
"publish: Internal" should be supported for platform "alibabacloud".
How to reproduce it (as minimally and precisely as possible)?
Always
Description of problem:
Not all rules removed from iptables after disabling multinetworkpolicy
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Configure SR-IOV (nodepolicy + sriovnetwork)
2. Configure 2 pods
3. Enable MultiNetworkPolicy
4. Apply ~20 rules for pod1:
   spec:
     podSelector:
       matchLabels:
         pod: pod1
     policyTypes:
     - Ingress
     ingress: []
5. Disable MultiNetworkPolicy
6. Send ping pod2 => pod1
Actual results:
Traffic is still blocked
Expected results:
Traffic should be passed
Additional info:
Before disabling multiNetworkPolicy: -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default net-attach-def:ns1/sriovnetwork2" -j MULTI-0-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default24 net-attach-def:ns1/sriovnetwork2" -j MULTI-1-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default17 net-attach-def:ns1/sriovnetwork2" -j MULTI-2-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default15 net-attach-def:ns1/sriovnetwork2" -j MULTI-3-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default14 net-attach-def:ns1/sriovnetwork2" -j MULTI-4-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default7 net-attach-def:ns1/sriovnetwork2" -j MULTI-5-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default5 net-attach-def:ns1/sriovnetwork2" -j MULTI-6-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default20 net-attach-def:ns1/sriovnetwork2" -j MULTI-7-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default19 net-attach-def:ns1/sriovnetwork2" -j MULTI-8-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default11 net-attach-def:ns1/sriovnetwork2" -j MULTI-9-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default10 net-attach-def:ns1/sriovnetwork2" -j MULTI-10-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default9 net-attach-def:ns1/sriovnetwork2" -j MULTI-11-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default6 net-attach-def:ns1/sriovnetwork2" -j MULTI-12-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default3 net-attach-def:ns1/sriovnetwork2" -j MULTI-13-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default16 net-attach-def:ns1/sriovnetwork2" -j MULTI-14-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default13 net-attach-def:ns1/sriovnetwork2" -j MULTI-15-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default2 net-attach-def:ns1/sriovnetwork2" -j MULTI-16-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default22 net-attach-def:ns1/sriovnetwork2" -j MULTI-17-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default21 net-attach-def:ns1/sriovnetwork2" -j MULTI-18-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default18 net-attach-def:ns1/sriovnetwork2" -j MULTI-19-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default12 net-attach-def:ns1/sriovnetwork2" -j MULTI-20-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default8 net-attach-def:ns1/sriovnetwork2" -j MULTI-21-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default4 net-attach-def:ns1/sriovnetwork2" -j MULTI-22-INGRESS -A MULTI-0-INGRESS -j DROP -A MULTI-1-INGRESS -j DROP -A MULTI-2-INGRESS -j DROP -A MULTI-3-INGRESS -j DROP -A MULTI-4-INGRESS -j DROP -A MULTI-5-INGRESS -j DROP -A MULTI-6-INGRESS -j DROP -A MULTI-7-INGRESS -j DROP -A MULTI-8-INGRESS -j DROP -A MULTI-9-INGRESS -j DROP -A MULTI-10-INGRESS -j DROP -A MULTI-11-INGRESS -j DROP -A MULTI-12-INGRESS -j DROP -A MULTI-13-INGRESS -j DROP -A MULTI-14-INGRESS -j DROP -A MULTI-15-INGRESS -j DROP -A MULTI-16-INGRESS -j DROP -A MULTI-17-INGRESS -j DROP -A MULTI-18-INGRESS -j DROP -A MULTI-19-INGRESS -j DROP -A MULTI-20-INGRESS -j DROP -A MULTI-21-INGRESS -j DROP -A MULTI-22-INGRESS -j DROP 
============================================================= After disabling multiNetworkPolicy: -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default5 net-attach-def:ns1/sriovnetwork2" -j MULTI-0-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default24 net-attach-def:ns1/sriovnetwork2" -j MULTI-1-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default17 net-attach-def:ns1/sriovnetwork2" -j MULTI-2-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default15 net-attach-def:ns1/sriovnetwork2" -j MULTI-3-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default7 net-attach-def:ns1/sriovnetwork2" -j MULTI-4-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default3 net-attach-def:ns1/sriovnetwork2" -j MULTI-5-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default20 net-attach-def:ns1/sriovnetwork2" -j MULTI-6-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default19 net-attach-def:ns1/sriovnetwork2" -j MULTI-7-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default9 net-attach-def:ns1/sriovnetwork2" -j MULTI-8-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default6 net-attach-def:ns1/sriovnetwork2" -j MULTI-9-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default16 net-attach-def:ns1/sriovnetwork2" -j MULTI-10-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default2 net-attach-def:ns1/sriovnetwork2" -j MULTI-11-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default22 net-attach-def:ns1/sriovnetwork2" -j MULTI-12-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default21 net-attach-def:ns1/sriovnetwork2" -j MULTI-13-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default18 net-attach-def:ns1/sriovnetwork2" -j MULTI-14-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default8 net-attach-def:ns1/sriovnetwork2" -j MULTI-15-INGRESS -A MULTI-INGRESS -i int1 -m comment --comment "policy:deny-by-default4 net-attach-def:ns1/sriovnetwork2" -j MULTI-16-INGRESS -A MULTI-0-INGRESS -j DROP -A MULTI-1-INGRESS -j DROP -A MULTI-2-INGRESS -j DROP -A MULTI-3-INGRESS -j DROP -A MULTI-4-INGRESS -j DROP -A MULTI-5-INGRESS -j DROP -A MULTI-6-INGRESS -j DROP -A MULTI-7-INGRESS -j DROP -A MULTI-8-INGRESS -j DROP -A MULTI-9-INGRESS -j DROP -A MULTI-10-INGRESS -j DROP -A MULTI-11-INGRESS -j DROP -A MULTI-12-INGRESS -j DROP -A MULTI-13-INGRESS -j DROP -A MULTI-14-INGRESS -j DROP -A MULTI-15-INGRESS -j DROP -A MULTI-16-INGRESS -j DROP
Currently the controller will set the status to Done each time it sees a host that is ready in k8s, without checking whether it was already set.
time="2022-09-13T19:03:45Z" level=info msg="Found new ready node ocp-2.cluster1.kpsalerno.us.ibm.com with inventory id 2da64d56-5057-78c6-ea6e-bf74a783bd79, kubernetes id 2da64d56-5057-78c6-ea6e-bf74a783bd79, updating its status to Done" func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).waitAndUpdateNodesStatus" file="/remote-source/app/src/assisted_installer_controller/assisted_installer_controller.go:255" request_id=6258e5a2-4e78-4148-a913-45d704a0fa1d
time="2022-09-13T19:04:05Z" level=info msg="Found new ready node ocp-2.cluster1.kpsalerno.us.ibm.com with inventory id 2da64d56-5057-78c6-ea6e-bf74a783bd79, kubernetes id 2da64d56-5057-78c6-ea6e-bf74a783bd79, updating its status to Done" func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).waitAndUpdateNodesStatus" file="/remote-source/app/src/assisted_installer_controller/assisted_installer_controller.go:255" request_id=49e4e63f-cf4f-4b9f-b1f3-923c473c09dd
Description of problem:
The cluster-dns-operator does not reconcile the openshift-dns namespace, which has been exposed as an issue in 4.12 due to the requirement for the namespace to have pod-security labels. If a cluster has been incrementally updated from a version less than or equal to 4.9, the openshift-dns namespace will most likely not contain the required pod-security labels since the namespace was statically created when the cluster was installed with old namespace configuration.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always if cluster originally installed with v4.9 or less
Steps to Reproduce:
1. Install v4.9
2. Upgrade to v4.12 (incrementally if required for upgrade path)
3. openshift-dns namespace will be missing pod-security labels
Actual results:
"oc get ns openshift-dns -o yaml" will show missing pod-security labels: apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "" openshift.io/sa.scc.mcs: s0:c15,c0 openshift.io/sa.scc.supplemental-groups: 1000210000/10000 openshift.io/sa.scc.uid-range: 1000210000/10000 creationTimestamp: "2020-05-21T19:36:15Z" labels: kubernetes.io/metadata.name: openshift-dns olm.operatorgroup.uid/3d42c0c1-01cd-4c55-bf88-864f041c7e7a: "" openshift.io/cluster-monitoring: "true" openshift.io/run-level: "0" name: openshift-dns resourceVersion: "3127555382" uid: 0fb4571e-952f-4bea-bc45-461beec54369 spec: finalizers: - kubernetes
Expected results:
pod-security labels should exist: labels: kubernetes.io/metadata.name: openshift-dns olm.operatorgroup.uid/3d42c0c1-01cd-4c55-bf88-864f041c7e7a: "" openshift.io/cluster-monitoring: "true" openshift.io/run-level: "0" pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/warn: privileged
Additional info:
Issue found in CI during upgrade
https://coreos.slack.com/archives/C03G7REB4JV/p1663676443155839
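As an illustrative mitigation sketch only (the proper fix is for the cluster-dns-operator to reconcile the namespace), the missing labels could be applied manually:
$ oc label namespace openshift-dns \
    pod-security.kubernetes.io/enforce=privileged \
    pod-security.kubernetes.io/audit=privileged \
    pod-security.kubernetes.io/warn=privileged --overwrite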
Tracker issue for bootimage bump in 4.12. This issue should block issues which need a bootimage bump to fix.
The previous bump was OCPBUGS-1941.
Clone of https://issues.redhat.com/browse/OCPBUGSM-44162.
Cannot use the original as the bot won't accept a security bug:
When the change merges, the Bugzilla associated with the CVE must be set to MODIFIED. Since the DPTP bugzilla bot is not permitted to scan bugs with the SECURITY group in Bugzilla, The REP will not be able to use the bot's public functionality of moving their bug to MODIFIED.
Description of problem:
documentationBaseURL still points to 4.10
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-31-101631
How reproducible:
Always
Steps to Reproduce:
1. Check documentationBaseURL on 4.12 cluster:
   # oc get configmap console-config -n openshift-console -o yaml | grep documentationBaseURL
   documentationBaseURL: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.11/
2.
3.
Actual results:
1.documentationBaseURL is still pointing to 4.11
Expected results:
1.documentationBaseURL should point to 4.12
Additional info:
This is a clone of issue OCPBUGS-8342. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-8258. The following is the description of the original issue:
—
Invoking 'create cluster-manifests' fails when imageContentSources is missing in install-config yaml:
$ openshift-install agent create cluster-manifests
INFO Consuming Install Config from target directory
FATAL failed to write asset (Mirror Registries Config) to disk: failed to write file: open .: is a directory
install-config.yaml:
apiVersion: v1alpha1
metadata:
  name: appliance
rendezvousIP: 192.168.122.116
hosts:
- hostname: sno
  installerArgs: '["--save-partlabel", "agent*", "--save-partlabel", "rhcos-*"]'
  interfaces:
  - name: enp1s0
    macAddress: 52:54:00:e7:05:72
  networkConfig:
    interfaces:
    - name: enp1s0
      type: ethernet
      state: up
      mac-address: 52:54:00:e7:05:72
      ipv4:
        enabled: true
        dhcp: true
Description of problem:
The original issue is SDB-3484. When a customer wants to update its pull-secret, we find that sometimes the insights operator does not execute the cluster transfer process, with the message 'no available accepted cluster transfer'. The root cause is that the insights operator runs the cluster transfer process every 24 hours and telemetry runs the registration process every 24 hours; on the AMS side both call /cluster_registration and do the same process, which means telemetry can complete the cluster transfer before the insights operator does.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Create two OCP clusters.
2. Create a PSR that will help create two 'pending' CTs. The pending CTs will be accepted after ~6 hours.
3. Wait for ~24 hours, check the PSR, and check the logs in IO, and also check the pull-secrets in the clusters.
Actual results:
The PSR is completed, but there are no successful transfer logs in IO, and the pull-secrets in the clusters are not updated.
Expected results:
The transfer process is executed successfully, and the pull-secrets are updated on the clusters.
Additional info:
This is a clone of issue OCPBUGS-13170. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-12971. The following is the description of the original issue:
—
Description of problem:
hybrid-overlay on Windows nodes ignores existing non-persistent routes when creating the new vNIC. This becomes an issue especially on AWS, where non-persistent routes are used for the local metadata server, which is in turn used by the kubelet.
How reproducible:
Always on Windows Server 2022 instances on AWS
Steps to Reproduce:
1. Bring up a hybrid-ovn OCP cluster on AWS
2. Add a Windows Server 2022 Machine or BYOH instance
Actual results:
The Windows Server 2022 node goes to NotReady
Expected results:
The Windows Server 2022 node goes to Ready
Additional info:
While this problem shows up only on AWS, it can be an issue on other providers if customers are depending on non-persistent routes on Windows nodes in general.
Description of problem:
release-4.12 of openshift/cloud-provider-openstack is missing some commits that were backported in the upstream project into the release-1.25 branch. We should import them into our downstream fork.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-2500. The following is the description of the original issue:
—
Description of problem:
When the UX switches to the Dev console, the topology is always blank in a Project that has a large number of components.
Version-Release number of selected component (if applicable):
How reproducible:
Always occurs
Steps to Reproduce:
1. Create a project with at least 12 components (Apps, Operators, knative Brokers)
2. Go to the Administrator Viewpoint
3. Switch to Developer Viewpoint/Topology
4. No components displayed
5. Click on 'fit to screen'
6. All components appear
Actual results:
Topology renders with all controls but no components visible (see screenshot 1)
Expected results:
All components should be visible
Additional info:
Description of problem:
OLM has a dependency on openshift/cluster-policy-controller. This project had dependencies with v0.0.0 versions, which, due to a bug in ART, were causing issues building the OLM image. To fix this, we have to update the dependencies in the cluster-policy-controller project to point to actual versions. This was already done:
* https://github.com/openshift/cluster-policy-controller/pull/103
* https://github.com/openshift/cluster-policy-controller/pull/101
These changes already made it to the 4.14 and 4.13 branches of the cluster-policy-controller. The backport to 4.12 is: https://github.com/openshift/cluster-policy-controller/pull/102
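For illustration only (the module path and versions below are placeholders, not the actual cluster-policy-controller entries), the difference between a v0.0.0 placeholder and a pinned dependency in a go.mod looks like:
// before: placeholder version that release tooling such as ART cannot resolve
require github.com/example/somedependency v0.0.0
// after: pinned to a real tagged release
require github.com/example/somedependency v0.5.0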
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-10647. The following is the description of the original issue:
—
Description of problem:
Cluster Network Operator managed component multus-admission-controller does not conform to Hypershift control plane expectations. When CNO is managed by Hypershift, multus-admission-controller must run with a non-root security context. If Hypershift runs the control plane on a Kubernetes (as opposed to OpenShift) management cluster, it adds a pod or container security context with a runAsUser clause to most deployments. In the Hypershift CPO, the security context of deployment containers, including CNO, is set when it detects that SCCs are not available, see https://github.com/openshift/hypershift/blob/9d04882e2e6896d5f9e04551331ecd2129355ecd/support/config/deployment.go#L96-L100. In such a case CNO should do the same: set the security context for its managed deployment multus-admission-controller to meet the Hypershift standard.
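A rough sketch (the UID value is a placeholder; in practice it is derived from the management cluster's conventions, not hard-coded) of the kind of pod security context expected on the multus-admission-controller deployment:
spec:
  template:
    spec:
      securityContext:
        runAsUser: 1001   # placeholder UID, chosen by the management cluster in practice
      containers:
      - name: multus-admission-controller
        securityContext:
          runAsUser: 1001 # placeholder UID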
How reproducible:
Always
Steps to Reproduce:
1. Create an OCP cluster using HyperShift with a Kube management cluster
2. Check the pod security context of multus-admission-controller
Actual results:
no pod security context is set
Expected results:
pod security context is set with runAsUser: xxxx
Additional info:
This is the highest priority item from https://issues.redhat.com/browse/OCPBUGS-7942 and it needs to be fixed ASAP as it is a security issue preventing IBM from releasing Hypershift-managed Openshift service.
Description of problem:
Installer fails due to Neutron policy error when creating Openstack servers for OCP master nodes. $ oc get machines -A NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api ostest-kwtf8-master-0 Running 23h openshift-machine-api ostest-kwtf8-master-1 Running 23h openshift-machine-api ostest-kwtf8-master-2 Running 23h openshift-machine-api ostest-kwtf8-worker-0-g7nrw Provisioning 23h openshift-machine-api ostest-kwtf8-worker-0-lrkvb Provisioning 23h openshift-machine-api ostest-kwtf8-worker-0-vwrsk Provisioning 23h $ oc -n openshift-machine-api logs machine-api-controllers-7454f5d65b-8fqx2 -c machine-controller [...] E1018 10:51:49.355143 1 controller.go:317] controller/machine_controller "msg"="Reconciler error" "error"="error creating Openstack instance: Failed to create port err: Request forbidden: [POST https://overcloud.redhat.local:13696/v2.0/ports], error message: {\"NeutronError\": {\"type\": \"PolicyNotAuthorized\", \"message\": \"(rule:create_port and (rule:create_port:allowed_address_pairs and (rule:create_port:allowed_address_pairs:ip_address and rule:create_port:allowed_address_pairs:ip_address))) is disallowed by policy\", \"detail\": \"\"}}" "name"="ostest-kwtf8-worker-0-lrkvb" "namespace"="openshift-machine-api"
Version-Release number of selected component (if applicable):
4.10.0-0.nightly-2022-10-14-023020
How reproducible:
Always
Steps to Reproduce:
1. Install 4.10 within provider networks (in primary or secondary interface)
Actual results:
Installation failure: 4.10.0-0.nightly-2022-10-14-023020: some cluster operators have not yet rolled out
Expected results:
Successful installation
Additional info:
Please find must-gather for installation on primary interface link here and for installation on secondary interface link here.
This is a clone of issue OCPBUGS-10427. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-9969. The following is the description of the original issue:
—
Description of problem:
An OCP cluster born on 4.1 fails to scale up nodes due to the older podman version 1.0.2 present in the 4.1 bootimage. This was observed while testing bug https://issues.redhat.com/browse/OCPBUGS-7559?focusedCommentId=21889975&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-21889975
Journal log: - Unit machine-config-daemon-update-rpmostree-via-container.service has finished starting up. -- -- The start-up result is RESULT. Mar 10 10:41:29 ip-10-0-218-217 podman[18103]: flag provided but not defined: -authfile Mar 10 10:41:29 ip-10-0-218-217 podman[18103]: See 'podman run --help'. Mar 10 10:41:29 ip-10-0-218-217 systemd[1]: machine-config-daemon-update-rpmostree-via-container.service: Main process exited, code=exited, status=125/n/a Mar 10 10:41:29 ip-10-0-218-217 systemd[1]: machine-config-daemon-update-rpmostree-via-container.service: Failed with result 'exit-code'. Mar 10 10:41:29 ip-10-0-218-217 systemd[1]: machine-config-daemon-update-rpmostree-via-container.service: Consumed 24ms CPU time
Version-Release number of selected component (if applicable):
OCP 4.12 and later
Steps to Reproduce:
1. Upgrade a 4.1-based cluster to 4.12 or a later version
2. Try to scale up a node
3. The node will fail to join
Description of problem:
Event sources are not shown in topology
Version-Release number of selected component (if applicable):
Have verified it on 4.12.0-0.nightly-2022-09-20-095559
How reproducible:
Steps to Reproduce:
1. Install the Serverless operator
2. Create CRs for knative-serving and knative-eventing respectively
3. Create/select a ns -> go to dev console -> add -> event source
4. Create any event source
Actual results:
Can't see the created resource (Event source) in topology
Expected results:
Should be able to see the created resource in topology
Additional info:
4.12 tech-preview jobs are suffering:
$ w3m -dump -cols 200 'https://search.ci.openshift.org/?search=event+happened.*no+matches+for+kind.*InsightsDataGather&maxAge=48h&type=junit' | grep 'failures match' | sort periodic-ci-openshift-release-master-ci-4.12-e2e-aws-sdn-techpreview (all) - 10 runs, 100% failed, 100% of failures match = 100% impact periodic-ci-openshift-release-master-ci-4.12-e2e-aws-sdn-techpreview-serial (all) - 10 runs, 100% failed, 90% of failures match = 90% impact periodic-ci-openshift-release-master-ci-4.12-e2e-azure-sdn-techpreview (all) - 10 runs, 100% failed, 100% of failures match = 100% impact periodic-ci-openshift-release-master-ci-4.12-e2e-azure-sdn-techpreview-serial (all) - 10 runs, 100% failed, 90% of failures match = 90% impact periodic-ci-openshift-release-master-ci-4.12-e2e-gcp-sdn-techpreview (all) - 10 runs, 100% failed, 100% of failures match = 100% impact periodic-ci-openshift-release-master-ci-4.12-e2e-gcp-sdn-techpreview-serial (all) - 10 runs, 100% failed, 100% of failures match = 100% impact
with runs like this failing:
: [sig-arch] events should not repeat pathologically expand_less 0s { 1 events happened too frequently event happened 138 times, something is wrong: ns/default namespace/default - reason/Unable to find REST mapping for %s/%s: %w InsightsDataGather.config.openshift.io%!(EXTRA string=v1, *meta.NoKindMatchError=no matches for kind "InsightsDataGather" in version "config.openshift.io/v1")}
based on events like:
$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-sdn-techpreview/1597393851226525696/artifacts/e2e-aws-sdn-techpreview/gather-extra/artifacts/events.json | jq -r '.items[] | select(.metadata.namespace == "default" and (.message | contains("InsightsDataGather")))' { "apiVersion": "v1", "count": 145, "eventTime": null, "firstTimestamp": "2022-11-29T01:32:16Z", "involvedObject": { "apiVersion": "v1", "kind": "Namespace", "name": "default", "namespace": "default" }, "kind": "Event", "lastTimestamp": "2022-11-29T02:19:36Z", "message": "InsightsDataGather.config.openshift.io%!(EXTRA string=v1, *meta.NoKindMatchError=no matches for kind \"InsightsDataGather\" in version \"config.openshift.io/v1\")", "metadata": { "creationTimestamp": "2022-11-29T01:32:16Z", "name": "default.172bea26177786ae", "namespace": "default", "resourceVersion": "237357", "uid": "187cf3a0-cf4b-4cd1-ae72-51b5d77b7e73" }, "reason": "Unable to find REST mapping for %s/%s: %w", "reportingComponent": "", "reportingInstance": "", "source": { "component": "run-resourcewatch-config-observer-controller-configobservercontroller" }, "type": "Warning" }
4.12 tech-preview jobs are impacted.
100% for some job flavors, per the search CI output above.
1. Look at test results for any of the impacted job flavors.
Actual results:
Lots of NoKindMatchError events for v1 InsightsDataGather (it's only v1alpha1).
Expected results:
Passing test-cases.
Additional info:
The problematic REST-mapping client was removed from 4.13/dev as part of origin#27596.
This is a clone of issue OCPBUGS-4913. The following is the description of the original issue:
—
Description of problem:
Currently the Terraform code waits for 45 seconds, but anecdotal data suggest we should actually wait for 3 minutes in order to avoid "failures" due to occasional slow boots of a new VM in PowerVS.
Version-Release number of selected component (if applicable):
How reproducible:
often enough
Steps to Reproduce:
1. run IPI installer against PowerVS
2. look for "empty tuple" in the error message when it fails to reach `bootstrap-complete`
3.
Actual results:
Expected results:
VMs to always have IP address assigned by DHCP after a certain wait
Additional info:
The change has already been merged into master/4.13, but 4.12 also needs this for planned PowerVS IPI GA on the z-stream.
This is a clone of issue OCPBUGS-12729. The following is the description of the original issue:
—
Description of problem:
This came out of the investigation of https://issues.redhat.com/browse/OCPBUGS-11691 . The nested node configs used to support dual stack VIPs do not correctly respect the EnableUnicast setting. This is causing issues on EUS upgrades where the unicast migration cannot happen until all nodes are on 4.12. This is blocking both the workaround and the eventual proper fix.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Deploy 4.11 with unicast explicitly disabled (via MCO patch)
2. Write /etc/keepalived/monitor-user.conf to suppress unicast migration
3. Upgrade to 4.12
Actual results:
Nodes come up in unicast mode
Expected results:
Nodes remain in multicast mode until monitor-user.conf is removed
Additional info:
This is a clone of issue OCPBUGS-4490. The following is the description of the original issue:
—
Description of problem:
When hypershift HostedCluster has endpointAccess: Private, the csi-snapshot-controller is in CrashLoopBackoff because the guest APIServer url in the admin-kubeconfig isn't reachable in Private mode.
Version-Release number of selected component (if applicable):
4.13
How reproducible:
Always
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-5151. The following is the description of the original issue:
—
Description of problem:
Cx is not able to install a new OCP BM IPI cluster. During bootstrapping, the provisioning interfaces on the master nodes do not get an IPv4 DHCP address from the bootstrap DHCP server on an OCP IPI BareMetal install. Please refer to the following bug: https://issues.redhat.com/browse/OCPBUGS-872 The problem was solved by applying rd.net.timeout.carrier=30 to the kernel parameters of compute nodes via the cluster-baremetal-operator. The fix also needs to be applied to the control plane. ref: https://github.com/openshift/cluster-baremetal-operator/pull/286/files
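For illustration only (a sketch, not the cluster-baremetal-operator change itself, which is in the PR above; the MachineConfig name is made up), delivering the same kernel argument to control-plane nodes via a MachineConfig would look roughly like:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-master-carrier-timeout   # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  kernelArguments:
  - rd.net.timeout.carrier=30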
Version-Release number of selected component (if applicable):
How reproducible:
Perform OCP 4.10.16 IPI BareMetal install.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Customer should be able to install the cluster without any issue.
Additional info:
This is a clone of issue OCPBUGS-5988. The following is the description of the original issue:
—
Description of problem:
Etcd operator is in degraded state as one of the masters can't connect. Master that fails to connect was previously bootstrap and pivoted as part of assisted-installer installation to master. Etcd log: 2023-01-17T23:09:26.523562312Z 28dcf1b0a44481b0, started, test-infra-cluster-04bf4418-master-1, https://192.168.127.11:2380, https://192.168.127.11:2379, false 2023-01-17T23:09:26.523562312Z 30600b5b86e23c8e, started, etcd-bootstrap, https://192.168.127.12:2380, https://192.168.127.12:2379, false 2023-01-17T23:09:26.523562312Z 73f00626fee34a87, started, test-infra-cluster-04bf4418-master-0, https://192.168.127.10:2380, https://192.168.127.10:2379, false 2023-01-17T23:09:26.541214220Z #### attempt 0 2023-01-17T23:09:26.547811132Z member={name="test-infra-cluster-04bf4418-master-1", peerURLs=[https://192.168.127.11:2380}, clientURLs=[https://192.168.127.11:2379] 2023-01-17T23:09:26.547811132Z member={name="etcd-bootstrap", peerURLs=[https://192.168.127.12:2380}, clientURLs=[https://192.168.127.12:2379] 2023-01-17T23:09:26.547811132Z member={name="test-infra-cluster-04bf4418-master-0", peerURLs=[https://192.168.127.10:2380}, clientURLs=[https://192.168.127.10:2379] 2023-01-17T23:09:26.547811132Z target={name="etcd-bootstrap", peerURLs=[https://192.168.127.12:2380}, clientURLs=[https://192.168.127.12:2379] 2023-01-17T23:09:26.547846508Z member "https://192.168.127.12:2380" dataDir has been destroyed and must be removed from the cluster There are couple of problems that we see: 1. For unknown reason etcd operator BootstrapTeardownController fails to start as it fails to see "openshift-etcd" namespace though by the logs it is there. 2023-01-17T21:39:43.323928903Z E0117 21:39:43.323917 1 base_controller.go:272] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd names 2. DelayStrategy code was change by https://github.com/openshift/cluster-etcd-operator/pull/964/files and currently it requires 3 healthy members in order to remove. It can create issues as etcd and cluster-bootstrap(bootkube) are not synchronized and nothing is actually blocking bootstrap on stop etcd and block remove of bootstrap etcd.(at least how i understand the flow)
Version-Release number of selected component (if applicable):
How reproducible:
It is a race as far as I understand, but it reproduces pretty reliably in our CI when installing 4.12 nightlies.
Steps to Reproduce:
1. 2. 3.
Actual results:
Etcd is degraded because the third joined master's etcd can't start
Expected results:
Etcd is healthy
Additional info:
When installing an OCP cluster with the worker nodes' VM type specified as high performance, some of the configuration settings of said VMs do not match the configuration settings a high-performance VM should have.
Specific configurations that do not match are described in subtasks.
Default configuration settings of high performance VMs:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index?extIdCarryOver=true&sc_cid=701f2000001Css5AAC#Configuring_High_Performance_Virtual_Machines_Templates_and_Pools
When installing an OCP cluster with the worker nodes' VM type specified as high performance, manual and automatic migration is enabled on the said VMs.
However, high performance worker VMs are created with default values of the engine, so only manual migration should be enabled.
Default configuration settings of high performance VMs:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index?extIdCarryOver=true&sc_cid=701f2000001Css5AAC#Configuring_High_Performance_Virtual_Machines_Templates_and_Pools
How reproducible: 100%
How to reproduce:
1. Create install-config.yaml with a vmType field and set it to high performance, i.e.:
apiVersion: v1
baseDomain: basedomain.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    ovirt:
      affinityGroupsNames: []
      vmType: high_performance
  replicas: 2
...
2. Run installation
./openshift-install create cluster --dir=resources --log-level=debug
3. Check worker VM's configuration in the RHV webconsole.
Expected:
Only manual migration (under Host) should be enabled.
Actual:
Manual and automatic migration is enabled.
This is a clone of issue OCPBUGS-3172. The following is the description of the original issue:
—
Customer is trying to install the Logging operator, which appears to attempt to install a dynamic plugin. The operator installation fails in the console because permissions aren't available to "patch resource consoles".
We shouldn't block operator installation if permission issues prevent dynamic plugin installation.
This is an OSD cluster, presumably for a customer with "cluster-admin", although it may be a paired down permission set called "dedicated-admin".
See https://docs.google.com/document/d/1hYS-bm6aH7S6z7We76dn9XOFcpi9CGYcGoJys514YSY/edit for permissions investigation work on OSD
This is a clone of issue OCPBUGS-3235. The following is the description of the original issue:
—
Frequently we see the loading state of the topology view, even when there aren't many resources in the project.
Including an example
topology will sometimes hang with the loading indicator showing indefinitely
topology should load consistently without fail
intermittent
4.9
Description of problem:
We have 6 runs of techpreview jobs where the job fails due to the MCO:
{Operator degraded (RequiredPoolsFailed): Unable to apply 4.12.0-0.ci.test-2022-09-21-183414-ci-op-qd6plyhc-latest: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)] Operator degraded (RequiredPoolsFailed): Unable to apply 4.12.0-0.ci.test-2022-09-21-183414-ci-op-qd6plyhc-latest: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)]}
Looking at the MCD logs, the master seems to go degraded in bootstrap due to the rendered config not being found:
I0921 18:49:47.091804 8171 daemon.go:444] Node ci-op-qd6plyhc-6dd9a-bfmjd-master-1 is part of the control plane I0921 18:49:49.213556 8171 node.go:24] No machineconfiguration.openshift.io/currentConfig annotation on node ci-op-qd6plyhc-6dd9a-bfmjd-master-1: map[csi.volume.kubernetes.io/nodeid: {"pd.csi.storage.gke.io":"projects/openshift-gce-devel-ci-2/zones/us-central1-b/instances/ci-op-qd6plyhc-6dd9a-bfmjd-master-1"} volumes.kubernetes.io/controller-managed-attach-detach:true], in cluster bootstrap, loading initial node annotation from /etc/machine-config-daemon/node-annotations.json I0921 18:49:49.215186 8171 node.go:45] Setting initial node config: rendered-master-2dde32327e4e5d15092fccbac1dcec49 I0921 18:49:49.253706 8171 daemon.go:1184] In bootstrap mode E0921 18:49:49.254046 8171 writer.go:200] Marking Degraded due to: machineconfig.machineconfiguration.openshift.io "rendered-master-2dde32327e4e5d15092fccbac1dcec49" not found I0921 18:49:51.232610 8171 daemon.go:499] Transitioned from state: Done -> Degraded I0921 18:49:51.249618 8171 daemon.go:1184] In bootstrap mode E0921 18:49:51.249906 8171 writer.go:200] Marking Degraded due to: machineconfig.machineconfiguration.openshift.io "rendered-master-2dde32327e4e5d15092fccbac1dcec49" not found
However, looking at the controller, a rendered config was generated correctly, but it's not the missing config from above:
I0921 18:54:06.736984 1 render_controller.go:506] Generated machineconfig rendered-master-acc8491aafab8ef511a40b76372325ee from 6 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-ssh machineconfiguration.openshift.io/v1 }] I0921 18:54:06.737226 1 event.go:285] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"b2084ca6-4b33-46bf-b83b-9e98010ff085", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"5648", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-acc8491aafab8ef511a40b76372325ee successfully generated (release version: 4.12.0-0.ci.test-2022-09-21-183220-ci-op-9ksj7d7g-latest, controller version: a627415c240b4c7dd2f9e90f659690d9c0f623f3) I0921 18:54:06.742053 1 render_controller.go:532] Pool master: now targeting: rendered-master-acc8491aafab8ef511a40b76372325ee
So far I see this in the following techpreview jobs:
GCP techpreview
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-gcp-sdn-techpreview/1572638837954318336
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-gcp-sdn-techpreview-serial/1572638838793179136
Vsphere techpreview
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-nightly-4.12-e2e-vsphere-ovn-techpreview/1572638854794448896
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-nightly-4.12-e2e-vsphere-ovn-techpreview-serial/1572638855574589440
AWS Techpreview:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-aws-sdn-techpreview/1572638828672323584
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-aws-sdn-techpreview-serial/1572638829217583104
The above jobs affect the k8s 1.25 bump and are blocking the job.
There are also other occurances not in our PR:
https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/31965/rehearse-31965-pull-ci-openshift-openshift-controller-manager-master-openshift-e2e-aws-builds-techpreview/1572581504297472000
Also see a quick search:
https://search.ci.openshift.org/?search=timed+out+waiting+for+the+condition%2C+error+pool+master+is+not+ready&maxAge=48h&context=1&type=bug%2Bissue%2Bjunit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Did something change that would affect tech preview jobs?
Also note, this seems like a new failure. I have some of these jobs passing in the last ~ 8 days.
Description of problem:
The SQL-based index image created by an old opm fails to run in 4.12 even after adding the `privileged` permission to the namespace.
MacBook-Pro:~ jianzhang$ oc get pods NAME READY STATUS RESTARTS AGE jian-operators-4g5ln 0/1 CrashLoopBackOff 1 (2s ago) 11s MacBook-Pro:~ jianzhang$ oc logs jian-operators-4g5ln Error: open /etc/nsswitch.conf: permission denied
PS: the SQL-based index created by the new opm version doesn't have this issue.
opm version Version: version.Version{OpmVersion:"e41024eb3", GitCommit:"e41024eb37c721bc43e8b3df226dd30c0589aee7", BuildDate:"2022-08-16T01:50:17Z", GoOs:"darwin", GoArch:"amd64"}
Version-Release number of selected component (if applicable):
OCP 4.12
MacBook-Pro:~ jianzhang$ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.12.0-0.nightly-2022-08-15-150248 True False 3h25m Cluster version is 4.12.0-0.nightly-2022-08-15-150248
How reproducible:
always
Steps to Reproduce:
1. Deploy OCP 4.12
2. Deploy a CatalogSource in the `openshift-marketplace` namespace.
MacBook-Pro:~ jianzhang$ oc get ns openshift-marketplace -o yaml apiVersion: v1 kind: Namespace metadata: annotations: capability.openshift.io/name: marketplace include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" openshift.io/node-selector: "" openshift.io/sa.scc.mcs: s0:c16,c10 openshift.io/sa.scc.supplemental-groups: 1000260000/10000 openshift.io/sa.scc.uid-range: 1000260000/10000 workload.openshift.io/allowed: management creationTimestamp: "2022-08-15T23:15:27Z" labels: kubernetes.io/metadata.name: openshift-marketplace olm.operatorgroup.uid/1b776321-2714-4c1f-95ba-2ddff49c4efe: "" openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/audit: baseline pod-security.kubernetes.io/enforce: baseline pod-security.kubernetes.io/warn: baseline name: openshift-marketplace ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: cd81594b-4f6c-46d6-9369-75deef542ec8 resourceVersion: "8617" uid: 1c35352e-3636-4f2b-a3b1-c84ebc6681e0 spec: finalizers: - kubernetes status: phase: Active
3. Check the CatalogSource pod status; it has crashed.
MacBook-Pro:~ jianzhang$ oc get catalogsource -n openshift-marketplace jian-operators -o yaml apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: creationTimestamp: "2022-08-16T02:24:20Z" generation: 1 name: jian-operators namespace: openshift-marketplace resourceVersion: "106145" uid: 6a75ecc9-7b88-4411-bcf5-e34618f9b3cd spec: displayName: Jian Operators image: quay.io/olmqe/etcd-index:v1 priority: -100 publisher: Jian sourceType: grpc updateStrategy: registryPoll: interval: 10m0s status: connectionState: address: jian-operators.openshift-marketplace.svc:50051 lastConnect: "2022-08-16T03:12:28Z" lastObservedState: TRANSIENT_FAILURE latestImageRegistryPoll: "2022-08-16T02:34:21Z" registryService: createdAt: "2022-08-16T02:24:20Z" port: "50051" protocol: grpc serviceName: jian-operators serviceNamespace: openshift-marketplace MacBook-Pro:~ jianzhang$ oc get pods -n openshift-marketplace NAME READY STATUS RESTARTS AGE 28bb83ea022e9728d25570ab0adbe09a31d6a0a606917488e0ddb00f925mnfw 0/1 Completed 0 3h23m 7049ea48beb27a712fa506b76ad672be201ce5d3a6a93d627a0091e0fesvdlj 0/1 Completed 0 3h23m certified-operators-ftt2n 1/1 Running 0 3h49m community-operators-27dx9 1/1 Running 0 3h49m jian-operators-5zq7d 0/1 CrashLoopBackOff 12 (71s ago) 38m jian-operators-gpg4v 0/1 CrashLoopBackOff 14 (57s ago) 48m marketplace-operator-9c8496b58-2jfmv 1/1 Running 0 3h56m qe-app-registry-rqrrv 1/1 Running 0 141m redhat-marketplace-s6zrj 1/1 Running 0 3h49m redhat-operators-54cqr 1/1 Running 0 3h49m MacBook-Pro:~ jianzhang$ oc -n openshift-marketplace logs jian-operators-gpg4v Error: open /etc/nsswitch.conf: permission denied Usage: opm registry serve [flags] Flags: -d, --database string relative path to sqlite db (default "bundles.db") --debug enable debug logging -h, --help help for serve -p, --port string port number to serve on (default "50051") --skip-migrate do not attempt to migrate to the latest db revision when starting -t, --termination-log string path to a container termination log file (default "/dev/termination-log") --timeout-seconds string Timeout in seconds. This flag will be removed later. (default "infinite") Global Flags: --skip-tls skip TLS certificate verification for container image registries while pulling bundles or index
4. Create a namespace with the `privileged` permission.
MacBook-Pro:~ jianzhang$ oc get ns debug -o yaml apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/sa.scc.mcs: s0:c30,c10 openshift.io/sa.scc.supplemental-groups: 1000890000/10000 openshift.io/sa.scc.uid-range: 1000890000/10000 creationTimestamp: "2022-08-16T02:46:41Z" labels: kubernetes.io/metadata.name: debug pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: "false" name: debug resourceVersion: "95718" uid: bdf93839-6c42-4365-a65c-d9c0b9fe0504 spec: finalizers: - kubernetes status: phase: Active
5. Deploy a CatalogSource as in step 2 above. It still crashes.
MacBook-Pro:~ jianzhang$ oc get pods -n debug NAME READY STATUS RESTARTS AGE jian-operators-4g5ln 0/1 CrashLoopBackOff 10 (114s ago) 28m jian-operators-wn766 0/1 CrashLoopBackOff 8 (2m25s ago) 18m MacBook-Pro:~ jianzhang$ oc -n debug logs jian-operators-wn766 Error: open /etc/nsswitch.conf: permission denied Usage: opm registry serve [flags] Flags: -d, --database string relative path to sqlite db (default "bundles.db") --debug enable debug logging -h, --help help for serve -p, --port string port number to serve on (default "50051") --skip-migrate do not attempt to migrate to the latest db revision when starting -t, --termination-log string path to a container termination log file (default "/dev/termination-log") --timeout-seconds string Timeout in seconds. This flag will be removed later. (default "infinite") Global Flags: --skip-tls skip TLS certificate verification for container image registries while pulling bundles or index
Actual results:
The sql-based index image created by the old opm version cannot be run.
MacBook-Pro:~ jianzhang$ oc -n debug logs jian-operators-wn766 Error: open /etc/nsswitch.conf: permission denied
Expected results:
The old SQL-based index image runs well. Or we have a workaround for it.
Additional info:
I changed to another old SQL-based image and tried it, and got another permission issue.
MacBook-Pro:~ jianzhang$ oc get catalogsource NAME DISPLAY TYPE PUBLISHER AGE jian-operators Jian Operators grpc Jian 37m xia-operators Xia Operators grpc Xia 101s MacBook-Pro:~ jianzhang$ oc get catalogsource xia-operators -o yaml apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: creationTimestamp: "2022-08-16T03:22:38Z" generation: 1 name: xia-operators namespace: debug resourceVersion: "110629" uid: 8be42e68-43be-4fd4-9b67-c74edc5e6353 spec: displayName: Xia Operators image: quay.io/olmqe/ditto-index:test-xzha-1 priority: -100 publisher: Xia sourceType: grpc updateStrategy: registryPoll: interval: 10m0s status: connectionState: address: xia-operators.debug.svc:50051 lastConnect: "2022-08-16T03:24:18Z" lastObservedState: CONNECTING registryService: createdAt: "2022-08-16T03:22:38Z" port: "50051" protocol: grpc serviceName: xia-operators serviceNamespace: debug MacBook-Pro:~ jianzhang$ oc project Using project "debug" on server "https://api.qe-daily-412-0816.ibmcloud.qe.devcluster.openshift.com:6443". MacBook-Pro:~ jianzhang$ oc get pods NAME READY STATUS RESTARTS AGE jian-operators-4g5ln 0/1 CrashLoopBackOff 11 (3m41s ago) 35m jian-operators-wn766 0/1 CrashLoopBackOff 9 (4m13s ago) 25m xia-operators-6wgjt 0/1 CrashLoopBackOff 1 (8s ago) 13s MacBook-Pro:~ jianzhang$ oc logs xia-operators-6wgjt time="2022-08-16T03:22:43Z" level=warning msg="\x1b[1;33mDEPRECATION NOTICE:\nSqlite-based catalogs and their related subcommands are deprecated. Support for\nthem will be removed in a future release. Please migrate your catalog workflows\nto the new file-based catalog format.\x1b[0m" Error: open ./db-609956243: permission denied Usage: opm registry serve [flags] Flags: -d, --database string relative path to sqlite db (default "bundles.db") --debug enable debug logging
Even though that namespace is `privileged`.
MacBook-Pro:~ jianzhang$ oc get ns debug -o yaml apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/sa.scc.mcs: s0:c30,c10 openshift.io/sa.scc.supplemental-groups: 1000890000/10000 openshift.io/sa.scc.uid-range: 1000890000/10000 creationTimestamp: "2022-08-16T02:46:41Z" labels: kubernetes.io/metadata.name: debug pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: "false" name: debug resourceVersion: "95718" uid: bdf93839-6c42-4365-a65c-d9c0b9fe0504 spec: finalizers: - kubernetes status: phase: Active
But both of them work well in the 4.11 cluster, as follows:
MacBook-Pro:~ jianzhang$ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.0-0.nightly-2022-08-15-152346 True False 91m Cluster version is 4.11.0-0.nightly-2022-08-15-152346 MacBook-Pro:~ jianzhang$ oc get catalogsource NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 106m community-operators Community Operators grpc Red Hat 106m jian-operators Jian Operators grpc Jian 48m redhat-marketplace Red Hat Marketplace grpc Red Hat 106m redhat-operators Red Hat Operators grpc Red Hat 106m xia-operators Xia Operators grpc Xia 6s MacBook-Pro:~ jianzhang$ oc get pods NAME READY STATUS RESTARTS AGE certified-operators-fsjc8 1/1 Running 0 107m community-operators-9qvzt 1/1 Running 0 107m jian-operators-n5s8c