
JENKINS-60054: Add umask option to container step #1500

Closed

Conversation


@dee-kryvenko dee-kryvenko commented Jan 12, 2024

https://issues.jenkins.io/browse/JENKINS-60054

Testing done

Tested using the following pipeline:

podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: test
    image: busybox
    command:
    - sleep
    args:
    - infinity
''') {
    node(POD_LABEL) {
        container(name: 'test', umask: '002') {
            sh 'umask'
        }
    }
}

Which gives:

[Pipeline] sh
+ umask
0002


This fixes https://issues.jenkins.io/browse/JENKINS-60054

@dee-kryvenko dee-kryvenko requested a review from a team as a code owner January 12, 2024 03:04
@dee-kryvenko
Author

@jglick any chance you can take a look?

@dee-kryvenko
Author

@Vlatombe it seems like you have merged a couple of PRs lately; any chance you can look into this one?

@Vlatombe
Member

Vlatombe commented Jan 12, 2024

Please provide the use case you want to solve here.
As it stands, I don't understand what the added value is; you'd get the same result by running

sh '''
umask 002
<any-command>
'''

Reading through the Jira issue, it looks like an interesting idea, though I don't think you need a new user attribute for this.

@dee-kryvenko
Author

dee-kryvenko commented Jan 12, 2024

The user doesn't always have control to set the umask in the container if the step is not sh but a step implemented by another plugin. Not to mention the toil of having to remember that every script needs the umask...

I am pretty sure that in this scenario a 002 umask should be the default. In fact, in any Kubernetes scenario... multi-user containers are not a thing. But I tried to be as unintrusive as possible and preserve the current behavior by default with that new user input attribute.

@jglick
Member

jglick commented Jan 16, 2024

The user doesn't always have control to set the umask in the container if the step is not sh but a step implemented by another plugin.

Well,

container('whatever') {
  sh 'umask 002'
  otherStep()
}

Anyway, from the Jira description I suspect this is just a workaround for something more fundamentally broken in your environment. $WORKSPACE (and $WORKSPACE_TMP) are of course expected to be writable without any special option.

@dee-kryvenko
Author

podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: test
    image: debian
    tty: true
    command:
    - cat
''') {
    node(POD_LABEL) {
        container(name: 'test') {
            sh 'id'
            sh 'umask'
            sh 'umask 002'
            sh 'umask'
            sh 'mkdir test && touch test/test.txt'
            sh 'ls -la test'
        }
        sh 'id'
        sh 'ls -la test'
        sh 'rm -f test/test.txt'
    }
}
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] sh
+ id
uid=0(root) gid=0(root) groups=0(root)
[Pipeline] sh
+ umask
0022
[Pipeline] sh
+ umask 002
[Pipeline] sh
+ umask
0022
[Pipeline] sh
+ mkdir test
+ touch test/test.txt
[Pipeline] sh
+ ls -la test
total 8
drwxr-xr-x 2 root root 4096 Jan 16 19:43 .
drwxr-xr-x 3 1000 1000 4096 Jan 16 19:43 ..
-rw-r--r-- 1 root root    0 Jan 16 19:43 test.txt
[Pipeline] }
[Pipeline] // container
[Pipeline] sh
+ id
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
[Pipeline] sh
+ ls -la test
total 8
drwxr-xr-x 2 root    root    4096 Jan 16 19:43 .
drwxr-xr-x 3 jenkins jenkins 4096 Jan 16 19:43 ..
-rw-r--r-- 1 root    root       0 Jan 16 19:43 test.txt
[Pipeline] sh
+ rm -f test/test.txt
rm: cannot remove 'test/test.txt': Permission denied
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE

What am I doing wrong?

@jglick
Member

jglick commented Jan 16, 2024

What am I doing wrong?

Running the test container as root. You would need to configure the pod so that all containers run as jenkins:jenkins ~ 1000:1000.

@dee-kryvenko
Author

podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: test
    image: python
    tty: true
    command:
    - cat
    securityContext:
     runAsUser: 1000
     runAsGroup: 1000
''') {
    node(POD_LABEL) {
        container(name: 'test') {
            sh 'pip install boto3'
        }
    }
}
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] sh
+ pip install boto3
WARNING: The directory '/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
Defaulting to user installation because normal site-packages is not writeable
Collecting boto3
  Obtaining dependency information for boto3 from https://files.pythonhosted.org/packages/bc/7b/6b6613b93e895364abead3265a7a48034962e7d11c26775310fc2f06c46b/boto3-1.34.19-py3-none-any.whl.metadata
  Downloading boto3-1.34.19-py3-none-any.whl.metadata (6.6 kB)
Collecting botocore<1.35.0,>=1.34.19 (from boto3)
  Obtaining dependency information for botocore<1.35.0,>=1.34.19 from https://files.pythonhosted.org/packages/4a/84/153a044b34e9939fafd69f60f1f15dc36a6ff7b24666d4684bbdd89a26d3/botocore-1.34.19-py3-none-any.whl.metadata
  Downloading botocore-1.34.19-py3-none-any.whl.metadata (5.6 kB)
Collecting jmespath<2.0.0,>=0.7.1 (from boto3)
  Downloading jmespath-1.0.1-py3-none-any.whl (20 kB)
Collecting s3transfer<0.11.0,>=0.10.0 (from boto3)
  Obtaining dependency information for s3transfer<0.11.0,>=0.10.0 from https://files.pythonhosted.org/packages/12/bb/7e7912e18cd558e7880d9b58ffc57300b2c28ffba9882b3a54ba5ce3ebc4/s3transfer-0.10.0-py3-none-any.whl.metadata
  Downloading s3transfer-0.10.0-py3-none-any.whl.metadata (1.7 kB)
Collecting python-dateutil<3.0.0,>=2.1 (from botocore<1.35.0,>=1.34.19->boto3)
  Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 247.7/247.7 kB 20.5 MB/s eta 0:00:00
Collecting urllib3<2.1,>=1.25.4 (from botocore<1.35.0,>=1.34.19->boto3)
  Obtaining dependency information for urllib3<2.1,>=1.25.4 from https://files.pythonhosted.org/packages/d2/b2/b157855192a68541a91ba7b2bbcb91f1b4faa51f8bae38d8005c034be524/urllib3-2.0.7-py3-none-any.whl.metadata
  Downloading urllib3-2.0.7-py3-none-any.whl.metadata (6.6 kB)
Collecting six>=1.5 (from python-dateutil<3.0.0,>=2.1->botocore<1.35.0,>=1.34.19->boto3)
  Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Downloading boto3-1.34.19-py3-none-any.whl (139 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 139.3/139.3 kB 108.5 MB/s eta 0:00:00
Downloading botocore-1.34.19-py3-none-any.whl (11.9 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.9/11.9 MB 198.9 MB/s eta 0:00:00
Downloading s3transfer-0.10.0-py3-none-any.whl (82 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 82.1/82.1 kB 218.5 MB/s eta 0:00:00
Downloading urllib3-2.0.7-py3-none-any.whl (124 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 124.2/124.2 kB 233.6 MB/s eta 0:00:00
Installing collected packages: urllib3, six, jmespath, python-dateutil, botocore, s3transfer, boto3
ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/.local'
Check the permissions.


[notice] A new release of pip is available: 23.2.1 -> 23.3.2
[notice] To update, run: pip install --upgrade pip
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE

What am I doing wrong now?

@jglick
Member

jglick commented Jan 16, 2024

With images not already designed to run as 1000:1000 you may need to add a user, override $HOME, etc. (On OpenShift it is actually more laborious, since the UID is randomized every time.)
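For illustration, one (untested) way to adapt such an image without rebuilding it might be to run the whole pod as 1000:1000 and point HOME at a writable emptyDir; the scratch-home volume name and /home/build mount path below are made up for this sketch:

podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
  volumes:
  - name: scratch-home
    emptyDir: {}
  containers:
  - name: test
    image: python
    tty: true
    command:
    - cat
    env:
    - name: HOME
      value: /home/build
    volumeMounts:
    - name: scratch-home
      mountPath: /home/build
''') {
    node(POD_LABEL) {
        container(name: 'test') {
            // pip's cache and --user installs should now land under the writable HOME
            sh 'pip install boto3'
        }
    }
}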

@dee-kryvenko
Author

It is typical for a pipeline to switch between versions of software to test things. For example, I want to test my code with different versions of Python. With Docker/Kubernetes, it is as simple as picking the correct tag. There are hundreds of tags on https://hub.docker.com/_/python, and if I understand what you are implying correctly, you want me to re-build every tag of every public image to comply with Jenkins' made-up standards, and to do it continuously, since versions of packages get regular security patches. Do that for Python, Ruby, Go, Java, Terraform, etc. This is such a waste of compute resources, and I wonder how much that alone contributes to strain on the energy grid.

I am starting to think that the only thing I am doing wrong here is using Jenkins. We already established that from our past conversations, and Jenkins is actively being replaced at my current employer (all while saving costs on rebuilding images and so much cost/time on made-up artificial limitations!). All I wanted was to share a two-line fix for an age-old problem and help out the community. But if help is not needed, and the response is to do things of insane magnitude on the user side instead of a two-line fix in the plugin code, feel free to close the PR, and thank you, as always, for reaffirming my past choices!

@jglick
Member

jglick commented Jan 16, 2024

you want me to re-build every tag of every public image

Of course not. There are a variety of ways to adapt to the mismatch between how containers were designed to be used and how they get used in CI systems which expect to mount common volumes, including tricks which do not require a separate image. (Yet another not previously mentioned here: end the container block with chown -R 1000:1000 .) Unfortunately AFAIK neither Docker nor K8s offers a way to transparently remap UIDs in filesystems. (Maybe a Linux kernel limitation, if I follow e.g. moby/moby#2259 (comment) correctly?) From search hits like this I suspect the kind of problem here is not necessarily limited to Jenkins.
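As a rough sketch of that last trick (the build command here is a placeholder), the chown can be made the last thing the container block does, for example in a try/finally so it also runs when a step fails, though not when the pod itself is disrupted:

container('test') {
    try {
        // whatever runs here as root (or any other UID) may leave files the agent cannot touch
        sh 'make build'
    } finally {
        // hand the workspace back to the agent's jenkins UID/GID before leaving the container
        sh 'chown -R 1000:1000 .'
    }
}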

@dee-kryvenko
Author

dee-kryvenko commented Jan 16, 2024

What are these "variety of ways" you are talking about?

end the container block with chown -R 1000:1000

And if the pipeline suddenly aborts before it has a chance to run this step, a cache volume will be left forever broken, and the next container to grab it might not be running as the same user as the last container did (or as root). Not to mention other possible cases, like sharing a socket via a volume or any sort of simultaneous volume access.

You are right, this is not a problem limited to Jenkins. In fact, I filed a feature request to add fsUser support next to fsGroup in Kubernetes ages ago, and I was neither the first nor the only one. But that is not the point. The point is that a combination of fsGroup+supplementalGroups+umask 002 has been a widely adopted pattern, in the absence of alternative options, for a long while now. It does cover about 99% of the cases, except for cases when software explicitly requires user ownership (such as SSH private keys), and 99% or even 80% is infinitely more than 0%. The point is that it is only with Jenkins that I experience hurdles and have to jump through extra hoops while implementing that pattern, as with so many other patterns. Notoriously artificial limitations are, in fact, a pattern in and of themselves well known to be attributed to Jenkins, as is the attitude from maintainers like the one you are demonstrating right now.

Look, I am seriously not trying to be mean here: my career started with Jenkins, and Jenkins does have a special sentimental place in my heart. I am spending my time writing this because I care. But it is these repeated conversations I keep having with you and other maintainers that make me extremely frustrated and sad. What is the downside of merging this PR? Does it break anything? No. Do two extra lines add any extra burden for maintainers? No. Is it missing tests? No. Is it maybe poor code style? I hope I am not so bad as to make multiple errors in a two-line change, not to mention that for the most part it is a copycat of the shell option right above the new umask option, but I will change whatever you want me to change. What is it then? I am sure there are downsides I am unaware of, but are they really more damaging than not having this feature?

When did this paradigm shift happen, such that I now have to prove "why" to get my contribution accepted, rather than you having to prove "why not"? Isn't it the case that contributions and contributors are a sparse commodity? Why are we trying to solve the opposite problem, as if we had too many contributions? I am just trying to bring your attention to the fact that this attitude might not be the best choice for Jenkins in the position it is currently in, and at a certain point many users just like myself may find that it is actually cheaper and more rewarding to simply stop using Jenkins. Half of the problems I am facing are not even technical: they are either having to argue with someone, like right now, or things like you yourself admitting on some other ticket concerning CPS that there simply is not a single member of the project who understands that piece of code any more. I can no longer make the case to my employer that Jenkins can or will make everyone's life easier. Not a good look. Not a good look to go with that iconic Jenkins logo.

@jglick
Member

jglick commented Jan 16, 2024

a cache volume will be left forever broken

Are you using permanent volumes? It does not look that way from your example—the workspace is ephemeral to the pod. Mounting volumes for caching purposes is possible but much trickier for various reasons (not just file ownership).

Just trying to clarify the use case, as this is the first time I can recall hearing of anyone attempting to make a pod with multiple UIDs work by means of umask.

@dee-kryvenko
Author

dee-kryvenko commented Jan 16, 2024

I tried to boil it down to a minimal reproducible example, because what I am demonstrating is true for ephemeral/workspace volumes too, and whether or not it is possible to work around the issue on ephemeral/workspace volumes some other way, the argument here is that with the feature proposed in this PR it is much cleaner and more convenient for the user. I'd even go as far as to argue that it should be included as a best practice in the docs. What the default umask was meant for is not even applicable in a containerized environment; multi-user containers are not a thing (practically speaking). The concept of a group does regain its meaning in a multi-container environment such as a pod, though, which I would speculate was the exact intention behind the fsGroup implementation in Kubernetes.

But yes, the biggest problem I am having is with cache volumes that are persistent and configured with a Retain policy. I have a little controller that goes over Released volumes and makes them Available again, without actually wiping them, based on a StorageClass annotation (it's OSS, by the way, but I don't want to self-advertise here). So let's say I have a storage class for an m2 cache, but my users are using many different Java images (of which there is such a great abundance these days), and some of them happen to use different users. Most run as root, but there are cases where root can't be used (for example, users may try to run Postgres from JUnit, and it will refuse to start under root, so they have to use runAsUser; but they can't set it to 1000 or any random value they want, it has to be an existing user with a writable home, and they can't or don't want to build a custom image just for that). I have this same problem again and again with the Maven cache, the Gradle cache, the Go cache, the Terraform plugins cache, and so much more. I am working with a diverse crowd of developers who are unrelated to each other and use different languages/platforms, and somehow they all run into this same issue. I have been using this patch in production for a while now and it works great.

Anyway, there are so many possible scenarios that no one can even begin to list them all, so like I mentioned in my previous comment, I would like to turn the table and have you explain why this can't be merged, rather than me trying to explain why someone other than me may need it. Alternatively, if you give me a hint on how I could make this feature into a separate plugin, I may just try to do that, but I am not fluent enough with this plugin's code to understand the process, although I am familiar with creating plugins in general.

@dee-kryvenko
Author

podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  securityContext:
    fsGroup: 1000
    supplementalGroups: [1000]
  containers:
  - name: test
    image: python
    tty: true
    command:
    - cat
''') {
    node(POD_LABEL) {
        container(name: 'test', umask: '002') {
            sh 'id'
            sh 'umask'
            sh 'mkdir test && touch test/test.txt'
            sh 'ls -la test'
        }
        sh 'id'
        sh 'ls -la test'
        sh 'rm -f test/test.txt'
    }
}
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] sh
+ id
uid=0(root) gid=0(root) groups=0(root),1000
[Pipeline] sh
+ umask
0002
[Pipeline] sh
+ mkdir test
+ touch test/test.txt
[Pipeline] sh
+ ls -la test
total 8
drwxrwsr-x 2 root 1000 4096 Jan 16 23:55 .
drwxr-sr-x 3 1000 1000 4096 Jan 16 23:55 ..
-rw-rw-r-- 1 root 1000    0 Jan 16 23:55 test.txt
[Pipeline] }
[Pipeline] // container
[Pipeline] sh
+ id
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)
[Pipeline] sh
+ ls -la test
total 8
drwxrwsr-x 2 root    jenkins 4096 Jan 16 23:55 .
drwxr-sr-x 3 jenkins jenkins 4096 Jan 16 23:55 ..
-rw-rw-r-- 1 root    jenkins    0 Jan 16 23:55 test.txt
[Pipeline] sh
+ rm -f test/test.txt
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS

@dee-kryvenko
Author

A more complete example would be something to the effect of:

podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  securityContext:
    supplementalGroups:
     - 1000
    fsGroup: 1000
    fsGroupChangePolicy: "OnRootMismatch"
  volumes:
    - name: "gradle-cache"
      ephemeral:
        volumeClaimTemplate:
          spec:
            storageClassName: "gradle-cache-class-annotated-for-the-release-controller"
            resources:
              requests:
                storage: "200Gi"
            accessModes:
              - "ReadWriteOnce"
  containers:
  - name: "jnlp"
    image: "jenkins/inbound-agent:3148.v532a_7e715ee3-1"
    tty: true
    command:
     - "/bin/sh"
    args:
     - "-c"
     - "umask 002; jenkins-agent"
  - name: jdk
    image: eclipse-temurin:17.0.9_9-jdk
    imagePullPolicy: IfNotPresent
    securityContext:
      runAsUser: 3
      runAsGroup: 4
    tty: true
    command:
    - cat
    volumeMounts:
    - name: "gradle-cache"
      mountPath: "/gradle/caches"
''') {
    node(POD_LABEL) {
        container(name: 'jdk', umask: '002') {
            sh 'gradle ...'
        }
    }
}

Which, by the way, is something I was until recently unable to do for the WORKSPACE volume, but it was implemented in https://issues.jenkins.io/browse/JENKINS-72444 (thanks for that!).

@Vlatombe
Member

I have mixed feelings. I understand the problem here is using off-the-shelf container images, with UIDs that can differ from the jnlp container, which uses 1000 by default.

This leads to various issues, the most apparent one being that the durable-task log is not writable by UIDs other than 1000 or 0. I guess this particular issue may be fixed by relaxing the permissions of that file.

As mentioned, there is also the problem that the workspace directory gets read/written by different UIDs, and this can lead to inconsistent results when using the default umask 022, since files created by one container can't be written by others.

Similar issues are documented in Tekton

Noting that GitLab went down the route of tweaking the umask inside their build image, and it went really badly (#57, then #632, issue #1736, ...), so this is really a slippery area.

About the solution proposed here
fsGroup+supplementalGroups+umask 002

It does cover about 99% of the cases

The main problem with proposing this as a solution for containers with different UIDs is that the people who will typically use these (off-the-shelf container images) are the same people who are liable to smash their fingers with a hammer because they forgot they used a non-default umask while building their website and then publishing it.

end the container block with chown -R 1000:1000 .

That would be the saner option, but it comes with a performance cost, especially in the current container step model where one can go back and forth between containers.

When did this paradigm shift happen, such that I now have to prove "why" to get my contribution accepted, rather than you having to prove "why not"?

We had numerous contributions in the past that were accepted a bit too easily and ended up being a burden to maintain. Once something is in, it becomes much harder to remove. Especially since here we're dealing with a plugin with a large number of users, and a new option that could affect security. I understand you've done your best to make your patch unintrusive.

@dee-kryvenko
Author

Noting that GitLab went down the route of tweaking the umask inside their build image, and it went really badly (#57, then #632, issue #1736, ...), so this is really a slippery area.

Noting that GitLab's genius idea was to use umask 0000, which is indeed really... bad. In this PR I am not prescribing any particular umask; it will be up to the user. In my examples above I used umask 002, which is what GitLab "reverted" to in the end; they did not revert to 0022, which was the default before the change.

The main problem with proposing this as a solution for containers with different UIDs is that the people who will typically use these (off-the-shelf container images) are the same people who are liable to smash their fingers with a hammer because they forgot they used a non-default umask while building their website and then publishing it.

That's actually a good point; indeed it would be easy to slip up if the contents of the workspace get packaged. But I would emphasize that this PR does not make it the default behavior, users need to opt in explicitly, and I am hoping you are not trying to prescribe to users what they should and shouldn't do "for their own safety".

That would be the saner option, but it comes with a performance cost, especially in the current container step model where one can go back and forth between containers.

Not only that, but like I said earlier, the build pod can experience involuntary disruption at any moment. This is Kubernetes. That will leave broken garbage behind.

Especially since here we're dealing with a plugin with a large number of users, and a new option that could affect security. I understand you've done your best to make your patch unintrusive.

I really do not understand this comment. The change is fully backward compatible and does not change pre-existing behavior in any way. It does not change the default behavior either. Even if the argument is that setting a wrong umask (especially 0000) opens up a serious security liability, users need to explicitly opt in to do it. This is API version control 101. I am hoping you are not implying that you'd rather make life harder for 99% of the people just because 1% can misuse the feature and shoot themselves in the foot.

@jglick
Member

jglick commented Jan 18, 2024

make life harder for 99% of the people

Well, of those who had this problem to begin with. AFAICT no prior pull request or Jira issue referred to umask at all. So everyone else either

  • runs single-container pods
  • uses pod definitions with consistent UIDs (e.g., @carlossg, the original author of this plugin, also maintains the maven image, which explicitly documents usage of UID 1000; this is commonly required anyway on OpenShift and when running under Pod Security Admission)
  • runs pods with some containers as root but is not trying to run scripts both inside and outside the container
  • thought of using umask and just added that to their sh script
  • found some other way to work around filesystem owner mismatches

For the last two cases, an option to the container step (plus, I suppose, a corresponding option for agent kubernetes in Declarative Pipeline) could be helpful insofar as it is a bit more discoverable (e.g., in Snippet Generator) than showing the same trick in README.md, if indeed this is the best workaround for the UID mismatch.
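For reference, the examples above already show the Scripted form added by this PR (container(name: ..., umask: '002')); a Declarative counterpart does not exist today, so the sketch below only marks where such a hypothetical option would surface in an agent kubernetes block:

pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: test
    image: busybox
    command:
    - sleep
    args:
    - infinity
'''
            defaultContainer 'test'
            // a umask option here is purely hypothetical; only the scripted
            // container(name: ..., umask: ...) attribute is added by this PR
        }
    }
    stages {
        stage('check') {
            steps {
                sh 'umask'
            }
        }
    }
}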

There is a need to completely rewrite ContainerExecDecorator to not use the API server: the current implementation is brittle and does not scale well. Whether the rewrite would have any interaction with umask, I am not sure at this point; any proposed change would have to be tested pretty carefully against various scenarios including UID mismatches. The running idea is to send commands over a named pipe from the jnlp container, at least on Linux (Windows is trickier). In most cases this would actually be going through the wrapper script (or Golang binary) used by durable-task (as came up in the JENKINS-60054 report).

As an aside, for CloudBees CI customers we would recommend using a cache/uncache step for things like the local Gradle repo, rather than using a PVC which can be error-prone.

@dee-kryvenko
Author

Well, if the ContainerExecDecorator rewrite makes this umask option obsolete, irrelevant, or even problematic to re-implement, it can always be feature-gated, disabled, or turned into a no-op. If Kubernetes taught me one thing, it is API version control, feature gates, and graceful rollout/deprecation lifecycles, so I find it ironic that we are having this conversation in the context of the Kubernetes plugin. It should be cheap to introduce, experiment, and drop what didn't stick; the fact that you are trying to guard against a new feature, one that is small in scope and has demonstrated value, for the sake of a future theoretical refactoring tells me something is not right. Anyway, you've made your point; have a nice day.

@dee-kryvenko
Author

dee-kryvenko commented Jan 18, 2024

AFAICT no prior pull request or Jira issue referred to umask at all. So everyone else either

I have been running into this issue for 8 years now, since 2016. And by "I", I mean the hundreds and hundreds of developers of all kinds I have serviced over that time. They were almost never able to understand why it was happening to them. As you can tell, it took me personally at least 3+ years to even be able to formulate it properly as JENKINS-60054 in 2019 (in large part by learning English from scratch), and another 4 to learn enough to be able to send you this PR. Mind you, I was somehow surviving all those years. And I always have better things to do than argue with someone on GitHub, which is what tends to happen every time I try to contribute to Jenkins. Be aware of your survivorship bias. Failing to observe a phenomenon does not disprove it, or even imply that it doesn't exist.
