So, can we use the ssh_url_to_repo and http_url_to_repo values to check whether the current project belongs there? If so, I can make some adjustments to the /bin/download bash script and try to make it work.
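Something along these lines is what I have in mind (just a sketch; PROJECT_PATH and the jq dependency are my assumptions):

```bash
#!/usr/bin/env bash
# Sketch only: check whether the current checkout belongs to $GITLAB_URL
# by comparing the local "origin" remote with the URLs GitLab reports.
# Assumes jq is available and PROJECT_PATH holds the URL-encoded project path.

remote_url="$(git remote get-url origin)"

project_json="$(curl --silent --fail \
  --header "PRIVATE-TOKEN: ${GITLAB_PRIVATE_TOKEN}" \
  "${GITLAB_URL}/api/v4/projects/${PROJECT_PATH}")"

ssh_url="$(echo "${project_json}" | jq -r '.ssh_url_to_repo')"
http_url="$(echo "${project_json}" | jq -r '.http_url_to_repo')"

if [ "${remote_url}" = "${ssh_url}" ] || [ "${remote_url}" = "${http_url}" ]; then
  echo "Project matches ${GITLAB_URL}, continuing."
else
  echo "Current project does not belong to ${GITLAB_URL}, aborting." >&2
  exit 1
fi
```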
I think we should first review all Docker and Composer projects to see where else we use the GITLAB_URL environment variable. Because if such an issue happens here, it could well happen elsewhere too.
Once we know the full scope of the issue, we can discuss what the best resolution would be for that.
I'll have a look and come back to this later today.
OK, the good news is that the scope remains small. GITLAB_URL is only used in this Docker project, but in two files: download and merge. That reminds me that those two files contain a lot of redundant code and are due for a refactoring anyway. I wonder if now is the right time to fix that. And if so, should we rewrite those two script files entirely and use the GitLab CLI instead? It is already part of this package (see https://gitlab.com/gitlab-org/cli), and it would be much easier to do certain things with that than with bash and curl.
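Just to illustrate the difference (the endpoint is picked arbitrarily; as far as I can tell, glab api resolves :id from the repository in the current directory):

```bash
# Today, with curl: token and instance URL have to be configured explicitly.
curl --header "PRIVATE-TOKEN: ${GITLAB_PRIVATE_TOKEN}" \
  "${GITLAB_URL}/api/v4/projects/${PROJECT_ID}/pipelines"

# With the GitLab CLI, run from inside the repository: host and project
# are detected from the git remote, authentication comes from glab itself.
glab api projects/:id/pipelines
```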
It depends a bit on how quickly you need that fix. If you need it today, it can't be done with a proper approach, and only a quick fix will help. If we have a bit more time, like a few days, then I'd prefer the proper approach.
@jurgenhaas great, thank you! CLI looks interesting, I didn't know before about that!
> It depends a bit on how quickly you need that fix. If you need it today, it can't be done with a proper approach, and only a quick fix will help. If we have a bit more time, like a few days, then I'd prefer the proper approach.
It's not urgent at all! I already have a workaround, but also we can use chatops (mattermost) commands to get DB dumps/files, etc.
This is now implemented with the GitLab CLI client, and the code is much cleaner and much more reliable. The environment variables GITLAB_URL and GITLAB_PRIVATE_TOKEN are no longer required. The client grabs all required information from the current git repository.
To use the client, you need to authenticate once on your host. Use glab auth login --help to see the available options. The authentication is stored persistently in your ~/.config/glab-cli/config.yml file and is then available in all L3D projects. The client also supports working with any number of GitLab instances, not just one.
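For example (the hostname here is just a placeholder for your own GitLab instance):

```bash
# Authenticate once per host; glab stores this in ~/.config/glab-cli/config.yml
glab auth login --hostname gitlab.example.com

# Verify which hosts you are authenticated against
glab auth status
```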
One little caveat though: the download script triggers a pipeline that pulls the database dump from a server, and a glab command waits for that pipeline to finish. This command requires user confirmation at the end, but I guess that is well worth it.
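Roughly, the flow looks like this (the exact subcommands used in the script may differ a bit; shown here just for illustration):

```bash
# Trigger a pipeline for the current branch
glab ci run

# Follow the pipeline until it finishes; this is where the
# confirmation prompt mentioned above comes in at the end.
glab ci status --live
```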
@dejan-acolono this is available in L3D version 2.8.1 - please give it a try.
but in our case, I think we still have the same problem :/
Trigger pipeline ...
none of the git remotes configured for this repository points to a known GitLab host. Please use glab auth login to authenticate and configure a new host for glab
but let me try & re-configure glab authentication somehow
I've had that issue when the pipeline didn't produce any jobs because of rules and conditions. Which pipeline configuration do you use, the one from lakedrops or your own?
but then any script included in .before-script-template.yml (on your side) doesn't work for us (script not found or something like that).
Do you have maybe any solution/workaround for that?
But yes, our gitlab-ci-cd/drupal is probably not up to date with yours. Let me check it.
You should be able to use our template by simply using this in your .gitlab-ci.yml:
include:
  - project: 'gitlab-ci-cd/drupal'
    ref: main
    file: '/lakedrops.yml'
and modify that to use it from the remote server. Then you don't need to include the before-script separately, do you? But that's probably a separate issue on how to use the templates from a remote host.
Unable to create pipeline
Project `gitlab-ci-cd/drupal` not found or access denied! Make sure any includes in the pipeline configuration are correctly defined.
@slsawhney thanks for reporting this issue. Note that this issue was closed about a month ago, and commenting on closed issues is not a good idea; chances are that it gets missed. Starting a new issue instead is recommended. Just a tip for next time.
As for your issue, there are several questions:
Does the glab client recognize the project URL correctly? That is, does the URL printed in your screenshot identify the location of the project correctly?
Have you authenticated your glab client to talk to your GitLab? This would usually work with glab auth login (required once on your host), and you can verify the status with glab auth status.
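To check both points quickly, you could run something like this from inside your project:

```bash
# 1. Which remotes does your local repository actually point at?
git remote -v

# 2. Is glab authenticated against that host?
glab auth status
```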