question | answer | tag | question_id | score |
---|---|---|---|---|
TeamCity allows me to report back from my MsBuild script using the ##teamcity interaction. I can use this to tell TeamCity that the build has FAILED, or indeed SUCCEEDED, however I would like to tell it to CANCEL the build instead. Does anyone know of a way to do this?
I can use this to inform TeamCity of failure...
<Message Text="##teamcity[buildStatus status='FAILURE']" Condition="Something==SomeCondition" />
I would love to do this...
<Message Text="##teamcity[buildStatus status='CANCEL']" Condition="Something==SomeCondition" />
I've tried out the TeamCity service messages but nothing has worked thus far.
EDIT:
So it seems this feature is not available, although a workaround HTTP request can be used to cancel a build. There is also a feature request for cancelling a build on the TC website.
| According to the JetBrains issue tracker and release page, since TeamCity 2019.1 EAP 1 builds can be stopped with a service message such as:
##teamcity[buildStop comment='canceling comment' readdToQueue='true']
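In MSBuild terms that maps directly onto the question's own pattern - a minimal sketch (the Condition is the question's placeholder, and readdToQueue is optional):
<Message Text="##teamcity[buildStop comment='canceling comment' readdToQueue='false']" Condition="Something==SomeCondition" />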
| TeamCity | 4,125,624 | 15 |
TeamCity started throwing errors when I added the new out variable syntax in our latest code update:
if (Enum.TryParse(input, out MyProject.ClassificationType classification))
{
result.Classification = classification;
}
TeamCity threw this error:
[Csc] MyProject\MyCode.cs(125, 111): error CS1003: Syntax error, ',' expected
The code builds and runs fine in Visual Studio.
| MSBuild on the TeamCity agent's machine was outdated: it was still using Microsoft Build Tools 2015, which predates the C# 7 compiler that understands out variable declarations.
I was able to fix this by downloading and installing the new Build Tools for Visual Studio 2017 found here:
https://www.visualstudio.com/downloads/ -> Other Tools and Frameworks -> Build Tools for Visual Studio 2017 -> Download
Or bypass the spam by going here: https://www.visualstudio.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=15
Update TeamCity's build step to build using MSBuild 15 or Visual Studio 2017.
| TeamCity | 43,880,813 | 15 |
We are using TeamCity to produce *.nupkg artifacts which we don't want to be cleaned up. TeamCity provides a field where you can specify an ANT-style pattern for indicating which files you do or don't want to be cleaned up. Let's assume for a second that we have the following files which we do not want to be cleaned up:
/a.nupkg
/dir1/b.nupkg
/dir1/dir2/c.nupkg
Does the *.nupkg pattern match .nupkg files both in the root directory AND all child directories, or do I need to use **.*nupkg to traverse all directories?
I read the following documentation but this is still ambiguous to me: http://ant.apache.org/manual/dirtasks.html#patterns
If there is an Ant-Pattern tester (similar to http://regexpal.com/) that would be amazing.
| To match all files, in all directories (from the base directory and deeper)
**/*.nupkg
Will match
sample.nupkg
sample-2.nupkg
tmp/sample.nupkg
tmp/other.nupkg
other/new/sample.nupkg
** will match any directory (multiple directories deep).
*.nupkg will match any file with the nupkg extension. Or just * will match any file or any directory (but just a single directory deep).
PS: There is no Ant Pattern Tester.
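If you want a quick ad-hoc check anyway, the two rules above can be approximated in a few lines. Below is a minimal C# sketch of my own (an assumption, not part of Ant - it only handles the "**/" and "*" cases discussed here):
using System;
using System.Text.RegularExpressions;

class AntPatternTester
{
    // Naive subset of Ant matching: "**/" = zero or more directories,
    // "*" = anything within a single path segment.
    static Regex ToRegex(string pattern) =>
        new Regex("^" + Regex.Escape(pattern)
            .Replace(@"\*\*/", "(.*/)?")
            .Replace(@"\*", "[^/]*") + "$");

    static void Main()
    {
        var regex = ToRegex("**/*.nupkg");
        foreach (var path in new[] { "a.nupkg", "dir1/b.nupkg", "dir1/dir2/c.nupkg", "readme.txt" })
            Console.WriteLine(path + ": " + regex.IsMatch(path)); // first three match, the last does not
    }
}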
| TeamCity | 33,417,655 | 15 |
I am currently trying to get some tests run in gradle for a shared build server. I get the following error:
Error occurred during initialization of VM
java.lang.InternalError: Could not create SecurityManager:
worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager
at sun.misc.Launcher.<init>(Launcher.java:102)
at sun.misc.Launcher.<clinit>(Launcher.java:53)
at java.lang.ClassLoader.initSystemClassLoader(ClassLoader.java:1451)
at java.lang.ClassLoader.getSystemClassLoader(ClassLoader.java:1436)
The JVM commandline arguments from running with --debug are:
-DisTestMode=true
-Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager
-DtestLocators
-javaagent:../expandedArchives/org.jacoco.agent-0.7.8.jar_cbks496gfbgpke4b5ek12xen8/jacocoagent.jar=destfile=../../jacoco/testSpringContext_cnt_dmabtec.exec,append=true,inclnolocationclasses=false,dumponexit=true,output=file,jmx=false
-Xms128m
-Dfile.encoding=US-ASCII
-Duser.country=US
-Duser.language=en
-Duser.variant
I've tried running with different versions of gradle on local and on server to compare:
2.14.1 on local vs 2.14.1 on server
4.10.1 on local vs 4.5.x on server
On local, it will always pass all tests regardless of gradle version.
On the server, it will always fail when it tries to run one specific test.
Both local and server have the same JVM arguments.
Both are using java JDK 8.
Same result if .gradle directory in working directory is deleted prior to gradle task
Cannot delete the /.gradle directory with gradle-worker.jar since it is a shared build server (I don't have the permissions)
Does anyone have any ideas as to what is the issue? If you need more information, please ask and I will provide if I am able to. Thanks.
| This occurs when the ~/.gradle/daemon folder is corrupted on macOS/Unix.
Forcefully removing the daemon folder resolved the issue for me.
rm -rf ~/.gradle/daemon
| TeamCity | 53,217,315 | 15 |
I need to execute git commands in a TeamCity build step.
These git commands need to use an SSH-based URL for the git repo in order to authenticate as a privileged user to the git server (because these git commands will actually modify the git repo, not just read it).
I am aware of this question.
I have already set the VCS checkout mode to "Automatically on Agent". The VCS root is correctly configured with SSH and working well.
However, as stated in the documentation, TeamCity
temporarily saves the key on the agent's file system and removes it after git fetch/clone is completed.
So even though TeamCity correctly used the SSH key during agent-side checkout, the key is intentionally not accessible later in the build.
But I really want to use the key later!
The output that the git commands generate is:
[06:12:29][Step 3/4] Permission denied (publickey).
[06:12:29][Step 3/4] fatal: Could not read from remote repository.
[06:12:29][Step 3/4]
[06:12:29][Step 3/4] Please make sure you have the correct access rights
[06:12:29][Step 3/4] and the repository exists.
I have confirmed that the known_hosts file exists and contains the appropriate public keys. I have also confirmed that the C:\Users\systeamcityagent\.ssh does not contain any private keys (as expected).
I am running TeamCity Enterprise 9.1.3.
What is the recommended solution for this?
| TeamCity 9.1 introduced a new build feature called SSH Agent that allows you to establish agent-side SSH connections using server-stored SSH keys.
See What's New in TeamCity 9.1
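With that build feature enabled and a server-stored key selected, an ssh-agent holding the key is available to subsequent build steps, so a later command-line step can run authenticated git commands directly - a minimal sketch (the remote URL and branch are placeholders):
git remote set-url origin git@bitbucket.example.com:team/repo.git
git push origin HEAD:master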
| TeamCity | 33,918,088 | 15 |
For my automated tests I have a project added to the TeamCity server and 2 agent pools: one is a Windows server and the other is a Mac. The default agent pool is the Windows one, but I wanted to run my tests on the Mac server. To change the agent pool to the Mac, I tried to add an Agent Requirement by setting teamcity.agent.name to the Mac server from the list, but it is not added to the list of compatible agents associated with the project. Instead it is added to the compatible agents with this warning on top of it: Following agents belong to the agent pools which are not associated with "Tests" project, where Tests is the name of my project.
How can I associate the Mac agent with my project?
| You'll have to add the Mac agent to the agent pool for this project - that's configured in the Agents section available at /agents.html?tab=agentPools on your TeamCity build server.
Alternatively you can create a new agent pool with the Mac agent, and add the project to that pool.
| TeamCity | 29,924,888 | 15 |
I'd like to set up a nightly build for my release branch. Since I'm using git-flow I don't always have a release branch, so I would like it to build only if it can find a branch matching the pattern:
refs/heads/release-*
Any idea how to get TeamCity to do this for me?
| Use Branch Filter in the Trigger and set the only filter as
+:release-*
Also in Version Control under Branch Specification use
+:(release-*)
I also had a similar issue and solved it as described above.
I think this will solve your problem too.
| TeamCity | 27,122,891 | 15 |
I have a number of client side packages managed by bower. When we deploy our application (through teamcity) we do a bower install to get the latest version of each package and then copy this to our server.
When I run this from my local machine bower install takes 10-20s. When I run it as a build step in teamcity (note command line build step with custom script containing "bower install") it takes 4 minutes. If I remote desktop onto that machine and run bower install from the command line it takes 10-20s.
Has anyone got any thoughts what's going on?
Edit
If I look at ProcExp on the server, it seems ssh.exe hangs for a long time before finishing execution.
Some extra details:
TeamCity Enterprise 7.1.4 (build 24331); Agent Version: 24331
Windows Server 2008 R2
Agent running as admin account
Git v1.8
Build step is custom script; node node_modules/bower/bin/bower install
Tried with both teamcity.git.use.native.ssh=false & teamcity.git.use.native.ssh=true
Using private keys in /.ssh
I found this issue on TeamCity's YouTrack which seems to be the same/similar issue but has since been closed. I'm not certain if it's related or not. I've also raised a new issue but have had no response.
| We discovered that Git for Windows installs an old version of SSH; upgrading to the latest version of SSH fixes the slowness: http://darrell.mozingo.net/2011/09/29/painfully-slow-clone-speeds-with-msysgit-gitextensions/
| TeamCity | 14,362,554 | 15 |
I have the following steps for my project:
build
unit tests
test coverage
duplicates finder
fx cop
Is there any way to make TeamCity execute steps 2-5 in parallel? Can I use several build agents for that?
| Yes. Assuming you have at least four build agents, you can do the following:
Under MyProject, define 5 build configurations (Build, Unit Tests, etc).
Edit build configurations 2-5, and define a new trigger in Build Triggering (choose Finish Build Trigger, and set it to run after a successful run of Build).
Edit build configurations 2-5, and define a new artifact dependency in Dependencies (choose Add new artifact dependency, and choose the output of your Build configuration).
As long as you have agents available, the build configurations will run after a successful Build, each on its own agent.
On a side note, without knowing your specific project, I'd recommend doing that only if the whole process takes a really long time (let's say more than ~15 minutes), and you can spare those machines (virtual or not).
| TeamCity | 8,970,603 | 15 |
We use TeamCity at work. It would be nice to be able to keep an eye on checkin, build, and test run status without having to have a browser window open.
I have seen references to a TeamCity Visual Studio plugin here and here. The second page is their Professional vs. Enterprise Edition feature comparison page. Both versions list "Plugins for MS Visual Studio, Eclipse, and JetBrains IDEs family."
Does anyone know how to download and install this add-in? It does not appear to be in the general TeamCity installation.
| On your TeamCity web UI, go to My Settings & Tools.
On the TeamCity tools right sidebar, you will see a link to download the Visual Studio Addin.
Direct link: http://your.teamcity.server/update/vsAddinInstallerv4.msi
| TeamCity | 7,591,289 | 15 |
Right now our assemblies have a version number like 2.0.831.0. As I understand it, that's major version, minor version, date and build number. If I make a change and build again on the same day it's 2.0.831.1, 2.0.831.2 etc.
My TeamCity build number format is simply 2.{0} where {0} is an auto incremented number that just goes on forever (2.195, 2.196 etc).
How do I make the TeamCity build number look exactly like the assembly version? We want to be able to associate the change log with the assembly version so anyone can say assembly version 2.0.831.2 had these changes in these files.
Extra info:
Our build step uses the "Visual Studio (sln)" option instead of "MSBuild" if that matters.
We use Subversion for source control if that matters.
Our TeamCity version is 6.5.1 (build 17834).
| I would recommend adopting the semantic versioning scheme {major}.{minor}.{patch} and appending a 4th element for the build number: {major}.{minor}.{patch}.{build}.
This is far more useful than including the build date in the versioning scheme.
TeamCity 6.5 has a build feature which can be used to patch the version in AssemblyInfo.cs during the build. See the documentation for the AssemblyInfo Patcher.
You could then define the build number format the way you would like to have it in your assembly, and use that format both for the build itself and for the patching feature.
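As a hedged illustration of that setup (the version digits are placeholders; %build.counter% and %system.build.number% are standard TeamCity parameters):
Build number format: 2.1.0.%build.counter%
AssemblyInfo Patcher - assembly version format: %system.build.number%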
| TeamCity | 7,258,718 | 15 |
TeamCity is timestamping builds at a time that is five hours ahead of the actual time. I have remoted into the server box and the time on it is correct. How can I get TeamCity to build in the correct timezone?
| Usually, TeamCity shows the time in the Server's local time.
On the My Settings & Tools page, there is a setting which enables showing times in the current user's local time. So if you've enabled this setting and your machine is 5 hours ahead of the server's time, you'll see times which are 5 hours ahead.
| TeamCity | 5,258,505 | 15 |
I'm having a small drama with the wildcard syntax in my TeamCity artifact configuration. I want to grab every file matching the pattern myproject.*.dll from any folder and place each DLL in the root of the artifacts path.
Here's what I've got at present:
**/obj/Debug/myproject.*.dll => /
This is grabbing all the DLLs but it's putting them inside the same folder structure as the source so rather than ending up with "myproject.web.dll" in the artifacts I get "Web/obj/debug/myproject.web.dll".
What am I missing here?
| I'm afraid you cannot do this in an easy way.
You should collect your *.dll files locally in a single place, and then use TeamCity's artifacts rule to copy all of them to the root directory.
Or, you can enter all paths manually (without the ** part), as shown below.
This is how it works in TC.
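For example, with the layout from the question, manually entered rules could look like this (the extra project folder name is hypothetical):
Web/obj/Debug/myproject.web.dll => /
Services/obj/Debug/myproject.services.dll => /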
| TeamCity | 5,200,317 | 15 |
We're moving from a combination of CC/CC.NET to TeamCity.
The core of our product is Windows but we have a Mac agent.
We have our VCS checkout mode set to "Automatically on server". Meaning the source will be checked out on the (Windows) server and then copied to the agents (including the Mac agent) as needed.
Our product uses the BWToolkit framework for a portion of its UI. This means that we store the framework in our source control.
The issue is that the source copy from the TeamCity server screws up the symbolic links within the framework directory. This results in our product failing to build (error: BWToolkitFramework/BWToolkitFramework.h: No such file or directory).
This is how an ls -l from inside the root framework directory looks on my machine:
total 24
lrwxr-xr-x 1 myuser admin 35 Nov 22 10:45 BWToolkitFramework -> Versions/Current/BWToolkitFramework
lrwxr-xr-x 1 myuser admin 24 Nov 22 10:45 Headers -> Versions/Current/Headers
lrwxr-xr-x 1 myuser admin 26 Nov 22 10:45 Resources -> Versions/Current/Resources
drwxr-xr-x 5 myuser admin 170 Nov 22 10:45 Versions
And this is how it looks on the build machine:
total 24
-rwxrwxr-- 1 root admin 40 Nov 19 16:21 BWToolkitFramework
-rwxrwxr-- 1 root admin 29 Nov 19 16:21 Headers
-rwxrwxr-- 1 root admin 31 Nov 19 16:21 Resources
drwxrwxr-- 4 root admin 136 Nov 19 16:21 Versions
In addition, instead of appearing as links on the build machine (little arrow overlay on the icon), they appear as files with the Unix executable icon. If you open one of these files that should be a link, you get something similar to the following (this is from the BWToolkitFramework link):
link Versions/Current/BWToolkitFramework
This appears to be an issue with the server checkout option in TeamCity because CruiseControl is running on the same machine doing a direct SVN checkout and I've had no issues.
Is there any way to fix this other than changing our TeamCity configuration to use the SVN checkout on client option?
| I filed this issue as TW-14499 in hopes of an official response/fix.
It was just marked as a duplicate of TW-5953 Symlinks are not supported for SVN server-side checkout, so this is a known issue that's been open about 2 years. If anybody else runs into it please vote for/comment on the issue in hopes that it will get fixed.
| TeamCity | 4,249,440 | 15 |
I'm just getting to grips with TeamCity and MSDeploy and have deployment to a dev environment triggered by SVN commit working nicely. The question I have is in terms of releasing to a test environment; I want to do this on demand and based on a specific revision number. What's the best way to configure a TeamCity build based on a user-defined revision?
| You can use the Run Custom Build dialog in TeamCity and customize the Changes to include parameter there, where you specify the actual SVN revision to build.
| TeamCity | 4,092,480 | 15 |
I'm using TeamCity for my CI builds, and I'd like to set up a second build for running automated UI tests on Windows XP and Windows 7 virtual machines.
I imagine the build working as follows:
Compile, run unit tests, etc.
Prepare MSI using WiX
Copy MSI to target test machines
Remotely execute MSI's
Copy test harness code to remote machine
Run tests
Build finishes
The automated UI tests are written using NUnit and would need to be run directly on the test virtual machine (they can't run remotely). It's important that if the tests fail, it appears in the TeamCity build log and the build fails. I'd rather not install VS or the TeamCity build agents on either of the test virtual machines.
It seems that most of this should be possible using psexec.exe. Are there any alternative (preferably open source) tools that I should look at?
| takes a deep breath
We were looking into something to help us out with our automated UI tests. We use ranorex to test the UI and TeamCity/Msbuild to execute the tests.
We never found any tools to help us out (I’m constantly keeping an eye out for some so will monitor this thread) but here is what we did instead.
The CI server copies the setup files and test scripts to the Testing Host Server.
The CI server then launches a custom app on the Testing Host Server providing the name of the VM to launch.
The Test Host Server then launches the VM software, using Virtual PC.exe -singlepc -pc vhdname.vhd -launch, and waits for it to shut down (after it has run its tests).
The VM grabs the setup files and scripts from the network location and executes them.
After the tests are run it then returns the results to a networked location and shuts itself down.
Control is returned to the custom app.
Control is returned to the CI server which determines from the results if it has passed or failed (and updates the UI so developers are made aware of the result).
Results are collected as artifacts in TeamCity and tagged in SVN.
I think that's everything. It's convoluted; however, it works. Hope some of that helps you.
| TeamCity | 3,573,666 | 15 |
I've set up TeamCity on a Linux (Ubuntu) box and would like to use it for some of my Python/Django projects.
The problem is that I don't really see what to do next - I tried searching for a Python-specific build agent for TeamCity but without much success.
How can I manage that?
| Ok, so here's how to get it working with proper TeamCity integration:
Presuming you have TeamCity installed with at least 1 build agent available
1) Configure your build agent to execute
manage.py test
2) Download and install this plugin for TC http://pypi.python.org/pypi/teamcity-messages
3) You'll have to provide your own custom test runner for the plugin in (2) to work. It can be a straight copy of run_tests from django.test.simple, with only one slight modification: replace the line where the test runner is called with TeamcityTestRunner. So instead of
def run_tests(test_labels, verbosity=1, interactive=True, extra_tests=[]):
...
result = unittest.TextTestRunner(verbosity=verbosity).run(suite)
use this:
def run_tests(test_labels, verbosity=1, interactive=True, extra_tests=[]):
...
result = TeamcityTestRunner().run(suite)
You'll have to place that function in a file in your solution, and specify a custom test runner using Django's TEST_RUNNER configuration property like this:
TEST_RUNNER = 'my_site.file_name_with_run_tests.run_tests'
Make sure you reference all required imports in your file_name_with_run_tests
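With the teamcity-messages package, the TeamcityTestRunner import is typically the following (verify against the version you installed):
from teamcity.unittestpy import TeamcityTestRunner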
You can test it by running
./manage.py test
from the command line and noticing that the output has changed and now contains messages like
#teamcity....
appearing in it.
| TeamCity | 1,091,465 | 15 |
I'm currently trying out VS2017 at work, due to an interest in migrating our server systems to .Net core.
I have switched a couple of minor tools projects to target .NetStandard 1.2 (recreate and move files), and everything builds locally.
However, when I request a build on our TeamCity 10.0.5 server, the build fails with the following message (Project name redacted):
E:\TeamCity\buildAgent1\work\467cb2a824afdbda\Source\{PROJECT FOLDER}\{PROJECT}.csproj error MSB4019: The imported project "C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\Sdks\Microsoft.NET.Sdk\Sdk\Sdk.props" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk.
The folder Microsoft.NET.Sdk does indeed exist on my machine, but not on the build server.
I have installed Build Tools for Visual Studio 2017, and installed .Net Core 1.1 SDK, and the build configuration uses a Visual Studio (sln) runner using VS2017.
What set of tools/installer am I missing?
UPDATE:
I've experimented with the Full Visual Studio 2017 installer (community), and the package .NET Framework 4.5 Targeting Pack added the folder in question.
Odd, since 4.5 Multi-Targeting pack is already installed on the server.
After setting the build agent to use a full VS2017 Community install, a bunch of other issues appeared, such as the System namespace not being found in the projects in question.
Adding a dotnet restore build step fixed this issue, but then other namespaces were missing (missing installs), and so on.
Building a full framework solution with .NETStandard projects seems to rely on .NET Core to build, which I don't understand, since I'm not targeting .NET Core.
| Try installing the .NET Core SDK from here:
.NET Core Downloads
and, if necessary:
set MSBuildSDKsPath=C:\Program Files\dotnet\sdk\1.0.1\Sdks
| TeamCity | 42,837,264 | 14 |
Folks,
I am trying to use TeamCity. I have completed six of the seven setup steps. Now for the last step, under the "Agent Requirements" tab, it is showing me the following message:
Agents compatibility
In this section you can see which agents are
compatible with requirements and which are not.
There are no agents registered.
Why is it showing this message? Any solution?
Or rather, how do I register an agent?
EDIT: I already installed the agent via the MS Windows installer, but it still shows me the same message as above.
Screenshot:
Finally managed to get this output :)
| Your agents tab title says - Agents (0) - which means you have no registered agents. Go to the Agents tab and see if the agent(s) that you want are authorized and connected. (The first time an agent connects to the server, it has to be authorized from the Agents page - check http://server/agents.html?tab=unauthorizedAgents)
| TeamCity | 8,054,777 | 14 |
I am trying to run my Jest unit tests in TeamCity but I always end up getting the prompt shown below.
No tests found related to files changed since last commit.
Press `a` to run all tests, or run Jest with `--watchAll`.
Watch Usage
› Press a to run all tests.
› Press f to run only failed tests.
› Press p to filter by a filename regex pattern.
› Press t to filter by a test name regex pattern.
› Press q to quit watch mode.
› Press Enter to trigger a test run.
I tried running yarn test a to run all the tests. But once the tests have completed execution, I'm still getting the same prompt. I tried yarn test a q but that doesn't work. I also tried yarn test a --forceExit and yarn test a --bail but nothing happens, I still get the prompt. How can I run all my Jest tests without getting this prompt as there will be no interaction when running through Team City? Any help would be much appreciated.
| --ci
When this option is provided, Jest will assume it is running in a CI environment. This changes the behavior when a new snapshot is encountered. Instead of the regular behavior of storing a new snapshot automatically, it will fail the test and require Jest to be run with --updateSnapshot. link
Also, you can change package.json to:
"test": "CI=true react-scripts test --env=jsdom",
which works great.
Your other option is to set CI in the command like any variable:
CI=true yarn test
| TeamCity | 51,608,998 | 14 |
I'm trying to set up a CI server for a website that I'm developing, but I can't find any info regarding how to do it with the new ASP.NET 5.
| I got you, brother. This took me a few days to figure out. This configuration is on TeamCity v10 for an ASP.NET Core 1.0 RC2/preview2 project. As a bonus, I am including the step where it pushes to Octopus Deploy. You will need to install the dotnet TeamCity plugin and the newest Octopus Deploy plugin with Push functionality. Here's an overview of the build steps:
First off, don't try to use dotnet restore to restore the packages. It won't work if you have internal nuget packages that are not compiled as .Net Core. This took forever to figure out. I would ignore trying to use dotnet restore until people have converted everything over to .Net Core or Microsoft fixes dotnet.exe to be more flexible.
Some of the stuff I read said to use the newest beta version of NuGet, 3.5. When I tried this, I would get the following error.
[14:30:09][restore] Starting NuGet.exe 3.5.0.1737 from D:\buildAgent\tools\NuGet.CommandLine.3.5.0-rc1\tools\NuGet.exe
[14:30:10][restore] Could not load type 'NuGet.CommandAttribute' from assembly 'NuGet, Version=3.5.0.1737, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.
I don't know what that means, and I don't care. Use 3.4.4 for now. Fill in the rest as appropriate.
The dotnet publish step is pretty straightforward. Make sure you provide the output directory because you want to use it in the final step. Also, be sure to specify an absolute path by using the %teamcity.build.workingDir% variable because of this bug. Otherwise it will fail to find your web.config file and not finish publishing the entire site. You'll be missing things like web.config and wwwroot!
Finally we Push to Octopus. This was very tricky for me. Note the part that says
%teamcity.build.workingDir%/published-app/**/* => OrderReviewBoard.1.0.0.zip
IF ANY PART OF THIS IS INVALID, YOUR STEP WILL FAIL WITHOUT EXPLAINING ITSELF!!! By invalid, I mean maybe you put a teamcity environment variable (like the %build.number% they show in all the examples) in that zip name that doesn't properly resolve. Or you specify a non-existent path. Or any number of things, you will see an error that says "[Octopus Deploy] Please specify a package to push". That means that one was never generated because that statement failed. I realize you want to have an auto-incrementing build number there. I'll leave it up to you to figure out how to do that.
Don't get all confused by what is running here. Octopus tries to explain it on their site, but it is hidden here. There is octo pack and octo push. The new version of octo pack is running out of sight, based on whatever statement you put in that "Package paths" box. Don't get sidetracked trying to create a nuspec package, or trying to use dotnet pack. These are dead ends for our purposes. Create a .zip file and move on with your life. Finally, notice the additional command line arguments I added. These help you out a tiny bit. They aren't required. Good luck.
| TeamCity | 30,401,915 | 14 |
Is there a way to archive or temporarily hide a build configuration in Teamcity? The documentation only mentions pausing here: https://confluence.jetbrains.com/display/TCD8/Build+Configuration
| You can archive a project, but build configurations can only be paused. However, you might be able to achieve something almost as good:
Create a project called 'Archive'.
Archive the project (Actions -> Archive project...)
Move the build configuration to the 'Archive' project (Actions -> Move configuration...)
The build configuration will be removed from the original project, but still retrievable should you want to use it again later.
| TeamCity | 29,951,444 | 14 |
In TeamCity I created a Build Configuration with two msbuild Build Steps which should build a Solution .sln file.
I defined the target as "Build"; when I run the build, both steps execute with the default configuration, so the same configuration (Debug or Release) gets built twice.
Now I went to the build step settings and found the command line arguments field; for Debug I added /p:Configuration=Debug, for Release I added /p:Configuration=Release.
Building this results in a warning from TeamCity:
MSBuild command line parameters contain "/property:" or "/p:". It is recommended to define System Property on Build Parameters instead.
Although one Debug and one Release build were in fact produced.
I googled this message and created two System Parameters: /p:Configuration=Debug and /p:Configuration=Release.
If I now change my command line for Debug to %system.DebugConfig% and for Release to %system.ReleaseConfig%, I get the same error.
Only then did I really understand that those system parameters are not applied selectively: they are passed to every build step, always.
Ok, but how do I properly define two different build steps building Debug and Release using system parameters, or without TeamCity complaining about /p found in the command line?
| There is no way to do this (that I know of, bear with me) but there are enough alternatives:
If you're willing to drop separate build steps and use separate builds instead - which normally shouldn't be a problem - use templates: create a build template in which you add all parameters that have to be configurable (like Configuration in your case) and give them suitable defaults. Then create a build configuration for each combination of properties you want, and in that configuration override the parameters.
Another option: do it yourself. I've switched between a couple of CIs in the past and it became pretty obvious that the more you rely on a CI system's features, the harder it becomes to test builds manually, the more cumbersome it gets crawling through the CI system's web interfaces or databases or config files to find the right configuration parameters, and the more crippled the transition to another CI becomes. So at one point I just gave up on it and wrote a couple of msbuild 'master' scripts which would build everything with one click of the button, so to speak (remember the Joel Test), on any machine I want, including my own, without the need for any CI at all. That was such a relief that I haven't looked back since, and the CI configuration is now kept to a minimum. Applied to your case: create an msbuild file as below and a build step in TC which invokes its Build target and has a MyTargetProjectFile system parameter pointing to the actual project file:
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Build">
  <ItemGroup>
    <Configurations Include="Debug;Release"/>
  </ItemGroup>
  <Target Name="Build">
    <MSBuild Projects="$(MyTargetProjectFile)" Targets="Build"
             Properties="Configuration=%(Configurations.Identity)"/>
  </Target>
</Project>
Yet another option: just ignore TeamCity's warning. It was never clear to me why they insist on doing it this way; it seems counterintuitive and leads to questions like this one :] Having different build steps with different properties seems like a rather basic requirement, no? I don't think we have any msbuild step not using /p, and it all works just fine.
| TeamCity | 28,863,143 | 14 |
How can I check the TeamCity version details? And which versions support project moving (moving a project and its build configurations from one server to another)?
Thanks,
| TeamCity version is displayed on every page (at the bottom part of it). Like this:
TeamCity Professional 8.X.X (build XXXXXX)
| TeamCity | 21,845,859 | 14 |
I am using TeamCity and I am new to it. I have added a build configuration to TeamCity and created one VCS root to attach to it.
However, my project has a special requirement to detect a particular file that was changed in the VCS root location and use that file in a build step. I am sure this can be done in TeamCity, but I am not able to figure out how.
Any help? Thanks,
| To get the names of the changed files, this is what I did. Thanks to Sam Jones.
I used the system.teamcity.build.changedFiles.file variable as follows.
Add a command line build step
Select Run as Custom Script
Add the script copy "%system.teamcity.build.changedFiles.file%" changelog.txt in the script box.
You will get the changes in the changelog.txt file in the format specified at this link.
NOTE: teamcity.build.changedFiles.file does not work. You need to use system.teamcity.build.changedFiles.file
| TeamCity | 20,332,397 | 14 |
On the TeamCity server we have installed VS 2012.
I have created a build configuration in TeamCity that builds and deploys the solution.
I have added an MSTest 2012 configuration as well, but don't know how to tell it which project is the VS 2012 test project so that it can run those tests.
Thanks
| You need to specify the assembly file (dll) of your tests, not the project file (csproj).
Here's an example: http://shrani.si/f/p/PH/2tO4Zo5s/tmpa4cc.jpg
So let's say your Testing assembly is called Company.Tests.dll and it is located in Company.Tests/bin/Debug/Company.Tests.dll
Basically, in "List assembly files:" you must put the path (you can use wildcards).
For example:
**\bin\**\*.Tests.dll
This will locate all assemblies with .Tests.dll suffix.
Regards
| TeamCity | 16,408,166 | 14 |
I have a build step in my build configuration thats runner type "Command Line", running a custom script.
The script is executing Robocopy:
robocopy "%teamcity.build.workingDir%\Code" "\\target\d$\Web\Target Sites" /E /NP /LOG:robocopy.log
if ERRORLEVEL GEQ 4 (
"D:\blat.exe" "robocopy.log" -to [email protected] -f [email protected] -subject "Error during robocopy on TEAMCITY" -server mail.me.com
)
exit /B 0
The Robocopy command is working fine but I keep getting an email and in the build log I keep seeing:
GEQ was unexpected at this time.
The ERRORLEVEL check isn't working for some reason?
I tried IF %ERRORLEVEL% GEQ but this breaks my build as TeamCity expects me to pass a build parameter.
Does this only work as an "Executable with parameters"?
| Neil, you might try escaping the percent sign.
Try IF %%ERRORLEVEL%% GEQ ...
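Applied to the script from the question, the fixed step would look roughly like this (a sketch: the percent signs around ERRORLEVEL are doubled so TeamCity does not try to resolve it as one of its own parameters, while %teamcity.build.workingDir% stays single so TeamCity does resolve it):
robocopy "%teamcity.build.workingDir%\Code" "\\target\d$\Web\Target Sites" /E /NP /LOG:robocopy.log
if %%ERRORLEVEL%% GEQ 4 (
    "D:\blat.exe" "robocopy.log" -to [email protected] -f [email protected] -subject "Error during robocopy on TEAMCITY" -server mail.me.com
)
exit /B 0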
| TeamCity | 14,755,561 | 14 |
I'm using TeamCity 5.1.5
I'd like to customize the email notification template on a per project basis.
Project A : use a custom email notification template to include additional info about the build and test results
Project B,C,D : use the default email notification template
I've perused the TeamCity documentation and looked into the /config/_notifications/email directory and can't seem to find anything that indicates email templates can be configured on a per-project basis. Any help is appreciated.
Thanks!
| As far as I know, the template files cannot be configured on a per-project basis.
However, using the FreeMarker expression syntax and properties provided by TeamCity, you can update the e-mail template to conditionally provide certain information for a given project.
For example:
<#if project.name = "Project A">
Build Results: Passable
Test Results: Smelly
</#if>
| TeamCity | 11,726,940 | 14 |
Is it possible to get the raw build log from a TeamCity build? I've written a custom test runner that gets run as a commandline build step and reports test results back by printing ##teamcity... lines to stdout. The build log from TeamCity seems to be stripping these out when it recognises them. I'd like to see the raw output to help debug my test runner.
Update:
Apparently this simply isn't possible. neverov (I assume Dimitry Neverov of JetBrains?) has explained this and given a workaround so I've accepted his answer.
| You can see the raw output from the build agent by looking in the agent's /logs directory. This shows the unparsed data that is hidden from the build output shown in the TeamCity console.
For example c:\TeamCity-Agent\logs\teamcity-build.log.
| TeamCity | 11,354,203 | 14 |
Attempting to build a C# project which has numerous references to assemblies in NuGet packages fails in TeamCity but works fine in Visual Studio.
Found in the log:
For SearchPath "{HintPathFromItem}".
[13:48:15][ResolveAssemblyReference]
Considered "..\packages\AspNetMvc.4.0.20126.16343\lib\net40\System.Web.Mvc.dll", but it didn't exist.
The reference in the project file is;
<Reference Include="System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL">
<Private>True</Private>
<HintPath>..\packages\AspNetMvc.4.0.20126.16343\lib\net40\System.Web.Mvc.dll</HintPath>
</Reference>
Any ideas? It seems like it's not starting from the correct directory so can't resolve "../packages" which exists one level above the .csproj file.
| I know this has been answered, but maybe someone else has had the same problem I did.
My hint paths in my project file were incorrectly pointing to packages\ and changing them to ..\packages\ fixed it for me.
So changing it from this:
<Reference Include="Newtonsoft.Json">
<HintPath>packages\Newtonsoft.Json.5.0.5\lib\net40\Newtonsoft.Json.dll</HintPath>
</Reference>
To this:
<Reference Include="Newtonsoft.Json">
<HintPath>..\packages\Newtonsoft.Json.5.0.5\lib\net40\Newtonsoft.Json.dll</HintPath>
</Reference>
Fixed it.
| TeamCity | 9,734,532 | 14 |
I am using the TeamCity Visual Studio runner. I want to add a setting that is not accessible from Visual Studio.
/Property:FileAlignment=4096
I typed that directly into the build step's "Command line parameters" field. The build log shows the error:
MSBuild command line parameters contains "/property:" or "/p:" parameters. Please use Build Parameters instead.
I don't understand how to provide this to MSBuild from TeamCity and get rid of this warning!
1. Which kind of parameter should I use?
There are 3 kinds:
Configuration parameters
System properties
Environment variables.
I don't want an environment or system variable because I don't want this build to depend on anything external. I am going to try Config right now, but then I'm not sure I'm filling it in right.
2. How can I tell this parameter is actually getting used?
The build log, which only seems to have navigable/foldable XML-like levels, did not show the build parameters.
| You should use "System properties". Don't worry about the name - that's just what TeamCity calls them; they are regular properties. You can add them in "Edit Configuration Settings > 7. Build Parameters".
For example, you can add the system property as follows:
Name: system.FileAlignment
Type: System property (system.)
Value: 4096
Note that TeamCity will insist on the "system." prefix. It doesn't matter because the MSBuild script will still see it as $(FileAlignment).
| TeamCity | 8,810,860 | 14 |
I'm trying to do a password reset. I'm following the instructions here. I've tried shutting down the two services (TeamCity Build Agent Service and TeamCity Web Server) or some combination of the two, but I keep getting "User with specified username does not exist".
Is there something else I need to stop or shutdown?
| Shut down TeamCity via the command in its bin directory:
D:\TeamCity\bin\shutdown.bat
| TeamCity | 6,731,600 | 14 |
Well, we are facing a strange problem with unit tests run by JetBrains TeamCity on our main project, where tests from a few library projects are failing regularly. Apparently, the runner is not reading the config file (coming from app.config and nicely stored in project -> bin -> debug -> projectName.dll.config).
Hints or tips on what could be the real issue would be highly appreciated.
| I've got the same problem and wasted a couple of hours figuring out what the problem was.
In our case, the NUnit plugin was configured to run the tests from:
**\*Tests.dll
Though this sounds OK, it turned out that this pattern matches not only the MyTests.dll in the bin\Debug folder but also obj\Debug\MyTests.dll. The obj folder is used internally for the compilation and does not contain the config file.
Finally the solution was to change the plugin configuration to
**\bin\Debug\*Tests.dll
Actually we use a system variable for the build configuration, so we don't have "Debug" hard-coded. Using bin* might also be dangerous when the workspace is also used for Debug/Release builds and you don't have a full cleanup specified.
You might wonder why I did not notice the test count mismatch (it was actually doubled, because the tests ran once from bin and once from obj), but this is typical: while everything is green, you don't care about the count. When we introduced the first test depending on the config, we had only one failure (because the one from bin was passing), so the duplication did not stand out.
| TeamCity | 5,725,749 | 14 |
We have a TeamCity server which produces nightly deployable builds. We want our beta testers to have access to these nightly builds.
What are the best practices for doing this? The TeamCity server is not public - it is in our office - so I assume the best approach would be pushing artifacts via FTP or something like that.
Also, I have no clue how to trigger a script when an artifact is created successfully. Does TeamCity provide a way to do that?
| I don't know of a way to trigger a script, but I wouldn't worry about that. You can retrieve artifacts via a URL. Depending on what makes sense for your project, you could have a script set up on a scheduler (cron or Windows Scheduler) that pulls the artifact and sends it to the FTP site for the beta testers. You can configure it to pull only the latest successful artifact. If you set up the naming right and the build fails, the beta testers won't notice, because the new build number just won't be there; no bad builds would be pushed to them.
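As a hedged illustration of what such a scheduled script could pull: TeamCity serves the latest successful build's artifacts under a stable .lastSuccessful URL (the host, build type ID and artifact name below are placeholders, and guestAuth requires guest access to be enabled - otherwise use httpAuth with credentials):
http://teamcity.example.com/guestAuth/repository/download/MyProject_Nightly/.lastSuccessful/installer.zip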
| TeamCity | 1,106,815 | 14 |
I'm trying to get the latest successful build.
This request returns all of the successful builds for a specified buildType (as BUILDTYPE below).
/httpAuth/app/rest/builds/?locator=buildType:BUILDTYPE,status:SUCCESS
Is there a way to filter further, to get just the single latest successful build of the corresponding buildType?
TeamCity Version: Professional 9.1.3 (build 37176)
| Adding a count of 1 should work:
/httpAuth/app/rest/builds/?locator=buildType:BUILDTYPE,status:success,count:1
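For example, from a script this can be fetched with curl (the host and credentials are placeholders):
curl --user myuser:mypassword "http://teamcity.example.com/httpAuth/app/rest/builds/?locator=buildType:BUILDTYPE,status:success,count:1"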
| TeamCity | 36,270,266 | 13 |
I am getting the following error while building the project in TeamCity.
The same project builds fine on my local machine, which has VS 2015 and F# 4.0.
My Project Configuration is as below.
<Project ToolsVersion="14.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" />
<Choose>
<When Condition="'$(VisualStudioVersion)' == '11.0'">
<PropertyGroup Condition="Exists('$(MSBuildExtensionsPath32)\..\Microsoft SDKs\F#\4.0\Framework\v4.0\Microsoft.FSharp.Targets')">
<FSharpTargetsPath>$(MSBuildExtensionsPath32)\..\Microsoft SDKs\F#\4.0\Framework\v4.0\Microsoft.FSharp.Targets</FSharpTargetsPath>
</PropertyGroup>
</When>
<Otherwise>
<PropertyGroup Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\FSharp\Microsoft.FSharp.Targets')">
<FSharpTargetsPath>$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\FSharp\Microsoft.FSharp.Targets</FSharpTargetsPath>
</PropertyGroup>
</Otherwise>
</Choose>
<Import Project="$(FSharpTargetsPath)" />
This is a console application.
| I had a similar problem a while back, because I was running the local machine with Administrator privileges, but the Visual Studio installer had set environment variables at the user level and not the system level which Administrator uses. So when compiling as Administrator, the FSharpTargetsPath was not being correctly built from environment variables like VisualStudioVersion.
Have a look on your local machine to see what environment variable values are set for VisualStudioVersion at the level you are successfully using (System or User), as well as other variables, and then check that these are set at the corresponding level on the TeamCity machine.
Perhaps you are running as a user on your local machine and as System on the TeamCity machine.
See details here: https://stackoverflow.com/a/21420306/152739
I hope this makes sense.
| TeamCity | 38,343,130 | 13 |
I am trying to create a NuGet package for a .csproj file but want the package name to be different from the .csproj file name (which is what it uses by default), and I don't want to specify a .nuspec file. Is there a way of doing this? I can only see a version override option among the command line options, and no package name override option.
I am doing this in TeamCity but that is beside the point. I am thinking I need to pass additional parameters to the NuGet pack command?
Thanks,
| The NuGet command line doesn't provide any option for changing the name directly.
http://docs.nuget.org/docs/reference/command-line-reference#Pack_Command
If you want the project and NuGet package names to differ, you will have to prepare and edit a custom nuspec file. You may also do it manually after creating the package by using e.g. NuGet Package Explorer.
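A minimal nuspec sketch with an overridden package id (all names and values here are placeholders):
<?xml version="1.0"?>
<package>
  <metadata>
    <id>My.Custom.PackageName</id>
    <version>1.0.0</version>
    <authors>MyTeam</authors>
    <description>A package whose name differs from the csproj.</description>
  </metadata>
  <files>
    <file src="bin\Release\MyProject.dll" target="lib\net45" />
  </files>
</package>
Then point nuget pack at the nuspec instead of the csproj: nuget pack MyProject.nuspec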
| TeamCity | 26,252,422 | 13 |
I am currently setting up a TeamCity build server that will pull source code from our git repositories, which are hosted on Bitbucket. I am doing this for repositories that are set up as part of a team on Bitbucket (not my personal account).
What I am running into is that the URL paths seem to be custom for each user. For example, my paths look like:
https://[email protected]/TeamName/RepoName.git
If I were to leave the company, this would be a nightmare to update for 40+ builds.
I have considered creating a service account on bitbucket for the build server to use, but this will take up one of our users (we only have a 10-user license).
Is creating a separate account the recommended approach, or are there better options?
| You should use a deployment key. It gives read-only access to both private and public repositories. Check the documentation at https://confluence.atlassian.com/display/BITBUCKET/Use+deployment+keys
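With a deployment key added, the VCS root can then use the team-level SSH URL instead of a per-user HTTPS one - reusing the placeholder names from the question, something like:
git@bitbucket.org:TeamName/RepoName.git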
| TeamCity | 19,730,454 | 13 |
I am getting the following error on my TeamCity project:
Error collecting changes for VCS repository 'MySvnRepository'
Unable to get SVN log entries for: https://myserver/svn/trunk; range:
RR[99_2013/08/27 13:35:20 +0100 => 6_2013/08/27 14:40:13
+0100]@d2fecd1e-4276-d847-874c-cb6b9eafeb43; revisions: 99..6
I have tested the VCS connection through the "Test Connection" button in the TeamCity admin screen and it connects fine.
Looking at the error message it looks as though it is trying to retrieve entries 6 - 99, but there are only 6 log entries in the repository as it is newly created. I have checked that the build counter has been reset.
Is there something obvious I am missing here?
| We had faced a similar issue with TeamCity. It seems the problem was that somehow it was trying to fetch the SVN logs from two configurations, because we had used the same VCS root for 2 different branches at different times.
To solve this, delete and recreate the build configuration as well as the VCS root, and it should work.
| TeamCity | 18,467,378 | 13 |
I recently re-configured our TeamCity build configuration to take advantage of the Branch features to apply the same build configuration to multiple branches in the same repository.
Now, I'm trying to setup an automated build script that can pull the latest artifact from TeamCity, but only for a specific branch. I was able to get it working fine on the default branch in the original configuration, using the TeamCity REST API, but can't figure out how to format the URL to pull the artifact for a specific branch.
I've looked at the following resources, but to no avail:
http://confluence.jetbrains.com/display/TW/REST+API+Plugin
http://confluence.jetbrains.com/display/TCD7/Patterns+For+Accessing+Build+Artifacts
Thoughts?
| I just came across this article.
I plan on giving this a try over the next couple days, and if it works, I will give a brief summary of the result for anyone else who has trouble with this.
EDIT:
Sorry for the delay, just realized that I never came back to report how we resolved this issue.
We ended up upgrading TeamCity (which we should have done anyway, so it wasn't a big deal), and once that was finished, it worked great without much effort. We're now running TeamCity v8.1.5, and here's the URL pattern we're using to pull our artifacts:
http://<build-server>/httpAuth/app/rest/builds/buildType:<build-type>,branch:<branch>/artifacts/content/<artifact-path>
NOTE: We're using the httpAuth API in order to authorize access to our build artifacts, so we also had to create a new TeamCity user for our deployments.
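As a concrete (entirely hypothetical) illustration, a request for the latest artifact of a develop branch would look like:
curl --user deploy:secret "http://teamcity.example.com/httpAuth/app/rest/builds/buildType:MyProject_Build,branch:develop/artifacts/content/MyApp.zip"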
| TeamCity | 17,625,547 | 13 |
I am trying to run dotCover with my NUnit tests in TeamCity 8, as a build step. But no matter what I try, I always get the same error in the log file:
Step 4/4: Coverage (NUnit) (1s)
[Step 4/4] Starting: C:\TeamCity\buildAgent\plugins\dotnetPlugin\bin\JetBrains.BuildServer.NUnitLauncher.exe #TeamCityImplicit
[Step 4/4] in directory: C:\TeamCity\buildAgent\work\6aee0f0d2626793d
[Step 4/4] ##teamcity[importData type='dotNetCoverage' tool='dotcover' file='C:\TeamCity\buildAgent\temp\buildTmp\coverage_dotcover3226256377023598081.data']
[Step 4/4] Importing data from 'C:\TeamCity\buildAgent\temp\buildTmp\coverage_dotcover3226256377023598081.data' with 'dotNetCoverage' processor
[Step 4/4] Rejected coverage report file: C:\TeamCity\buildAgent\temp\buildTmp\coverage_dotcover3226256377023598081.data size: 0. File is empty or does not exist
[Step 4/4] Process exited with code -2146232576
[Step 4/4] Step Coverage (NUnit) failed
I have tried both the dotCover bundled with TeamCity and a separately installed one, but both fail with the same error.
My configuration:
If I choose no coverage tool, the tests run fine on their own. But with dotCover selected I always get the same error.
Any help here would be much appreciated.
| Check out: http://confluence.jetbrains.com/pages/viewpage.action?pageId=49448495
In the case of internal TeamCity DotCover, you have to add the "ALL APPLICATION PACKAGES" read access rights to the TeamCity installation folder. If using an external DotCover, add the rights there.
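On Windows this can be done with icacls - a sketch assuming a default C:\TeamCity install path (run from an elevated prompt):
icacls "C:\TeamCity" /grant "ALL APPLICATION PACKAGES":(OI)(CI)RX /T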
This corrected the issue for me, for now.
| TeamCity | 16,320,321 | 13 |
I have a TeamCity Build Configuration that includes the following to publish artifacts:
Source\Builder\bin\Release\*.dll=>release
This works fine; however, I want to exclude one DLL (there are quite a few) and have read that you can use + and - operators to do this. Something along the lines of:
+: Source\Builder\bin\Release\*.dll=>release
-: Source\Builder\bin\Release\Builder.*
As soon as I add these in, no artifacts are published and I get the following error in the build log (looks like it is counting the + as part of the path):
[Publishing artifacts] Collecting files to publish [+:Source\Builder\bin\Release\*.dll=>release]
[Publishing artifacts] Artifacts path +:Source/Builder/bin/Release/*.dll not found
I am using version 7.1.1. Does anyone have any ideas? (I am not sure whether these operators are even valid.) I have seen a solution with MSBuild but am surprised this functionality is not available.
Thanks in advance.
| I don't believe you can.
However, if you are using the artifacts in another build configuration as an artifact dependency, you can exclude a particular file there.
When you set up the dependencies, you can specify a negative operator like this:
+:release/**=>Dependencies/SomeProject
-:release/SomeBinary.dll
It is a horrible hack, but one way you could get it to work would be to set up a new build configuration which gets the dependencies as an artifact dependency, excluding the one binary, and then publishes its own artifacts.
As in, create a new build configuration and publish:
Dependencies/SomeProject=>release
Then reference the artifacts from this build configuration instead of the other one.
| TeamCity | 12,780,516 | 13 |
I've been looking at TFS, TeamCity, Jenkins and Bamboo and to be honest, none of them were convincing. I want
Good reporting
Good Git support
Gated/delayed check-in/commit
Integration with Visual Studio and/or Atlassian products
The solution shouldn't require regular developers to use command line or terminal (Git Extensions FTW)
TFS is a mess to configure and work with in general, it doesn't support Git obviously, but it has gated check-ins (although it seems to unnecessarily check out the whole project every time and so it is slow?). Also really lacking in the reporting department.
TeamCity has really bad gated check-in support when it comes to Git, otherwise it's my favorite. Supports a lot of stuff out of the box.
The reporting in Jenkins is bad (historical trends and so on), it seems to have more bugs than the others, and the plugin quality can be scary. On the other hand it's free and versatile. How is the support for Git and gated check-ins?
Bamboo obviously has great Atlassian integration, but no support for gated check-ins. :(
Any advice?
@arex1337 All the answers provided here have their merits. Experience tells us no project/organization is ever happy with a single vendor for all their needs. What you will probably end up having is a base CI tool with a mix of plugins/additions from other vendors, each with their own USPs.
As an example:
Jenkins as a base tool. @Aura and @sti have already mentioned all the good things; while we can agree the plugin development is a little uncontrolled, there are still a lot of them out there which provide excellent quality. The main thing being the community is active, really agile (they have 1 release per week normally) and any problems you might have are easily solved. Additional benefit being easy plugin development so if if push comes to shove, you can write your own.
@Mark O'Connor is bang on with the SONAR suggestion - one of the best you can get in terms of reporting, with cool reports. And @Thomas has cleared the air about gated commits.
In favor of Jenkins:
Good reporting - You got it with SONAR+Jenkins
Good Git support - Jenkins gives that
Gated/delayed check-in/commit - Jenkins Gerrit plugin
Integration with Visual Studio and/or Atlassian products - The Jenkins wiki itself runs on Atlassian. Here is a list of some integrations already there
Clover, Crowd, Confluence, JIRA: Plugin1 Plugin2 Plugin3
Shouldn't require regular developers to use CLI - Jenkins doesn't
Now you may replace Jenkins with Bamboo in the above example and might come close to what you want. But as of now it seems your best bet is Jenkins.
TFS and TeamCity: they are not yet in the league of Jenkins and Bamboo.
| TeamCity | 12,155,401 | 13 |
Here's my configuration:
On the build log, I only see the output of the first two lines, and then "Process exited with code 0" as the last output of this build step.
I tried opening a terminal in the build server in the SYSTEM account (using PsTools), since Team City is configured to run under said account. Then, I created a Test.ps1 file with the same content and ran a command just like Team City's:
[Step 1/4] Starting: C:\Windows\system32\cmd.exe /c C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NonInteractive -Command - <C:\TeamCity\buildAgent\temp\buildTmp\powershell5129275380148486045.ps1 && exit /b %ERRORLEVEL%
(except for the path to the .ps1 file and the initial cmd.exe part, of course). I saw the output of the first two lines, and then the terminal disappeared all of a sudden!
Where did I mess up? I'm new to Powershell, by the way.
| The stdin command option of PowerShell has some weirdness around multiline commands like that.
Your script in the following form would work:
write-host "test"
write-host "test2"
if("1" -eq "1"){write-host "test3 in if"} else {write-host "test4 in else"}
The ideal way would be to use the Script : File option in TeamCity, which will run the script you specify using the -File parameter to PowerShell.
If you don't want to keep a script file in VCS, then in the current setup change Script Execution Mode to Execute .ps1 file with "-File" argument.
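For reference, a sketch of what running a script file with -File looks like from the command line; the path is a hypothetical example:
powershell.exe -NonInteractive -ExecutionPolicy Bypass -File C:\BuildScripts\Test.ps1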
| TeamCity | 9,165,658 | 13 |
I have, out of curiosity I guess, clicked the little grey 'x' to the right of a project on my TeamCity Project Dashboard. It turns out that it hides the project from the dashboard.
And now I cannot find a way to 'unhide' it. When I try to use the Configure Visible Projects menu, it says it's visible. Also, if I hover over the downward arrow next to the Project link in the menu, I can navigate to the project.
But I really want to see it on the dashboard again.
Thanks.
| For those (like me) who struggled to find the configuration alluded to above:
Go to the main page (with the list of projects).
To the right of the project with the hidden build config there is a drop down that says "1 hidden". Use that to unhide it.
| TeamCity | 7,951,168 | 13 |
I am attempting to setup automated tests for our applications using a virtual machine environment.
What I would like to have is something like the following scenario:
Build server is automatically triggered to start an automated test for the application
A "build" script is then run which consist of:
Copy application files and a test script to a location accessible by the VM
Start the VM
In the VM, a special application looks in the shared folder and start the test script
The tests script do its job, results are output to shared folder
Test script ends
The special application then delete the test script
The special application somehow have the VM manager close the VM and revert to the previous snapshot
When the VM has exited, process the result and send to build server.
I am using TeamCity if that matters.
For virtual machines, we use VirtualBox but we are open to any other if needed.
Is there any applications/suite that would manage this scenario?
If there are none then I would then code it myself, should be easy but the only part I am not sure is the handling of the virtual machine.
What I need to be able to do is to have the VM close itself after the test and revert to a previous snapshot since I want it to be in a known state for the next test.
Any pointers?
| I have a similar setup running and I chose to use Vagrant as its the same thing our developers where using for normalizing the development environment.
The initial state of the virtual machine was scripted using Puppet, but we didn't run the deployment scripts from scratch on each test, only once a day.
You could use puppet/chef for everything, but for all other operations on the VM, we would use Fabric scripts, as they were used for the real deployment too, and somehow fitted how we worked better. In sum the script would look something like the following:
vagrant up # fire up the vm, and run the puppet provisioning tool
fab vm run_test # run tests on vm
fab local process_result # process results on local shared folder
vagrant destroy # destroy the vm
The advantage is that your developers can also use vagrant to mimic your production environment without having to take care of that themselves (i.e. changes to your database settings get synced to all your developers vm's wherever they are) and the same scripts can be used in production too.
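If you drive VirtualBox directly instead of going through Vagrant, the snapshot/revert cycle can be scripted with VBoxManage; a rough sketch, where the VM and snapshot names are assumptions:
VBoxManage snapshot "test-vm" take "clean-state"     # once, after provisioning
VBoxManage startvm "test-vm" --type headless         # boot headless for the test run
# ... run the test script via the shared folder ...
VBoxManage controlvm "test-vm" poweroff              # shut the VM down
VBoxManage snapshot "test-vm" restore "clean-state"  # back to the known state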
| TeamCity | 6,359,142 | 13 |
I suspect there's probably an easy answer to this I'm just not seeing, but whenever I run a TeamCity build with either MSBuild or the Visual Studio solution runner against a .csproj and target "Package", the build artifacts always include the "csproj.teamcity.patch" string after the project name:
Running the same process via command line doesn't include these. The problem it's causing me is that my build script has a target which looks for "Web.deploy.cmd" after the package task runs and obviously it's not finding it when files are named this way. I'm reticent to change the command in the build script to include the TeamCity string as it will play havoc with running it from outside the build servers.
Can anyone tell me why this is happening and how you'd work around it when you need to be able to refer to the artifacts by name?
| You may set 'teamcity.msbuild.generateWrappingScript' configuration parameter with value 'false' to make TeamCity avoid generating wrapping script.
TeamCity MSBuild/Solution build runners used to generate wrapping scripts to add TeamCity-provided tasks.
| TeamCity | 4,211,683 | 13 |
I get this error when running my Moq tests through Teamcity 5
Test(s) failed.
System.IO.FileNotFoundException :
Could not load file or assembly 'Moq,
Version=3.1.416.3, Culture=neutral,
PublicKeyToken=69f491c39445e920' or
one of its dependencies. The system
cannot find the file specified. at
MyCode.Tests.SomeHandlerTests.Setup()
The tests run fine on my local; they just fail on the build server.
I made sure the assemblies are in the Bin (looking at them now over RDP just be double sure).
| So the issue was with the test DLL search path under the NUnit settings
It was:
..\Tests\**\*Test*.dll
But is now:
..\Tests\*\bin\Debug\*Test*.dll
And things work nicely
UPDATE
http://confluence.jetbrains.com/display/TCD8/NUnit
You can use this pattern
**\*.dll
as long as you add this pattern in the "Do not run tests from" field
**\obj\**\*.dll
| TeamCity | 3,665,674 | 13 |
I have just installed GitLab on a fresh Ubuntu 14.04 64 bit server. I did so using the Omnibus package as indicated in the download page. There were no error messages during the install and all the remarks from the script were displayed in green.
When I access the server through port 80 I get the following:
Following the Trouble Shooting Guide I tried to query the status, but the result is also an error:
sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production
sudo: bundle: command not found
I tried to access the logs but the unicorn.stderr.log file is nowhere to be found in the system.
There is a similar question with the same error on Ubuntu 12.04, to which the solution is to increase the unicorn timeout. I have tried to do so but the error message remains.
| There is a lag of some 5 minutes from the moment gitlab is started/restarted to the point when it is actually able to process requests. Here is an example from the log:
2015-01-08_09:00:57.37719 [13326] 08 Jan 10:00:57.377 * The server is now ready to accept connections on port 0
2015-01-08_09:00:57.37722 [13326] 08 Jan 10:00:57.377 * The server is now ready to accept connections at /var/opt/gitlab/redis/redis.socket
[...]
==> /var/log/gitlab/unicorn/unicorn_stderr.log <==
I, [2015-01-08T10:04:48.676879 #13351] INFO -- : listening on addr=127.0.0.1:8080 fd=11
I, [2015-01-08T10:04:48.677663 #13351] INFO -- : unlinking existing socket=/var/opt/gitlab/gitlab-rails/sockets/gitlab.socket
I, [2015-01-08T10:04:48.690283 #13351] INFO -- : listening on addr=/var/opt/gitlab/gitlab-rails/sockets/gitlab.socket fd=12
I, [2015-01-08T10:04:48.716769 #13413] INFO -- : worker=0 spawned pid=13413
I, [2015-01-08T10:04:48.735878 #13351] INFO -- : master process ready
I, [2015-01-08T10:04:48.846635 #13416] INFO -- : worker=1 spawned pid=13416
I, [2015-01-08T10:04:48.837438 #13413] INFO -- : worker=0 ready
I, [2015-01-08T10:04:48.863110 #13416] INFO -- : worker=1 ready
Before Unicorn reports that it is up and running on port 8080, the "GitLab is not responding" message will be displayed. So all one has to do is wait.
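With an Omnibus install you can watch this happen instead of guessing, for example:
sudo gitlab-ctl status          # shows the run state of each bundled service
sudo gitlab-ctl tail unicorn    # follow the log until "master process ready" appears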
| GitLab | 27,816,046 | 18 |
We have a GitLab CI pipeline which builds a new Docker image based on an external ETCD snapshot of a Hashicorp Vault secrets back-end. The image is for disaster recovery so we don't have any interest in keeping old versions in the registry.
Is there any way of purging GitLab registry container images which are older than a certain date. Or to keep a maximum number of recent images and delete the rest?
Thanks
S
| run this command:
sudo gitlab-ctl registry-garbage-collect -m
| GitLab | 55,361,101 | 18 |
I am trying to clone a private git repository(gitLab) into a kubernetes pod, using SSH keys for authentication. I have stored my keys in a secret. Here is the yaml file for the job that does the desired task.
Here's the same question, but it doesn't give the exact solution:
Clone a secure git repo in Kubernetes pod
Logs of the init container after execution:
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
v3.7.1-66-gfc22ab4fd3 [http://dl-cdn.alpinelinux.org/alpine/v3.7/main]
v3.7.1-55-g7d5f104fa7 [http://dl-cdn.alpinelinux.org/alpine/v3.7/community]
OK: 9064 distinct packages available
OK: 23 MiB in 23 packages
Cloning into '/tmp'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
The yaml file which works perfectly for public repo:
apiVersion: batch/v1
kind: Job
metadata:
name: nest-build-kaniko
labels:
app: nest-kaniko-example
spec:
template:
spec:
containers:
-
image: 'gcr.io/kaniko-project/executor:latest'
name: kaniko
args: ["--dockerfile=/workspace/Dockerfile",
"--context=/workspace/",
"--destination=aws.dest.cred"]
volumeMounts:
-
mountPath: /workspace
name: source
-
name: aws-secret
mountPath: /root/.aws/
-
name: docker-config
mountPath: /kaniko/.docker/
initContainers:
-
name: download
image: alpine:3.7
command: ["/bin/sh","-c"]
args: ['apk add --no-cache git && git clone https://github.com/username/repo.git /tmp/']
volumeMounts:
-
mountPath: /tmp
name: source
restartPolicy: Never
volumes:
-
emptyDir: {}
name: source
-
name: aws-secret
secret:
secretName: aws-secret
-
name: docker-config
configMap:
name: docker-config
The yaml file after using git-sync for cloning private repository:
apiVersion: batch/v1
kind: Job
metadata:
name: nest-build-kaniko
labels:
app: nest-kaniko-example
spec:
template:
spec:
containers:
-
image: 'gcr.io/kaniko-project/executor:latest'
name: kaniko
args: ["--dockerfile=/workspace/Dockerfile",
"--context=/workspace/",
"--destination=aws.dest.cred"]
volumeMounts:
-
mountPath: /workspace
name: source
-
name: aws-secret
mountPath: /root/.aws/
-
name: docker-config
mountPath: /kaniko/.docker/
initContainers:
-
name: git-sync
image: gcr.io/google_containers/git-sync-amd64:v2.0.4
volumeMounts:
-
mountPath: /git/tmp
name: source
-
name: git-secret
mountPath: "/etc/git-secret"
env:
- name: GIT_SYNC_REPO
value: "[email protected]:username/repo.git"
- name: GIT_SYNC_SSH
value: "true"
- name: GIT_SYNC_DEST
value: "/tmp"
- name: GIT_SYNC_ONE_TIME
value: "true"
securityContext:
runAsUser: 0
restartPolicy: Never
volumes:
-
emptyDir: {}
name: source
-
name: aws-secret
secret:
secretName: aws-secret
-
name: git-secret
secret:
secretName: git-creds
defaultMode: 256
-
name: docker-config
configMap:
name: docker-config
| You can use git-sync
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: git-sync-test
spec:
selector:
matchLabels:
app: git-sync-test
serviceName: "git-sync-test"
replicas: 1
template:
metadata:
labels:
app: git-sync-test
spec:
containers:
- name: git-sync-test
image: <your-main-image>
volumeMounts:
- name: service
mountPath: /var/magic
initContainers:
- name: git-sync
image: k8s.gcr.io/git-sync-amd64:v2.0.6
imagePullPolicy: Always
volumeMounts:
- name: service
mountPath: /magic
- name: git-secret
mountPath: /etc/git-secret
env:
- name: GIT_SYNC_REPO
value: <repo-path-you-want-to-clone>
- name: GIT_SYNC_BRANCH
value: <repo-branch>
- name: GIT_SYNC_ROOT
value: /magic
- name: GIT_SYNC_DEST
value: <path-where-you-want-to-clone>
- name: GIT_SYNC_PERMISSIONS
value: "0777"
- name: GIT_SYNC_ONE_TIME
value: "true"
- name: GIT_SYNC_SSH
value: "true"
securityContext:
runAsUser: 0
volumes:
- name: service
emptyDir: {}
- name: git-secret
secret:
defaultMode: 256
secretName: git-creds # your-ssh-key
For more details check this link.
| GitLab | 53,683,594 | 18 |
I had a problem with go-modbus in an issue in Github.
Author suggested me to use:
$ go get github.com/goburrow/modbus
instead of
$ git clone https://github.com/goburrow/modbus.git
What is the difference between that two commands?
| The git clone command will clone a repo into a newly created directory, while go get downloads and installs the packages named by the import paths, along with their dependencies.
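A quick side-by-side sketch of the two commands:
# clones the repository into ./modbus in the current directory
git clone https://github.com/goburrow/modbus.git
# fetches the package (and its dependencies) and builds/installs it under GOPATH
go get github.com/goburrow/modbus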
| GitLab | 51,121,797 | 18 |
I keep getting a "Host Key Verification Failed" error when trying to push changes to a git controlled folder/project to Gitlab. For whatever reason, it works fine using Visual Studio for Mac, and I can login to my Gitlab account just fine via web browser.
| Resolved by deleting any/all Known_hosts files in ~/.ssh/ and then executing ssh [email protected] in Terminal and answering "yes" (which re-adds [email protected] to known_hosts after re-creating a new known_hosts file).
I did some messing around in known_hosts which probably caused the problem.
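A slightly more surgical sketch that removes only the gitlab.com entries instead of deleting the whole file:
ssh-keygen -R gitlab.com    # drop the stale gitlab.com entries from known_hosts
ssh -T [email protected]         # answer "yes" to record the current host key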
| GitLab | 45,538,408 | 18 |
I was committing my git process and thought that it would be okay if I ignore comments so I used this code
git commit filename
Bash was strange and thus i closed the console
now when I use proper command
git commit -m"THIRD COMMIT" filename
It give the following response:
Another git process seems to be running in this repository, e.g.
an editor opened by 'git commit'. Please make sure all processes
are terminated then try again. If it still fails, a git process
may have crashed in this repository earlier:
what should I do?
| I met this problem recently too.
rm -f ./.git/index.lock
Try this command in your Git Bash, and it should solve your problem.
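As a precaution, confirm that no git process is actually still running before removing the lock; a minimal sketch:
ps -ef | grep '[g]it'    # should print nothing if no git process is alive
rm -f ./.git/index.lock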
| GitLab | 40,449,342 | 18 |
I've read the differences between Gitlab Community and Enterprise in this page: https://about.gitlab.com/features/
Based on that page I understand the integration with Jenkins is only available in the enterprise version. However, I've seen that using web hooks I can trigger builds in Jenkins when a push happens in Gitlab.
So my question is which is the difference between community and enterprise regarding the integration with jenkins?
| On the merge request page, there is a state widget that shows the status of tests for that particular merge request, and on your project home page, there is test status badging. These two UI elements only show up if you enable a 'ci service' on the project. In Community you can turn it on with GitLab CI; in Enterprise you can set it up to work with Jenkins.
| GitLab | 28,327,766 | 18 |
Using the following CI pipeline running on GitLab:
stages:
- build
- website
default:
retry: 1
timeout: 15 minutes
build:website:
stage: build
...
...
...
...
website:dev:
stage: website
...
...
...
What does the first colon in job name in build:website: and in website:dev: exactly mean?
Is it like we pass the second part after the stage name as a variable to the stage?
| Naming of jobs does not really change the behavior of the pipeline in this case. It's just the job name.
However, if you use the same prefix before the : for multiple jobs, it will cause those jobs to be grouped in the UI. It still doesn't affect the material function of the pipeline, but it changes how they show up in the pipeline graph: jobs sharing the prefix collapse under a single group heading.
It's a purely cosmetic feature.
Jobs can also be grouped using / as the separator or a space.
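For example, the numbered naming scheme below renders as one collapsed test group in the pipeline graph (job bodies are placeholders):
test 1/3:
  script: echo one
test 2/3:
  script: echo two
test 3/3:
  script: echo three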
| GitLab | 70,322,505 | 17 |
I'm writing GitLab CI/CD pipeline script in .gitlab-ci.yml
I want to check if a specific file changed in another repo and if so I would like to copy the file, commit and push to the current repo.
everything works until I get to the 'git push' part
I tried several ways to fixed it:
stages:
- build
build:
stage: build
script:
- echo "Building"
- git checkout -b try
- git remote add -f b https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.{otherRepo}.git
- git remote update
- CHANGED=$(git diff try:mobile_map.conf b/master:mobile_map.conf)
- if [ -n "${CHANGED}" ]; then
echo 'changed';
FILE=$(git show b/master:mobile_map.conf > mobile_map.conf);
git add mobile_map.conf;
git commit -m "updating conf file";
git push;
else
echo 'not changed';
fi
- git remote rm b
for this code I get :
fatal: unable to access 'https://gitlab-ci-token:[MASKED]@gitlab.{curr_repo}.git/': The requested URL returned error: 403
also I tried to add this line in the beginning :
git remote set-url origin 'https://{MY_USER_NAME}:"\"${PASSWORD}\""@gitlab.{curr_repo}.git'
and I get this error message:
fatal: Authentication failed for 'https://{MY_USER_NAME}:"\"${PASSWORD}\""@{curr_repo}.git/'
also I added:
- git config --global user.name {MY_USER_NAME}
- git config --global user.email {MY_EMAIL}
please help me,
Thanks
| Job-tokens only have read-permission to your repository.
A unique job token is generated for each job and provides the user read access all projects that would be normally accessible to the user creating that job. The unique job token does not have any write permissions, but there is a proposal to add support.
You can't use deploy-tokens because they can't have write-access to a repository (possible tokens).
You could use a project-access-token with read-write-access to your repository.
You can use project access tokens:
On GitLab SaaS if you have the Premium license tier or higher. Project
access tokens are not available with a trial license.
On self-managed instances of GitLab, with any license tier. If you
have the Free tier: [...]
Then you can use your project-access-token as an environment variable in the url.
git push "https://gitlab-ci-token:$PROJECT_ACCESS_TOKEN@$CI_SERVER_HOST/$CI_PROJECT_PATH.git"
At least that's how we use it in our pipelines.
I hope this helps you further.
| GitLab | 65,234,416 | 17 |
I'm trying to push an image to gitlab registry.
I've done it many times, so I wonder why I get this error.
I build the image with latest tag:
Successfully tagged registry.gitlab.com/mycompany/rgpd_api:latest
Then I login and I push:
docker login registry.gitlab.com -u gitlab+deploy-token-91931
docker push registry.gitlab.com/mycompany/rgpd_api:latest
But I get:
The push refers to repository [registry.gitlab.com/mycompany/rgpd_api]
be679cc302b9: Preparing
denied: requested access to the resource is denied
I gave gitlab+deploy-token-91931 token both read_repository and read_registry rights.
My repo is:
https://gitlab.com/mycompany/rgpd_api
I checked with docs page: https://docs.gitlab.com/ee/user/project/container_registry.html
But when I do it through Gitlab CI, with gitlab-ci-token
I can push it normally.
I also tried to regenerate a new token, but still same issue.
How can I fix it ?
| I've stumbled upon this question as well and it turns out that
Group level Deploy tokens can be used to push images to group level container registry similarly to a PAT token with API access or other applicable scopes.
The image must be tagged with a tag that matches an existing project within the group.
Any image tagged differently will be rejected with the denied: requested access to the resource is denied error message.
So, with the setup below:
GitLab group called mytest
Project within that group called hello-world
Docker image tagged as registry.gitlab.com/mytest/hello-world
Deploy token created for an entire group
Docker daemon authorized to push to that registry by cat "<deploy_token>" | docker login -u "<token_username>" --password-stdin registry.gitlab.com
You will get the following results:
Successful push for docker push registry.gitlab.com/mytest/hello-world because such project exists within the group
denied: requested access to the resource is denied if you try to push an image tagged with the name of the project that does not exist in the group like docker push registry.gitlab.com/mytest/no-project
So, again, the image must be tagged to match an existing path within the group, such as an existing project within the group or a subgroup.
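The full sequence for the working case then looks roughly like this, assuming DEPLOY_TOKEN and DEPLOY_TOKEN_USERNAME hold the group deploy token and its username:
docker build -t registry.gitlab.com/mytest/hello-world .
echo "$DEPLOY_TOKEN" | docker login -u "$DEPLOY_TOKEN_USERNAME" --password-stdin registry.gitlab.com
docker push registry.gitlab.com/mytest/hello-world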
| GitLab | 57,654,620 | 17 |
Can you use the same ssh key for different version control hosting services?
And if you can, what are the pros and cons?
Scenario: I have ssh keys that I am using on my computer, can I and should I use the same ssh keys with gitlab/gitbucket on the same computer?
| No, it is not advisable: a private key should remain used for only one service, that way you can revoke/change it just for that service.
What you can do is set up a ~/.ssh/config file in which you can associate the right private key with the right host, as explained here.
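A minimal ~/.ssh/config sketch along those lines; the key file names are assumptions:
Host gitlab.com
    IdentityFile ~/.ssh/id_ed25519_gitlab
Host github.com
    IdentityFile ~/.ssh/id_ed25519_github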
| GitLab | 56,285,972 | 17 |
I want to publish a private npm package with Gitlab CI.
I've created an auth token for my npm user and set it as a variable NPM_TOKEN in my Gitlab CI settings.
The job then creates an .npmrc file with the registry and the auth token.
- npm run build && npm run build:es6
- echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}'>.npmrc
- npm publish
The job fails with this message:
npm ERR! code ENEEDAUTH
npm ERR! need auth auth required for publishing
npm ERR! need auth You need to authorize this machine using `npm adduser`
Is it possible to publish with only an auth token?
| As @Amityo said, rather than manually editing the npmrc file,
npm config set //registry.npmjs.org/:_authToken ${NPM_TOKEN}
is the way to go, because otherwise you may be editing the wrong npmrc file.
If you are still getting an authentication error, and are certain that the token is correct, check your registry URL. You can run
npm publish --verbose
whose output will includes lines like
npm verb getPublishConfig { registry: 'https://.......' }
npm verb mapToRegistry no registry URL found in name for scope @boxine
npm verb publish registryBase https://.......
If you are publishing to npmjs.org, the URL (....... above) should be https://registry.npmjs.org/ .
If this registry URL does not fit, look in your npmrc file for a different one. Also make sure you didn't override the registry in your package.json file! You can search for publishConfig in that file.
| GitLab | 54,665,511 | 17 |
Could you tell me if I do it in correct way:
I have create Docker image with all stuff which is need for running my tests in gitlab CI
I push it to gitlab registry
I can see on gitlab page in section Registry my image - gitlablogin/projectname
I want to use this image for CI, so in .gitlab-ci.yml I add image: gitlablogin/projectname
Before I had had in .gitlab-ci.yml
same_task:
stage: deploy
image: python:3
script:
- python -V
Now I have:
pep8:
stage: deploy
image: gitlablogin/projectname
script:
- python -V
and after this change job failed:
Running with gitlab-runner 11.4.2 (cf91d5e1)
on docker-auto-scale 72989761
Using Docker executor with image gitlablogin/projectname ...
Pulling docker image gitlablogin/projectname ...
ERROR: Job failed: Error response from daemon: pull access denied for gitlablogin/projectname, repository does not exist or may require 'docker login' (executor_docker.go:168:0s)
Is my usage of docker in context of gitlab CI and gitlab registry is correct? I also want to keep my docker file on same repo and build new image when samething change in Dockerfile, what will be the best way to do it?
| Right now it is possible to use images from your gitlab registry without any special steps. Just build and push an image to your gitlab project container registry
docker build -t registry.gitlab.com/gitlabProject/projectName:build .
docker push registry.gitlab.com/gitlabProject/projectName:build
and then just specify this image in your pipeline settings:
image: registry.gitlab.com/gitlabProject/projectName:build
GitLab is able to pull this image using its credentials:
Preparing the "docker+machine" executor
00:46
Using Docker executor with image registry.gitlab.com/gitlabProject/projectName:build ...
Authenticating with credentials from job payload (GitLab Registry)
Pulling docker image registry.gitlab.com/gitlabProject/projectName:build ...
Using docker image sha256:e7e0f4f5fa8cff8a93b1f37ffd7dd0505946648246aa921dd457c06a1607304b for registry.gitlab.com/gitlabProject/projectName:build ...
More: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#support-for-gitlab-integrated-registry
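For the Dockerfile part of your question, here is a sketch of a job that rebuilds and pushes the image only when the Dockerfile changes, using GitLab's predefined registry variables (modern rules: syntax; the job name is arbitrary):
build-image:
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:build" .
    - docker push "$CI_REGISTRY_IMAGE:build"
  rules:
    - changes:
        - Dockerfile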
| GitLab | 53,088,837 | 17 |
I try to update my single gitlab runner from 11.0 to 11.3.1
and followed the instruction on the gitlab doc.
sudo apt-get install gitlab-runner will confirm that I have the new version installed:
gitlab-runner is already the newest version (11.3.1).
The last updates like 10.* to 11.0 worked absolutely fine but this time
the runner still stays on 11.0 (in -help and gitlab-ci web ui).
A restart of the runner don't change anything so it looks like I miss a major step for the update some how.
It would be great to find out what I'm doing wrong, thanks in advance. :-)
OS: Ubuntu 18.04.1
(I'm relatively new in the linux and gitlab world so it could be something obvious)
Used update command:
# For Debian/Ubuntu/Mint
sudo apt-get update
sudo apt-get install gitlab-runner
| Ok the problem was solved by using the manual update described at:
https://docs.gitlab.com/runner/install/linux-manually.html
Stop the service (you need elevated command prompt as before):
sudo gitlab-runner stop
Download the binary to replace Runner's executable:
sudo wget -O /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-386
sudo wget -O /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64
You can download a binary for every available version as described in
Bleeding Edge - download any other tagged release.
Give it permissions to execute:
sudo chmod +x /usr/local/bin/gitlab-runner
Start the service:
sudo gitlab-runner start
"latest" - may install a beta so it is important to select the right tag (also described in the link)
| GitLab | 52,716,757 | 17 |
How to setup Gradle publish task user credentials with GitLab CI secret variables? I am using gradle maven publish plugin, and here is snippet from build.gradle
repositories {
maven {
credentials {
username artifactUser
password artifactPass
}
url "..."
}
}
I've tried to use gradle.properties as below
artifactUser=${env.MAVEN_REPO_USER}
artifactPass=${env.MAVEN_REPO_PASS}
And several ways of accessing secret variables in .gitlab-ci.yml file (because gradle.properties is not picked up from gradle or variables are not transformed correctly, it is in root project dir)
Method 1
'./gradlew publish -x test -PartifactUser=${env.MAVEN_REPO_USER} -PartifactPass=${env.MAVEN_REPO_PASS}'
Error: /bin/bash: line 56: -PartifactUser=${env.MAVEN_REPO_USER}: bad substitution
Method 2
before_script:
- chmod +x ./gradlew
- export REPO_USER=${env.MAVEN_REPO_USER}
- export REPO_PASS=${env.MAVEN_REPO_PASS}
...
deploy:
stage: deploy
script:
- ./gradlew publish -x test -PartifactUser=$REPO_USER -PartifactPass=$REPO_PASS
I am using openjdk:8-jdk-slim image for build using gradle wrapper. Seems like there are several issues with this kind of variable usage, do we have any workaround?
| You don't need the env. prefix in your .gitlab-ci.yml. You don't need to re-export the variables either.
If you have defined variables named MAVEN_REPO_USER and MAVEN_REPO_PASS in the GitLab CI/CD settings for the project, you can just use them in your Gradle script:
repositories {
maven {
credentials {
username System.getenv("MAVEN_REPO_USER")
password System.getenv("MAVEN_REPO_PASS")
}
url "…"
}
}
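For testing the same setup locally, you can export the variables in your shell before running the publish task (the values are placeholders):
export MAVEN_REPO_USER=deploy-user
export MAVEN_REPO_PASS=secret
./gradlew publish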
| GitLab | 51,137,958 | 17 |
I'm very new to gitlab and gitlab CI and I have put a pipeline in place that is successfully completing.
My master and development branches are protected so a merge request is required so that another dev in the group can review the code and comment before merging.
I was wondering if it is possible to generate this merge request at the end of this pipeline. Is there a setting for this in the gitlab repository or do I have to create a script to achieve this?
Side note :
Just before posting this I came across this section of the gitlab docs
I'm using gitlab-runner 11.0.0 on ubuntu 18.04
| In order to achieve my simple needs, I simply added a final stage to my pipeline which essentially executes a bash script adapted from this post.
EDIT:
As requested by @Yuva
# Create a pull request on pipeline success
create_merge_request:
stage: createMR
tags:
- autoMR
script:
- 'echo Merge request opened by $GITLAB_USER_NAME '
- ~/commit.sh
and in commit.sh
#!/bin/bash
# This script was adapted from:
# https://about.gitlab.com/2017/09/05/how-to-automatically-create-a-new-mr-on-gitlab-with-gitlab-ci/
# Derive the API base URL from the project URL that GitLab CI provides
HOST=${CI_PROJECT_URL}
[[ $HOST =~ ^https?://[^/]+ ]] && HOST="${BASH_REMATCH[0]}/api/v4/projects/"
# The branch which we wish to merge into
TARGET_BRANCH=develop;
# The user's token name so that we can open the merge request as the user
TOKEN_NAME=`echo ${GITLAB_USER_LOGIN}_COMMIT_TOKEN | tr "[a-z]" "[A-Z]"`
# See: http://www.tldp.org/LDP/abs/html/parameter-substitution.html search ${!varprefix*}, ${!varprefix@} section
PRIVATE_TOKEN=`echo ${!TOKEN_NAME}`
# The description of our new MR, we want to remove the branch after the MR has
# been closed
BODY="{
\"project_id\": ${CI_PROJECT_ID},
\"source_branch\": \"${CI_COMMIT_REF_NAME}\",
\"target_branch\": \"${TARGET_BRANCH}\",
\"remove_source_branch\": false,
\"force_remove_source_branch\": false,
\"allow_collaboration\": true,
\"subscribed\" : true,
\"title\": \"${GITLAB_USER_NAME} merge request for: ${CI_COMMIT_REF_SLUG}\"
}";
# Require a list of all the merge request and take a look if there is already
# one with the same source branch
LISTMR=`curl --silent "${HOST}${CI_PROJECT_ID}/merge_requests?state=opened" --header "PRIVATE-TOKEN:${PRIVATE_TOKEN}"`;
COUNTBRANCHES=`echo ${LISTMR} | grep -o "\"source_branch\":\"${CI_COMMIT_REF_NAME}\"" | wc -l`;
# No MR found, let's create a new one
if [ ${COUNTBRANCHES} -eq "0" ]; then
curl -X POST "${HOST}${CI_PROJECT_ID}/merge_requests" \
--header "PRIVATE-TOKEN:${PRIVATE_TOKEN}" \
--header "Content-Type: application/json" \
--data "${BODY}";
echo "Opened a new merge request: WIP: ${CI_COMMIT_REF_SLUG} for user ${GITLAB_USER_LOGIN}";
exit;
fi
echo "No new merge request opened"
| GitLab | 51,104,622 | 17 |
i try to get Gitlab with SSH working, but it won't.
I have done following steps:
1 ) generate ssh-key
ssh-keygen -t rsa -C "[email protected]" -b 4096
2 ) named the key "id_rsa" in folder /Users/myUserName/.ssh/
3) copied the key via
pbcopy < ~/.ssh/id_rsa.pub
4) insert the key into gitlab
When i now try to clone a repository i receive the following error:
$ git clone [email protected]:myName/repositoryName/ repoName
Cloning into 'repoName'...
ssh: connect to host gitlab.com port 22: Operation timed out
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
What is going wrong?
| I found myself with the same problem.
As Adrian Dymorz describe you can check if you have access to Gitlab through SSH with the following command:
ssh [email protected]
You should receive a response like this:
...
Welcome to GitLab, <Gitlab ID Account>!
Shared connection to altssh.gitlab.com closed.
If not, then betarrabara's answer should solve your issues, BUT you should provide your id_rsa.pub rather than your id_rsa:
Host gitlab.com
Hostname altssh.gitlab.com
User git
Port 443
PreferredAuthentications publickey
IdentityFile ~/.ssh/id_rsa.pub
| GitLab | 51,079,198 | 17 |
I want to host static page generated with Sphinx on GitLab Pages. Built index.html file is in:
project/docs/build/html
How .gitlab-ci.yml should look like to deploy the page? I have something like that and it isn't working:
pages:
stage: deploy
script:
- echo 'Nothing to do...'
artifacts:
paths:
- docs/build/html
only:
- master
| According to the documentation for .gitlab-ci.yml, the pages job has special rules it must follow:
Any static content must be placed under a public/ directory
artifacts with a path to the public/ directory must be defined
So the example .gitlab-ci.yml you gave would look something like this:
pages:
stage: deploy
script:
- mv docs/build/html/ public/
artifacts:
paths:
- public
only:
- master
And of course, if you don't want to move the html folder for whatever reason, you can copy it instead.
For further reference, an example sphinx project for GitLab Pages was pushed around the time you originally posted this question.
| GitLab | 48,223,039 | 17 |
I know that this can be done with dockerhub. I want to know if there is something similar available for gitlab registry.
The use case is that, I have written a fabric script to revert a deployment to a particular tag provided by the user. Before actually pulling in the images, I want to know whether an image with the specified tag exists in the registry and warn the user accordingly.
I've searched in their documentation, but couldn't find anything.
Note: User here is the person who is deploying the code.
| Hint: Also have a look at @filiprafaj's answer using crane.
Ok, here is a solution I came up with using the docker:stable image by enabling the experimental client features.
mkdir -p ~/.docker
echo '{"experimental": "enabled"}' > ~/.docker/config.json
docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
docker manifest inspect "$IMGNAME:$IMGTAG" > /dev/null && exit || true
The exit terminates the build script in case that tag already exists. Also you should be aware that ~/.docker/config.json is overwritten. That is also why the login must happen afterwards.
Update: Instead of writing to the config one can also set the DOCKER_CLI_EXPERIMENTAL environment variable to enabled. Thus the first two lines can be replaced with export DOCKER_CLI_EXPERIMENTAL=enabled
| GitLab | 47,660,841 | 17 |
I get this error on a fresh install of gitlab. The message looks like:
fatal: unable to access 'https://gitlab-ci-
token:[email protected]/something.git/': Peer's
Certificate issuer is not recognized.ERROR: Job
failed: exit status 1
Any suggestions on how to fix it?
| I had faced the same problem. After enabling verbose mode with the command
export GIT_CURL_VERBOSE=1 I found the following issue:
NSS error -8179 (SEC_ERROR_UNKNOWN_ISSUER)
I found the following site helpful, but it really only works when you also have full control
of the proxy server so you can install the certificates there:
http://dropbit.com/?p=168
Instead, I ran the following command to bypass SSL verification by the proxy server, and it worked:
git config --global http.sslVerify "false"
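Note that disabling verification is insecure. If you can obtain the proxy's CA certificate, a safer sketch is to trust it explicitly (the certificate path is hypothetical):
git config --global http.sslCAInfo /path/to/corporate-ca.pem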
| GitLab | 45,608,595 | 17 |
If one needs to install private repositories with npm the environment variable NPM_TOKEN needs to be set.
NPM_TOKEN=00000000-0000-0000-0000-000000000000
My build stage in gitlab pipelines needs to install a private repository. Thus I put this NPM_TOKEN secret variable in my gitlab pipeline settings.
My current gitlab-ci configuration :
image: x/node
build_job:
script:
- printenv NPM_TOKEN
- npm i @x/test
The docker image is one that I made it just sets a .npmrc file:
FROM node:latest
COPY .npmrc .
where I have .npmrc in the same directory:
//registry.npmjs.org/:_authToken=${NPM_TOKEN}
I've tried the docker image by:
docker run -it myimage bash
export NPM_TOKEN=...
npm i @x/test
That works, the private package is installed.
However on gitlab pipelines it does not find the package (404). When the job runs I can clearly see the NPM_TOKEN env variable being printed. So I don't know what's up.
| I changed gitlab-ci to this:
image: dasnoo/node
build_job:
script:
- printenv NPM_TOKEN
- npm config set //registry.npmjs.org/:_authToken ${NPM_TOKEN}
- npm i @dasnoo/testpriv
and it works. not sre why I had to do that though
| GitLab | 45,037,913 | 17 |
I'm currently using GitLab in combination with CI runners to run unit tests of my project, to speed up the process of bootstrapping the tests I'm using the built-in cache functionality, however this doesn't seem to work.
Each time someone commits to master, my runner does a git fetch and proceeds to remove all cached files, which means I have to stare at my screen for around 10 minutes to wait for a test to complete while the runner re-downloads all dependencies (NPM and PIP being the biggest time killers).
Output of the CI runner:
Fetching changes...
Removing bower_modules/jquery/ --+-- Shouldn't happen!
Removing bower_modules/tether/ |
Removing node_modules/ |
Removing vendor/ --'
HEAD is now at 7c513dd Update .gitlab-ci.yml
Currently my .gitlab-ci.yml
image: python:latest
services:
- redis:latest
- node:latest
cache:
key: "$CI_BUILD_REF_NAME"
untracked: true
paths:
- ~/.cache/pip/
- vendor/
- node_modules/
- bower_components/
before_script:
- python -V
# Still gets executed even though node is listed as a service??
- '(which nodejs && which npm) || (apt-get update -q && apt-get -o dir::cache::archives="vendor/apt/" install nodejs npm -yqq)'
- npm install -g bower gulp
# Following statements ignore cache!
- pip install -r requirements.txt
- npm install --only=dev
- bower install --allow-root
- gulp build
test:
variables:
DEBUG: "1"
script:
- python -m unittest myproject
I've tried reading the following articles for help however none of them seem to fix my problem:
http://docs.gitlab.com/ce/ci/yaml/README.html#cache
https://fleschenberg.net/gitlab-pip-cache/
https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/336
| Turns out that I was doing some things wrong:
Your script can't cache files outside of your project scope; creating a virtual environment inside the project and caching that allows you to cache your pip modules.
Most important of all: your tests must succeed in order for the files to be cached.
After using the following config I shaved about 3 minutes off the build time:
Currently my configuration looks like follows and works for me.
# Official framework image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/python
image: python:latest
# Pick zero or more services to be used on all builds.
# Only needed when using a docker container to run your tests in.
# Check out: http://docs.gitlab.com/ce/ci/docker/using_docker_images.html#what-is-service
services:
- mysql:latest
- redis:latest
cache:
untracked: true
key: "$CI_BUILD_REF_NAME"
paths:
- venv/
- node_modules/
- bower_components/
# This is a basic example for a gem or script which doesn't use
# services such as redis or postgres
before_script:
# Check python installation
- python -V
# Install NodeJS (Gulp & Bower)
# Default repository is outdated, this is the latest version
- 'curl -sL https://deb.nodesource.com/setup_8.x | bash -'
- apt-get install -y nodejs
- npm install -g bower gulp
# Install dependencie
- pip install -U pip setuptools
- pip install virtualenv
test:
# Indicate to the framework that it's being unit tested
variables:
DEBUG: "1"
# Test script
script:
# Set up virtual environment
- virtualenv venv -ppython3
- source venv/bin/activate
- pip install coverage
- pip install -r requirements.txt
# Install NodeJS & Bower + Compile JS
- npm install --only=dev
- bower install --allow-root
- gulp build
# Run all unit tests
- coverage run -m unittest project.tests
- coverage report -m project/**/*.py
Which resulted in the following output:
Fetching changes...
Removing .coverage --+-- Don't worry about this
Removing bower_components/ |
Removing node_modules/ |
Removing venv/ --`
HEAD is now at 24e7618 Fix for issue #16
From https://git.example.com/repo
85f2f9b..42ba753 master -> origin/master
Checking out 42ba7537 as master...
Skipping Git submodules setup
Checking cache for master... --+-- The files are back now :)
Successfully extracted cache --`
...
project/module/script.py 157 9 94% 182, 231-244
---------------------------------------------------------------------------
TOTAL 1084 328 70%
Creating cache master...
Created cache
Uploading artifacts...
venv/: found 9859 matching files
node_modules/: found 7070 matching files
bower_components/: found 982 matching files
Trying to load /builds/repo.tmp/CI_SERVER_TLS_CA_FILE ...
Dialing: tcp git.example.com:443 ...
Uploading artifacts to coordinator... ok id=127 responseStatus=201 Created token=XXXXXX
Job succeeded
For the coverage report, I used the following regular expression:
^TOTAL\s+(?:\d+\s+){2}(\d{1,3}%)$
| GitLab | 44,798,794 | 17 |
As claimed at their website Gitlab can be used to auto deploy projects after some code is pushed into the repository but I am not able to figure out how. There are plenty of ruby tutorials out there but none for meteor or node.
Basically I just need to rebuild an Docker container on my server, after code is pushed into my master branch. Does anyone know how to achieve it? I am totally new to the .gitlab-ci.yml stuff and appreciate help pretty much.
| Brief: I am running a Meteor 1.3.2 app, hosted on Digital Ocean (Ubuntu 14.04) since 4 months. I am using Gitlab v. 8.3.4 running on the same Digital Ocean droplet as the Meteor app. It is a 2 GB / 2 CPUs droplet ($ 20 a month). Using the built in Gitlab CI for CI/CD. This setup has been running successfully till now. (We are currently not using Docker, however this should not matter.)
Our CI/CD strategy:
We check out Master branch on our local laptop. The branch contains the whole Meteor project as shown below:
We use git CLI tool on Windows to connect to our Gitlab server. (for pull, push, etc. similar regular git activities)
Open the checked out project in Atom editor. We have also integrated Atom with Gitlab. This helps in quick git status/pull/push etc. within Atom editor itself. Do regular Meteor work viz. fix bugs etc.
Post testing on local laptop, we then do git push & commit on master. This triggers auto build using Gitlab CI and the results (including build logs) can be seen in Gitlab itself as shown below:
Below image shows all previous build logs:
Please follow below steps:
Install meteor on the DO droplet.
Install Gitlab on DO (using 1-click deploy if possible) or manual installation. Ensure you are installing Gitlab v. 8.3.4 or newer version. I had done a DO one-click deploy on my droplet.
Start the gitlab server & log into gitlab from browser. Open your project and go to project settings -> Runners from left menu
SSH to your DO server & configure a new upstart service on the droplet as root:
vi /etc/init/meteor-service.conf
Sample file:
#upstart service file at /etc/init/meteor-service.conf
description "Meteor.js (NodeJS) application for eaxmple.com:3000"
author "[email protected]"
# When to start the service
start on runlevel [2345]
# When to stop the service
stop on shutdown
# Automatically restart process if crashed
respawn
respawn limit 10 5
script
export PORT=3000
# this allows Meteor to figure out correct IP address of visitors
export HTTP_FORWARDED_COUNT=1
export MONGO_URL=mongodb://xxxxxx:[email protected]:59672/meteor-db
export ROOT_URL=http://<droplet_ip>:3000
exec /home/gitlab-runner/.meteor/packages/meteor-tool/1.1.10/mt-os.linux.x86_64/dev_bundle/bin/node /home/gitlab-runner/erecaho-build/server-alpha-running/bundle/main.js >> /home/gitlab-runner/erecaho-build/server-alpha-running/meteor.log
end script
Install gitlab-ci-multi-runner from here: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/install/linux-repository.md as per the instructions
Cheatsheet:
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-ci-multi-runner/script.deb.sh | sudo bash
sudo apt-get install gitlab-ci-multi-runner
sudo gitlab-ci-multi-runner register
Enter details from step 2
Now the new runner should show as green; activate the runner if required
Create .gitlab-ci.yml within the meteor project directory
Sample file:
before_script:
- echo "======================================"
- echo "==== START auto full script v0.1 ====="
- echo "======================================"
types:
- cleanup
- build
- test
- deploy
job_cleanup:
type: cleanup
script:
- cd /home/gitlab-runner/erecaho-build
- echo "cleaning up existing bundle folder"
- echo "cleaning up current server-running folder"
- rm -fr ./server-alpha-running
- mkdir ./server-alpha-running
only:
- master
tags:
- master
job_build:
type: build
script:
- pwd
- meteor build /home/gitlab-runner/erecaho-build/server-alpha-running --directory --server=http://example.org:3000 --verbose
only:
- master
tags:
- master
job_test:
type: test
script:
- echo "testing ----"
- cd /home/gitlab-runner/erecaho-build/server-alpha-running/bundle
- ls -la main.js
only:
- master
tags:
- master
job_deploy:
type: deploy
script:
- echo "deploying ----"
- cd /home/gitlab-runner/erecaho-build/server-alpha-running/bundle/programs/server/ && /home/gitlab-runner/.meteor/packages/meteor-tool/1.1.10/mt-os.linux.x86_64/dev_bundle/bin/npm install
- cd ../..
- sudo restart meteor-service
- sudo status meteor-service
only:
- master
tags:
- master
Check in above file in gitlab. This should trigger Gitlab CI and after the build process is complete, the new app will be available @ example.net:3000
Note: The app will not be available after checking in .gitlab-ci.yml for the first time, since restart meteor-service will result in service not found. Manually run sudo start meteor-service once on DO SSH console. Post this any new check-in to gitlab master will trigger auto CI/CD and the new version of the app will be available on example.com:3000 after the build is completed successfully.
P.S.: gitlab ci yaml docs can be found at http://doc.gitlab.com/ee/ci/yaml/README.html for your customization and to understand the sample yaml file above.
For docker specific runner, please refer https://gitlab.com/gitlab-org/gitlab-ci-multi-runner
| GitLab | 36,312,494 | 17 |
I'd like to use GitLab CI system for my Android application gradle project. The project repository is hosted on GitLab.com, so I'd like to use one of the Shared Runners provided by Gitlab Inc.
While the official tutorial provides an example for NodeJS project runner configuration and there are also shared runners for Ruby projects, I couldn't find any example or even a runner that supports Android applications.
Is there a shared runner provided by GitLab.com, which supports Android projects out of the box (by specifying image: android:4.2.2 or something like this)?
Is there a way to configure existing shared runner provided by GitLab.com to support Android projects (by modifying the .gitlab-ci.yml file)?
| I'm using this docker image to run android build on gitlab-ci
Update:
Moved to Gitlab registry
image: registry.gitlab.com/showcheap/android-ci:latest
before_script:
- export GRADLE_USER_HOME=`pwd`/.gradle
- chmod +x ./gradlew
cache:
paths:
- .gradle/wrapper
- .gradle/caches
build:
stage: build
script:
- ./gradlew assemble
test:
stage: test
script:
- ./gradlew check
Full Guide can check in this Gitlab Repository:
https://gitlab.com/showcheap/android-ci
If your Target SDK and Build Tools version are not listed, please make a pull request or fork my repo then make your custom target and build version.
| GitLab | 35,916,233 | 17 |
I'm currently on OS X Yosemite 10.10.3, and trying to git clone an existing repo which works fine on Windows. I've tried a combo of installing git through homebrew with curl/openssl with no luck. When i run the git clone, i get the following ssl read error:
GIT_CURL_VERBOSE=1 git clone http://myURL/gitlab/project/project.git
> remote: Counting objects: 1641, done. remote: Compressing objects:
> 100% (1588/1588), done.
> * SSLRead() return error -98061641), 136.73 MiB | 1.71 MiB/s
> * Closing connection 2 remote: Total 1641 (delta 910), reused 0 (delta 0) error: RPC failed; result=56, HTTP code = 200 Receiving objects:
> 100% (1641/1641), 137.48 MiB | 1.64 MiB/s, done. Resolving deltas:
> 100% (910/910), done.
I've tried using both the Https & Http with no luck. Has anyone else hit something similar to this?
Below are outputs of git, curl, & openssl versions if that helps.
curl --version
curl 7.37.1 (x86_64-apple-darwin14.0) libcurl/7.37.1 SecureTransport zlib/1.2.5
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IPv6 Largefile NTLM NTLM_WB SSL libz
git --version
git version 2.4.1
openssl version
OpenSSL 0.9.8zd 8 Jan 2015
Thanks in advance for any direction!
| Javabrett's link got me to the answer, it revolves around Yosemite using an incorrect SSL dependency, which Git ends up using.
Installing Git via homebrew with these flags works:
brew install git --with-brewed-curl --with-brewed-openssl
Or:
brew reinstall git --with-brewed-curl --with-brewed-openssl
| GitLab | 30,385,939 | 17 |
I have installed GitLab 7.2.1 with the .deb package from GitLab.org for Debian 7 on a virtual server where I have root access.
On this virtual server I have already installed Apache, version 2.2.22 and I don't want to use Ngnix for GitLab.
Now I have no idea where the public folders of GitLab are or what I have to do or on what I have to pay attention.
So my question is: How do I have to configure my vhost for apache or what do I have to do also that I can use a subdomain like "gitlab.example.com" on my apache web server?
| With two things in mind:
Unicorn is listening on 8080 (you can check this with sudo netstat -pant | grep unicorn)
Your document root is /opt/gitlab/embedded/service/gitlab-rails/public
You can create a new vhost for gitlab in apache with the following configuration:
<VirtualHost *:80>
ServerName gitlab.example.com
ServerSignature Off
ProxyPreserveHost On
<Location />
Order deny,allow
Allow from all
ProxyPassReverse http://127.0.0.1:8080
ProxyPassReverse http://gitlab.example.com/
</Location>
RewriteEngine on
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
RewriteRule .* http://127.0.0.1:8080%{REQUEST_URI} [P,QSA]
# needed for downloading attachments
DocumentRoot /opt/gitlab/embedded/service/gitlab-rails/public
</VirtualHost>
| GitLab | 25,785,903 | 17 |
I am asking the question here, because documentation didn't help me.
During runner's setup, 2 things are being asked: url of gitlab CI coordinator and registration token. I don't get what any of them should be.
As for the url, it could be either the url of the gitlab CI web interface (ex: http://localhost:80/) or a url related to the build, which is described in the build's advanced properties.
Registration token could be something from documentation - but the link to it is dead (see: http://gitlab-ci-domain.com/admin/runners) or registration token from build's advanced properties.
However, when i try to supply to runner's setup url and registration token from build properties, i get access error which informs me that registration failed. Due to lack of understanding what those parameters should be, i cannot determine what is wrong.
| the Url is your Gitci Url.
the token you mention its in your gitlabci under "runners" next to the line:
"To register new runner you should the following registration token. With this token the runner will request a unique runner token and use that for future communication"
| GitLab | 25,752,466 | 17 |
I have built the following post-receive hook:
#!/bin/sh
cd /var/node/heartbeat/ || exit
unset GIT_DIR
echo $USER
git pull --no-edit
When I'm pushing to the repository the following error is returned:
remote:
remote: From server.net:chris/heartbeat
remote: c0df678..5378ade master -> origin/master
remote:
remote: *** Please tell me who you are.
remote:
remote: Run
remote:
remote: git config --global user.email "[email protected]"
remote: git config --global user.name "Your Name"
remote:
remote: to set your account's default identity.
remote: Omit --global to set the identity only in this repository.
remote:
remote: fatal: empty ident name (for <[email protected]>) not allowed
When running git config -l from the git user:
user.name=server
[email protected]
core.autocrlf=input
core.repositoryformatversion=0
core.filemode=true
core.bare=false
core.logallrefupdates=true
remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
[email protected]:chris/heartbeat.git
branch.master.remote=origin
branch.master.merge=refs/heads/master
I don't understand why it's still thinking that my user.email and user.name are empty as git config -l tells me it's not. Does anyone know a solution/workaround for this problem?
| To make sure this work, a git config --local -l needs to return the user name and email.
If it doesn't, go to that repo on the server, and type;:
git config user.name server
git config user.email [email protected]
| GitLab | 24,850,247 | 17 |
I have set up my own server (at home) and i am reaching it via putty on my main PC.
Gitlab is installed and configured, i can reach gitlab and log in.
But when i try to push files (through HTTP) to my own project i get this message:
POST git-receive-pack (381 bytes)
remote: GitLab: You are not allowed to access master![K
remote: error: hook declined to update refs/heads/master[K
To http://myserver.com/root/push2jump.git
! [remote rejected] master -> master (hook declined)
error: failed to push some refs to 'http://myserver.com/root/push2jump.git'
I am using HTTP instead of SSH because there i get "Access denied", so basically neither is working.
When i run
sudo bundle exec rake gitlab:check RAILS_ENV=production
It tells me that the Sidekiq script is not running (which i can't seem to fix, not sure if it's related to this problem)
Ofcourse it tells me that the repository is empty. The rest seems fine.
I checked
.ssh/authorized_keys
Which seem correct as well, the key there is the same as my saved key.
And my repos_path in gitlab-shell/config.yml looks good, not using symlink:
repos_path: /home/git/repositories/
I have run the official gitlab installation guide.
Can anyone help me with this problem?
Thanks in advance
UPDATE
System information
System: Ubuntu 12.04
Current User: git
Using RVM: no
Ruby Version: 2.0.0p481
Gem Version: 2.0.14
Bundler Version:1.6.2
Rake Version: 10.3.1
Sidekiq Version:2.17.0
GitLab information
Version: 6.9.2
Revision: e46b644
Directory: /home/git/gitlab
DB Adapter: mysql2
URL: ***
HTTP Clone URL: ***/some-project.git
SSH Clone URL: ***:some-project.git
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 1.9.4
Repositories: /home/git/repositories/
Hooks: /home/git/gitlab-shell/hooks/
Git: /usr/local/bin/git
| I had this problem because I had master as a protected branch
Once I unprotected the branch I was able to push fine
| GitLab | 23,973,185 | 17 |
Gitlab 6.0 was released yesterday. I am curious to know why they switched to Unicorn from Puma.
Versions prior to 5 were using Unicorn. I thought the switch to Puma was for the better.
Is there a technical reason for this switch?
| Update April 2020, GitLab 12.10:
Puma will become the default application server
GitLab will be switching default application servers from Unicorn to Puma in 13.0.
And with GitLab 13.0 (May 2020):
Reduced memory consumption of GitLab with Puma
Read the last sections below.
Original answer 2013
The commit 3bc484587 offers some clues from Mathieu 'OtaK' Amiot:
We switched from Puma in GitLab 5.4 to unicorn in GitLab 6.0.
why switch back to Unicorn again?
Puma caused 100% CPU and greater memory leaks when running mult-ithreaded on systems with many concurrent users.
That's because people used MRI. You MUST use JRuby or Rubynius when using Puma. Or else the world breaks apart.
Mathieu adds in the comments:
Yes, Unicorn is better (but more memory-eager) on MRI setups.
Puma is better on Rubinius & JRuby, that's all.
They can't force people to use other implementations of the Ruby Runtime, so they just fell back to the best setup for most setups :) –
Light controversy ensues around:
Puma: Hongli comments:
Puma's multithreading works just fine with MRI.
I say this as one of the authors behind Ruby Enterprise Edition, so I know Ruby's threading system inside-out.
Evan Phoenix, Puma's author, has also stated that using Puma with MRI works just fine.
If there are issues then they are likely in Gitlab's code.
That being said, in Apr. 2020, Puma is now available as an alternative web server to Unicorn with GitLab 12.9
(mentioned by mbomb007 in the comments)
Phusion Passenger Enterprise:
Mathieu 'OtaK' Amiot comments:
Passenger is not as stable as most people think. A nginx + Unicorn is more stable IMHO. –
Hongli answers:
We have lots and lots of large users using Phusion Passenger, both open source and Enterprise, on a daily basis with great stability and success.
Think New York Times, 37signals, Motorola, UPS, Apple, AirBnB. Some of them even switched away from Unicorn in favor of Passenger (either open source or Enterprise)
Update August 2014: there is an article on "Running GitLab 7.1 using Puma instead of a Unicorn"
Update April 2020, GitLab 12.10:
Puma will become the default application server
GitLab will be switching default application servers from Unicorn to Puma in 13.0.
Puma is a multithreaded application server, allowing GitLab to reduce it’s memory consumption by about 40%.
As part of the GitLab 13.0 upgrade, users who have customized Unicorn settings will need to manually migrate these settings to Puma.
It will also be possible to remain on Unicorn, by disabling Puma and re-enabling Unicorn until Unicorn support is removed in a future release.
This is thanks to Dmitry Chepurovskiy, who has made a major contribution adding the Puma web server to the GitLab unicorn Helm chart (soon to be the webservice chart).
This work provides users of the GitLab Helm chart with the option to use Puma instead of Unicorn.
In testing, we have observed a 40% reduction in memory usage when using Puma as the web server.
See Epic.
With GitLab 13.0 (May 2020):
Reduced memory consumption of GitLab with Puma
Puma is now the default web application server for both the Omnibus-based and Helm-based installations. Puma reduces the memory footprint of GitLab by about 40% compared to Unicorn, increasing the efficiency of GitLab and potentially saving costs for self-hosted instances.
Installations which have customized the number of Unicorn processes, or use a slower NFS drive, may have to adjust the default Puma configuration.
See the Important notes on upgrading and GitLab chart improvements for additional details.
See documentation and Epic.
| GitLab | 18,398,626 | 17 |
I am trying to install GitLab. I get this error when executing "sudo gem install charlock_holmes --version '0.6.9'" (section Install Gems):
GEOGIT:/geogit/Administrative_Tools # sudo gem install charlock_holmes --version '0.6.9'
Building native extensions. This could take a while...
ERROR: Error installing charlock_holmes:
ERROR: Failed to build gem native extension.
/usr/bin/ruby1.9 extconf.rb
checking for main() in -licui18n... no
which: no brew in (/usr/sbin:/bin:/usr/bin:/sbin)
checking for main() in -licui18n... no
***************************************************************************************
*********** icu required (brew install icu4c or apt-get install libicu-dev) ***********
***************************************************************************************
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers. Check the mkmf.log file for more
details. You may need configuration options.
Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir
--ruby=/usr/bin/ruby1.9
--with-icu-dir
--without-icu-dir
--with-icu-include
--without-icu-include=${icu-dir}/include
--with-icu-lib
--without-icu-lib=${icu-dir}/lib
--with-icui18nlib
--without-icui18nlib
--with-icui18nlib
--without-icui18nlib
Gem files will remain installed in /usr/lib64/ruby/gems/1.9.1/gems/charlock_holmes-0.6.9 for inspection.
Results logged to /usr/lib64/ruby/gems/1.9.1/gems/charlock_holmes-0.6.9/ext/charlock_holmes/gem_make.out
Someone, can help me debug those logs and error?
| This looks like issue 1952
It was actually weirdness with the way my ubuntu VPS is commissioned. Mine did not come with a C compiler or libdev obviously.
The fix I found was to install libdev first, then GCC.
Then apt-get install libicu-dev.
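Putting those steps together, a sketch of the full fix on Debian/Ubuntu (assuming apt and sudo; package names may differ on other distros):
sudo apt-get update
sudo apt-get install -y build-essential libicu-dev
sudo gem install charlock_holmes --version '0.6.9'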
Update 2015: additional comments include:
yum install libicu-devel worked for me
You just need to make sure "patch" is installed (yum install patch) then it should work
| GitLab | 15,553,792 | 17 |
I'm setting up a SSH key for the first time on Gitlab.com. I'm stuck at verifying that you can connect: ssh -T [email protected].
You are supposed to replace gitlab.example.com with your GitLab instance URL, but I keep getting "ssh: Could not resolve hostname : Name or service not known".
I'm using the Gitlab SaaS solution and have tried various formats, such as:
ssh -T [email protected]/my-workspace-name
ssh -T [email protected]:my-workspace-name
ssh -T [email protected]/my-workspace-name/project-name
What is the correct format that should work?
| Correct format is ssh -T [email protected].
my-workspace-name is not part of the instance url.
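If the key is set up correctly, the command should greet you instead of erroring out (the username below is a placeholder):
ssh -T git@gitlab.com
Welcome to GitLab, @your-username!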
| GitLab | 72,227,848 | 16 |
I want to run jobs in the same stage sequentially instead of in parallel in GitLab CI. Currently I have two jobs, unit-test and integration-test, in the same stage.
I want unit-test to run before integration-test and not in parallel. I have looked into the docs and have encountered DAG, but it needs the job to be in a prior stage and cannot be in the same stage. Is there a way to achieve this?
| Yes, it's already described in the documentation for stages: jobs within one stage are started in parallel.
It says:
To make a job start earlier and ignore the stage order, use the needs keyword.
As you said, this is not possible in GitLab < 14.2 within a stage (needs):
needs: is similar to dependencies: in that it must use jobs from prior stages, meaning it's impossible to create circular dependencies. Depending on jobs in the current stage is not possible either, but support is planned.
As an alternative, you could define several stages and use the keyword needs between jobs in these stages.
Since GitLab 14.2 (issue) it's possible.
| GitLab | 66,434,596 | 16 |
This question was asked by coderss, but restarting the computer seems to be ineffective.
422
The change you requested was rejected.
Make sure you have access to the thing you tried to change.
Please contact your GitLab administrator if you think this is a mistake.
I have above error in Firefox under Linux but I have access in Chromium.
That looks like a typical cookie problem.
I tried clearing all GitLab-related cookies, then restarted the computer without any new sign-in attempt (yes, I really did just try that).
But it is still the same error in the same browser.
How can I handle this problem?
This error also occurs in the forgot-password section and in a private tab of Firefox.
Is there another GitLab-related cookie?
| The issue should be fixed not only with cookies as described, but also with a correction of the system time.
I faced exactly the same problem: unable to connect with Firefox, even with a reset of cookies, but I was able to connect with Chrome. (That sounds strange, because my system clock was wrong even on Chrome.)
"it's was because my local time zone wasn't set up properly (and was messing with cookies)"
Source: https://www.reddit.com/r/gitlab/comments/cv7pov/422_error_on_wwwgitlabcomuserssignin_and/ey7l7lz?utm_source=share&utm_medium=web2x&context=3
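For illustration, on a systemd-based Linux you could re-sync the time zone and clock like this (the zone is a placeholder; pick your own), then clear the GitLab cookies once more and retry the sign-in:
timedatectl status
sudo timedatectl set-timezone Europe/Paris
sudo timedatectl set-ntp true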
| GitLab | 65,821,162 | 16 |
I went to about.gitlab.com. From there I clicked on sign in. The browser keeps showing: "Checking your browser before accessing gitlab.com.". This has been going on for 10 hours. I have already tried clearing my cookies and restarting my PC. This did not yield any result. My firewall is turned off, so it isn't a firewall issue.
| This is so extremely frustrating and makes me want to move everything back from GitLab to GitHub.
In my case, the addon that GitLab (actually cloudflare) didn't like was the Chameleon (Random Agent Spoofer) plugin, which I use to prevent tracking/fingerprinting across websites.
https://addons.mozilla.org/en-US/firefox/addon/chameleon-ext/
Thanks Cloudflare for depending on fingerprinting and effectively issuing a DOS attack on GitLab customers that use tools to protect themselves.
| GitLab | 65,086,187 | 16 |
I know that you can reuse blocks of code in a before script using yaml anchors:
.something_before: &something_before
- echo 'something before'
before_script:
- *something_before
- echo "Another script step"
but this doesn't seem to work when .something_before is declared in a shared .yml file via include:file. It also does not seem that extends works for before_script. Does anyone know a way of reusing some steps in a before_script from a shared .yml file?
EDIT: My use case is that I have two GitLab projects with almost identical before_script steps. I don't want to have to change both projects whenever there's a change, so I have a third, separate GitLab project that holds a .yml template that I am including via include:file in both projects. I want to put all the common code in that shared template, and just have a two-line before_script in the project that needs the two extra steps.
| You can use the !reference tag (available since GitLab 13.9).
.something:
before_script:
- echo 'something before'
before_script:
- !reference [".something", "before_script"]
- echo "Another script step"
| GitLab | 61,875,759 | 16 |
I want to set up a CI/CD pipelines in Gitlab that can read the latest tag and get that last tag to increment my next version application. I came with this configuration:
stages:
- version
calculate_version:
image:
name: alpine/git:latest
entrypoint: [""]
stage: version
script:
- VERSION=$(git tag);test -z "$VERSION" && echo "no version tag found" && exit 1
- CMDLINE="$VERSION";
- echo $VERSION
- echo $CMDLINE > cmdline
artifacts:
paths:
- cmdline
But I get no tags listed in $VERSION. It looks like GitLab is not passing the tags of the repository. However, if I create and push a new tag, it shows just that new tag, not the full tag list I expected.
Is this the expected behaviour of GitLab CI/CD? If yes, how can I get all of the tags in my repo inside the pipeline?
| You can obtain tags using Gitlab API
By default, results are ordered by the last updated tags, so if you want to get the last one, you can modify your script block like this:
script:
- VERSION=$(curl -Ss --request GET --header "PRIVATE-TOKEN: <REPLACE_BY_A_VARIABLE>" "https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/repository/tags" | jq -r '.[0] | .name')
- test -z "$VERSION" && echo "no version tag found" && exit 1
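Note that the alpine/git image may not ship curl or jq; if they are missing, install them at the start of the script (apk is Alpine's package manager):
- apk add --no-cache curl jq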
| GitLab | 59,948,031 | 16 |
Local Setup
I created a public and private SSH key via the ssh-keygen command.
I decided to setup the private key locally first, before setting it up on my repo's gitlab CI.
I setup the public key on the server (in this case, another gitlab repo, but this may change in the future and shouldn't affect the question).
I successfully communicated with the server locally via the following command (in this case I am using SSH via git, but this again may change in the future):
git clone [email protected]:...../......git
GitLab CI Setup
I then decided to setup the private key and communication on gitlab CI.
Inside my repo, I navigated to Settings -> Continuous Integration -> Variables, and added the following environment variables:
SSH_DEPLOY_PRIVATE_KEY - I used to same private key that I used locally
SSH_KNOWN_HOSTS
I took the gitlab.com known host from my local computer's ~/.ssh/known_hosts file
gitlab.com,35.231.145.151 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
I then setup the SSH inside .gitlab-ci.yml:
script:
- apt-get install openssh-client -y
- eval $(ssh-agent -s)
- echo "$SSH_DEPLOY_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
- mkdir -p /.ssh && touch /.ssh/known_hosts
- echo "$SSH_KNOWN_HOSTS" >> /.ssh/known_hosts
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
This seemed to work fine and I got the following message: Identity added: (stdin) (runner@....)
I then added the same git clone command to communicate with the server, and it failed with the following error:
Cloning into '......'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Testing locally still works. I used the same commands above to setup the SSH locally (except I used pacman -S openssh to install instead).
How do I fix this?
Edit
I'm aware that I can execute ssh-keyscan directly in the GitLab CI and this should in theory solve the problem, but from what I know, this is susceptible to man-in-the-middle attacks. I'm trying to find a more secure solution.
Edit 2
After running ssh-keyscan directly in the GitLab CI, I get the same error message.
Verbose output is the same:
$ GIT_SSH_COMMAND="ssh -vvv" git clone [email protected]:..../.....git deployed
Cloning into 'deployed'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Edit 3
Seems to be connected to the internet. Plus apt-get install wouldn't work otherwise.
Edit 4
I don't understand why this is such a difficult task. I followed this article and am doing everything correctly. There seems to be plenty of other similar questions, which also do not have any answers. Is this just an issue with GitLab CI that we have no control over?
I'm also now thinking that it has something to do with the fact that the SSH server is another GitLab repo. Maybe GitLab CI blocks SSH connections within the same network. Not sure why but it's a possibility. Also don't know how you'd connect without SSH.
Edit 5
The verbose output clearly was not working using GIT_SSH_COMMAND, so I tried an ssh connection without git:
ssh -vvvv [email protected]
Log output:
OpenSSH_6.7p1 Debian-5+deb8u5, OpenSSL 1.0.1t 3 May 2016
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
Pseudo-terminal will not be allocated because stdin is not a terminal.
debug2: ssh_connect: needpriv 0
debug1: Connecting to gitlab.com [35.231.145.151] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_rsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.7p1 Debian-5+deb8u5
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.2p2 Ubuntu-4ubuntu2.8
debug1: match: OpenSSH_7.2p2 Ubuntu-4ubuntu2.8 pat OpenSSH* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug3: load_hostkeys: loading entries for host "gitlab.com" from file "/root/.ssh/known_hosts"
debug3: load_hostkeys: loaded 0 keys
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,[email protected],zlib
debug2: kex_parse_kexinit: none,[email protected],zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
debug2: kex_parse_kexinit: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519
debug2: kex_parse_kexinit: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]
debug2: kex_parse_kexinit: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]
debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: kex_parse_kexinit: none,[email protected]
debug2: kex_parse_kexinit: none,[email protected]
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: setup [email protected]
debug1: kex: server->client aes128-ctr [email protected] none
debug2: mac_setup: setup [email protected]
debug1: kex: client->server aes128-ctr [email protected] none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ECDSA f1:d0:fb:46:73:7a:70:92:5a:ab:5d:ef:43:e2:1c:35
debug3: load_hostkeys: loading entries for host "gitlab.com" from file "/root/.ssh/known_hosts"
debug3: load_hostkeys: loaded 0 keys
debug3: load_hostkeys: loading entries for host "35.231.145.151" from file "/root/.ssh/known_hosts"
debug3: load_hostkeys: loaded 0 keys
debug1: read_passphrase: can't open /dev/tty: No such device or address
Host key verification failed.
The second-last line indicates that it's trying to communicate with the terminal using the /dev/tty file. Of course, this script is running in a non-interactive manner, so it fails. Shouldn't it be using my key instead of requesting a passphrase from the terminal?
| You may need to try setting the mode to 644 rather than 700. 644 is what is suggested in the Verifying the SSH host keys documentation, and is also what SSH uses for this file by default. Some parts of SSH are very particular about this - I'm not sure if known_hosts is particular.
The docs also mention you should set the value of SSH_KNOWN_HOSTS variable to the entire output of ssh-keyscan since there are multiple keys.
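You can capture that value once on a trusted machine and paste the entire output into the SSH_KNOWN_HOSTS variable:
ssh-keyscan gitlab.com
Since ssh-keyscan itself can be man-in-the-middled, compare the printed keys against GitLab's published host key fingerprints before trusting them.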
EDIT:
The following .gitlab-ci.yml worked for me on GitLab.com. Note the use of ~/.ssh/ rather than /.ssh/.
image: ubuntu:latest
test_job:
script:
- apt-get update
- apt-get install openssh-client git-core -y
- eval $(ssh-agent -s)
- echo "$SSH_DEPLOY_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
- mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
- echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
- git clone [email protected]:gitlab-org/gitlab-ce.git
| GitLab | 57,290,734 | 16 |
We have a continuously growing collection of Gitlab CI variables (around 40-50) in our current project. All these variables are used during our CI/CD pipeline and are crucial for our production environment.
I want to generate backups in regular intervals in case someone messes with these variables.
Unfortunately, I do not see any options to export the variables in Project -> Settings -> CI / CD -> Environment variables. All I can do is viewing / editing / deleting the variables.
Is there maybe a hidden export function for these variables? We are self-hosting our Gitlab instance (GitLab Community Edition 11.8.1).
| You can use the project-level variables API to query variables. For example, to show a single variable's details:
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/1/variables/TEST_VARIABLE_1"
See: https://docs.gitlab.com/ce/api/project_level_variables.html#show-variable-details
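To back up every variable of a project in one call, use the list endpoint (project ID 1 is a placeholder) and redirect the JSON output to a file:
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/1/variables" > variables-backup.json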
| GitLab | 56,780,817 | 16 |
This morning when I logged into GitLab I noticed this "Next" flag on the top bar:
It appears whether I am logged in or not. What does "Next" indicate?
A Google search turns up nothing. It doesn't appear in any GitLab screen shots I am able to find either.
| This is the canary version of Gitlab you are being served, you can find out more here: https://about.gitlab.com/handbook/engineering/#canary-testing
You can swap the version of Gitlab you see here: https://next.gitlab.com/?nav_source=navbar
The reasoning behind providing the Canary versions to users either at random or by those who have opt-ed in:
GitLab makes use of a 'Canary' stage. Production Canary is a series of
servers running GitLab code in production environment. The Canary
stage contains code functional elements like web, container registry
and git servers while sharing data elements such as sidekiq, database,
and file storage with production. This allows UX code and most
application logic code to be consumed by smaller subset of users under
real world scenarios before being made available to all users on
GitLab.com.
| GitLab | 56,024,793 | 16 |
I would like to setup continuous deployment from a GitLab repository to an Azure App using a PowerShell script. I'm aware that you can do this manually as per:
https://christianliebel.com/2016/05/auto-deploying-to-azure-app-services-from-gitlab/
However, I'm trying to automate this with Powershell. I've looked at this sample script for GitHub:
https://learn.microsoft.com/en-us/azure/app-service/scripts/app-service-powershell-continuous-deployment-github
But as there is no provider for GitLab, and none of the existing providers accept a GitLab URL, I'm unsure of how to proceed. I've looked at setting up a manual deployment with GitLab in the Azure Portal (using the External Repository option) and exporting the resource group template to get details of how the repository is connected to the App, but I get the error:
Could not get resources of the type 'Microsoft.Web/sites/sourcecontrols'.
Resources of this type will not be exported. (Code: ExportTemplateProviderError, Target: Microsoft.Web/sites/sourcecontrols)
At the minute, I'm working around this by mirroring my GitLab repository in GitHub, and using the continuous deployment pipeline from there to Azure. Note, this is for a repository hosted in GitLab.com, not in a self-hosted GitLab server. There is no Windows Runner setup for the project.
How can I use a PowerShell script to setup a Continuous Deployment directly from GitLab to Azure? Once the setup script is run, each subsequent commit/merge to the GitLab repository should then automatically be deployed to Azure. Preferably, this PowerShell script should use the AzureRM modules, but I'm willing to accept a solution that uses PowerShell Core and the new Az module (based on the Azure CLI). The specific test repository I'm using is public (https://gitlab.com/MagicAndi/geekscode.net), but it isn't a specific requirement for the solution to work with private repositories (but if it does, even better!).
Update 17/12/2018
I've awarded the bounty to the-fish as his answer best met my specific needs. However, given that Windows Powershell and the Azure RM module are being deprecated in favour of PowerShell Core and the new Az module (using the Azure CLI), I've created a new question asking specificially for a canonical answer using the Azure CLI and Powershell Core. I plan on offering a bounty for this question when it is open to me in 2 days. Thanks.
| It sounds like you are looking for a direct deploy from GitLab to Azure Apps, however I'd suggest using a deployment pipeline tool to give you far more options.
Azure DevOps Services Pipelines would likely be a safe option and has a free tier and here's a very brief getting started guide for Web Apps deploys.
However it doesn't have built in support for GitLab, but there appears to be a marketplace tool for integrations with GitLab.
This doesn't appear to have the capability of release triggers, but could be triggered manually. Someone else has the question about release triggers in the marketplace Q&A so perhaps it will be in the roadmap.
| GitLab | 52,664,359 | 16 |
I recently got into CI/CD, and a good starting point for me was GitLab, since they provide an easy interface for that and i got started about what pipelines and stages are, but i have run into some kind of contradictory thought about GitLab CI running on Docker.
My app runs on Docker Compose. It contains (blah blah) that makes it easy to build and run containers. Each service in the Docker Compose file creates a single Docker container, except the php-fpm one, which supports horizontal scaling, so I can scale it later.
I will use this Docker Compose setup for production; I am currently using it in development and I want to use it in the CI/CD pipelines too.
However the .gitlab-ci.yml provides support for only one image, so I have to build it and push it to either their GitLab Registry or Docker Hub in order to pull it later in the CI/CD process.
How can I build my Docker Compose's service as a single image in order to push it to the Registry/Docker so I can pull it in the CI/CD?
My project contains a docker folder and a docker-compose.yml. In the docker folder, each service has its own separate directory (php-fpm, nginx, mysql, etc.) and each one (prepare yourself) contains a Dockerfile with build details, especially the php-fpm one (deps and libs are strong with this one)
Each service in the docker-compose.yml has a build context in each of their own folder.
If I was unclear, I can provide additonal info.
|
However the .gitlab-ci.yml provides support for only one image
This is not true. From the official documentation:
Your image will be named after the following scheme:
<registry URL>/<namespace>/<project>/<image>
GitLab supports up to three levels of image repository names.
Following examples of image tags are valid:
registry.example.com/group/project:some-tag
registry.example.com/group/project/image:latest
registry.example.com/group/project/my/image:rc1
So the solution to your problem is simple - just build individual images and push them to GitLab container registry under different image name.
If you would like an example, my pipelines are set up like this:
.template: &build_template
image: docker:stable
services:
- docker:dind
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- docker pull $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest || true
- if [ -z "${CI_COMMIT_TAG+x}" ];
then docker build
--cache-from $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest
--file $DOCKERFILE_NAME
--tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA
--tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest . ;
else docker build
--cache-from $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest
--file $DOCKERFILE_NAME
--tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA
--tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_TAG
--tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest . ;
fi
- docker push $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA
- if [ -n "${CI_COMMIT_TAG+x}" ]; then
docker push $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_TAG;
fi
- docker push $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest
build:image1:
<<: *build_template
variables:
IMAGE_NAME: image1
DOCKERFILE_NAME: Dockerfile.1
build:image2:
<<: *build_template
variables:
IMAGE_NAME: image2
DOCKERFILE_NAME: Dockerfile.2
And you should be able to pull the same image using $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA in later pipeline jobs or your compose file (provided that the variables are passed to where you run your compose file).
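For example, a later job could pull one of the images like this, assuming it logs in to $CI_REGISTRY first as in the before_script above (image1 matches the build job above):
docker pull $CI_REGISTRY_IMAGE/image1:$CI_COMMIT_SHA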
| GitLab | 50,637,641 | 16 |
I have a GitLab project that utilises GitLab CI.
The project also uses submodules, both the project and it's submodules are under the same GitLab account.
Here is my .gitmodules file
[submodule "proto_contracts"]
path = proto_contracts
url = https://gitlab.com/areller/proto_contracts.git
I also have this piece in the .gitlab-ci.yml file
variables:
GIT_SUBMODULE_STRATEGY: recursive
However, when i run the CI I get this error
fatal: could not read Username for 'https://gitlab.com': No such device or address
Both the project and the submodules are in a private repository so you would expect to be prompted for authentication, but as I've mentioned, the project and the submodule are under the same account and one of the runner's jobs is to clone the original repository
So it's odd that it's unable to reach the submodule
Is there a way around it?
| You must use relative URLs for submodules. Update your .gitmodules as follows:
[submodule "proto_contracts"]
path = proto_contracts
url = ../../areller/proto_contracts.git
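If you have existing local clones, they may still have the old absolute URL cached in .git/config; re-syncing fixes that:
git submodule sync --recursive
git submodule update --init --recursive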
Further reading: Using Git submodules with GitLab CI | GitLab Docs
| GitLab | 48,845,464 | 16 |
I am using a GitLab pipeline to run some tests and produce a coverage report.
What I would like to do is be able to publish the produced coverage folder (that includes an html page and an src folder) to some internal GitLab static page, viewable by some team members.
I am aware of the gitlab pages concept, but the steps indicate that I have to use a static site generator for this purpose.
My questions are the following:
is the concept usable only when publishing on the official GitLab website (gitlab.io) or can I make use of my on-prem GitLab installation (i.e. so that my pages are available at my.local.gitlab.server/mynamespace/thepagesproject)?
can I just upload an index.html file with the folder of its contents and make it accessible?
what is the optimal way of making use of an EXISTING project, so as to just add some HTML pages to it (ideally I would like to avoid creating a new project just for this purpose)
| Can I use GitLab Pages on self-hosted instance?
Yes, GitLab Pages works on self-hosted instances. You may need to register a wildcard domain name for *.pages.<your-gitlab-domain-name>, and generate SSL certs if you are running gitlab over https only.
Once you have a domain, edit /etc/gitlab/gitlab.rb and add the extra settings, and run a gitlab-ctl reconfigure (leave out the pages_nginx settings if you are running over http only):
gitlab_pages['enable'] = true
pages_external_url "https://pages.<your-gitlab-domain-name>"
pages_nginx['redirect_http_to_https'] = true
pages_nginx['ssl_certificate'] = "/etc/gitlab/ssl/pages.<your-gitlab-domain-name>.crt"
pages_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/pages.<your-gitlab-domain-name>.key"
Once that is done you will be able to access per-project pages via <group>.pages.<your-gitlab-domain-name>/<project>
Can I upload whatever I want to pages?
Yes. Each GitLab CI job can create content to publish in GitLab pages by writing it into a public folder, and registering public as an artifact directory. A final pages job should be added to the CI pipeline which causes the pages content to be published (overwriting anything that was there before). All the content of the public directory will be available via the <group>.pages.<your-gitlab-domain-name>/<project> URL, which means you have full control over content.
Note that the pages job in CI doesn't need to do any real work in its script (GitLab requires a script, but it can be trivial); it just needs to be present with the job name "pages". This is a magic job name that triggers the pages publish. You may want to add job restrictions so that this only runs on master branch pipelines.
Can I add pages publish to an existing project?
Yes. Any steps that create content you want to publish should write the content to a public subdirectory, and register the public directory as an artifact directory.
my job:
stage: build
script:
- echo "Do some things and write them to public directory" > public/index.html
artifacts:
paths:
- public
expire_in: 2 weeks
Note: I like to add expire_in: 2 weeks to limit the length of time artifacts are kept around. Once the pages have been published the artifacts aren't really needed.
Finally you need to add a pages job to trigger the pages publish:
# This job does nothing but collect artifacts from other jobs and triggers the pages build
# The artifacts are picked up by the pages:deploy job.
pages:
stage: deploy
script:
- ls -l public
artifacts:
paths:
- public
only:
- master
Usually you will only want to publish on master branch, but you have freedom to choose when you want the pages publish to run. It is important to note that when the pages publish runs it will fully replace any content that was previously published, so you can't append to existing content (although there are some hacks that allow you to achieve something similar).
| GitLab | 48,743,076 | 16 |
I have a GitLab repository with documentation in the attached wiki (i.e. NOT in the repo itself) and an image file inside the repository itself that I want to embed in wiki pages.
How can this be done?
From a wiki page, I can successfully link to the image using
[[../tree/master/pathto/myimage.jpg]] or [[../raw/master/pathto/myimage.jpg]] but

doesn't seem to work.
(GitLab Community Edition 10.0.3)
| Embedding using the absolute path to the repo and image worked:

| GitLab | 47,830,691 | 16 |
I want to know, if it's possible to set custom Gitlab CI variable from if-else condition statement.
In my .gitlab-ci.yml file I have the following:
variables:
PROJECT_VERSION: (if [ "${CI_COMMIT_TAG}" == "" ]; then "${CI_COMMIT_REF_NAME}-${CI_PIPELINE_ID}"; else ${CI_COMMIT_TAG}; fi);
Trying to set project version:
image: php:7.1-cli
stage: test
script:
# this echoes correct string (eg. "master-2794")
- (if [ "${CI_COMMIT_TAG}" == "" ]; then echo "${CI_COMMIT_REF_NAME}-${CI_PIPELINE_ID}"; else echo ${CI_COMMIT_TAG}; fi);
# this echoes something like "(if [ "" == "" ]; then "master-2794"; else ; fi);"
- echo $PROJECT_VERSION
Can this be done? If so, what have I missed? Thanks
| This is expected behavior.
CI_COMMIT_TAG is only set to a value in a GitLab job. From https://docs.gitlab.com/ee/ci/variables/README.html
CI_COMMIT_TAG - The commit tag name. Present only when building tags.
Therefore in the variables section CI_COMMIT_TAG is not defined, hence equals to "".
So if you want to use CI_COMMIT_TAG, use it in a job, where tags are defined. See https://docs.gitlab.com/ee/ci/yaml/README.html#tags
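As a sketch, the version from the question can be computed inside a job's script, where the predefined variables are populated (variable names taken from the question):
- if [ -z "$CI_COMMIT_TAG" ]; then PROJECT_VERSION="${CI_COMMIT_REF_NAME}-${CI_PIPELINE_ID}"; else PROJECT_VERSION="$CI_COMMIT_TAG"; fi
- echo "$PROJECT_VERSION"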
| GitLab | 47,204,720 | 16 |
So I'm trying to set up a gitlab-ce instance on docker swarm using traefik as reverse proxy.
This is my proxy stack;
version: '3'
services:
traefik:
image: traefik:alpine
command: --entryPoints="Name:http Address::80 Redirect.EntryPoint:https" --entryPoints="Name:https Address::443 TLS" --defaultentrypoints="http,https" --acme --acme.acmelogging="true" --acme.email="[email protected]" --acme.entrypoint="https" --acme.storage="acme.json" --acme.onhostrule="true" --docker --docker.swarmmode --docker.domain="mydomain.com" --docker.watch --web
ports:
- 80:80
- 443:443
- 8080:8080
networks:
- traefik-net
volumes:
- /var/run/docker.sock:/var/run/docker.sock
deploy:
placement:
constraints:
- node.role == manager
networks:
traefik-net:
external: true
And my gitlab stack
version: '3'
services:
omnibus:
image: 'gitlab/gitlab-ce:latest'
hostname: 'lab.mydomain.com'
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'https://lab.mydomain.com'
nginx['listen_port'] = 80
nginx['listen_https'] = false
registry_external_url 'https://registry.mydomain.com'
registry_nginx['listen_port'] = 80
registry_nginx['listen_https'] = false
gitlab_rails['gitlab_shell_ssh_port'] = 2222
gitlab_rails['gitlab_email_from'] = '[email protected]'
gitlab_rails['gitlab_email_reply_to'] = '[email protected]'
ports:
- 2222:22
volumes:
- gitlab_config:/etc/gitlab
- gitlab_logs:/var/log/gitlab
- gitlab_data:/var/opt/gitlab
networks:
- traefik-net
deploy:
labels:
traefik.enable: "port"
traefik.frontend.rule: 'Host: lab.mydomain.com, Host: registry.mydomain.com'
traefik.port: 80
placement:
constraints:
- node.role == manager
runner:
image: 'gitlab/gitlab-runner:v1.11.4'
volumes:
- gitlab_runner_config:/etc/gitlab-runner
- /var/run/docker.sock:/var/run/docker.sock
volumes:
gitlab_config:
gitlab_logs:
gitlab_data:
gitlab_runner_config:
networks:
traefik-net:
external: true
traefik-net is an overlay network
So when I deploy using docker stack deploy and visit lab.mydomain.com, i get the Gateway Timeout error. When I execute curl localhost within the gitlab container, it seems to work fine. Not sure what the problem is, any pointers would be appreciated
| Turns out all I had to do was set the traefik label traefik.docker.network to traefik-net; see https://github.com/containous/traefik/issues/1254
| GitLab | 46,698,425 | 16 |
I use gitlab-ci to test, compile and deploy a small golang application but the problem is that the stages take longer than necessary because they have to fetch all of the dependencies every time.
How can I keep the golang dependencies between two stages (test and build)?
This is part of my current gitlab-ci config:
test:
stage: test
script:
# get dependencies
- go get github.com/foobar/...
- go get github.com/foobar2/...
# ...
- go tool vet -composites=false -shadow=true *.go
- go test -race $(go list ./... | grep -v /vendor/)
compile:
stage: build
script:
# getting the same dependencies again
- go get github.com/foobar/...
- go get github.com/foobar2/...
# ...
- go build -race -ldflags "-extldflags '-static'" -o foobar
artifacts:
paths:
- foobar
| As mentioned by Yan Foto, you can only use paths that are within the project workspace. But you can move the $GOPATH to be inside your project, as suggested by extrawurst blog.
test:
image: golang:1.11
cache:
paths:
- .cache
script:
- mkdir -p .cache
- export GOPATH="$CI_PROJECT_DIR/.cache"
- make test
| GitLab | 45,537,023 | 16 |
Is there a way to configure multiple specifically-named environments (specifically, test, stage, and prod)?
In their documentation (https://docs.gitlab.com/ce/ci/environments.html) they talk about dynamically-created environments, but they are all commit based.
My build steps are the same for all of them, save for swapping out the slug:
deploy_to_test:
environment:
name: test
url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
scripts:
- deploy ${CI_ENVIRONMENT_SLUG}
deploy_to_stage:
environment:
name: stage
url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
scripts:
- deploy ${CI_ENVIRONMENT_SLUG}
deploy_to_prod:
environment:
name: prod
url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
scripts:
- deploy ${CI_ENVIRONMENT_SLUG}
Is there any way to compress this down into one set of instructions? Something like:
deploy:
environment:
url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
scripts:
- deploy ${CI_ENVIRONMENT_SLUG}
| Yes, you can use anchors. If I follow the documentation properly, you would rewrite it using a hidden key .XX and then apply it with <<: *X.
For example, this defines the key:
.job_template: &deploy_definition
environment:
url: ${CI_ENVIRONMENT_SLUG}.mydomain.com
script:
- deploy ${CI_ENVIRONMENT_SLUG}
And then all blocks can be written using <<: *job_template. Note that YAML merging is shallow, so redefining environment in a job replaces the whole anchored environment map rather than merging name into it; in practice you may need to repeat url in each job.
deploy_to_test:
<<: *deploy_definition
environment:
name: test
deploy_to_stage:
<<: *deploy_definition
environment:
name: stage
deploy_to_prod:
<<: *deploy_definition
environment:
name: prod
Full docs section from the link above:
YAML has a handy feature called 'anchors', which lets you easily duplicate content across your document. Anchors can be used to duplicate/inherit properties, and is a perfect example to be used with hidden keys to provide templates for your jobs.
The following example uses anchors and map merging. It will create two jobs, test1 and test2, that will inherit the parameters of .job_template, each having their own custom script defined:
.job_template: &job_definition # Hidden key that defines an anchor named 'job_definition'
image: ruby:2.1
services:
- postgres
- redis
test1:
<<: *job_definition # Merge the contents of the 'job_definition' alias
script:
- test1 project
test2:
<<: *job_definition # Merge the contents of the 'job_definition' alias
script:
- test2 project
& sets up the name of the anchor (job_definition), << means "merge the given hash into the current one", and * includes the named anchor (job_definition again). The expanded version looks like this:
.job_template:
image: ruby:2.1
services:
- postgres
- redis
test1:
image: ruby:2.1
services:
- postgres
- redis
script:
- test1 project
test2:
image: ruby:2.1
services:
- postgres
- redis
script:
- test2 project
| GitLab | 44,287,955 | 16 |
I'd like to create a Docker based Gitlab CI runner which pulls the docker images for the build from a private Docker Registry (v2). I cannot make the Gitlab Runner to pull the image from a local Registry, it tries to GET something from a /v1 API. I get the following error message:
ERROR: Build failed: Error while pulling image: Get http://registry:5000/v1/repositories/maven/images: dial tcp: lookup registry on 127.0.1.1:53: no such host
Here's a minimal example, using docker-compose and a web browser.
I have the following docker-compose.yml file:
version: "2"
services:
gitlab:
image: gitlab/gitlab-ce
ports:
- "22:22"
- "8080:80"
links:
- registry:registry
gitlab_runner:
image: gitlab/gitlab-runner
volumes:
- /var/run/docker.sock:/var/run/docker.sock
links:
- registry:registry
- gitlab:gitlab
registry:
image: registry:2
After the first Gitlab login, I register the runner into the Gitlab instance:
root@130d08732613:/# gitlab-runner register
Running in system-mode.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/ci):
http://192.168.61.237:8080/ci
Please enter the gitlab-ci token for this runner:
tE_1RKnwkfj2HfHCcrZW
Please enter the gitlab-ci description for this runner:
[130d08732613]: docker
Please enter the gitlab-ci tags for this runner (comma separated):
Registering runner... succeeded runner=tE_1RKnw
Please enter the executor: docker-ssh+machine, docker, docker-ssh, parallels, shell, ssh, virtualbox, docker+machine:
docker
Please enter the default Docker image (eg. ruby:2.1):
maven:latest
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
After this, I see the Gitlab runner in my Gitlab instance:
After this I push a simple maven image to my newly created Docker repository:
vilmosnagy@vnagy-dell:~/$ docker tag maven:3-jdk-7 172.19.0.2:5000/maven:3-jdk7
vilmosnagy@vnagy-dell:~/$ docker push 172.19.0.2:5000/maven:3-jdk7
The push refers to a repository [172.19.0.2:5000/maven]
79ab7e0adb89: Pushed
f831784a6a81: Pushed
b5fc1e09eaa7: Pushed
446c0d4b63e5: Pushed
338cb8e0e9ed: Pushed
d1c800db26c7: Pushed
42755cf4ee95: Pushed
3-jdk7: digest: sha256:135e7324ccfc7a360c7641ae20719b068f257647231d037960ae5c4ead0c3771 size: 1794
(I got the 172.19.0.2 IP-address from a docker inspect command's output)
After this I create a test project in the Gitlab and add a simple .gitlab-ci.yml file:
image: registry:5000/maven:3-jdk-7
stages:
- build
- test
- analyze
maven_build:
stage: build
script:
- "mvn -version"
And after the build, GitLab gives the error seen at the beginning of the post.
If I enter into the running gitlab-runner container, I can access the registry under the given URL:
vilmosnagy@vnagy-dell:~/$ docker exec -it comptest_gitlab_runner_1 bash
root@c0c5cebcc06f:/# curl http://registry:5000/v2/maven/tags/list
{"name":"maven","tags":["3-jdk7"]}
root@c0c5cebcc06f:/# exit
exit
vilmosnagy@vnagy-dell:~/$
But the error still the same:
Do you have any idea how to force the gitlab-runner to use the v2 api of the private registry?
| Current Gitlab and Gitlab Runners support this, see: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#use-a-private-container-registry
On older GitLab versions I've solved this by copying an auth key into ~/.docker/config.json:
{
"auths": {
"my.docker.registry.url": {
"auth": "dmlsbW9zLm5hZ3k6VGZWNTM2WmhC"
}
}
}
I've logged into this container from my computer and copied this auth key into the Gitlab Runner's docker container.
| GitLab | 38,511,864 | 16 |
I have one repository. In this repository, multiple folders are available.
I need only one folder from this repository.
I have already tried the following command, but it's not working:
git clone
| If only the content of that folder is of interest (not its history), you can, since GitLab 11.11 (May 2019), download only a folder.
Download archives of directories within a repository
Depending on the type of project and its size, downloading an archive of the entire project may be slow or unhelpful – particularly in the case of large monorepos.
In GitLab 11.11, you can now download an archive of the contents of the current directory, including subdirectories, so that you download only the files you need.
From issue 24704: see documentation.
With GitLab 14.4 (Oct. 2021), you have:
issue 28827 "Download a (sub-)folder from a repository via the Repositories API",
resolved with MR 71431 and commit 1b4e0a1:
curl --header "PRIVATE-TOKEN: <your_access_token>" \
"https://gitlab.com/api/v4/projects/<project_id>/repository/archive?sha=<commit_sha>&path=<path>"
It was not somehow mentioned in the "GitLab 1.44 released" page though.
| GitLab | 38,047,757 | 16 |
It's easy enough to create them, but I can't find out how to clone them and edit offline.
Is it possible?
| May 2020, for GitLab 13.0: yes!
Versioned Snippets
Snippets are useful for sharing small bits of code and text that may not belong in the main project’s codebase.
These items are important to groups and users who rely on them for other tasks, like scripts to help generate diagnostic output or setup supporting services for testing and demo environments.
Unfortunately, a lack of version control has made it hard to know if a snippet was the latest version or what changes may have happened and how to reconcile those.
Snippets in GitLab are now version controlled by a Git repository.
When editing a Snippet, each change creates a commit. Snippets can also be cloned to make edits locally, and then pushed back to the Snippet repository.
This is the first step in enabling more collaboration on Snippets.
In future releases we’ll introduce support for multiple files, continue to expand features and expand permissions.
See documentation and issue.
And with GitLab 13.5 (October 2020):
Snippets with multiple files
Engineers often use Snippets to share examples of code, reusable components, logs, and other items. These valuable pieces of information often require additional context and may require more than one file. Sharing a link to multiple files or multiple Snippets makes it challenging for users to piece this context together and understand the scope of what is being presented.
In GitLab 13.0, we laid a foundation for Snippets by giving them version control support based on a Git repository. Version control and the history it provides are an important piece of context when looking at code and understanding its purpose, but it may not be everything.
GitLab now supports multiple files inside of a single Snippet, so you can create Snippets composed of multiple parts. It broadens its use to endless possibilities. For example:
A snippet that includes a script and its output.
A snippet that includes HTML, CSS, and JS code, from which the result can be easily previewed.
A snippet with a docker-compose.yml file and its associated .env file.
A gulpfile.js file coupled with a package.json file, which together are used to bootstrap the project and manage its dependencies.
Providing all of these files in a single Snippet gives more options for the types of content that can be shared and the context that is provided when looking at them. We’re excited to see the types of content you will create and share using Snippets with multiple files!
See Documentation and Issue.
2015: original answer: Not directly.
Gitlab already have snippets section under each project.
Like: http://gitlabhq.com/project-name/snippets/
But it is not available for cloning.
There was a request for a GitHub Gist-like feature for GitLab (based on Gistie), also asked in GitLab suggestions.
But that was not implemented at the time.
Update 2019, as commented by eli, and documented in "Downloading snippets" (GitLab 10.8+)
For now it's just possible to download snippets, e.g.
https://gitlab.com/snippets/SNIPPET_ID/raw?line_ending=raw
| GitLab | 33,906,431 | 16 |