We're using TeamCity, and I've set up jobs to pull from branches. But when those branches are deleted, they still appear in TeamCity (a list of outdated branches, though only refs/master is actually active). The TeamCity documentation actually specifies what constitutes an active branch: "Active branches: In a build configuration with configured branches, the Overview page shows active branches. A number of parameters define whether a branch is active. The parameters can be changed either in a build configuration (this will affect one build configuration only), in a project, or in the internal properties (this defines defaults for the entire server). A parameter in the configuration overrides a parameter in the internal properties. A branch is considered active if: it is present in the VCS repository and has recent commits (i.e. commits younger than the teamcity.activeVcsBranch.age.days parameter, 7 days by default), or it has recent builds (i.e. builds younger than the teamcity.activeBuildBranch.age.hours parameter, 24 hours by default). A closed VCS branch with builds will still be displayed as active for 24 hours after the last build. To remove closed branches from display, set teamcity.activeBuildBranch.age.hours=0." But... I don't understand their description! :) What do they mean by "parameters in the configuration"? I've tried making parameters in my jobs (adding the parameter teamcity.activeBuildBranch.age.hours), but that doesn't do anything. Maybe I'm exposing myself as a total TC noob, but can anyone guide me through how to correctly alter these settings so I only show repository-active branches in my build jobs?
I suddenly had success after adding the parameters to the project configuration. Until now I'd been adding parameters to individual builds and never saw a difference; maybe that's just me misunderstanding the obvious. The parameters: teamcity.activeBuildBranch.age.hours = 0 and teamcity.activeVcsBranch.age.days = 1. This works insofar as the list of active branches is culled. There's still one deleted branch it considers active for reasons I can't yet decipher (history was rewritten several times in it), but at least all the others are now inactive.
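If you'd rather script this than click through the UI, the TeamCity REST API can also set project-level parameters. A minimal PowerShell sketch, assuming the standard parameters endpoint; the server URL, project id and credentials are placeholders:

# Sketch: set the branch-activity parameters at the project level via the
# TeamCity REST API (server URL, project id and credentials are placeholders).
$cred = Get-Credential
$base = "http://teamcity.example.com/httpAuth/app/rest/projects/MyProject/parameters"
Invoke-RestMethod -Uri "$base/teamcity.activeBuildBranch.age.hours" -Method Put -Body "0" -ContentType "text/plain" -Credential $cred
Invoke-RestMethod -Uri "$base/teamcity.activeVcsBranch.age.days" -Method Put -Body "1" -ContentType "text/plain" -Credential $cred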
TeamCity
29,252,305
33
I have configured TeamCity to execute NUnit tests. When I run it manually, it works fine. But somehow it accumulates pending changes and doesn't run the tests, even when I refresh the overview page of TeamCity. I am wondering which setting I have to use so pending changes will run? Basically, I would like the first pending change to start executing as soon as it appears.
It sounds like you are missing a build trigger. When you edit the project settings, you should see the Build Triggers step (#5). That's where you add the event that tells TeamCity it should kick off a build; it is generally tied to your source control check-ins/commits. You probably want to use the VCS Trigger to kick off the build.
TeamCity
20,519,980
33
How do we put a timeout on a TeamCity build? We have a TeamCity build which runs some integration tests. These tests read/write data to a database and sometimes this is very slow (why it is slow is another open question). We currently have timeouts in our integration tests to check that e.g. the data has been written within 30 seconds, but these tests randomly fail during periods of heavy use. If we removed the timeouts from the tests, we would want to fail the build only if the entire run took more than some much larger timeout. But I can't see how to do that.
On the first page of the build setup you will find the field highlighted in my screenshot - use that.
TeamCity
8,339,668
33
What's the best way to move a single TeamCity build configuration from one server to another? I have a local instance of TeamCity that I test builds on. Then when the build is sufficiently mature, I manually create it (eyeball-copy) on our main TeamCity server. Is there an Export & Import feature that will do this for me?
Unfortunately there is no such thing. TeamCity 8 made the situation a little better, though, by introducing a build configuration ID format (project name + build config name, which can be overridden) that makes it feasible to "hand copy" build configurations. Basically, under the hood all your TeamCity build configurations are really just XML files in the BuildServer\config\projects\ folder and its sub-folders. While I haven't tried this, you should be able to just copy your project folder or build config XML to the appropriate destination on your new TeamCity instance if the ids don't collide. At the very least you can definitely overwrite existing projects with updates this way (something I have done in the past to dynamically change build configs "on the fly"). Of course, if your build config depends on other builds / artifacts, those ids have to match as well, so you either have to copy those too or adjust the ids accordingly. The same goes for agent requirements. Edit: With TeamCity 9 out now, there's a much better option built in for moving projects between TeamCity servers: "Now TeamCity provides the ability to move projects among servers: you can transfer projects with all their data (settings, builds and changes history, etc.) and with your TeamCity user accounts from one server to another. All you need to do is create a usual backup file on the source TeamCity server containing the projects to be imported, put the backup file into the /import directory on the target server and follow the import steps on the Administration | Projects Import page." For a full summary see what's new in TeamCity 9.
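As a rough illustration of the "hand copy" approach described above - a hedged sketch only, with placeholder paths, assuming the target server is stopped and the project id doesn't collide:

# Sketch: copy one project's configuration folder between TeamCity data
# directories (paths are placeholders; stop the target server first).
$src = "\\old-server\TeamCity\.BuildServer\config\projects\MyProject"
$dst = "\\new-server\TeamCity\.BuildServer\config\projects\MyProject"
Copy-Item -Path $src -Destination $dst -Recurse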
TeamCity
23,224,078
32
Using TeamCity to check out from a Git repo (GitLab, if it matters). Starting with an empty build directory, I get this error: fatal: could not set 'core.filemode' to 'false' (running on a Windows machine, if that matters). The user that TeamCity runs as was changed to an Admin just in case. The .git directory is not a valid repo when this command exits, and wiping the entire 'work' directory doesn't help. It randomly comes and goes. Also, running git config --global --replace-all core.fileMode false does nothing useful - with or without the --replace-all, and run as admin or as another user. (If you change 'false' to 'true' you get the same error; if you change it to 'falseCD' the error changes to say that's an invalid value - so clearly, it is changing it.) Anyone got any ideas?
In my case using "sudo" worked for me. For example:

asif@asif-vm:/mnt/prog/protobuf_tut$ git clone https://github.com/protocolbuffers/protobuf.git
Cloning into 'protobuf'...
error: chmod on /mnt/prog/protobuf_tut/protobuf/.git/config.lock failed: Operation not permitted
fatal: could not set 'core.filemode' to 'false'

After doing a "sudo" I could get it working:

asif@asif-vm:/mnt/prog/protobuf_tut$ sudo git clone https://github.com/protocolbuffers/protobuf.git
Cloning into 'protobuf'...
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 66782 (delta 0), reused 0 (delta 0), pack-reused 66777
Receiving objects: 100% (66782/66782), 55.83 MiB | 2.04 MiB/s, done.
Resolving deltas: 100% (45472/45472), done.
Checking out files: 100% (2221/2221), done.
TeamCity
50,108,363
31
I have a GitHub status check generated by TeamCity, and I'm trying to delete it (not just disable it). I've tried (line breaks added for readability):

curl -u <myusername>:<mytoken> -X DELETE https://:github_instance/api/v3/repos/:user/:repo/statuses/:hash

I got the url from:

curl -u <myusername>:<mytoken> https://:github_instance/api/v3/repos/:user/:repo/statuses/:branch_name

Am I missing something?
Like @VonC, I couldn't find a deletion option. However, you can disable any existing checks so that they no longer run on your PRs: Settings > Branches > Branch protection rules > Edit (next to your desired branch, e.g. 'master') > Rule settings > Require status checks to pass before merging > Require branches to be up to date before merging, then uncheck any statuses you want to disable.
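To script the same change instead of clicking through the UI, the GitHub REST API exposes required status checks on protected branches. A hedged PowerShell sketch - the endpoint shape (.../protection/required_status_checks/contexts) is an assumption based on the current v3 API and may differ on older GitHub Enterprise versions; the token, owner, repo and context name are placeholders:

# Sketch: list, then remove, a required status-check context on a protected
# branch (endpoint shape assumed; token/owner/repo/context are placeholders).
$headers = @{ Authorization = "token YOUR_TOKEN" }
$url = "https://api.github.com/repos/OWNER/REPO/branches/master/protection/required_status_checks/contexts"
Invoke-RestMethod -Uri $url -Headers $headers -Method Get      # list current contexts
Invoke-RestMethod -Uri $url -Headers $headers -Method Delete -Body '["TeamCity Build"]' -ContentType "application/json"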
TeamCity
48,106,989
31
I currently use the MSBuild runner in TeamCity for continuous integration on my local server, and this works very well. However, I'm having trouble finding a full list of supported command line switches for MSDeploy in the format that TeamCity expects them. In my 'Parameters' section at the moment I'm using the following switches:

/P:Configuration=OnCommit
/P:DeployOnBuild=True
/P:DeployTarget=MSDeployPublish
/P:MsDeployServiceUrl=https://CIServer:8172/MsDeploy.axd
/P:AllowUntrustedCertificate=True
/P:MSDeployPublishMethod=WMSvc
/P:CreatePackageOnPublish=True
/P:UserName=Kaine
/P:Password=**********
/P:DeployIISAppPath="OnCommit/MySite"
/P:SkipExtraFilesOnServer=True
/P:DeployAsIisApp=True

All of these seem to work fine and the MSDeploy works as expected. The trouble comes when I want to add additional parameters. I've looked up MSBuild parameters and the MSDeploy documentation, and I only seem to find command line parameters like these: msbuild SlnFolders.sln /t:NotInSolutionfolder:Rebuild;NewFolder\InSolutionFolder:Clean (http://msdn.microsoft.com/en-us/library/ms164311.aspx). These references for command line arguments don't seem to correspond with the /P: format - for example, CreatePackageOnPublish and DeployIISAppPath aren't recognised command line parameters, but they work fine in the TeamCity build process. Where can I find a fully documented list of MSDeploy arguments in the format /P:Param=Value? Additional info: There's a list of parameters here: http://msdn.microsoft.com/en-us/library/microsoft.teamfoundation.build.workflow.activities.msbuild_properties.aspx. However this is not a complete list - for example, it doesn't include DeployAsIisApp or SkipExtraFilesOnServer, which are both parameters that work from the TeamCity build. Also see this related question (possibly a duplicate): Valid Parameters for MSDeploy via MSBuild, which contains some arguments - but still not a definitive list.
Firstly, the short answer is: you can't find the complete list. MSBuild does not have a complete list of parameters you can choose from, since you can send any parameter you like. Parameters are a means of communication between the caller of MSBuild and the author of the MSBuild build script (a VS sln or csproj file, for instance): if the build script uses the parameter, it is used; otherwise it is ignored. So this is a valid call to msbuild: msbuild /p:<anything>=<anything>. Secondly, you shouldn't send parameters to MSBuild from TeamCity using the /p: command options. Instead, set configuration or system properties in your TeamCity build configuration; they will be passed to MSBuild automatically as parameters.
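To illustrate: as far as I understand, the TeamCity MSBuild runner turns a system property such as system.DeployIISAppPath into /p:DeployIISAppPath=... for you. The equivalent hand-written call would look something like the sketch below; the property names and values are just examples, and MSBuild accepts all of them whether or not the script reads them:

# MSBuild accepts any /p: name; only the build script decides whether it matters.
& msbuild MySite.csproj /p:DeployOnBuild=True /p:DeployIISAppPath="OnCommit/MySite" /p:AnythingYouLike=SomeValue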
TeamCity
23,112,165
31
We recently moved from SVN to git. We work with a main "release" branch (master) and feature branches for every feature a dev is working on. In TeamCity we have a project for every feature branch, and of course a project for master. When we worked with SVN, whenever someone merged from master to his feature branch or vice-versa, the merge was treated by TeamCity as one commit. Now, with git, every merge causes TeamCity to show all of the commits that came with that merge. This causes some problems: for example, when someone merges from master to his feature branch, his TeamCity project now shows "283 pending changes" due to that merge, and if builds fail, the authors of those changes will be notified as if they had made them on the feature branch. Is there a way to tell TeamCity to treat git merges as a single commit? We could solve it using squashed merges, but that's something we would really like to avoid.
I'm pretty sure this is the same issue that we had a few days ago, but vice-versa. We merged a dev branch into master, which caused TC to attempt to build each and every check-in that was part of the merge. Obviously not what we wanted. To fix it, keep the Trigger build on each check-in option unchecked in the Build Trigger. You get the full change history from the source branch, but TeamCity will only build the destination branch using the latest merged code. If that build fails, the merger should be the only one notified.
TeamCity
13,876,417
31
I'm looking at migrating from TFS (Team Foundation Server) to Git, but can't find anything matching TFS' support for gated check-ins (also called pre-tested or delayed commits). Atlassian Bamboo has no support for gated check-ins. TeamCity does support it ("delayed commits" using their terminology), but not for Git. Using Jenkins by itself or Jenkins+Gerrit has huge drawbacks and doesn't come close to the gated check-in functionality in TFS. (Drawbacks explained by the creator of Jenkins himself in this video: http://www.youtube.com/watch?v=LvCVw5gnAo0) Git is very popular (for good reason), so how are people solving this problem? What is currently the best solution?
We have just started using git and have implemented pretested commits using workflows (I finished testing this just today). Basically, each dev has a personal repository to which they have read/write access. The build server, TeamCity in our case, builds using these personal repositories and then, if successful, pushes the changes to the 'green' repository. Devs have no write access to 'green' - only TeamCity build agents can write to it - but devs pull common updates from 'green'. So: dev pulls from 'green', pushes to personal; TeamCity builds from personal, pushes to 'green' (see the sketch below). This blog post shows the basic model we are using, with GitHub forks for the personal repositories (using forks means that the number of repositories doesn't get out of hand and end up costing more, and means that developers can manage the personal builds, as they can fork and then create the TeamCity build jobs to get their code pushed to 'green'). This is more work to set up in TeamCity, as each developer has to have their own build configuration. It actually has to be two configurations, as TeamCity seems to execute all build steps (including the final 'push to green' step) even if previous build steps fail (like the tests :)), which meant we had to have a personal build for the developer, and then another build config, dependent on that one, which would just do the push assuming the build worked.
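In day-to-day terms, the developer-side flow looks roughly like this (the remote names 'green' and 'personal' are placeholders for however you name them):

# Sketch of the daily flow (remote names are placeholders).
git pull green master        # take the latest verified code from 'green'
git push personal master     # publish to the personal repo that TeamCity builds
# TeamCity then builds from 'personal' and, on success, pushes to 'green'.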
TeamCity
12,484,424
31
How do I get the unit test name from within the unit test? I have the below method inside a BaseTestFixture class:

public string GetCallerMethodName()
{
    var stackTrace = new StackTrace();
    StackFrame stackFrame = stackTrace.GetFrame(1);
    MethodBase methodBase = stackFrame.GetMethod();
    return methodBase.Name;
}

My test fixture class inherits from the base one:

[TestFixture]
public class WhenRegisteringUser : BaseTestFixture
{
}

and I have the below system test:

[Test]
public void ShouldRegisterThenVerifyEmailThenSignInSuccessfully_WithValidUsersAndSites()
{
    string testMethodName = this.GetCallerMethodName();
    // ...
}

When I run this from within Visual Studio, it returns my test method name as expected. When it is run by TeamCity, _InvokeMethodFast() is returned instead, which seems to be a method that TeamCity generates at runtime for its own use. So how can I get the test method name at runtime?
If you are using NUnit 2.5.7 / 2.6 you can use the TestContext class:

[Test]
public void ShouldRegisterThenVerifyEmailThenSignInSuccessfully()
{
    string testMethodName = TestContext.CurrentContext.Test.Name;
}
TeamCity
9,666,562
31
We have 3 environments: Development: TeamCity deploys here for Subversion commits on trunk. Staging: user acceptance is done here, on builds that are release candidates. Production: when UAT passes, the passing code set is deployed here. We're using TeamCity and only have continuous integration set up with our development environment. I don't want to save artifacts for every development deployment that TeamCity does. I want an assigned person to be able to fire a build configuration that will deploy a certain successful development deployment to our staging server. Then, I want each staging deployment to save artifacts. When a staging deployment passes UAT, I want to deploy that package to production. I'm not sure how to set this up in TeamCity. I'm using version 6.5.4, and I'm aware there's a "Promote..." action/trigger, but I think it depends on saved artifacts. I don't want to save development deployments each time as artifacts, but I do want the person running the staging deployment to be able to specify which successful development deployment to deploy to staging. I'm aware there may be multiple ways to do this - is there a best practice? What is your setup and why do you recommend it? Update: I have one answer so far, and it's an idea we had considered internally. I'd really like to know if anyone has a somewhat automated way of deploying to a staging/production environment via TeamCity itself, where only people with a certain role/permission can run a deploy script to production, rather than having to manually deal with any kind of artifact package. Anyone? Update 2: I still have 1 day to award the bounty, and I thought the answer below didn't answer my question, but after rereading it I see that my question wasn't what I thought it was. Are there any ways to use TeamCity for some kind of automated deployment to staging/production environments?
I think you're actually asking two different questions here: one about controlling access rights to TeamCity builds, and another about the logistics of artifact management. Regarding permissions, I assume what you mean by "only people with certain role/permission can run a deploy script to production" and your response to Julien is that you probably don't want devs deploying direct to production, but you do want them to be able to see other builds in the project. This is possibly also similar to Julien's scenario where IT then takes the process "offline" from TeamCity (either that or it's just IT doing what IT does and insisting they must use a separate, entirely inefficient process because "that's just the way we do it" - don't get me started on that!). The problem is simply that all permissions in TeamCity are applied to the project and never to the build, so if you've got one project with all your builds, there's no way to apply permissions granularly to dev versus production builds. I've previously dealt with this in two ways: (1) Handle it socially. Everyone knows what their responsibilities are and you don't run what you're not meant to run. If you do, it's audited and traceable back to YOU. This works fine when there's maturity, a clear idea of responsibilities, and no compliance requirement that prohibits it. (2) Create separate projects. I don't like having to do this, but it does fix the problem. You can still use artifacts from another project, and it means you simply end up with one project containing builds that deploy to environments you're happy for all the devs to access, and another project for sensitive environments. The downside is that if the production build fails, the very people you probably want support from won't be able to access it! Regarding artifact management, there's no problem with retaining artifacts in the development build - just define a clean-up policy that only keeps artifacts from the last X builds if you're worried about capacity. A lot of people want certainty that they're deploying the same compiled output to every environment, which means once you build it, you want to keep it around for later use. Once you have these artifacts from your dev deployment, you can re-deploy them to your other environments through separate builds. You'll have an issue with config transforms (assuming you're using them), but have a read of this 2-part series for some ideas on how to address that (I'm yet to absorb it in detail but I believe he's on the right track). Does that answer your question? Is there anything still missing?
TeamCity
7,772,311
31
When using TeamCity to compile my MSBuild XML task script, it fails with this:

[10:43:03]: myWebProject1\myWebProject1.csproj (3s)
[10:43:07]: [myWebProject1\myWebProject1.csproj] _CopyWebApplicationLegacy
[10:43:07]: [_CopyWebApplicationLegacy] Copy
[10:43:07]: [Copy] C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets(131, 5): error MSB3021: Unable to copy file "obj\Release\myWebProject1.dll" to "C:\MSBUILDRELEASE\myWebProject1\\bin\myWebProject1.dll". Could not find file 'obj\Release\myWebProject1.dll'.

When I run it locally, it works. When I compare my local output to my build server output, there are files missing on my build server; for example, the global.asax file is missing from my build server output directory (but not when I compile locally). Why is that? Here is my current MSBuild script:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0" DefaultTargets="Build">
  <PropertyGroup>
    <OutputDir>C:\MSBUILDRELEASE</OutputDir>
  </PropertyGroup>
  <ItemGroup>
    <ProjectToBuild Include="UtilityApp.sln">
      <Properties>OutputPath=$(OutputDir);Configuration=MSBuildRelease;Platform=x86</Properties>
    </ProjectToBuild>
  </ItemGroup>
  <Target Name="Build">
    <MSBuild Projects="@(ProjectToBuild)"/>
    <CallTarget Targets="Publish WebProject1" />
    <CallTarget Targets="Publish WebProject2" />
  </Target>
  <Target Name="Publish WebProject1">
    <RemoveDir Directories="$(OutputFolder)" ContinueOnError="true" />
    <MSBuild Projects="WebProject1\WebProject1.csproj"
             Targets="ResolveReferences;_CopyWebApplication"
             Properties="WebProjectOutputDir=$(OutputDir)\WebProject1\; OutDir=$(OutputDir)\WebProject1\;Configuration=Release;Platform=AnyCPU" />
  </Target>
  <Target Name="Publish WebProject2">
    <RemoveDir Directories="$(OutputFolder)" ContinueOnError="true" />
    <MSBuild Projects="WebProject2\WebProject2.csproj"
             Targets="ResolveReferences;_CopyWebApplication"
             Properties="WebProjectOutputDir=$(OutputDir)\WebProject2\; OutDir=$(OutputDir)\WebProject2\;Configuration=Release;Platform=AnyCPU" />
  </Target>
</Project>

I can run this script locally and it seems to work fine (no errors generated). When I run it on my build server, it fails with MSBuild error MSB3021. When I compare my local build output files to my server build output files, the server output does not have as many files; for instance, the global.asax file is missing in the output on my build server. Why would it work locally for me, but not on my TeamCity build server? What's the difference and how can I fix it? I noticed the TeamCity build agent error message has a funny directory path: "C:\MSBUILDRELEASE\myWebProject1\\bin\myWebProject1.dll" - there are two slashes before the bin folder, and I do not specify that anywhere. What gives? I have a feeling I am not building my web projects correctly (maybe I should use a different task approach?). It seems to work locally but not on my build server. Am I building my web projects correctly? These are simply web projects for web service (ASMX) deployment. Help?
Alright, I figured it out. It's a "Configuration" mismatch: you have one project building with Configuration=MSBuildRelease and two other projects building with Configuration=Release, so MSBuild looks in the wrong place for the "intermediate" assemblies. Change your script to this:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0" DefaultTargets="Build">
  <PropertyGroup>
    <OutputDir>C:\MSBUILDRELEASE</OutputDir>
  </PropertyGroup>
  <ItemGroup>
    <ProjectToBuild Include="UtilityApp.sln">
      <Properties>OutputPath=$(OutputDir);Configuration=MSBuildRelease;Platform=x86</Properties>
    </ProjectToBuild>
  </ItemGroup>
  <Target Name="Build">
    <MSBuild Projects="@(ProjectToBuild)"/>
    <CallTarget Targets="Publish WebProject1" />
    <CallTarget Targets="Publish WebProject2" />
  </Target>
  <Target Name="Publish WebProject1">
    <RemoveDir Directories="$(OutputFolder)" ContinueOnError="true" />
    <MSBuild Projects="WebProject1\WebProject1.csproj"
             Targets="ResolveReferences;_CopyWebApplication"
             Properties="WebProjectOutputDir=$(OutputDir)\WebProject1\; OutDir=$(OutputDir)\WebProject1\;Configuration=MSBuildRelease;Platform=AnyCPU" />
  </Target>
  <Target Name="Publish WebProject2">
    <RemoveDir Directories="$(OutputFolder)" ContinueOnError="true" />
    <MSBuild Projects="WebProject2\WebProject2.csproj"
             Targets="ResolveReferences;_CopyWebApplication"
             Properties="WebProjectOutputDir=$(OutputDir)\WebProject2\; OutDir=$(OutputDir)\WebProject2\;Configuration=MSBuildRelease;Platform=AnyCPU" />
  </Target>
</Project>
TeamCity
5,158,313
31
I am trying to do "continuous integration" with TeamCity. I would like to label my builds in an incremental way, and the GUID provided by the VCS is not as useful as a simple increasing number. I would like the number to actually match the revision number in Mercurial. My state of affairs (Mercurial info): I would like the build to be labeled 0.0.12 rather than the GUID. Would someone be so kind as to save me hours of trying to figure this out?
As Lasse V. Karlsen mentioned, those numerical revision numbers are local-clone specific and can be different for each clone. They're really not suitable for versioning - you could reclone the same repo and get different revision numbers. At the very least, include the node id as well, creating something like 0.0.12-6ec760554f2b; then you still get sortable release artifacts but are firmly identifying your release. If you're using numeric tags to tag releases, there's a particularly nice option:

% hg log -r tip --template '{latesttag}.{latesttagdistance}'

which, if the most recent tag on that clone was called 1.0.1 and was 84 commits ago, gives a value like 1.0.1.84. Since you can have different heads that are 84 commits away from a tag in different repos, you should still probably include the node id, like:

% hg log -r tip --template '{latesttag}.{latesttagdistance}-{node|short}'

giving 1.0.1.84-6ec760554f2b, which makes a great version string.
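To feed that value back into the TeamCity build label, you can emit a buildNumber service message from a build step. A minimal PowerShell sketch, assuming hg is on the agent's PATH:

# Capture the Mercurial-derived version and set it as the TeamCity build number.
$version = hg log -r tip --template '{latesttag}.{latesttagdistance}-{node|short}'
Write-Output "##teamcity[buildNumber '$version']"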
TeamCity
4,363,522
31
I have a few tests that need to be fed with external data from Excel files. The files are included in the test project, and in Visual Studio I have edited the test settings file (Local.testsettings) to deploy the data files. This makes it work fine in VS. We are, however, also running continuous integration with TeamCity, and in TeamCity this doesn't work - my data files are unavailable to the test. It seems that the tests are run from a temporary folder named "C:\TeamCity\buildAgent\temp\buildTmp\ciuser_AS40VS6 2009-12-11 09_40_17\Out", and the data files are not copied there. I have tried changing the build action for the data files to "Resource" and setting copy-to-output-dir to "Always", but that didn't help. Does anyone know how to make this work? I am running Visual Studio 2010 beta 2 and TeamCity 4.5.5, which is why I'm running MSTest in the first place, and not NUnit...
I get round this by adding my data files (in my case usually XML) as embedded resources, and I extract them from the test assembly:

[TestInitialize]
public void InitializeTests()
{
    var asm = Assembly.GetExecutingAssembly();
    this.doc = new XmlDocument();
    this.doc.Load(asm.GetManifestResourceStream("TestAssembly.File.xml"));
}
TeamCity
1,886,716
30
I'm looking for a way to attach a specific build parameter to a scheduled trigger. The idea is that we are continuously building debug versions of our products, but our nightly build has to be a release build. The build configuration for most of our projects is absolutely the same - it even has a configuration parameter already. So all I would need is a trigger which allows specifying an override for a single build parameter. That would cut the number of build configurations to maintain in half. Is there a way to achieve this?
Not right now; you can follow this issue.
TeamCity
10,007,874
30
I want my Noda Time continuous build - hosted by a private TeamCity server in my home - to fetch the Mercurial log as an XML file. The source code is hosted on Google Code. This is so that I can use it for benchmark browsing on the public web site. It's all very much a work in progress, but it's basically starting to come together. I'd expected that fetching the log as part of the TeamCity build would be simple. After all, it's already fetched the source in order to perform the build. From a normal repository directory, I can just run: hg log --style xml > hg-log.xml Unfortunately, as far as I can see, the "checkout" directory in Team City isn't an actual Mercurial repository - it's a copy of just the contents of the repository at the appropriate commit. That means I can't run hg log in that directory... or any other directory that I've been able to find so far. None of the predefined build parameters seem to have a local repository path, although I'm hoping I've missed one. My current workaround is to fetch the source again as part of the build (just the default branch, of course) and then use that to get the log. It works, but it feels insanely wasteful. It's not clear to me how or where TeamCity actually performs the source checkout - I'm really hoping there's a local repo somewhere that I can use to get the log.
Do you have your agent checkout settings set to "on agent"? By default, the server does the checkout and then sends the bits to the agent, so the agent-side directory is not a real repository. You can find the setting in the build configuration's version control settings (shown in my screenshot).
TeamCity
22,722,823
29
I need to deploy a custom jar to Artifactory along with the jar generated from my Java project. Currently the only method I can find is through a command line goal, using:

mvn deploy:deploy-file -DgroupId=<group-id> \
    -DartifactId=<artifact-id> \
    -Dversion=<version> \
    -Dpackaging=<type-of-packaging> \
    -Dfile=<path-to-file> \
    -Durl=<url-of-the-repository-to-deploy>

Is there a way of including this in the pom file? As a plugin or something?
Sure. Just define an execution of the maven-deploy-plugin:deploy-file goal bound to the deploy phase, configured with your values. When deploying your project, this execution will be invoked and the JAR will be deployed.

<plugin>
  <artifactId>maven-deploy-plugin</artifactId>
  <version>2.8.2</version>
  <executions>
    <execution>
      <id>deploy-file</id>
      <phase>deploy</phase>
      <goals>
        <goal>deploy-file</goal>
      </goals>
      <configuration>
        <file><!-- path-to-file --></file>
        <url><!-- url-of-the-repository-to-deploy --></url>
        <groupId><!-- group-id --></groupId>
        <artifactId><!-- artifact-id --></artifactId>
        <version><!-- version --></version>
        <packaging><!-- type-of-packaging --></packaging>
      </configuration>
    </execution>
  </executions>
</plugin>

Note that you will probably need to add a repositoryId as well; this is the server id mapping to the <id> under the <server> section of settings.xml.
TeamCity
35,158,890
29
I have a TeamCity 7 build configuration which is pretty much only an invocation of a .ps1 script using various TeamCity parameters. I was hoping that might be a simple matter of setting:

Script file: %system.teamcity.build.workingDir%/Script.ps1
Script execution mode: Execute .ps1 script with "-File" argument
Script arguments: %system.teamcity.build.workingDir% -OptionB %BuildConfigArgument% %BuildConfigArg2%

And then I would expect:

- if I mess up my arguments and the script won't start, the build fails
- if my Script.ps1 script throws, the build fails
- if the script exits with a non-0 error level, I want the build to fail (maybe this is not idiomatic PS error management - should a .ps1 only report success by the absence of exceptions?)

The question: it just doesn't work. How is it supposed to work? Is there something I'm doing drastically wrong that I can fix by choosing different options?
As doc'd in the friendly TeamCity manual ("Setting Error Output to Error and adding a build failure condition"): in case syntax errors and exceptions are present, PowerShell writes them to stderr. To make TeamCity fail the build, set the Error Output option to Error and add a build failure condition that will fail the build on any error output. The keys to making this work are changing two defaults:

1. At the top level in the Build Failure Conditions, switch on "an error message is logged by build runner".
2. In the PowerShell build step, show advanced options and set Error output: Error.

In 9.1 the following works (I wouldn't be surprised if it works for earlier versions too):

- create a PowerShell build step with the default options
- change the dropdown to Script: Source code
- add trap { Write-Error "Exception $_" ; exit 98 } at the top of the script
- (optional but more correct IMO for the kind of scripting that's appropriate within TeamCity build scripts) show advanced options and switch on Options: Add -NoProfile argument
- (optional, but for me this should be the default as it renders more clearly, as suggested by @Jamal Mavadat) show advanced options and switch on Error output: Error (ASIDE @JetBrains: if the label was "Format stderr output as" it would be less misleading)

This covers the following cases:

- parse errors (bubble up as exceptions and stop execution immediately)
- exceptions (thrown directly or indirectly in your PS code, they show up and trigger an exit code for TC to stop the build)
- an explicit exit n in the script propagates out to the build (and fails it if non-zero)
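Putting those pieces together, a minimal skeleton for a TeamCity-friendly script might look like this (the tool name is a placeholder and exit code 98 is arbitrary, as in the original):

# Minimal sketch: fail loudly on any exception, and propagate external exit codes.
trap { Write-Error "Exception $_"; exit 98 }

& some-build-tool.exe arg1 arg2                    # placeholder for the real work
if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }    # propagate external-tool failures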
TeamCity
11,647,987
29
I have a warning in my build log in TeamCity. I've updated Xcode on my CI server from 7.3.1 to 8. The step runs successfully but I see this:

[Step 3/3] Starting: /Users/teamcity/local/teamcity-build-agent/temp/agentTmp/custom_scriptxxxxxxx
[Step 3/3] in directory: /Users/teamcity/local/teamcity-build-agent/work/yyyy
[Step 3/3] 2016-10-11 09:04:41.706 xcodebuild[18180:5010256] CoreSimulator is attempting to unload a stale CoreSimulatorService job. Detected Xcode.app relocation or CoreSimulatorService version change. Framework path (/Applications/Xcodes/Xcode_8.0.app/Contents/Developer/Library/PrivateFrameworks/CoreSimulator.framework) and version (303.8) does not match existing job path (/Applications/Xcodes/Xcode-7.3.1.app/Contents/Developer/Library/PrivateFrameworks/CoreSimulator.framework/Versions/A/XPCServices/com.apple.CoreSimulator.CoreSimulatorService.xpc) and version (209.19).
[Step 3/3] 2016-10-11 09:04:41.961 xcodebuild[18180:5010256] Failed to locate a valid instance of CoreSimulatorService in the bootstrap. Adding it now.

How can I fix this warning?
I had the same issue. I have to run both Xcode 7 (to build old versions) and Xcode 8 (to build the current develop branch) on my Jenkins server, and I was having the issue all the time. Solution:

launchctl remove com.apple.CoreSimulator.CoreSimulatorService || true

This happens because, even if you quit the Simulator app, the service is still running. The above command removes the service called com.apple.CoreSimulator.CoreSimulatorService. The || true is to avoid failure when that service is not running.
TeamCity
39,972,105
28
I have a package on my TeamCity NuGet feed, built by TeamCity, but a dependent TC project cannot see it during package restore.

[14:05:02][Exec] E:\TeamCity-BuildAgent\work\62023563850993a7\Web.nuget\nuget.targets(88, 9): Unable to find version '1.0.17.0' of package 'MarkLogicManager40'.
[14:05:02][Exec] E:\TeamCity-BuildAgent\work\62023563850993a7\Web.nuget\nuget.targets(88, 9): error MSB3073: The command ""E:\TeamCity-BuildAgent\work\62023563850993a7\Web.nuget\nuget.exe" install "E:\TeamCity-BuildAgent\work\62023563850993a7\ProductMvc\packages.config" -source "" -RequireConsent -solutionDir "E:\TeamCity-BuildAgent\work\62023563850993a7\Web\ "" exited with code 1.

Note that the source parameter in the NuGet command line is empty. Could this be the cause?
As of today, NuGet.targets has the following way to specify custom feed(s):

<ItemGroup Condition=" '$(PackageSources)' == '' ">
  <!-- Package sources used to restore packages. By default, registered sources under %APPDATA%\NuGet\NuGet.Config will be used -->
  <!-- The official NuGet package source (https://nuget.org/api/v2/) will be excluded if package sources are specified and it does not appear in the list -->
  <PackageSource Include="https://nuget.org/api/v2/" />
  <PackageSource Include="\\MyShare" />
  <PackageSource Include="http://MyServer/" />
</ItemGroup>

Another option is to put NuGet.config next to the solution file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
    <add key="MyShare" value="\\MyShare" />
    <add key="MyServer" value="http://MyServer" />
  </packageSources>
  <activePackageSource>
    <add key="All" value="(Aggregate source)" />
  </activePackageSource>
</configuration>
TeamCity
17,151,709
28
We're using TeamCity's command line build runner to call a bat-file. The bat-file builds our solution by calling Visual Studio 2008's devenv.exe, and then it executes the unit tests and creates the correct folder structure. What we would like to do is stop executing the bat-file if the call to devenv fails, and make TeamCity realize that the build failed. We can catch the failed devenv call by checking the ErrorLevel (which is 1 if the build failed) and we can exit our bat-file at that point. But how can we tell TeamCity that the build failed? This is what we've tried:

call "build.bat"
IF ERRORLEVEL 1 EXIT /B 1

But TeamCity doesn't recognize our exit code. Instead the build log looks like this:

[08:52:12]: ========== Build: 28 succeeded or up-to-date, 1 failed, 0 skipped ==========
[08:52:13]: C:\_work\BuildAgent\work\bcd14331c8d63b39\Build>IF ERRORLEVEL 1 EXIT /B 1
[08:52:13]: Process exited with code 0
[08:52:13]: Publishing artifacts
[08:52:13]: [Publishing artifacts] Paths to publish: [build/install, teamcity-info.xml]
[08:52:13]: [Publishing artifacts] Artifacts path build/install not found
[08:52:13]: [Publishing artifacts] Publishing files
[08:52:13]: Build finished

So TeamCity will report that the build was successful. How can we fix this? Solution: TeamCity provides a mechanism called Service Messages which can be used to handle situations like this. I've updated my build script to look like the following:

IF %ERRORLEVEL% == 0 GOTO OK
echo ##teamcity[buildStatus status='FAILURE' text='{build.status.text} in compilation']
EXIT /B 1
:OK

As a result TeamCity will report my build as failed because of a "Failure in compilation".
See the Build Script Interaction with TeamCity topic. You can report messages for the build log in the following way:

##teamcity[message text='<message text>' errorDetails='<error details>' status='<status value>']

where:
- the status attribute may take the values NORMAL, WARNING, FAILURE, ERROR (the default is NORMAL);
- the errorDetails attribute is used only if status is ERROR, and is ignored otherwise.

This message fails the build if its status is ERROR and the "Fail build if an error message is logged by build runner" checkbox is checked on the build configuration's general settings page. For example:

##teamcity[message text='Exception text' errorDetails='stack trace' status='ERROR']

Update 2013-08-30: As of TeamCity 7.1, build failures should be reported using the buildProblem service message instead:

##teamcity[buildProblem description='<description>' identity='<identity>']
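If the wrapping script is PowerShell rather than a bat-file, the same service message can be emitted like this (the description and identity values are just examples):

# Report a build problem to TeamCity (7.1+ syntax) and exit non-zero.
Write-Output "##teamcity[buildProblem description='Compilation failed' identity='compile']"
exit 1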
TeamCity
3,674,163
28
Is it possible for TeamCity to integrate with JIRA the way Bamboo integrates with JIRA? I couldn't find any documentation on the JetBrains website that talks about issue-tracker integration. FYI: I heard that TeamCity is coming out with its own tracker called Charisma. Is that true?
TeamCity 5 EAP has support for showing issues from JIRA on the tabs of your build (see the EAP Release Notes). You still don't have the integration in JIRA itself, which is what I would prefer.
TeamCity
754,195
27
In TeamCity, is there an easy way to get a variable for the current date in the format MMdd (e.g. 0811 for 8-Aug)? My google-fu did not turn up any existing plugins. I looked into writing a plugin, but not having a JDK installed, that looks time consuming.
This is quite easy to do with a PowerShell build step (no plugin required) using the following source code:

echo "##teamcity[setParameter name='env.BUILD_START_TIME' value='$([DateTime]::Now)']"

or (for UTC):

echo "##teamcity[setParameter name='env.BUILD_START_TIME' value='$([DateTime]::UtcNow)']"

This uses TeamCity's Service Message feature that allows you to interact with the build engine at runtime, e.g. set build parameters. You can then reference this build parameter from other places in TeamCity using the syntax %env.BUILD_START_TIME%. The advantage of this approach is you don't need to use a plugin; the disadvantage is you need to introduce a build step.
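For the MMdd format the question asks about, the same technique works with a standard .NET format string (the parameter name is just the example's, not required):

# Sets e.g. 0811 for 8-Aug.
echo "##teamcity[setParameter name='env.BUILD_START_TIME' value='$([DateTime]::Now.ToString('MMdd'))']"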
TeamCity
7,019,954
27
When I run my TeamCity build with the only build step being of runner type Visual Studio (sln), I get the following error: C:\TeamCity\buildAgent\work\4978ec6ee0ade5b4\Test\Code\Test.sln(2, 1): error MSB4025: The project file could not be loaded. Data at the root level is invalid. Line 2, position 1. This is on a dedicated CI server running TeamCity Professional 8.1.1 (build 29939). There are several other successfully-running builds on this server. The odd bit is that the same build runs successfully on TeamCity on my dev machine. I followed an answer to a similar question, and copied the specified folders across, but that didn't help. I'm sure the project/solution file isn't invalid because in addition to the build running on my dev box, I have opened the solution in Visual Studio and built it there with no problems. Any suggestions?
I just fixed this. Look inside the Test.sln file for Project or EndProject tags that aren't closed. For us, the EndProject was missing; it broke on TeamCity, but there were no issues in Visual Studio.
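A quick way to spot unbalanced tags without reading the file by eye - a small sketch assuming the solution file name from the question:

# Count Project(...) openers vs. EndProject closers in the solution file.
$sln = Get-Content "Test.sln"
$opened = ($sln | Select-String '^Project\(').Count
$closed = ($sln | Select-String '^EndProject$').Count
"Project: $opened, EndProject: $closed"
if ($opened -ne $closed) { Write-Warning "Unbalanced Project/EndProject tags" }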
TeamCity
22,986,402
27
I'm trying to set up a new build configuration in TeamCity using the PowerShell runner. However, I can't seem to find a way to access the TeamCity system properties in the build script. I've seen hints that it is possible, but cannot find documentation on how to do it. I have tried accessing the system properties using PowerShell variable syntax, $variable. I have also printed out all variables in memory and see no TeamCity variables to use. Is this possible with the PowerShell runner, and if so, what is the syntax necessary to get it working?
TeamCity will set up environment variables, such as build.number (you can see a list of these within TeamCity). In Powershell you can access environment variables using the env "provider", e.g. $env:PATH TeamCity variables are accessible by replacing the . with a _, so the build.number variable can be accessed as $env:build_number
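If you're not sure which variables the agent actually exposes, you can dump them from within a PowerShell build step; a quick sketch:

# List the TeamCity-provided environment variables visible to this build step.
Get-ChildItem Env: | Where-Object { $_.Name -match '^(build|teamcity)' } | Sort-Object Name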
TeamCity
13,278,615
27
I'm pulling my hair out over this MSBuild issue. We're using TeamCity to build a solution with two MVC websites in it. As part of the build we're deploying to a folder on the build server; IIS points to this folder to give us an integration build visible to management. Here's the code from the MSBuild file that uses MSDeploy to publish a package - but not as a zip file.

<Target Name="Deploy">
  <MSBuild Projects="$(SolutionFile)"
           Properties="Platform=$(Platform);Configuration=$(Configuration);
                       DeployOnBuild=true;
                       DeployTarget=Package;
                       PackageLocation=$(PackageLocation);
                       PackageAsSingleFile=False;
                       AutoParameterizationWebConfigConnectionStrings=False" />
</Target>

The problem here is that we get an incredibly deep folder structure. Here's an example:

C:\[ANY FOLDERS]\obj\Release\Package\PackageTmp\[published files]

I really want to deploy to predictable folders like:

C:\build\website\[published files]
C:\build\mobilewebsite\[published files]

That's the background. Here are the specific questions: Are we making a mistake trying to use MSDeploy to publish to a local filesystem? We basically need the equivalent of the VS2010 "publish" feature, with config transforms. We're not trying to deploy to remote IIS instances or anything. Is there any way of doing this but specifying the publish folders? I've been trying to use the MSBuild Copy task to copy the files into more sensible folders - but I can't work out how to use wildcards to specify the folders we need to take - it would need to be something like C:\FolderPackageEndsUpIn\[ANY FOLDERS]\Website\[ANY FOLDERS]\PackageTmp\**. Help!
If you add the _PackageTempDir parameter to MSBuild it will give you the same results as doing a local publish. e.g.

msbuild C:\PathToMyProj.csproj /p:Configuration=UAT;DeployOnBuild=true;PackageAsSingleFile=False;DeployTarget=Package;_PackageTempDir=c:\PathToMyDeploy\;AutoParameterizationWebConfigConnectionStrings=false

This command will publish all my files to c:\PathToMyDeploy\ without the crazy subfolders.
TeamCity
4,193,788
27
The continuous integration concept has just been introduced in my team. Assume we have an integration branch named Dev. From it derive 3 branches, one for each specific current project: Project A, Project B, Project C. First, TeamCity is configured on a dedicated server, and its goal is to compile and launch unit and integration tests from versioned sources from each branch, including Dev. Then, of course, each project branch (A, B and C) must be tested in a cloned production environment so that UAT can be carried out. But I wonder: at what frequency should we deploy? Every time the source code changes? Should we deploy only Dev, which contains a mix of the 3 projects after merging each one to it (corresponding to the reality of the next production release), or the 3 projects independently? If Dev is deployed, future changes on Dev must potentially not be taken into account. Indeed, there might be a new project starting, called Project D, which mustn't be part of the next release. So taking Dev for integration (UAT) is risky, because the deployer could involuntarily integrate content of Project D, and then the environment would not reflect the reality of the next release. Other solution: we take not Dev but the 3 projects independently - so must there be 3 cloned production environments in parallel? If yes, UAT couldn't be reliable, since the behaviour of the integration environment might change very often... The concept of continuous deployment for UAT isn't clear to me...
Oh boy. You're hitting real-world CD problems. Really good questions. The answer depends a bit on how tightly coupled the development work is across the various projects. My ideal situation for you would be to have a number of "effort"-specific test environments. In one case, you could consider a test environment for each project. When there is a completed build of Project A, you push it into Environment A, which has the latest approved / production versions of B and C, and you can perform basic integration tests there. If they pass, you promote the build to an integration test environment where the latest good A is deployed alongside the latest B and C for the same release. When the integration test environment is passing tests, you can promote its contents as a release set containing known versions of A, B and C. That release set would be deployed to any UAT, staging, or production environments. The basic idea is to give each project a degree of isolation so that it can be tested well even if the other projects are (temporarily) badly broken, while getting to full integration tests as quickly as possible. We also want to make sure that whatever actually passes integration tests gets promoted together. Picking and choosing project versions to release that haven't been tested together is too risky for my taste. This is actually a topic I get to talk about quite a lot. If you don't mind, I'll list a few presentations I've given around these topics. 1) Scaling CI for Parallel Development (co-presented with Chris Lucca of Accurev): this talks a good deal about broad strategies for balancing isolation and integration. Much of it assumes the sub-projects are being merged into a common code base, but the principles can be applied to independently built and deployed modules with only a little imagination. 2) Using uDeploy with Jenkins (registration required): this is more product focused, but shows almost exactly the idea of using an integration test environment for multiple projects, creating a release set (we call it a "snapshot") and promoting that. Our integration with TeamCity is quite similar, but I think the strategy in there may be more important. 3) Slides visualizing a multi-component pipeline: http://www.slideshare.net/Urbancode/adapting-deployment-pipelines-for-complex-applications
TeamCity
9,105,459
26
I need to limit the number of Artifacts a particular build is keeping. This one build generates very large artifact output which will eat through disk space. Ideally I would like to configure just that build to keep a maximum of the last 3 successful builds but I don't want this limit applied to all projects.
Go to: Administration > Build History Clean-up (right menu). At the bottom, select your project / build under "Manage cleanup rules for" and click "Edit". In the popup, select "Custom" for "Clean artifacts", put "3" in "Older than the [ ]-th successful build", and save. This is as close to what you want as it gets; the only deviation is that it will only discard artifacts after the nth successful build. Alternatively, the settings also offer cleanup based on a date, like "Only keep the past 7 days".

Update for TeamCity 9.x and above: Administration > click the Edit link for any of your branches or <root project>* > Clean-up rules in the left-hand menu > under "What to clean-up", choose the Edit link > under the Artifacts section, put a value in the box "Older than the [ ]-th successful build". *Please note that TeamCity uses inheritance, so if you edit the <root project>, all your projects will be affected. This is also the case if you set options for project groups.

Update for TeamCity 2019: Find the builds of the project you want to change. Select Edit Build Configuration in the top right. Find the project inheritance hierarchy breadcrumb in the top left; it will look similar to: Administration / <Root project> / YourParentProject. Click on the project that is the direct parent of the project you want to edit (YourParentProject in the example above). Click on Clean-up Rules in the menu on the left. Find your project in the list shown in the main window and click the edit button at the end of the project's row. Select retention rules as desired.
TeamCity
9,007,366
26
I am able to run tests via Karma in TeamCity, since you can run anything that's accessible via the command line. But TeamCity only reports overall pass/fail - it does not report details of any failed tests. If it fails, I just get "Process exited with code 1". The Karma homepage says there is a TeamCity integration, but the TeamCity link says "Not available yet". There seems to be a Git project with an npm install package, but the npm install failed with messages that don't mean much to me:

npm http GET https://registry.npmjs.org/karma-teamcity-reporter
npm http 304 https://registry.npmjs.org/karma-teamcity-reporter
npm http GET https://registry.npmjs.org/karma
npm http 304 https://registry.npmjs.org/karma
npm WARN `git config --get remote.origin.url` returned wrong result (git://github.com/vojtajina/node-di.git) undefined
npm WARN `git config --get remote.origin.url` returned wrong result (git://github.com/vojtajina/node-di.git) undefined
npm http GET https://registry.npmjs.org/chokidar
npm http GET https://registry.npmjs.org/socket.io
npm http GET https://registry.npmjs.org/http-proxy
npm http GET https://registry.npmjs.org/glob
npm http GET https://registry.npmjs.org/optimist
npm http GET https://registry.npmjs.org/coffee-script
npm http GET https://registry.npmjs.org/colors/0.6.0-1
npm http GET https://registry.npmjs.org/minimatch
npm http GET https://registry.npmjs.org/pause/0.0.1
npm http GET https://registry.npmjs.org/mime
npm ERR! git clone git://github.com/vojtajina/node-di.git undefined
npm ERR! git clone git://github.com/vojtajina/node-di.git undefined
npm http GET https://registry.npmjs.org/q
npm http GET https://registry.npmjs.org/lodash
npm http GET https://registry.npmjs.org/log4js
npm http GET https://registry.npmjs.org/rimraf
npm ERR! Error: spawn ENOENT
npm ERR!     at errnoException (child_process.js:975:11)
npm ERR!     at Process.ChildProcess._handle.onexit (child_process.js:766:34)
npm ERR! If you need help, you may report this log at:
npm ERR!     <http://github.com/isaacs/npm/issues>
npm ERR! or email it to:
npm ERR!     <[email protected]>
npm ERR! System Windows_NT 6.1.7601
npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "karma-teamcity-reporter"
npm ERR! cwd C:\Users\steve
npm ERR! node -v v0.10.5
npm ERR! npm -v 1.2.18
npm ERR! syscall spawn
npm ERR! code ENOENT
npm ERR! errno ENOENT
npm http 304 https://registry.npmjs.org/chokidar
npm http 304 https://registry.npmjs.org/optimist
npm http 304 https://registry.npmjs.org/socket.io
npm http 304 https://registry.npmjs.org/glob
npm http 304 https://registry.npmjs.org/http-proxy
npm http 304 https://registry.npmjs.org/coffee-script
npm http 304 https://registry.npmjs.org/colors/0.6.0-1
npm http 304 https://registry.npmjs.org/minimatch
npm http 304 https://registry.npmjs.org/mime
npm http 304 https://registry.npmjs.org/pause/0.0.1
npm http 304 https://registry.npmjs.org/q
npm http 304 https://registry.npmjs.org/lodash
npm http 304 https://registry.npmjs.org/log4js
npm http 304 https://registry.npmjs.org/rimraf
npm ERR! Additional logging details can be found in: C:\Users\steve\npm-debug.log
npm ERR! not ok code 0

I'm new to npm, so maybe I'm doing something wrong with it. But even if the npm install works, what do I do next? Should I expect the next TeamCity run of Karma to include the special TeamCity log messages? Does anyone know how to fully integrate Karma into TeamCity?
Use stable Karma, which contains the TeamCity reporter:

npm install -g karma

And then use the TeamCity reporter; it will generate TeamCity service-message output on stdout:

karma start --reporters teamcity --single-run
TeamCity
16,343,543
26
I am having problems with TeamCity, where it proceeds to run build steps even if the previous ones were unsuccessful. The final step of my build configuration deploys my site, which I do not want it to do if any of my tests fail. Each build step is set to only execute if all previous steps were successful. In the Build Failure Conditions tab, I have checked the following options under "Fail build if":

- build process exit code is not zero
- at least one test failed
- an out-of-memory or crash is detected (Java only)

This doesn't work - even when tests fail, TeamCity deploys my site. Why? I even tried to add an additional build failure condition that looks for specific text in the build log (namely "Test Run Failed."). When viewing a completed test on the overview page, you can see the error message against the latest build: "Test Run Failed." text appeared in build log. But it still deploys anyway. Does anyone know how to fix this? It appears that the issue has been open for a long time, here. Apparently there is a workaround: "So far we do not consider this feature as very important as there is an obvious workaround: the script can check the necessary condition and not produce the artifacts as configured in TeamCity, e.g. a script can move the artifacts from a temporary directory to the directory specified in TeamCity as publish artifacts just before the finish, and only in case the build operations were successful." But it's not clear to me exactly how to do that, and it doesn't sound like the best solution either. Any help appreciated. Edit: I was also able to work around the problem with a snapshot dependency, where I have a separate 'deploy' build that depends on the test build, and now it doesn't run if tests fail. This was useful for setting the dependency up.
This is a known problem as of TeamCity 7.1 (cf. http://youtrack.jetbrains.com/issue/TW-17002) which has been fixed in TeamCity 8.x+ (see this answer). TeamCity distinguishes between a failed build and a failed build step. While a failing unit test will fail the build as a whole, unfortunately TeamCity still considers the test step itself successful because it did not return a non-zero exit code; as a result, subsequent steps will continue running. A variety of workarounds have been proposed, but I've found they either require non-trivial setup or compromise the testing experience in TeamCity. However, after reviewing a suggestion from @arex1337, we found an easy way to get TeamCity to do what we want. Just add an extra PowerShell build step after your existing test step that contains the following inline script (replacing YOUR_TEAMCITY_HOSTNAME with your actual TeamCity host/domain):

$request = [System.Net.WebRequest]::Create("http://YOUR_TEAMCITY_HOSTNAME/guestAuth/app/rest/builds/%teamcity.build.id%")
$xml = [xml](new-object System.IO.StreamReader $request.GetResponse().GetResponseStream()).ReadToEnd()
Microsoft.PowerShell.Utility\Select-Xml $xml -XPath "/build" | % { $status = $_.Node.status }
if ($status -eq "FAILURE") {
    throw "Failing this step because the build itself is considered failed. This is our way to work around the fact that TeamCity incorrectly considers a test step to be successful even if there are test failures. See http://youtrack.jetbrains.com/issue/TW-17002"
}

This inline PowerShell script just uses the TeamCity REST API to ask whether or not the build itself, as a whole, is considered failed (the variable %teamcity.build.id% will be replaced by TeamCity with the actual build id when the step is executed). If the build as a whole is considered failed (say, due to a test failure), then this PowerShell script throws an error, causing the process to return a non-zero exit code, which results in the individual build step itself being considered unsuccessful. At that point, subsequent steps can be prevented from running. Note that this script uses guestAuth, which requires the TeamCity guest account to be enabled. Alternatively, you can use httpAuth instead, but you'll need to update the script to include a TeamCity username and password (e.g. http://USERNAME:PASSWORD@YOUR_TEAMCITY_HOSTNAME/httpAuth/app/rest/builds/%teamcity.build.id%). So, with this additional step in place, all subsequent steps set to execute "Only if all previous steps were successful" will be skipped if there are any previous unit test failures. We're using this to prevent automated deployment if any of our NUnit tests are not successful, until JetBrains fixes the problem. Thanks to @arex1337 for the idea.
TeamCity
15,254,581
26
We are using Teamcity 6.5.6 professional version, which gives me the option to run a backup but I do not see any option to schedule it to a particular time. I am not sure if this version of teamcity even supports scheduled backups. If it is not possible through teamcity GUI, I wonder if there is any other option? Could someone please help? Thanks.
I wrote a PowerShell script for TeamCity auto backups, which you can schedule to run the backup. Here's the code:

function Execute-HTTPPostCommand() {
    param(
        [string] $url,
        [string] $username,
        [string] $password
    )

    $authInfo = $username + ":" + $password
    $authInfo = [System.Convert]::ToBase64String([System.Text.Encoding]::Default.GetBytes($authInfo))

    $webRequest = [System.Net.WebRequest]::Create($url)
    $webRequest.ContentType = "text/html"
    $PostStr = [System.Text.Encoding]::Default.GetBytes("")
    $webRequest.ContentLength = $PostStr.Length
    $webRequest.Headers["Authorization"] = "Basic " + $authInfo
    $webRequest.PreAuthenticate = $true
    $webRequest.Method = "POST"

    $requestStream = $webRequest.GetRequestStream()
    $requestStream.Write($PostStr, 0, $PostStr.length)
    $requestStream.Close()

    [System.Net.WebResponse] $resp = $webRequest.GetResponse();
    $rs = $resp.GetResponseStream();
    [System.IO.StreamReader] $sr = New-Object System.IO.StreamReader -argumentList $rs;
    [string] $results = $sr.ReadToEnd();

    return $results;
}

function Execute-TeamCityBackup() {
    param(
        [string] $server,
        [string] $addTimestamp,
        [string] $includeConfigs,
        [string] $includeDatabase,
        [string] $includeBuildLogs,
        [string] $includePersonalChanges,
        [string] $fileName
    )

    $TeamCityURL = [System.String]::Format("{0}/httpAuth/app/rest/server/backup?addTimestamp={1}&includeConfigs={2}&includeDatabase={3}&includeBuildLogs={4}&includePersonalChanges={5}&fileName={6}",
        $server, $addTimestamp, $includeConfigs, $includeDatabase, $includeBuildLogs, $includePersonalChanges, $fileName);

    Execute-HTTPPostCommand $TeamCityURL "USER" "PASSWORD"
}

$server = "http://YOUR_SERVER"
$addTimestamp = $true
$includeConfigs = $true
$includeDatabase = $true
$includeBuildLogs = $true
$includePersonalChanges = $true
$fileName = "TeamCity_Backup_"

Execute-TeamCityBackup $server $addTimestamp $includeConfigs $includeDatabase $includeBuildLogs $includePersonalChanges $fileName
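To actually run this on a schedule, one option is the Windows Task Scheduler. A sketch (the script path, task name and time here are hypothetical - adjust to wherever you save the script above):

schtasks /Create /SC DAILY /ST 02:00 /TN "TeamCity Backup" /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\TeamCityBackup.ps1"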
TeamCity
10,548,726
25
This is a more generic version of this question: How to run a build step on a specific branch only? For example, I can use a PowerShell script to run MSBuild if '%teamcity.build.branch.is_default%' -eq 'true' or if '%teamcity.build.branch%' -eq 'master' but then I will miss the collapsible log that comes with the TeamCity MSBuild build runner. Isn't there any easier way to conditionally run a build step?
It is not possible to execute a build step based on a condition. Vote for the related request: https://youtrack.jetbrains.com/issue/TW-17939. The recommended approach is to create a separate build configuration for each branch. You can use templates to simplify the setup. In this case it will be easier to interpret the results, and the statistics of the builds will be informative. Also see the related answer (a hack is suggested there).
TeamCity
33,158,624
25
We are using TeamCity 6.0 to build VS C# solutions on each commit. Once the build is complete, a different test TC project runs. So that developers can add/remove/edit VS unit test projects, how can I make TeamCity use the sln file or search for test dlls? I don't want to have to edit the build each time a new test project is added to the VS solution.

Run tests from: **\*Test*.dll

Doesn't appear to work; it only gets the first test (which is currently failing).
Fixed :) - RTFL (Read the log!) Run tests from: **\bin\debug\*Test*.dll
TeamCity
5,646,598
25
I have the latest PhpStorm (2016.2) and PHPUnit phar (5.5.4). For some reason when I run a PHPUnit test in my project in PhpStorm, it is adding on --teamcity to the run command, resulting in a failure: Testing started at 12:52 PM ... Unit test suite invoked with a path to a non-unit test: --teamcity Process finished with exit code 1 I have no idea where this --teamcity option is coming from, it happens no matter what test I run, and even when starting from a blank configuration. I also do NOT have the TeamCity plugin installed, I don't even use TeamCity. Here's what the full command appears as: /usr/local/Cellar/php70/7.0.9/bin/php /Users/name/bin/phpunit-5.5.4.phar --configuration /path/to/config/my-phpunit.xml ClassNameTest /Users/name/PhpstormProjects/path/to/tests/unit/app/ClassNameTest.php --teamcity (sensitive information swapped out) All I want to do is get rid of this --teamcity option, everything works if I run in a separate terminal window without that option. This only recently started happening, maybe after a PhpStorm update.
tl;dr I could only resolve this by removing the system-installed PHPUnit instance from my system (Linux):

sudo apt remove phpunit-*

Details: Even though the setting in PhpStorm was to use the Composer autoloader, for some reason it ended up using the TeamCity logger from /usr/share/php/PHPUnit/Util/Log/TeamCity.php. The project's local PHPUnit was 6.2 while the system default was 5.1 -> they're incompatible.
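If you'd rather pin the project to its own PHPUnit instead of relying on a phar or the system package, a sketch (the version constraint is just an example, and menu names vary slightly by PhpStorm version):

composer require --dev phpunit/phpunit "^6.2"

Then in PhpStorm, under Settings > Languages & Frameworks > PHP > PHPUnit, choose "Use Composer autoloader" and point it at the project's vendor/autoload.php, so the IDE can never fall back to a system-wide install.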
TeamCity
39,599,961
24
I want to install a TeamCity Build Agent as a user. When entering my user credentials here: I always get this error: NOTE: My account (user) is an administrator with full permission! How can I do this?
The error message says it does not have "enough rights to run as a service", this is slightly different from just being an administrator. Go to Control Panel> Administrative Tools> Local Security Policy. Select Local Policies> User Rights Assignment. Scroll down through the list of policies and look for Log on as a service. Add the account you're using to the list of accounts with this right. That should in theory be all you need to allow the service to run under that user.
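If you need to grant that right from the command line instead (e.g. when scripting agent setup), a rough sketch using secedit - the file names here are arbitrary, and editing the exported file is a manual step:

secedit /export /cfg C:\secpol.inf
rem open C:\secpol.inf and append your account to the "SeServiceLogonRight = ..." line
secedit /configure /db C:\secedit.sdb /cfg C:\secpol.inf /areas USER_RIGHTS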
TeamCity
30,718,514
24
I want to convert a value from bigint to datetime. For example, I'm reading the HISTORY table of teamcity server. On the field build_start_time_server, I have this value on one record 1283174502729. How can I convert it to a datetime value?
Does this work for you? It returns 30-8-2010 13:21:42 at the moment on SQL Server 2005: select dateadd(s, convert(bigint, 1283174502729) / 1000, convert(datetime, '1-1-1970 00:00:00')) I've divided by 1000 because the dateadd function won't work with a number that large. So you do lose a little precision, but it is much simpler to use.
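If the lost sub-second precision matters, you can add the remainder back with a second dateadd - same idea, just splitting the value into whole seconds and leftover milliseconds:

select dateadd(ms, convert(bigint, 1283174502729) % 1000,
       dateadd(s, convert(bigint, 1283174502729) / 1000, convert(datetime, '1-1-1970 00:00:00')))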
TeamCity
3,650,320
24
I have a TeamCity project that uses Mercurial. I made a few manual changes to the files in the teamcity/buildAgent/work directory. The problem is that now I cannot update the files to the files in the repository. How can I force a re-checkout in TeamCity? Is there any option to get rid of the old checkout?
There is a "Clean Sources" button on the project or build configuration page somewhere. If you click that the next build will automatically do a full checkout.
TeamCity
2,785,463
23
I am using the MSBuild runner in TeamCity to build an ASP.net web api and run unit tests. Everything was working until I upgraded to "Microsoft Build Tools 2017 15.7.2". Suddenly msbuild was copying an older version of Newtonsoft.Json.dll (version 6.0.4.17603) from either "C:\Program Files (x86)\IIS\Microsoft Web Deploy V3" or "C:\Program Files\IIS\Microsoft Web Deploy V3" to the output folder when building the solution. All the projects are referencing the 9.0.1 version using NuGet. Monitoring the output folder as the build was running, I could see the .dll switching back and forth between 6.0.4 and 9.0.1 until the build ended, and the 6.0.4 version remained. I found this question, and when I renamed the Newtonsoft.Json.dll files in the Web Deploy folders to "Newtonsoft.Json_old.dll", msbuild did not replace my 9.0.1 version and everything was working fine. I have checked that all the projects referencing Newtonsoft.Json are referencing the 9.0.1 version and using the correct HintPath in the .csproj files. Does anyone have any idea how to solve the problem? My solution seems more like a workaround, and I would like to know why msbuild was copying this file in the first place.
Summary

When MSBuild is resolving assemblies, it will search in some pretty weird directories, including that Web Deploy folder, depending on what you have installed. Based on the MSBuild reference, I believe that this is legacy behavior. You can stop it from doing that with an MSBuild property defined in your project file. In the affected project file, find the following line:

<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

And add this below it:

<PropertyGroup>
  <AssemblySearchPaths>$(AssemblySearchPaths.Replace('{AssemblyFolders}', '').Split(';'))</AssemblySearchPaths>
</PropertyGroup>

This will cause MSBuild to no longer look in the problematic folders when resolving assemblies.

Full Story

My team ran into a similar problem when we moved to Visual Studio 2019. Some of our projects are still targeting .NET Framework 4.0, and after installing Visual Studio 2019 on our build agents, we started getting a mysterious error with projects that referenced some of our core libraries:

The primary reference "OurCoreLibrary, Version=3.4.2.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxxxxxx, processorArchitecture=MSIL" could not be resolved because it has an indirect dependency on the assembly "Newtonsoft.Json, Version=9.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed" which was built against the ".NETFramework,Version=v4.5" framework. This is a higher version than the currently targeted framework ".NETFramework,Version=v4.0".

The problem went away upon switching the project to target 4.5, but for reasons I won't get into here, we couldn't do that for every affected project, so I decided to dig in a bit deeper. As it turns out, your question offered some insight into what was going on. The version of Newtonsoft.Json that we were referencing matched the version in "C:\Program Files (x86)\IIS\Microsoft Web Deploy V3", and when I removed the file, the build succeeded. Our specific problem was that the copy of Newtonsoft.Json in the Web Deploy folder was the same version (9.0.0.0) but the wrong framework (4.5 instead of 4.0), and for whatever reason the resolution logic doesn't check the target framework, causing a mismatch at build time. Updating to VS2019 involved updating Web Deploy, which also updated that copy of Newtonsoft.Json to 9.0.0.0, causing our collision.

To see why that assembly was even being looked at to begin with, I set the MSBuild project build output verbosity to Diagnostic and took a look at what was happening. Searching for the offending path showed that in the ResolveAssemblyReferences task, MSBuild was going through some unexpected places to find matches:

1> For SearchPath "{AssemblyFolders}". (TaskId:9)
1> Considered "C:\Program Files (x86)\Microsoft.NET\ADOMD.NET\140\OurCoreLibrary.winmd", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files (x86)\Microsoft.NET\ADOMD.NET\140\OurCoreLibrary.dll", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files (x86)\Microsoft.NET\ADOMD.NET\140\OurCoreLibrary.exe", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\OurCoreLibrary.winmd", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\OurCoreLibrary.dll", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\OurCoreLibrary.exe", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\OurCoreLibrary.winmd", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\OurCoreLibrary.dll", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\OurCoreLibrary.exe", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files\IIS\Microsoft Web Deploy V3\OurCoreLibrary.winmd", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files\IIS\Microsoft Web Deploy V3\OurCoreLibrary.dll", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files\IIS\Microsoft Web Deploy V3\OurCoreLibrary.exe", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies\OurCoreLibrary.winmd", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies\OurCoreLibrary.dll", but it didn't exist. (TaskId:9)
1> Considered "C:\Program Files (x86)\Microsoft SQL Server\140\SDK\Assemblies\OurCoreLibrary.exe", but it didn't exist. (TaskId:9)

Further digging shows that the paths searched are passed in as AssemblySearchPaths, which is defined in Microsoft.Common.CurrentVersion.targets:

<AssemblySearchPaths Condition=" '$(AssemblySearchPaths)' == ''">
  {CandidateAssemblyFiles};
  $(ReferencePath);
  {HintPathFromItem};
  {TargetFrameworkDirectory};
  $(AssemblyFoldersConfigFileSearchPath)
  {Registry:$(FrameworkRegistryBase),$(TargetFrameworkVersion),$(AssemblyFoldersSuffix)$(AssemblyFoldersExConditions)};
  {AssemblyFolders};
  {GAC};
  {RawFileName};
  $(OutDir)
</AssemblySearchPaths>

According to the MSBuild Task Reference for the ResolveAssemblyReferences task, the SearchPaths parameter is defined as:

Specifies the directories or special locations that are searched to find the files on disk that represent the assemblies. The order in which the search paths are listed is important. For each assembly, the list of paths is searched from left to right. When a file that represents the assembly is found, that search stops and the search for the next assembly starts.

...and it defines a few special constants, including our friend {AssemblyFolders}:

{AssemblyFolders}: Specifies the task will use the Visual Studio.NET 2003 finding-assemblies-from-registry scheme.

Because the directories are checked in order, you might expect {HintPathFromItem} to take precedence, and in most cases it does. However, if you have a dependency with a dependency on an older version of Newtonsoft.Json, there won't be a HintPath for that version and so it will continue on until it resolves. Later on in Microsoft.Common.CurrentVersion.targets we can see that there are cases where this constant is explicitly removed, which is where the answer above comes from:

<PropertyGroup Condition="'$(_TargetFrameworkDirectories)' == '' and '$(AssemblySearchPaths)' != '' and '$(RemoveAssemblyFoldersIfNoTargetFramework)' == 'true'">
  <AssemblySearchPaths>$(AssemblySearchPaths.Replace('{AssemblyFolders}', '').Split(';'))</AssemblySearchPaths>
</PropertyGroup>

Removing this constant removes the offending folders from consideration, and to be honest I cannot think of a situation where I would want an assembly to implicitly resolve to whatever version of, say, Newtonsoft.Json was hanging out in the Web Deploy or SQL Server SDK folder. That being said, I am sure there's a case out there where turning this off will cause somebody issues, so keep that in mind.
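If many projects are affected, patching each .csproj gets tedious. A possible repo-wide variant, as a sketch (this assumes MSBuild 15+ picking up a Directory.Build.targets file automatically; it must be the .targets file and not Directory.Build.props, because the property has to be overridden after the common targets define it):

<!-- Directory.Build.targets at the repository root -->
<Project>
  <PropertyGroup>
    <AssemblySearchPaths>$(AssemblySearchPaths.Replace('{AssemblyFolders}', '').Split(';'))</AssemblySearchPaths>
  </PropertyGroup>
</Project>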
TeamCity
50,638,711
22
There are many sites that explain how to run signtool.exe on a .pfx certificate file, which boil down to:

signtool.exe sign /f mycert.pfx /p mypassword /t http://timestamp.server.com \
    /d "My description" file1.exe file2.exe

I have a continuous integration (CI) process set up (using TeamCity) which, like most CI processes, does everything: checks out source, compiles, signs all .exes, packages them into an installer, and signs the installer .exe. There are currently 3 build agents, running identical VMs, and any of them can run this process.

Insecure implementation

To accomplish this today, I do a couple Bad Things(TM) as far as security is concerned: the .pfx file is in source control, and the password for it is in the build script (also in source control). This means that any developers with access to the source code repository can take the pfx file and do whatever nefarious things they'd like with it. (We're a relatively small dev shop and trust everyone with access, but clearly this still isn't good).

The ultimate secure implementation

All I can find about doing this "correctly" is that you:

- Keep the pfx and password on some secure medium (like an encrypted USB drive with finger-based unlock), and probably not together
- Designate only a couple of people to have access to sign files
- Only sign final builds on a non-connected, dedicated machine that's kept in a locked-up vault until you need to bring it out for this code-signing ceremony

While I can see merit in the security of this, it is a very heavy process and expensive in terms of time (running through this process, securely keeping backups of certificates, ensuring the code-signing machine is in a working state, etc). I'm sure some people skip steps and just manually sign files with the certificate stored on their personal system, but that's still not great. It also isn't compatible with signing files that are then used within the installer (which is also built by the build server) -- and this is important when you have an installed .exe that has a UAC prompt to get admin access.

Middle ground?

I am far more concerned with not presenting a scary "untrusted application" UAC prompt to users than proving it is my company. At the same time, storing the private key AND password in the source code repository that every developer (plus QA and high-tier tech support) has access to is clearly not a good security practice. What I'd like is for the CI server to still sign during the build process like it does today, but without the password (or private key portion of the certificate) being accessible to everyone with access to the source code repository. Is there a way to keep the password out of the build or secure it somehow? Should I be telling signtool to use a certificate store (and how do I do that, with 3 build agents and the build running as a non-interactive user account)? Something else?
I ended up doing a very similar approach to what @GiulioVlan suggested, but with a few changes.

MSBuild Task

I created a new MSBuild task that executes signtool.exe. This task serves a couple main purposes:

- It hides the password from ever being displayed
- It can retry against the timestamp server(s) upon failures
- It makes it easy to call

Source: https://gist.github.com/gregmac/4cfacea5aaf702365724

This specifically takes all output and runs it through a sanitizer function, replacing the password with all *'s. I'm not aware of a way to censor regular MSBuild commands, so if you pass the password on the command line directly to signtool.exe using a plain exec-style task, it will display the password -- hence the need for this task (aside from other benefits).

Password in registry

I debated a few ways to store the password "out-of-band", and ended up settling on using the registry. It's easy to access from MSBuild, it's fairly easy to manage manually, and if users don't have RDP and remote registry access to the machine, it's actually reasonably secure (can anyone say otherwise?). Presumably there are ways to secure it using fancy GPO stuff as well, but that's beyond the length I care to go.

This can be easily read by msbuild:

$(Registry:HKEY_LOCAL_MACHINE\SOFTWARE\1 Company Dev@CodeSigningCertPassword)

And it is easy to manage via regedit.

Why not elsewhere?

- In the build script: it's visible to anyone with source code
- Encrypted/obfuscated/hidden in source control: if someone gets a copy of the source, they can still figure this out
- Environment variables: In the TeamCity web UI, there is a detail page for each build agent that actually displays all environment variables and their values. Access to this page can be restricted, but it means some other functionality is also restricted
- A file on the build server: Possible, but it seems a bit more likely it's inadvertently made accessible via file sharing or something

Calling From MSBuild

In the project file, alongside your other imports:

<Import Project="signtool.msbuild.tasks"/>

(You could also put this in a common file with other tasks, or even embed it directly.)

Then, in whichever target you want to use for signing:

<SignTool SignFiles="file1.exe;file2.exe"
          PfxFile="cert.pfx"
          PfxPassword="$(Registry:HKEY_LOCAL_MACHINE\SOFTWARE\1 Company Dev@CodeSigningCertPassword)"
          TimestampServer="http://timestamp.comodoca.com/authenticode;http://timestamp.verisign.com/scripts/timstamp.dll" />

So far this works well.
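For completeness, the registry value itself can be created from an elevated PowerShell prompt on each agent. A sketch reusing the key and value names from above (the password is a placeholder):

New-Item -Path "HKLM:\SOFTWARE\1 Company Dev" -Force | Out-Null
New-ItemProperty -Path "HKLM:\SOFTWARE\1 Company Dev" -Name "CodeSigningCertPassword" `
    -Value "your-pfx-password-here" -PropertyType String -Force | Out-Null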
TeamCity
27,022,632
22
I am trying to update an environment variable in TeamCity using a PowerShell script. However, it does not update the value of the variable. How can I do this? Below is my current code, which gets the currentBuildNumber fine:

$currentBuildNumber = "%env.currentBuildNumber%"
$newBuildNumber = ""

Write-Output $currentBuildNumber

If ($currentBuildNumber.StartsWith("%MajorVersion%") -eq "True") {
    $parts = $currentBuildNumber.Split(".")
    $parts[2] = ([int]::Parse($parts[2]) + 1) + ""
    $newBuildNumber = $parts -join "."
} Else {
    $newBuildNumber = '%MajorVersion%.1'
}

# What I have tried
$env:currentBuildNumber = $newBuildNumber
Write-Host "##teamcity[env.currentBuildNumber '$newBuildNumber']"
Write-Host "##teamcity[setParameter name='currentBuildNumber' value='$newBuildNumber']"
Try "##teamcity[setParameter name='env.currentBuildNumber' value='$newBuildNumber']" (note the env. prefix in the name) Also, you can try to increase the PowerShell std out column default (80 using TeamCity's command runner). If your service message is longer than that, then TeamCity will fail to parse it. if ($env:TEAMCITY_VERSION) { $host.UI.RawUI.BufferSize = New-Object System.Management.Automation.Host.Size(8192,50) }
TeamCity
24,160,533
22
Using GitHub for Windows on the same machine, with the same credentials, works fine. Can pull/clone. However TeamCity, installed as a Windows service on the same machine, returns the following error:

List remote refs failed: org.eclipse.jgit.errors.TransportException: https://github.com/my-private-repo.git: not authorized
In TeamCity, in the project's VCS root, if the authentication method is based on an SSH public/private key, then the fetch URL should be of the form git@github.com:.../repository.git. Using https:// in the fetch URL causes the error message "List remote refs failed: org.eclipse.jgit.errors.TransportException..." to occur.
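In other words, for the repository from the question, a VCS root fetch URL like https://github.com/<user>/my-private-repo.git would need to become something like (path here is illustrative):

git@github.com:<user>/my-private-repo.git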
TeamCity
22,958,859
22
I've read a handful of posts (see references below) and have yet to find a guide on best practices that is specific to my tech stack. The goal: Create a single NuGet package targeting multiple .NET frameworks built from a single .csproj file via TeamCity using MSBuild and NuGet. The constraints: Pull the code from the VCS only once. All compiled assemblies should be versioned the same. Single .csproj (not one per target framework). I have two approaches in mind: Create a single build configuration. It would contain three build steps: compile .NET 3.5, compile .NET 4.0, pack with NuGet. Each build step would be contingent upon success of the last. The only real problem I see with this approach (and hopefully there's a solution that I'm not aware of) is that each build step would require its own set of build parameters (e.g., system.TargetFrameworkVersion and system.OutputPath) to designate the unique location for the DLL to sit (e.g., bin\release\v3.5 and bin\release\v4.0) so that the NuGet pack step would be able to do its thing based upon the Files section in the .nuspec file. Create multiple build configurations. One build configuration per the build steps outlined above. With this approach, it is easy to solve the TargetFrameworkVersion and OutputPath build parameters issue but I now have to create snapshot dependencies and share the assembly version number across the builds. It also eats up build configuration slots which is ok (but not optimal) for us since we do have an Enterprise license. Option #1 seems like the obvious choice. Options #2 feels dirty. So my two questions are: Is it possible to create parameters that are unique to a build step? Is there a third, better approach? References: Multi-framework NuGet build with symbols for internal dependency management Nuget - packing a solution with multiple projects (targeting multiple frameworks) http://lostechies.com/joshuaflanagan/2011/06/23/tips-for-building-nuget-packages/ http://msdn.microsoft.com/en-us/library/hh264223.aspx https://stackoverflow.com/a/1083362/607701 http://confluence.jetbrains.com/display/TCD7/Configuring+Build+Parameters http://docs.nuget.org/docs/creating-packages/creating-and-publishing-a-package
Here is my preferred solution (Option #1): The magic relies on an unfortunate workaround. If you're willing to make this compromise, this solution does work. If you are not, you can follow the issue that I opened on JetBrains' issue tracker. The single build configuration looks like this: Note the name of the first two build steps. They are, in fact, named explicitly as the TargetFrameworkVersion values for .NET 3.5 and 4.0, respectively. Then, in the Build Parameters section, I have configured the following parameters: And finally, the NuGet Pack step does the file path translation according to my .nuspec's files section:

<files>
  <file src="bin\release\v3.5\*.*" target="lib\net35" />
  <file src="bin\release\v4.0\*.*" target="lib\net40" />
</files>
TeamCity
15,816,094
22
I have a CI build that is set up in TeamCity and will trigger when a pull request is made in BitBucket (git). It currently builds against the source branch of the pull request, but it would be more meaningful if it could build the merged pull request. My research has left me with the following possible solutions:

- Script run as part of build - I'd rather not do it this way if possible
- Server/agent plugin - I have not found enough documentation to figure out if this is possible

Has anyone done this before in TeamCity, or have suggestions on how I can achieve it?

Update: (based on John Hoerr's answer) Alternate solution - forget about TeamCity doing the merge; use BitBucket web hooks to create a merged branch like GitHub does, and follow John Hoerr's answer.
Add a Branch Specification refs/pull-requests/*/merge to the project's VCS Root. This will cause TeamCity to monitor merged output of pull requests for the default branch.
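For example, a branch specification along these lines covers both regular branches and merged pull requests (the refs/heads line is illustrative; keep whatever you already monitor):

+:refs/heads/*
+:refs/pull-requests/*/merge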
TeamCity
25,266,136
21
Working with github and teamcity, builds seem to either be refs/heads/master or master branch. Whenever the github service hook launches a build, it is on the branch master. Whenever TeamCity launches a build (e.g. when I start a build, or a dependency building triggers a build) the branch is refs/heads/master. This causes two build numbers to be shown on the same page, the last build for master and the last build for refs/heads/master. Is there a way to make TeamCity triggered builds build master instead of refs/heads/master? Or is there a way to get master and refs/heads/master to be treated as the same branch, not as different ones?
I think I found a solution to this, though it isn't ideal because I had to delete all past builds. I had to first copy the projects and delete the old ones to get rid of all builds that had been run. Then I configured the default branch to be master, and I set the other branch specifications to:

+:(master)
+:refs/heads/(master)

Also, I updated the VCS trigger to listen on +:master instead of +:*. Then I tested by manually triggering a build, and by having the GitHub test hook trigger a build. It seems to have worked; they are both grouped under master!
TeamCity
20,460,577
21
I am trying to trigger a single teamcity build for a single merge in VCS. The way my CI is laid out is I one branch staging which we merge all of our changes into. Then when we want to deploy to production we merge staging into the master branch in git. Unfortunately this triggers a lot of builds, one for probably every checkin to the staging branch. So instead we would want that to be a single build. Because it was a single merge into the master branch. So, does anyone know how to trigger a single build on a change in VCS no matter how many check-ins from how many different people were made? The options I have selected in the build triggers in team city are the following. Trigger a build on each check-in Include several check-ins in a build if they are from the same committer I think I could do it with a custom build trigger but I would rather not go down that path. Thanks in advance for the help.
As counterintuitive as it is, unchecking the Trigger a build on each check-in checkbox should solve this issue as long as you have Quiet Period enabled for long enough that all of the checkins are included. Essentially Trigger a build on each check-in actually means "only include 1 checkin in each build." Disabling the option will still cause builds to be triggered by checkins, but will include all checkins (from all users) that occur before the build actually starts. TeamCity should really clarify this in their documentation or rename the option.
TeamCity
20,380,881
21
We are setting up TeamCity to run our Jasmine tests using Node and Karma. The tests run fine and results are reported under the "Tests" tab in TeamCity. However, we would like to report code coverage in TeamCity (and even set build failure conditions on the coverage level). I have installed the karma-coverage module:

npm install karma-coverage --save-dev

And tried to configure it in karma.conf.js by adding:

preprocessors: {
  'myProject/Scripts/app/**/*.js': 'coverage'
},
reporters: ['progress', 'coverage'],

When Karma is run, no errors are reported, and lots of files are created below the folder coverage, including a very nicely formatted code coverage report in index.html. But nothing new shows up in TeamCity. No "Code Coverage" tab. How do I configure Karma to produce reports that show up in TeamCity? Perhaps I can set coverageReporter to something appropriate, but what? This setting makes no difference:

coverageReporter: {
  type : 'html',
  dir : 'coverage/'
},

Bonus question: how do I set build failure conditions on the Karma-reported code coverage?
The easiest way to get TeamCity to recognize your coverage report is to output a build artifact that contains that nice html coverage report. Edit the configuration settings for your build and under Artifact Paths add something like: coverage/** => coverage.zip TeamCity will recognize the coverage.zip artifact if it finds the index.html file in the root and will add a Code Coverage tab to each build. Source: https://confluence.jetbrains.com/pages/viewpage.action?pageId=74847395#HowTo...-ImportcoverageresultsinTeamCity (Teamcity version 9.x)
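One detail that can trip this up: by default karma-coverage writes the HTML report into a browser-named subfolder, so index.html does not end up at the root of coverage/. Setting the subdir option flattens that, so the artifact zip has index.html at its root (a small sketch of the adjusted config):

coverageReporter: {
  type: 'html',
  dir: 'coverage/',
  subdir: '.'
}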
TeamCity
19,266,655
21
I have set up a build step on TeamCity,as described here, to do automatic release deployments to our test server. But it is not using the latest nuget packages that was build in TeamCity. Use Case : Teamcity will create nuget package with version 1.0.0.9, all the dlls that is in the package is the correct version, and the Release in Octopus, that was deployed has got the same version number , but the packages that octopus uses is of an earlier package eg 1.0.0.5. I have specified the --force parameter on the build step so it should use the latest packages but it is not. If I manually create a release in Octopus, and select the latest packages it is working 100% Please can someone tell me if I am missing something. thanks in advance
I think what you need to do is create two build configurations in TeamCity, one to build and one to deploy with Octopus. Refer to this link that has a small blurb toward the end: Note that NuGet packages created from your build won't appear in TeamCity until after the build completes. This means you'll usually need to configure a secondary build configuration, and use a snapshot dependency and build trigger in TeamCity to run the deployment build configuration after the first build configuration completes. So in my case I created 2 build configurations, then setup a snapshot dependency from the build to the deploy config and also a trigger to kick off the deploy after a successful build.
TeamCity
17,107,988
21
We have a TeamCity (7.0.3) agent running on a 64-bit Windows Server 2008 machine. When we recently upgraded the agent to use Java 7 (1.7.0_10) the builds started failing with the following stacktrace:

Error occurred during initialization of VM
java.lang.ExceptionInInitializerError
        at java.lang.Runtime.loadLibrary0(Runtime.java:841)
        at java.lang.System.loadLibrary(System.java:1084)
        at java.lang.System.initializeSystemClass(System.java:1145)
Caused by: java.lang.StringIndexOutOfBoundsException: String index out of range: 0
        at java.lang.String.charAt(String.java:658)
        at java.io.Win32FileSystem.<init>(Win32FileSystem.java:40)
        at java.io.WinNTFileSystem.<init>(WinNTFileSystem.java:37)
        at java.io.FileSystem.getFileSystem(Native Method)
        at java.io.File.<clinit>(File.java:156)
        at java.lang.Runtime.loadLibrary0(Runtime.java:841)
        at java.lang.System.loadLibrary(System.java:1084)
        at java.lang.System.initializeSystemClass(System.java:1145)

The problem seems to be caused by the inclusion of the "-Dfile.separator=\" java option that TeamCity uses in the executable command for the agent. I was able to reproduce the problem by writing a simple "Hello World" class and compiling it on the Windows box and then running the program with the file.separator option (i.e. java -Dfile.separator=\ HelloWorld). I haven't found any similar bug reports. Has anyone seen anything like this? Has the behaviour of file.separator changed in Java 7? Furthermore I realise that \ is the default file.separator for Windows anyway, so I don't think the agent really needs to use it in the executable command, however I can't see a way in TeamCity to tell the agent not to include it. Is it possible to do this?
Try the JVM command line parameter -Dfile.separator=\/ (i.e., specify both a backward and forward slash).
TeamCity
13,913,196
21
I am trying to figure out how to make TeamCity run my MSTests. I have set up a build step using the following parameters:

Path to MSTest.exe: %system.MSTest.10.0%
List assembly files: Projects\Metadude..Tests\bin\Debug\Metadude..Test.dll
MSTest run configuration file: Local.testsettings

However, when this step runs it does not execute any tests. This is the output from the log:

[02:13:49]: Step 2/2: Run Unit Tests (MSTest)
[02:13:49]: [Step 2/2] Starting: "D:\Program Files (x86)\TeamCity\buildAgent\plugins\dotnetPlugin\bin\JetBrains.BuildServer.NUnitLauncher.exe" #TeamCityImplicit
[02:13:49]: [Step 2/2] in directory: D:\Program Files (x86)\TeamCity\buildAgent\work\1f82da3df0f560b6
[02:13:50]: [Step 2/2] Microsoft (R) Test Execution Command Line Tool Version 10.0.30319.1
[02:13:50]: [Step 2/2] Copyright (c) Microsoft Corporation. All rights reserved.
[02:13:50]: [Step 2/2]
[02:13:50]: [Step 2/2] Please specify tests to run, or specify the /publish switch to publish results.
[02:13:50]: [Step 2/2] For switch syntax, type "MSTest /help"
[02:13:50]: [Step 2/2] Process exited with code 1
[02:13:50]: Publishing internal artifacts
[02:13:50]: [Publishing internal artifacts] Sending build.finish.properties.gz file
[02:13:50]: Build finished

I have tried to specify the tests to run using the following:

Tests: Tests.Metadude.Core.Extensions.StringExtensionsTests

But that doesn't work. I can't seem to find any documentation on Google related to the MSTest build step in TeamCity.

UPDATE

Ok, I am an idiot. Well, that might be a little harsh, but the test assembly was missing an "s" from the assembly name. Would have been nice to get something to that effect in the build log though.
Firstly, ensure the assembly you are trying to test exists at that location, i.e. your relative path:

Projects\Metadude..Tests\bin\Debug\Metadude..Test.dll

However, I would expect something logged by TC if your file didn't exist. It looks like it's running MSTest without any arguments somehow. If you are sure the path is correct, try it without specifying the .testsettings file to see what happens. I'm using MSTest successfully in TC without this (but you may need it). The other thing I'm doing differently is that I specify the FULL path to MSTest.exe, i.e. C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe, instead of their variable '%system.MSTest.10.0%'. I can't recall why I did this, but there would have been a good reason (like it didn't work when using their variable).
TeamCity
8,382,632
21
I want to adjust the output from my TeamCity build configuration of my class library so that the produced dll files have the following version number: 3.5.0.x, where x is the Subversion revision number that TeamCity has picked up. I've found that I can use the BUILD_NUMBER environment variable to get x, but unfortunately I don't understand what else I need to do. The "tutorials" I find all say "You just add this to the script", but they don't say which script, and "this" usually refers to the AssemblyInfo task from the MSBuild Community Extensions. Do I need to build a custom MSBuild script somehow to use this? Is the "script" the same as either the solution file or the C# project file? I don't know much about the MSBuild process at all, except that I can pass a solution file directly to MSBuild, but what I need to add to "the script" is XML, and the solution file decidedly does not look like XML. So, can anyone point me to a step-by-step guide on how to make this work?

This is what I ended up with:

1. Install the MSBuild Community Tasks
2. Edit the .csproj file of my core class library, and change the bottom so that it reads:

<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
<Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />
<Target Name="BeforeBuild">
  <AssemblyInfo Condition=" '$(BUILD_NUMBER)' != '' "
                CodeLanguage="CS"
                OutputFile="$(MSBuildProjectDirectory)\..\GlobalInfo.cs"
                AssemblyVersion="3.5.0.0"
                AssemblyFileVersion="$(BUILD_NUMBER)" />
</Target>
<Target Name="AfterBuild">

3. Change all my AssemblyInfo.cs files so that they don't specify either AssemblyVersion or AssemblyFileVersion (in retrospect, I'll look into putting AssemblyVersion back)
4. Add a link to the now-global GlobalInfo.cs that is located just outside all the projects
5. Make sure this file is built once, so that I have a default file in source control

This will now update GlobalInfo.cs only if the environment variable BUILD_NUMBER is set, which it is when I build through TeamCity. I opted for keeping AssemblyVersion constant, so that references still work, and only update AssemblyFileVersion, so that I can see which build a dll is from.
The CSPROJ file is effectively an MSBuild file. Unload the relevant class project in VS.NET, edit it, and uncomment the BeforeBuild target. Add the FileUpdate MSBuild task from the MSBuild Community Extensions. In your MSBuild file, you can retrieve the BUILD_NUMBER from TeamCity by using the environment variable $(build_vcs_number_1). Note that you may want to create an additional Configuration for 'Production' whose condition you check for, as this will obviously not work when you build locally. Simply use that as input for the FileUpdate task's ReplacementText property. Note that if your revision numbers go above the 65535 mark (UInt16), you cannot use them in the AssemblyVersion attribute. What I'd suggest you use instead, though, is AssemblyInformationalVersion, which is just a string that does not have that limitation. Unless of course you are confident that you won't hit this revision upper boundary, but that seems a dodgy way of going about it. Alternatively, you can devise a scheme (a.b.c.d being your version number) of using revision div 1000 for c and revision mod 1000 for d.
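To make this concrete, a rough sketch of what the BeforeBuild target could look like with the FileUpdate task - the regex, file path and 3.5.0 version prefix here are illustrative, not from the original answer:

<Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />
<Target Name="BeforeBuild" Condition=" '$(BUILD_NUMBER)' != '' ">
  <!-- Rewrite AssemblyFileVersion("...") to 3.5.0.<build number> -->
  <FileUpdate Files="Properties\AssemblyInfo.cs"
              Regex="AssemblyFileVersion\(&quot;.*&quot;\)"
              ReplacementText="AssemblyFileVersion(&quot;3.5.0.$(BUILD_NUMBER)&quot;)" />
</Target>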
TeamCity
2,027,857
20
If I have a Repository called my_project and make a large number of commits over a few different weeks, my contribution history on my GitHub/GitLab (Should be true for both), the main page profile will look like the following: As above you can see varied commits in the contribution panel, dark colours for more and light green for less, with grey as no commits to the repo. If I remove the repo does this all become grey? As in does the commit history for this graph on my landing page turn all those coloured cells from green to grey as there is no repo there any more associated with those commits. The only reason I ask this, is because I have made a repo which I no longer need and was using to test out some python and git, however if I remove the repo I am worried that my profile will show that I have made no contributions for the past month on GitHub/GitLab, but I have, it's only I want to remove the project as it's no longer needed. Any help is appreciated!
Yes, it will be removed too. However, if you don't want that, you can make the repo private so that it is not publicly accessible to anyone, and your contributions still show up in your history.
GitLab
66,104,527
25
I have multiple jobs working with a single external resource (Server). The first job deploys the app to the environment, the second execute tests at this environment, third execute integration tests at this environment. I know there is Resource group option. But it locks only jobs. If two pipelines run concurrently I need to execute job1, job2, job3 from the first pipeline, and only when the first pipeline release resource - the second pipeline can launch jobs1-3. Is there a way to achieve this? There are other jobs in the pipeline - they should work concurrently.
This should be possible in 13.9 by using resource_group with process mode = oldest_first. Details available at: https://docs.gitlab.com/ee/ci/resource_groups/index.html#pipeline-level-concurrency-control-with-cross-projectparent-child-pipelines
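In .gitlab-ci.yml that looks roughly like the snippet below (the job name and resource group key are examples):

deploy:
  stage: deploy
  script: ./deploy.sh
  resource_group: staging-server

The process mode is then switched via the resource group API - a sketch, assuming an access token with api scope and your own project id:

curl --request PUT --header "PRIVATE-TOKEN: <your_token>" \
     --data "process_mode=oldest_first" \
     "https://gitlab.example.com/api/v4/projects/<project_id>/resource_groups/staging-server"

With oldest_first, the upstream pipeline's remaining jobs in the resource group run before a newer pipeline's jobs get the lock, which is the pipeline-level ordering the question asks for.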
GitLab
59,745,807
25
Using our own instance of Gitlab we get the error background worker "logical replication launcher" exited with exit code 1 when trying to use the postgres service in our runners. Haven't found anything useful over the internet. Any idea what's going on?

Versions:

- Gitlab 12.4.3
- gitlab-runner 12.5.0 (limit of 4 concurrent jobs)
- postgres 12.1 (tried with 11 and same result)
- DigitalOcean droplet: CPU-Optimized / 8 GB / 4 vCPUs

Relevant part in gitlab-ci.yml:

image: golang:1.12

services:
  - postgres

variables:
  POSTGRES_USER: postgres
  POSTGRES_DB: xproject_test
  POSTGRES_PASSWORD: postgres

Failure log:

Running with gitlab-runner 12.5.0 (577f813d) on xproject sEZeszwx
Using Docker executor with image xproject-ci ...
Starting service postgres:latest ...
Pulling docker image postgres:latest ...
Using docker image sha256:9eb7b0ce936d2eac8150df3de7496067d56bf4c1957404525fd60c3640dfd450 for postgres:latest ...
Waiting for services to be up and running...
*** WARNING: Service runner-sEZeszwx-project-18-concurrent-0-postgres-0 probably didn't start properly.
Health check error: service "runner-sEZeszwx-project-18-concurrent-0-postgres-0-wait-for-service" timeout
Health check container logs:
Service container logs:
2019-11-20T10:16:23.805738908Z The files belonging to this database system will be owned by user "postgres".
2019-11-20T10:16:23.805807212Z This user must also own the server process.
2019-11-20T10:16:23.805818432Z
2019-11-20T10:16:23.806094094Z The database cluster will be initialized with locale "en_US.utf8".
2019-11-20T10:16:23.806120707Z The default database encoding has accordingly been set to "UTF8".
2019-11-20T10:16:23.806208494Z The default text search configuration will be set to "english".
2019-11-20T10:16:23.806264704Z
2019-11-20T10:16:23.806282587Z Data page checksums are disabled.
2019-11-20T10:16:23.806586302Z
2019-11-20T10:16:23.806931287Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2019-11-20T10:16:23.807763042Z creating subdirectories ... ok
2019-11-20T10:16:23.808045789Z selecting dynamic shared memory implementation ... posix
2019-11-20T10:16:23.835644353Z selecting default max_connections ... 100
2019-11-20T10:16:23.866604734Z selecting default shared_buffers ... 128MB
2019-11-20T10:16:23.928432088Z selecting default time zone ... Etc/UTC
2019-11-20T10:16:23.929447992Z creating configuration files ... ok
2019-11-20T10:16:24.122662589Z running bootstrap script ... ok
2019-11-20T10:16:24.706975030Z performing post-bootstrap initialization ... ok
2019-11-20T10:16:24.819117668Z initdb: warning: enabling "trust" authentication for local connections
2019-11-20T10:16:24.819150100Z You can change this by editing pg_hba.conf or using the option -A, or
2019-11-20T10:16:24.819157763Z --auth-local and --auth-host, the next time you run initdb.
2019-11-20T10:16:24.819272849Z syncing data to disk ... ok
2019-11-20T10:16:24.819313390Z
2019-11-20T10:16:24.819328954Z
2019-11-20T10:16:24.819340787Z Success. You can now start the database server using:
2019-11-20T10:16:24.819349374Z
2019-11-20T10:16:24.819357407Z     pg_ctl -D /var/lib/postgresql/data -l logfile start
2019-11-20T10:16:24.819365840Z
2019-11-20T10:16:24.857656160Z waiting for server to start....2019-11-20 10:16:24.857 UTC [46] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2019-11-20T10:16:24.860371378Z 2019-11-20 10:16:24.860 UTC [46] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2019-11-20T10:16:24.886271885Z 2019-11-20 10:16:24.886 UTC [47] LOG: database system was shut down at 2019-11-20 10:16:24 UTC
2019-11-20T10:16:24.892844968Z 2019-11-20 10:16:24.892 UTC [46] LOG: database system is ready to accept connections
2019-11-20T10:16:24.943542403Z done
2019-11-20T10:16:24.943591286Z server started
2019-11-20T10:16:25.084670051Z CREATE DATABASE
2019-11-20T10:16:25.086153670Z
2019-11-20T10:16:25.086604000Z
2019-11-20T10:16:25.086694058Z /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
2019-11-20T10:16:25.086711933Z
2019-11-20T10:16:25.088473308Z 2019-11-20 10:16:25.088 UTC [46] LOG: received fast shutdown request
2019-11-20T10:16:25.090893184Z waiting for server to shut down....2019-11-20 10:16:25.090 UTC [46] LOG: aborting any active transactions
2019-11-20T10:16:25.092499368Z 2019-11-20 10:16:25.092 UTC [46] LOG: background worker "logical replication launcher" (PID 53) exited with exit code 1
2019-11-20T10:16:25.093942785Z 2019-11-20 10:16:25.093 UTC [48] LOG: shutting down
2019-11-20T10:16:25.112341160Z 2019-11-20 10:16:25.112 UTC [46] LOG: database system is shut down
2019-11-20T10:16:25.189351710Z done
2019-11-20T10:16:25.189393803Z server stopped
2019-11-20T10:16:25.189929555Z
2019-11-20T10:16:25.189967760Z PostgreSQL init process complete; ready for start up.
2019-11-20T10:16:25.189982340Z
2019-11-20T10:16:25.214046388Z 2019-11-20 10:16:25.213 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2019-11-20T10:16:25.214092434Z 2019-11-20 10:16:25.213 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2019-11-20T10:16:25.214172706Z 2019-11-20 10:16:25.214 UTC [1] LOG: listening on IPv6 address "::", port 5432
2019-11-20T10:16:25.219769380Z 2019-11-20 10:16:25.219 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2019-11-20T10:16:25.241614800Z 2019-11-20 10:16:25.241 UTC [64] LOG: database system was shut down at 2019-11-20 10:16:25 UTC
2019-11-20T10:16:25.248887712Z 2019-11-20 10:16:25.248 UTC [1] LOG: database system is ready to accept connections
First thing, your database container is ready to accept connections, as you can see from the logs:

2019-11-20T10:16:25.248887712Z 2019-11-20 10:16:25.248 UTC [1] LOG: database system is ready to accept connections

This is the expected behaviour of the official Postgres image. If you look into the entrypoint of Postgres, it performs two tasks:

1. It will run any *.sql files, run any executable *.sh scripts, and source any non-executable *.sh scripts found in the /docker-entrypoint-initdb.d directory to do further initialization before starting the service.
2. After initialization completes, it stops the process and runs it again as the main process of the container.

"Do you know why I get that shutdown request?" - that stop is deliberate:

# stop postgresql server after done setting up user and running scripts
docker_temp_server_stop() {
    PGUSER="${PGUSER:-postgres}" \
    pg_ctl -D "$PGDATA" -m fast -w stop
}

Then after completion:

echo 'PostgreSQL init process complete; ready for start up.'
...
exec "$@"
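Since the service itself ends up healthy, the runner's health-check timeout is often just racing that init-time restart. If you want the job to wait explicitly until Postgres accepts connections, a sketch (this assumes a Debian-based job image so apt-get is available; "postgres" is the default service host alias):

before_script:
  - apt-get update && apt-get install -y postgresql-client
  # block until the service container answers on the default port
  - until pg_isready -h postgres -U postgres; do sleep 1; done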
GitLab
58,952,919
25
git is asking me to enter my gitlab user credentials when pushing or pulling code. I am using gitlab.com, I'm not self-hosting gitlab. I followed the instructions to set up my ssh key. I created a key, I copied the contents from ~/.ssh/id_rsa.pub, added the key to gitlab using gitlab's user interface, and git still asks me for my user and password. git remote -v origin https://gitlab.com/<my_user>/<my_repo> (fetch) origin https://gitlab.com/<my_user>/<my_repo> (push)
You are using HTTPS authentication. Switch to an SSH-based URL (in this case, probably ssh://git@gitlab.com/path/to/repo.git or git@gitlab.com:path/to/repo.git). For instance:

git remote set-url origin ssh://git@gitlab.com/<user>/<repo>

(Alternatively, if you're comfortable with using your configured editor, you can run git config --edit and edit the URL directly in the configuration file. Make sure whatever editor you choose stores files in plain-text format, not any enhanced rich-text or UTF-16 encoding.)

Background

In order to connect one Git to another, you may need to authenticate yourself. There are several ways to do that. (Whether you need to do that depends on the kind of connection you are making, and whether the repository is public.)

- If you are authenticating over https://, your Git may ask for a user name and password. It can use various credential helpers, which may also store user names and/or passwords. You can configure which credential helper to use, including the "cache" and "store" helpers, which use different additional data. Note that the available set of credential helpers varies based on your underlying operating system as well.
- If you are authenticating over ssh://, you will use an ssh key (SSH authentication).
- If you use a URL that begins with user@host:, you are using SSH authentication.
GitLab
47,125,916
25
I've tried to get my setup to work with gitlab-ci. I have a simple gitlab-ci.yml file:

build_ubuntu:
  image: ubuntu:14.04
  services:
    - rikorose/gcc-cmake:gcc-5
  stage: build
  script:
    - apt-get update
    - apt-get install -y python3 build-essential curl
    - cmake --version
  tags:
    - linux

I want to get an Ubuntu 14.04 LTS with gcc and cmake (the apt-get version is too old) installed. If I use it locally (via the docker --link command) everything works, but when the gitlab-ci-runner processes it I get the following warning (which is in my case an error):

Running with gitlab-ci-multi-runner 9.2.0 (adfc387) on xubuntuci1 (19c6d3ce)
Using Docker executor with image ubuntu:14.04 ...
Starting service rikorose/gcc-cmake:gcc-5 ...
Pulling docker image rikorose/gcc-cmake:gcc-5 ...
Using docker image rikorose/gcc-cmake:gcc-5 ID=sha256:ef2ac00b36e638897a2046c954e89ea953cfd5c257bf60103e32880e88299608 for rikorose/gcc-cmake service...
Waiting for services to be up and running...
*** WARNING: Service runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake probably didn't start properly.
Error response from daemon: Cannot link to a non running container: /runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake AS /runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake-wait-for-service/runner-19c6d3ce-project-54-concurrent-0-rikorose__gcc-cmake

Does anybody know how I can fix this? Thanks in advance, Tonka
You must start the gitlab-runner container with --privileged true, but that is not enough. Any runner containers that are spun up by GitLab after registering need to be privileged too. So you need to enter the gitlab-runner container:

docker exec -it runner /bin/bash
nano /etc/gitlab-runner/config.toml

and change the privileged flag from false to true:

privileged = true

That will solve the problem!

Note: you can also mount the config.toml as a volume on the container; then you won't have to log into the container to change privileged to true, because you can preconfigure the container before running it.
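For reference, the relevant part of /etc/gitlab-runner/config.toml ends up looking roughly like this (runner name, URL and token are placeholders):

[[runners]]
  name = "my-docker-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "ubuntu:14.04"
    privileged = true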
GitLab
44,257,172
25
I have not found a way to disable the automatic startup, and it ends up using too much RAM when I'm not using it. The init files are not inside /etc/init or init.d. I tried update-rc.d gitlab remove with no results. I am using GitLab 8.5.4 on Debian 8.
Problem is solved! I contacted GitLab via their official page on Facebook and here is the answer. I am using GitLab on a desktop and it was using ~700MB. If you too want to turn off GitLab on startup, just execute in a terminal:

sudo systemctl disable gitlab-runsvdir.service
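To start it manually later, or to restore the autostart, the standard systemd counterparts apply:

sudo systemctl start gitlab-runsvdir.service    # run it on demand
sudo systemctl enable gitlab-runsvdir.service   # re-enable autostart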
GitLab
35,838,438
25
I want to get a list of all the projects which are under a particular group in GitLab. Here is an example scenario:

Group A (id: 1) has 3 projects:

- Group A / Project 1
- Group A / Project 2
- Group A / Project 3

Group B (id: 2) has 5 projects:

- Group B / Project 1
- Group B / Project 2
- Group B / Project 3
- Group B / Project 4
- Group B / Project 5

Now if I hit the REST API GET /groups, it will give me only the list of groups. If I hit the REST API GET /projects/all, it will give me a list of all the projects. What I am looking for is an operation something like GET /groups/:groupid/projects/all - that is, all the projects for that particular group. So if I say GET /groups/1/projects/all, it will give me Project 1, Project 2 and Project 3. The only way I can think of is to get a list of all the projects and loop over them to see if each matches my group name, but that would be a lot of unnecessary parsing. How can I achieve this in a better way? I am working with GitLab CE 7.2.1 and referring to the GitLab API documentation.
You can also use the recently released GitLab GraphQL API to query groups by name:

{
  group(fullPath: "your_group_here") {
    projects {
      nodes {
        name
        description
        httpUrlToRepo
        nameWithNamespace
        starCount
      }
    }
  }
}

You can go to the following URL: https://[your_gitlab_host]/-/graphql-explorer and paste the above query. The GraphQL endpoint is a POST on "https://$gitlab_url/api/graphql". An example using curl and jq:

gitlab_url=<your gitlab host>
access_token=<your access token>
group_name=<your group>

curl -s -H "Authorization: Bearer $access_token" \
     -H "Content-Type:application/json" \
     -d '{ "query": "{ group(fullPath: \"'$group_name'\") { projects {nodes { name description httpUrlToRepo nameWithNamespace starCount}}}}" }' \
     "https://$gitlab_url/api/graphql" | jq '.'
GitLab
31,498,473
25
I'm using Gitlab 5.0 to manage my git repositories and I've never used github before Gitlab. When I create a group, I see a new directory with this group name in /home/git/repositories. But with team, no such thing is done. Also, with group, I can create a project for the group and the assignments (for users of this group) is done automatically. I can't see any other differences between group and team and I would like to understand that. Thank you in advance and sorry for the bad English (I'm french),
GitLab 6.0 (August 2013, 22d)

See commit 3bc4845:

Feature: Replace teams with group membership

We introduce group membership in 6.0 as a replacement for teams. The old combination of groups and teams was confusing for a lot of people. And when the members of a team where changed, this wasn't reflected in the project permissions. In GitLab 6.0 you will be able to add members to a group with a permission level for each member. These group members will have access to the projects in that group. Any changes to group members will immediately be reflected in the project permissions. You can even have multiple owners for a group, greatly simplifying administration.

Why do references to Teams still exist in GitLab 7 then?

"Team" now (GitLab 6.x->7.x, 2015) seems limited to a project (see for example features/project/team_management.feature, and app/models/project_team.rb or spec/models/project_team_spec.rb). A project can be part of a group: see "Gitlab API for all projects under group". "Group" references users, and can group multiple projects (see features/groups.feature, app/models/group.rb, app/models/members/group_member.rb). As a user, you are first a member of a group, and you have roles ('Reporter', 'Developer', ...) associated to a project (which makes you a member of that project, part of the "team" for that project). No role means "not a member of the team for a project". See db/migrate/20140914145549_migrate_to_new_members_model.rb.

Answer for GitLab 5.x (before August 2013, 22d)

- Group is for grouping projects, similar to a folder (git repositories)
- Team is for grouping resources (people)

Those notions were refined in GitLab 4.2. They allow you to manage authorization in a more convenient way, granting permissions to a group of projects in one operation, and/or granting permission to a group of people, referenced by their team. GitLab 5.x no longer used Gitolite, but before 5.0, teams and groups came from Gitolite and its gitolite.conf configuration file. This is where teams and groups were declared and associated in order to grant access. Even without Gitolite, the idea persists: managing authorization through associations between teams (of people) and groups (of projects).
GitLab
15,894,624
25
GitLab: .gitlab-ci.yml syntax error

```sh
docker exec -i XXX pip3 install -r ./requirements_os_specific.txt --target=./packages --platform=manylinux1_x86_64 --only-binary=:all:
```

This command gives a syntax error: "Error: before_script config should be an array of strings". It works fine if I remove "--only-binary=:all:".

```yaml
variables:
  IMAGE_NAME: xxx

before_script:
  - whoami
  - echo $GitLabPassword
  - docker login -u Prasenjit.Chowdhury -p $GitLabPassword xxxxxxx
  - docker -v
  - docker exec -i abc python -V
  - docker exec -i abc aws --version
  - docker exec -i abc pip3 install -r ./requirements_os_specific.txt --target=./packages --platform=manylinux1_x86_64 --only-binary=:all:
```

This script works fine if I remove the last line.
You have to escape a colon : in YAML. This can be done by surrounding the whole entry with quotes ". Replace:

```yaml
- docker exec -i abc pip3 install -r ./requirements_os_specific.txt --target=./packages --platform=manylinux1_x86_64 --only-binary=:all:
```

with:

```yaml
- "docker exec -i abc pip3 install -r ./requirements_os_specific.txt --target=./packages --platform=manylinux1_x86_64 --only-binary=:all:"
```
GitLab
54,865,364
24
In my pipeline, I'd like to have a job run only if the Merge Requests target branch is a certain branch, say master or release. Is this possible? I've read through https://docs.gitlab.com/ee/ci/variables/ and unless I missed something, I'm not seeing anything that can help.
Update: 2019-03-21

GitLab has variables for merge request info since version 11.6 (https://docs.gitlab.com/ce/ci/variables/, see the variables starting with CI_MERGE_REQUEST_). But these variables are only available in merge request pipelines (https://docs.gitlab.com/ce/ci/merge_request_pipelines/index.html). To configure a CI job for merge requests, we have to set:

```yaml
only:
  - merge_requests
```

And then we can use CI_MERGE_REQUEST_* variables in those jobs.

The biggest pitfall here is that only: merge_requests has completely different behavior from the normal only/except parameters.

Usual only/except parameters (https://docs.gitlab.com/ce/ci/yaml/README.html#onlyexcept-basic):

- only defines the names of branches and tags for which the job will run.
- except defines the names of branches and tags for which the job will not run.

only: merge_requests (https://docs.gitlab.com/ce/ci/merge_request_pipelines/index.html#excluding-certain-jobs):

- The behavior of the only: merge_requests parameter is such that only jobs with that parameter are run in the context of a merge request; no other jobs will be run.

I found it hard to reorganize jobs to make them work like before once only: merge_requests exists on any job. Thus I'm still using the one-liner in my original answer to get MR info in a CI job.

Original answer:

No. But GitLab has a plan for this feature in 2019 Q2: https://gitlab.com/gitlab-org/gitlab-ce/issues/23902#final-assumptions

Currently, we can use a workaround to achieve this. The method is as Rekovni's answer describes, and it actually works. There's a simple one-liner to get the target branch of an MR from the current branch:

```yaml
script: # in any script section of gitlab-ci.yml
  - 'CI_TARGET_BRANCH_NAME=$(curl -LsS -H "PRIVATE-TOKEN: $AWESOME_GITLAB_API_TOKEN" "https://my.gitlab-instance.com/api/v4/projects/$CI_PROJECT_ID/merge_requests?source_branch=$CI_COMMIT_REF_NAME" | jq --raw-output ".[0].target_branch")'
```

Explanation:

- CI_TARGET_BRANCH_NAME is a newly defined variable which stores the resolved target branch name. Defining a variable is not necessary for various usages.
- AWESOME_GITLAB_API_TOKEN is a variable configured in the repository's CI/CD variable config. It is a GitLab personal access token (created in User Settings) with api scope.
- About curl options: -L makes curl aware of HTTP redirections; -sS makes curl silent (-s) but still show (-S) errors; -H specifies the authorization info for accessing the GitLab API.
- The API used can be found at https://docs.gitlab.com/ce/api/merge_requests.html#list-project-merge-requests. We use the source_branch attribute to figure out which MR the current pipeline is running on. Thus, if a source branch has multiple MRs to different target branches, you may want to change the part after | and do your own logic.
- About jq (https://stedolan.github.io/jq/): it's a simple CLI util to deal with JSON stuff (what the GitLab API returns). You could use node -p or any method you want.
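On GitLab versions that support rules, the same intent can also be expressed directly; here's a minimal sketch (the job name and target branch are hypothetical) that runs a job only when an MR targets master:

```yaml
# Sketch: run only for merge request pipelines targeting master
check-target:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'
  script:
    - echo "This MR targets master"
```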
GitLab
52,746,338
24
In our organisation we are currently moving to git. I have created a group accidentally and I would appreciate it if there is a way to delete this group.
In the most recent version (Mar. 2019):

1. At the top bar, click on Groups.
2. Type part of the name of the desired group, or click on Explore Groups.
3. Click on the desired group.
4. Go to Settings > General.
5. Expand Path, transfer, remove.
6. At the bottom of the page you find Remove group. Click on the button and type the name of the group to activate the Confirm button. Then click on Confirm to delete the group.
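If you prefer the API, a group can also be removed with a single call; this is a sketch assuming you have a personal access token with api scope, and the host, token, and group ID are placeholders:

```sh
# Deletes the group (irreversible); <group_id> may also be the URL-encoded group path
curl --request DELETE --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/groups/<group_id>"
```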
GitLab
38,094,527
24
I am in the process of migrating my svn repositories to git with GitLab. Now I have seen that there is a continuous integration implementation with GitLab CI and I just want to try it out. I already installed and configured a Runner, but GitLab complains that I don't have a .gitlab-ci.yml file. I already use TeamCity for continuous integration, so I don't want to put too much effort into writing a build script. Can anybody tell me where I can find a basic example of a gitlab-ci.yml file that basically just builds my Solution and runs all tests (MSTests)?
Apparently there is no simple msbuild example, but this should get you started:

```yaml
variables:
  Solution: MySolution.sln

before_script:
  - "echo off"
  - 'call "%VS120COMNTOOLS%\vsvars32.bat"'
  # output environment variables (useful for debugging, probably not what you want to do if your ci server is public)
  - echo.
  - set
  - echo.

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - echo building...
    - 'msbuild.exe "%Solution%"'
  except:
    - tags

test:
  stage: test
  script:
    - echo testing...
    - 'msbuild.exe "%Solution%"'
    - dir /s /b *.Tests.dll | findstr /r Tests\\*\\bin\\ > testcontainers.txt
    - 'for /f %%f in (testcontainers.txt) do mstest.exe /testcontainer:"%%f"'
  except:
    - tags

deploy:
  stage: deploy
  script:
    - echo deploying...
    - 'msbuild.exe "%Solution%" /t:publish'
  only:
    - production
```

Figuring out which tests to run is a bit tricky. My convention is that every project has a tests folder in which the test projects are named after the schema MyProject.Core.Tests (for a project called MyProject.Core).

Just as first feedback towards gitlab-ci: I like the simplicity and the source control integration. But I would like to be able to modify the script before execution (especially while changing the script); I could imagine rerunning a specific commit and injecting variables or changing the script (I can do that with TeamCity). Or even ignoring a failed test and rerunning the script again (I do that a lot with TeamCity). I know gitlab-ci does not know anything about my tests; I just have a command line that returns an error code.
GitLab
32,964,953
24
I am following this document to install gitlab docker image, and get confused with the command: docker run --name gitlab_data genezys/gitlab:7.5.2 /bin/true I know "/bin/true" command just returns a success status code, but how can I understand the role of /bin/true in this docker run ... command?
Running (and thus creating) a new container, even if it terminates immediately, still keeps the resulting container and its metadata lying around, and it can still be linked to. So when you run docker run ... /bin/true you are essentially creating a new container for storage purposes and running the simplest thing you can.

Docker 1.5 introduced the docker create command, so I believe you can now "create" containers without confusingly running something like /bin/true.

See: docker create

This new method of managing data volume containers is also clearly documented in the section Creating and mounting a Data Volume Container.
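In practice the pattern looks like this; a minimal sketch using the names from the question (the second run's other options are omitted for brevity):

```sh
# Create a data-only container; /bin/true exits immediately, but the
# container (and its volumes) remain and can be referenced by name
docker run --name gitlab_data genezys/gitlab:7.5.2 /bin/true

# Start the real container, mounting the volumes of the data container
docker run --volumes-from gitlab_data --name gitlab genezys/gitlab:7.5.2
```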
GitLab
29,762,231
24
We have a normal repository, with some code and tests. One job has a 'rules' statement:

```yaml
rules:
  - changes:
      - foo/**/*
      - foo_scenarios/**/*
      - .gitlab-ci.yml
```

The problem is that the presence of rules causes GitLab to run a 'detached pipeline', which wasn't my intention, and it's annoying. Is there any way to disable those 'detached' pipelines, but keep the rules section in place?
```yaml
rules:
  - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    when: never
  - changes:
      - foo/**/*
      - foo_scenarios/**/*
      - .gitlab-ci.yml
    when: always
```

I have not tested this, but I believe this is what you are looking for. This page and this one too are both easily navigable and are very helpful for finding the answer to basic gitlab-ci.yml questions.

Edit: GitLab evaluates the rules in order, and it stops evaluating subsequent rules as soon as one of the conditions is met. In this case, it will evaluate if: '$CI_PIPELINE_SOURCE == "merge_request_event"' first, and if it evaluates to true, no more rules will be checked. If the first rule evaluates to false, it will move on to the next rule.
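If you want to suppress merge request ("detached") pipelines for the whole file rather than per job, a workflow block is an alternative on GitLab versions that support it; a minimal sketch:

```yaml
# Skip MR ("detached") pipelines globally, run everything else
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never
    - when: always
```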
GitLab
68,955,071
23
I want to apply a consistent commit message rule globally, so that all developers must reference a Jira issue in the commit message. We are using enterprise GitHub. As I found, this can be achieved by updating the .git/hooks/commit-msg file accordingly.

Issue: how do I apply this updated git hook so that it is available for all the developers? I don't see any changes appear in git status. I'm not sure if it has to be configured a different way so the commit-msg constraint applies for all developers instead of just locally. Can you please guide me on this?
Any files in .git/hooks are not part of the repository and will not be transmitted by any push or fetch operation (including the fetch run by git pull). The reason is simple enough; see below. By the same token, a developer who has a Git repository has full control over his/her/their .git/hooks directory and whether any pre-commit and commit-msg hook is run (git commit --no-verify will skip both of these hooks).

What all this means is that you cannot force a developer to use any particular pre-commit or commit-msg hook. Instead of making this happen (which you can't do), you should do what you can to make it easy for your developers to use some hook(s) that you supply. There are many ways to achieve this. In the examples below I'll mention pre-commit hooks rather than commit-msg hooks.

One very simple one is to have a script that you write as a setup command that:

- clones the repository or repositories developers should use; then
- adds to those clones the hooks that you'd like them to use.

If the clones already exist, this command can simply update the hooks in place, so that your developers can just run the setup script again if the hooks have changed.

Another is to have a script in the repository that installs symbolic links, so that .git/hooks/pre-commit becomes a symbolic link to ../../scripts/pre-commit. This path climbs back out of hooks and then .git and then reaches into scripts/pre-commit, which means that the script in scripts/pre-commit is the one that will be run. You can of course use different names, e.g., make .git/hooks/pre-commit a symlink to ../../.git-precommit, if you like. This method does of course require that your developers' systems support symbolic links.

Your scripts, whether they're all in the repository itself or in some separate repository or whatever, can be as simple or as fancy as you like, and you can write them in any language that you like. This applies both to "install some hooks" scripts, and to the hooks themselves. The only constraint is that the hooks must be able to run on the developers' machines.

Note that there are entire frameworks for Git hooks, such as Husky and the various options on pre-commit.com.

One small warning: it's easy to go overboard here. I'll recommend that anyone contemplating setting up fancy hooks have a look at this blog post for instance. Your check on commit messages is an example of not going overboard: it's simple and fast and does not get in anyone's way.

Why hooks are not copied

Imagine if git clone did copy and install hooks. Then imagine that the hook in some repository includes this bit of pseudo shell script in, say, the post-checkout hook:

```sh
if [ ! -x /tmp/trojan-horse ]; then
    install-trojan-horse &
fi
```

As soon as you clone this repository, boom, you've been pwned.
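For the Jira-key requirement specifically, the commit-msg hook itself can be tiny; here's a minimal sketch (the PROJ-123 pattern is an assumption, so adjust the regex to your actual Jira project keys):

```sh
#!/bin/sh
# commit-msg hook: reject commits whose message lacks a Jira issue key
msg_file="$1"   # Git passes the path of the commit message file

if ! grep -qE '[A-Z][A-Z0-9]+-[0-9]+' "$msg_file"; then
    echo "Commit message must reference a Jira issue (e.g. PROJ-123)" >&2
    exit 1
fi
```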
GitLab
66,228,019
23
I am not able to push code to GitLab for the first time. I have created a Gitlab project on the web interface. I have created an ASP MVC project. I did "git init" in that directory. I added remote origin. I added and committed the changes and then, when I want to push the changes to remote I get the following error: remote: A default branch (e.g. master) does not yet exist for ... remote: Ask a project Owner or Maintainer to create a default branch: ... error: failed to push some refs to So I want to know if I can create a branch with the role Developer? If yes how to create a default branch? If no, what should my next steps be?
This is followed by issue 54155 It seems as though 'developer' should not be able to create projects if it does not have permission to then populate that project without asking an admin to intervene. I don't understand, why I have enough permissions to create a project, but not to make it ready enough so I can push into it, and need to use another persons time in my organization to get ready to work. IMHO it would be logical to either allow users with status developer to create a project in a way that it's ready to push code to - or just don't let developers create a project and tell them they have to ask a maintainer or owner. Issue 51688 mentions: Then I have to login with gitlab admin admin...Aggregate access to user test to test6.git as a dev or maintainer... and finally I can make push... But if user test creates another repo, I have to repeat all these steps again.... Maintainer can create new branch. Please use maintainer account to push the new project. But it doesn't make sense that a user has permissions to create empty repositories but not initialize a default branch or push to their newly created repository.
GitLab
60,984,084
23
I am new to GitLab and facing a problem where if I trigger two pipelines at the same time on same gitlab-runner, they both run in parallel and results in failure. What I want is to limit the run to one pipeline at a time and others in queue. I have set the concurrent = 1 in config.toml and restarted the runner but it didn't help. My ultimate goal is to prevent multi-pipeline run on the runner. Thanks.
Set resource_group in the job, and give the same unique name to all other tasks that should be blocked. Example from the documentation:

```yaml
deploy-to-production:
  script: deploy
  resource_group: production
```
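If the goal is to serialize every job on the runner itself, rather than per job name, the runner's config.toml also exposes a per-runner limit; a sketch (the runner name is a placeholder):

```toml
# Global cap across all runners defined in this config file
concurrent = 1

[[runners]]
  name = "my-runner"
  # Cap on jobs handled concurrently by this particular runner
  limit = 1
```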
GitLab
60,965,478
23
In Visual Studio, I'm trying to pull some changes from the repository on GitLab, but it gives me an error: Git failed with a fatal error. unable to access https://gitlab...git/: SSL certificate problem: certificate has expired* How can I generate a new certificate and add it to VS? I don't have any experience with GitLab.
There's a quick fix you can run in the command line:

```sh
git config --global http.sslVerify "false"
```

The solution was found in the following article.

Updated: While the original solution provided a quick workaround, it's essential to emphasize the security implications and responsible usage due to the concerns raised in the comments.

Warning: the above quick fix can expose you to security risks by disabling SSL verification. Use it with utmost caution and strictly for troubleshooting purposes.

A safer alternative:

1. Update Git: ensure you are using the latest version of Git, which might have improved handling of SSL and certificate issues.
   - For Windows, download and install from the official website.
   - For macOS (using Homebrew): `brew upgrade git`
   - For Debian-based Linux distributions: `sudo apt-get update && sudo apt-get upgrade git`
2. Verify the certificate: identify why the certificate has expired and engage the repository administrator or IT department for a resolution, as managing and renewing certificates is typically their responsibility.

Using the quick fix responsibly, if you find yourself obliged to use it:

- Local environment only: ensure it is applied only in a secure and controlled environment, never in a production setting.
- Limited time: re-enable SSL verification as soon as possible by running `git config --global http.sslVerify "true"`

Additional notes:

- Quality vs. security: as commented by Eric K, having a valid SSL certificate doesn't equate to the safety of the code you pull. Always ensure code quality and integrity.
- Engage experts: if unsure, consult your IT department or cybersecurity experts regarding expired certificates and any temporary workarounds.

Conclusion: security should always be paramount. Adopt solutions that not only solve the immediate problem but also uphold the integrity and security of your development environment and code.
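Before disabling verification, it's worth confirming which certificate actually expired; a quick sketch with openssl (the hostname is a placeholder):

```sh
# Print the notBefore/notAfter dates of the server certificate
echo | openssl s_client -connect gitlab.example.com:443 -servername gitlab.example.com 2>/dev/null \
  | openssl x509 -noout -dates
```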
GitLab
60,024,912
23
I use SourceTree to clone an SSH project.

- I've already created an SSH key
- I've already set up the GitLab SSH key setting
- I've run ssh-add "mysshkey"
- I've run ssh-add -K 'mysshkey'

When I run ssh -T, it succeeds in the command line. When I git clone, pull, or push over SSH, it also works in the command line (terminal), but in SourceTree I still get the error: Permission denied (publickey). How can I solve it?
I downloaded SourceTree 2.7.6 and encountered the same problem. I think @Frankie_0927 is right: the private key must be named id_rsa and must be registered in the ssh agent.

For other people who encountered this problem: try generating a pair of keys following the instructions in the link below: https://help.github.com/articles/connecting-to-github-with-ssh/

Store the private key id_rsa in ~/user/YOURUSERNAME/.ssh (path for mac) and post the public key in your account. Then run:

```sh
ssh-add -l
```

You will see "The agent has no identities." So you run:

```sh
ssh-add -K ~/.ssh/id_rsa
```

to add the key to the ssh agent. After this, run ssh-add -l again; you will see the key has been added, and the problem should be solved.
GitLab
51,650,052
23
I have the following gitlab-ci conf. file:

```yaml
before_script:
  - echo %CI_BUILD_REF%
  - echo %CI_PROJECT_DIR%

stages:
  - createPBLs
  - build
  - package

create PBLs:
  stage: createPBLs
  script:
    - xcopy /y /s "%CI_PROJECT_DIR%" "C:\Bauen\"
    - cd "C:\Bauen\"
    - ./run_orcascript.cmd

build:
  stage: build
  script:
    - cd "C:\Bauen\"
    - ./run_pbc.cmd
  except:
    - master

build_master:
  stage: build
  script:
    - cd "C:\Bauen\"
    - ./run_pbcm.cmd
  only:
    - master

package:
  stage: package
  script:
    - cd "C:\Bauen\"
    - ./cpfiles.cmd
  artifacts:
    expire_in: 1 week
    name: "%CI_COMMIT_REF_NAME%"
    paths:
      - GitLab-Build
```

How can I add the rule that the pipeline will ONLY trigger if a new tag has been added to a branch? The tag should start with "Ticket/ticket_". Currently it builds on every push.
You need to use the only syntax:

```yaml
only:
  - tags
```

This would trigger for any tag being pushed. If you want to be a bit more specific you can do:

```yaml
only:
  - /Ticket\/ticket\_.*/
```

which would build for any push with the Ticket/ticket_ tag.
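With newer GitLab versions the same intent can be written with rules; a sketch using the predefined CI_COMMIT_TAG variable (the job name is a placeholder):

```yaml
build:
  rules:
    # Run only when the pushed ref is a tag matching Ticket/ticket_*
    - if: '$CI_COMMIT_TAG =~ /^Ticket\/ticket_/'
  script:
    - echo "building tag %CI_COMMIT_TAG%"
```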
GitLab
49,514,416
23
I am facing an issue where cached files are not used in project builds. In my case, I want to download composer dependencies in the build stage and then add them into the final project folder after all other stages succeed. I thought that if you set the cache attribute in the .gitlab-ci.yml file, it would be shared and used in other stages as well. But it sometimes works and sometimes not. The GitLab version is 9.5.4. Here is my .gitlab-ci.yml file:

```yaml
image: ponk/debian:jessie-ssh

variables:
  WEBSERVER: "[email protected]"
  WEBSERVER_DEPLOY_DIR: "/domains/example.com/web-presentation/deploy/"
  WEBSERVER_CDN_DIR: "/domains/example.com/web-presentation/cdn/"
  TEST_VENDOR: '[ "$(ls -A ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}/vendor)" ]'

cache:
  key: $CI_PIPELINE_ID
  untracked: true
  paths:
    - vendor/

before_script:

stages:
  - build
  - tests
  - deploy
  - post-deploy

Build sources:
  image: ponk/php5.6
  stage: build
  script:
    # Install composer dependencies
    - composer -n install --no-progress
  only:
    - tags
    - staging

Deploy to Webserver:
  stage: deploy
  script:
    - echo "DEPLOYING TO ... ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}"
    - ssh $WEBSERVER mkdir -p ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}
    - rsync -rzha app bin vendor www .htaccess ${WEBSERVER}:${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}
    - ssh $WEBSERVER '${TEST_VENDOR} && echo "vendor is not empty, build seems ok" || exit 1'
    - ssh $WEBSERVER [ -f ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA}/vendor/autoload.php ] && echo "vendor/autoload.php exists, build seems ok" || exit 1
    - echo "DEPLOYED"
  only:
    - tags
    - staging

Post Deploy Link PRODUCTION to Webserver:
  stage: post-deploy
  script:
    - echo "BINDING PRODUCTION"
    - ssh $WEBSERVER unlink ${WEBSERVER_DEPLOY_DIR}production-latest || true
    - ssh $WEBSERVER ln -s ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA} ${WEBSERVER_DEPLOY_DIR}production-latest
    - echo "BOUNDED $CI_COMMIT_SHA -> production-latest"
    - ssh $WEBSERVER sudo service php5.6-fpm reload
  environment:
    name: production
    url: http://www.example.com
  only:
    - tags

Post Deploy Link STAGING to Webserver:
  stage: post-deploy
  script:
    - echo "BINDING STAGING"
    - ssh $WEBSERVER unlink ${WEBSERVER_DEPLOY_DIR}staging-latest || true
    - ssh $WEBSERVER ln -s ${WEBSERVER_DEPLOY_DIR}${CI_COMMIT_REF_NAME}/${CI_COMMIT_SHA} ${WEBSERVER_DEPLOY_DIR}staging-latest
    - echo "BOUNDED ${CI_COMMIT_SHA} -> staging-latest"
    - ssh $WEBSERVER sudo service php5.6-fpm reload
  environment:
    name: staging
    url: http://staging.example.com
  only:
    - staging
```

In the GitLab documentation it says: cache is used to specify a list of files and directories which should be cached between jobs. From what I understand I've set up cache correctly: I have untracked set to true, paths includes the vendor folder, and key is set to the Pipeline ID, which should be the same in other stages as well. I've seen some setups which contained Artifacts, but unless you use it with Dependencies, it shouldn't have any effect. I don't know what I'm doing wrong. I need to download composer dependencies first, so I can copy them via rsync in the next stage. Do you have any ideas/solutions? Thanks
Artifacts should be used to permanently make available any files you may need at the end of a pipeline, for example generated binaries, required files for the next stage of the pipeline, coverage reports, or maybe even a disk image. But cache should be used to speed up the build process; for example, if you are compiling a C/C++ binary, the first build usually takes a long time, but subsequent builds are faster because they don't start from scratch, so if you were to store the temporary files made by the compiler using cache, it would speed up compilation across different pipelines.

So to answer you: you should use artifacts, because you seem to need to run composer in every pipeline but want to pass the resulting files on to the next job. You do not need to explicitly define dependencies in your gitlab-ci.yml because, if not defined, each job pulls all the artifacts from all previous jobs.

Cache should work, but it is unreliable; it is better suited to configs where it improves things but is not a necessity.
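Concretely for the composer case, here's a sketch of the build job using artifacts instead of cache (the expiry value is illustrative):

```yaml
Build sources:
  image: ponk/php5.6
  stage: build
  script:
    - composer -n install --no-progress
  artifacts:
    # vendor/ is handed to all later jobs in the pipeline
    paths:
      - vendor/
    expire_in: 1 hour
  only:
    - tags
    - staging
```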
GitLab
46,281,351
23
How should I authenticate if I want to use an image from the Gitlab Registry as a base image of another CI build? According to https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/configuration/advanced-configuration.md#using-a-private-docker-registry I first have to manually login on the runner machine. Somehow it feels strange to login with an existing Gitlab user. Is there a way to use the CI variable "CI_BUILD_TOKEN" (which is described as "Token used for authenticating with the GitLab Container Registry") for authentication to pull the base image from Gitlab Registry? EDIT: I found out that I can use images from public projects. But I don't really want to make my docker projects public. UPDATE: Starting with Gitlab 8.14 you can just use the docker images from the build in docker registry. See https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/master/docs/configuration/advanced-configuration.md#support-for-gitlab-integrated-registry
All of the above answers, including the accepted one, are deprecated. This is possible in 2021: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#access-an-image-from-a-private-container-registry

TL;DR: set the CI/CD variable DOCKER_AUTH_CONFIG with the appropriate authentication information in the following format.

Step 1:

```sh
# The use of "-n" prevents encoding a newline in the password.
echo -n "my_username:my_password" | base64

# Example output to copy
bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=
```

Step 2 (this JSON is the value to be set for the DOCKER_AUTH_CONFIG variable):

```json
{
    "auths": {
        "registry.example.com:5000": {
            "auth": "(Base64 content from above)"
        }
    }
}
```
GitLab
38,269,701
23
My GitLab instance setup will occasionally put in place an IP ban on our own IP address, resulting in all our users in the office getting 403 / Forbidden on any web page or git request. The ban is being put in place as a result of repeated errors authenticating, which is a separate problem altogether, but I would like to prevent our own IP address from being IP banned. It lasts for about one hour. In the nginx logs, nothing unusual pops up in the gitlab_access.log or gitlab_error.log files. The server is still running, and external IP addresses are unaffected while the ban is in place. I would like to be able to whitelist our own IP address, or to be able to disable the ban once it occurs (restarting gitlab doesn't remove the ban). If neither of these are possible, then just finding the setting to tweak the ban duration down from one hour would be OK too.
We are running GitLab EE, and for us this issue was caused by a combination of using git lfs inside a build on a GitLab CI runner, and having installed the rack-attack gem on the GitLab server.

Background

In order to work around an issue with git-lfs 1.2.1 (where it insisted on requiring username and password despite cloning a public repository), the build contained this line:

```sh
git clone https://fakeuser:[email protected]/group/repo.git
```

On build, this resulted in every LFS request from the runner triggering a login attempt with fakeuser, which obviously failed every time. However, since no login was actually required by the server, the client could continue downloading the files using LFS, and the build passed.

Issue

The IP banning started when the package rack-attack was installed. By default, after 10 failed login attempts, rack-attack bans the origin IP for one hour. This resulted in all runners being completely blocked from GitLab (even visiting the web page from the runner would return 403 Forbidden).

Workaround (insecure)

A short-term workaround, if the servers (GitLab runners in our case) are trusted, is to add the servers' IPs to a whitelist in the rack-attack configuration. Adjusting the ban time, or allowing more failed attempts, is also possible.

Example configuration in /etc/gitlab/gitlab.rb:

```ruby
gitlab_rails['rack_attack_git_basic_auth'] = {
  'enabled' => true,
  'ip_whitelist' => ["192.168.123.123", "192.168.123.124"],
  'maxretry' => 10,
  'findtime' => 60,
  'bantime' => 600
}
```

In this example, we are whitelisting the servers 192.168.123.123 and 192.168.123.124, and adjusting the ban time down from one hour to 10 minutes (600 seconds). maxretry = 10 allows a user to get the password wrong 10 times before a ban, and findtime = 60 means that the failed attempts counter resets after 60 seconds.

Then, you should reconfigure GitLab before the changes take effect:

```sh
sudo gitlab-ctl reconfigure
```

More details, and the YAML version of the config example, can be found in gitlab.yml.example.

NOTE: whitelisting servers is insecure, as it fully disables blocking/throttling on the whitelisted IPs.

Solution

The solution to this problem should be to stop the failing login attempts, or possibly just reduce the ban time, as whitelisting leaves GitLab vulnerable to password brute-force attacks from all whitelisted servers.
GitLab
36,298,959
23
Is there any way in markdown to combine the code (inside ```) with the spoiler (after !>) syntax in order to obtain some code inside a spoiler ? I'm using the markdown implemented in GitLab.
https://docs.gitlab.com/ee/user/markdown.html#details-and-summary

You can use raw HTML:

```html
<p>
<details>
<summary>Click this to collapse/fold.</summary>

These details <em>remain</em> <strong>hidden</strong> until expanded.

<pre><code>PASTE LOGS HERE</code></pre>

</details>
</p>
```

or, now that GitLab supports Markdown within <details> blocks:

````
<details>
<summary>Click this to collapse/fold.</summary>

These details _remain_ **hidden** until expanded.

```
PASTE LOGS HERE
```

</details>
````
GitLab
32,181,339
23
I have a self-hosted GitLab and I would like to install a package hosted there using ssh. I tried:

```sh
pip install git+ssh://git@<my_domain>:se7entyse7en/<project_name>.git
```

Here's the output:

```
Downloading/unpacking git+ssh://git@<my_domain>:se7entyse7en/<project_name>.git
  Cloning ssh://git@<my_domain>:se7entyse7en/<project_name>.git to /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-4_JdRU-build
ssh: Could not resolve hostname <my_domain>:se7entyse7en: nodename nor servname provided, or not known
fatal: Could not read from remote repository.

Please make sure you have the correct access rights and the repository exists.
```

Update: I tried to upload it on gitlab.com, and after having uploaded the repo I tried to install it by running:

```sh
pip install git+ssh://[email protected]:loumarvincaraig/<project_name>.git
```

but nothing changed. In particular, here's the content of pip.log:

```
/Users/se7entyse7en/Envs/test/bin/pip run on Mon Nov 17 22:14:51 2014
Downloading/unpacking git+ssh://[email protected]:loumarvincaraig/<project_name>.git
  Cloning ssh://[email protected]:loumarvincaraig/<project_name>.git to /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build
  Found command 'git' at '/usr/local/bin/git'
  Running command /usr/local/bin/git clone -q ssh://[email protected]:loumarvincaraig/<project_name>.git /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build
  Complete output from command /usr/local/bin/git clone -q ssh://[email protected]:loumarvincaraig/<project_name>.git /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build:

Cleaning up...
Command /usr/local/bin/git clone -q ssh://[email protected]:loumarvincaraig/<project_name>.git /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build failed with error code 128 in None
Exception information:
Traceback (most recent call last):
  File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/basecommand.py", line 134, in main
    status = self.run(options, args)
  File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/commands/install.py", line 236, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/req.py", line 1092, in prepare_files
    self.unpack_url(url, location, self.is_download)
  File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/req.py", line 1231, in unpack_url
    return unpack_vcs_link(link, loc, only_download)
  File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/download.py", line 410, in unpack_vcs_link
    vcs_backend.unpack(location)
  File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/vcs/__init__.py", line 240, in unpack
    self.obtain(location)
  File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/vcs/git.py", line 111, in obtain
    call_subprocess([self.cmd, 'clone', '-q', url, dest])
  File "/Users/se7entyse7en/Envs/test/lib/python2.7/site-packages/pip/util.py", line 670, in call_subprocess
    % (command_desc, proc.returncode, cwd))
InstallationError: Command /usr/local/bin/git clone -q ssh://[email protected]:loumarvincaraig/<project_name>.git /var/folders/3r/v7swlvdn2p7_wyh9wj90td2m0000gn/T/pip-91JVFi-build failed with error code 128 in None
```
I don't know why, but by running the following command it worked (slash instead of : after <my_domain>):

```sh
pip install git+ssh://git@<my_domain>/se7entyse7en/<project_name>.git
#                                    ^
#                                    slash instead of :
```
GitLab
26,979,181
23
I am running the below code section in my gitlab-ci.yml file:

```yaml
script:
  - pip install --upgrade pip
  - cd ./TestAutomation
  - pip install -r ./requirements.txt
```

Below are the keys and values. I have to pass values to the pipeline with the key as a variable:

ENV : dev

I have added all the above three variables in the GitLab CI/CD variables section by expanding them, just adding a single value along with each key. I also found that we can add variables in the .yml file itself, as below. I am not sure how we can add multiple values for one key:

```yaml
variables:
  TEST:
    value: "some value" # this would be the default value
    description: "This variable makes cakes delicious"
```

When I run the pipeline I am getting errors, as it looks like these variables and values are not injected properly.

More details: I am getting the same error while running the pipeline. Hence my suspicion is that the Category variable is not injected properly when I am running through the pipeline. If needed, I will show it on a shared screen.

What I have observed is that the values associated with the keys I am passing as parameters or variables are not injected or replaced in place of the key. So ideally ${Category} should be replaced with the value smoke, etc.
When GitLab CI/CD variables are not getting injected into your pipelines as environment variables, please follow these steps to verify:

1. Check whether the variable is defined. You need to have at least the Maintainer role set up for your user. Go to Settings --> CI/CD --> Variables. You can see all project variables, and group variables (inherited).
2. Next, check whether these variables are defined as Protected variables. If they are marked as Protected, then they are only exposed to protected branches or protected tags. I would suggest unchecking this if your current branch is not a protected branch. Alternatively, you can always make your current branch a protected one.
3. Next, check whether your code is accessing the environment variables correctly. Based on your scripting language, just access them as if you were accessing a regular environment variable.

You don't really need to define these variables in the .gitlab-ci.yaml file (even though their documentation says so). Hope this helps.
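A quick way to see what the job actually receives is a throwaway debug job; a sketch (the job name is hypothetical, and Category follows the question):

```yaml
debug-variables:
  script:
    # Print the single variable in question, then the whole environment
    - echo "Category=${Category}"
    - env | sort
```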
GitLab
70,067,929
22
I have a .gitlab-ci.yml file that says:

```yaml
include:
  - project: 'my-proj/my-gitlab-ci'
    ref: master
    file: '/pipeline/gitlab-ci.yml'
```

Because of some "inconvenience" I would like to override a specific stage that is defined in the above-mentioned gitlab-ci.yml file injected into my top-level .gitlab-ci.yml file. The plan stage I am interested in has the following:

```yaml
plan-dummy:
  stage: plan
  script:
    - terraform plan -lock=false -var-file=vars/vars.tfvars
```

What I want to do is override the above in the main .gitlab-ci.yml file such that only the script is executed as an override:

```yaml
plan-dummy:
  stage: plan
  script:
    - terraform refresh # This is the line I want to add as an additional step before next
    - terraform plan -lock=false -var-file=vars/dev.tfvars
```

How do I achieve that without fiddling with the injected file? Yes, I know the alternative is to do a dirty copy-paste from the child file, but I don't want to do that. Regards,
Simply reuse the same job name and add the configuration you need:

```yaml
plan-dummy:
  before_script:
    - terraform refresh
```

When a local job has the same name as an included one, GitLab merges them key by key, with the local keys taking precedence, so the included script is kept and the before_script runs in front of it.
GitLab
67,065,616
22
1- Environment:

```
GitLab-CE
GitLab 13.2.1 (b55baf593e6)
GitLab Shell 13.3.0
GitLab Workhorse v8.37.0
GitLab API v4
Ruby 2.6.6p146
Rails 6.0.3.1
PostgreSQL 11.7
Debian GNU/Linux 10 server (buster)
```

2- .gitlab-ci.yml file:

```yaml
before_script:
  - echo "--------- STARTING WORK ------------"

job_homologacao:
  only:
    - homologation
  script:
    - cd /home/ati/
    - mkdir test
    - echo "got here"

job_producao:
  only:
    - master
  script:
    - cd /home/ati/test/
    - echo "got here"
```

3- Error presented when the runner is executed:

```
Running with gitlab-runner 13.2.1 (efa30e33)
  on runner with Akx_BvYF shell
Preparing the "shell" executor
Using Shell executor...
Preparing environment
Running on hermes...
ERROR: Job failed (system failure): prepare environment: exit status 1. Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
```

4- Correction attempts: I read and executed all the procedures contained in the documentation:

https://docs.gitlab.com/runner/faq/README.html#job-failed-system-failure-preparing-environment
https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading
I was erroneously editing the .bash_logout file located inside my own home directory, /home/ati/. When installing gitlab-runner, GitLab creates a home for it in /home/gitlab-runner/. I just had to comment out the contents of the /home/gitlab-runner/.bash_logout file for the job to work.
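To check whether the runner user's shell profile is what breaks job preparation, you can simulate the login shell the runner uses; a quick sketch:

```sh
# Should print OK with exit status 0; any extra output or a non-zero exit
# from profile/logout scripts points at the offending file
su - gitlab-runner -c 'echo OK'
```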
GitLab
63,154,881
22
GitLab doesn't render HTML for me; it just displays the source code.

Background: I used Sphinx to generate the HTML and tried to show the docs on GitLab. I looked at other projects' repositories, such as pandas and sphinx. They only have .rst files in the repository, and no HTML files. I guess they generate HTML for their websites but don't upload it to Git. I don't have a website and want to show the docs on GitLab. Is there a way to do that? Or do I have to generate other formats (other than HTML, e.g. PDF) instead?
First of all, Git and products like GitLab and GitHub are different things. Git doesn't ever render anything; it's a version control system. It doesn't have a web interface.

Secondly, GitLab's core product isn't supposed to render anything. It's not a web host; it's a tool for hosting, sharing, and managing Git repositories. However, you might want to try GitLab Pages:

GitLab Pages is a feature that allows you to publish static websites directly from a repository in GitLab. You can use it either for personal or business websites, such as portfolios, documentation, manifestos, and business presentations. You can also attribute any license to your content. Pages is available for free for all GitLab.com users as well as for self-managed instances (GitLab Core, Starter, Premium, and Ultimate).

GitLab Pages will publish content from the public/ directory of your repository, so you should move your files there. You will also need a .gitlab-ci.yml file in the root of your repository containing something like

```yaml
image: alpine:latest

pages:
  stage: deploy
  script:
    - echo 'Nothing to do...'
  artifacts:
    paths:
      - public
  only:
    - master
```

(taken from the sample repository). Add that file, then commit and push. After deployment is complete, your site should be available at https://youruser.gitlab.io/yourproject. Note that GitHub has a similar product (that works differently).

Finally, regarding "they only have .rst files in the repository, and no HTML files": it's very likely that the reStructuredText files are the only source that exists, and that HTML is generated from them automatically; Sphinx uses this format by default. If you're interested in working from another format like Markdown or reStructuredText, you may want to explore GitLab Pages' support for static site generators.
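Since the HTML comes from Sphinx, you can also let the Pages job build it instead of committing generated files; a sketch assuming the Sphinx sources live in docs/ (adjust the path to your layout):

```yaml
image: python:3.9

pages:
  stage: deploy
  script:
    - pip install sphinx
    # Build HTML straight into public/, which Pages serves
    - sphinx-build -b html docs/ public/
  artifacts:
    paths:
      - public
  only:
    - master
```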
GitLab
55,595,323
22
I'm using Kubernetes on-prem. While building GitLab with Kubernetes I hit a problem. I think it's related to the serviceaccount or role-binding, but I couldn't find the correct way. I found these posts:

Kubernetes log, User "system:serviceaccount:default:default" cannot get services in the namespace
https://github.com/kubernetes/kops/issues/3551

My error logs:

```
==> /var/log/gitlab/prometheus/current <==
2018-12-24_03:06:08.88786 level=error ts=2018-12-24T03:06:08.887812767Z caller=main.go:240 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:372: Failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"nodes\" in API group \"\" at the cluster scope"
2018-12-24_03:06:08.89075 level=error ts=2018-12-24T03:06:08.890719525Z caller=main.go:240 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:320: Failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"pods\" in API group \"\" at the cluster scope"
```
The issue is that your default service account doesn't have permission to get the nodes or pods at the cluster scope. The minimum cluster role and cluster role binding to resolve that is:

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prom-admin
rules:
# Just an example, feel free to change it
- apiGroups: [""]
  resources: ["pods", "nodes"]
  verbs: ["get", "watch", "list"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prom-rbac
subjects:
- kind: ServiceAccount
  name: default
roleRef:
  kind: ClusterRole
  name: prom-admin
  apiGroup: rbac.authorization.k8s.io
```

The above cluster role grants the default service account permission to access any pods or nodes in any namespace. You can change the cluster role to provide more permissions to the service account; if you want to grant all permissions to the default service account, then replace with resources: ["*"] in prom-admin. Hope this helps.
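After applying it, you can verify the grant without waiting for Prometheus to retry, using kubectl's impersonation check (the manifest file name is a placeholder):

```sh
kubectl apply -f prom-rbac.yaml

# Both should print "yes" once the binding is in place
kubectl auth can-i list nodes --as=system:serviceaccount:default:default
kubectl auth can-i list pods --as=system:serviceaccount:default:default
```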
GitLab
53,908,848
22
I'm trying to improve the project building script, described in YML-file, the improvement itself seems quite trivial, but the idea of accidentally ruining the auto-builds scares me a bit. Right now there are several branches, version tags and other stuff in the project. A development branch, not built by the runners would be of use, because copying a huge project somehow between virtual machines to test the build on different platforms is not convenient at all. So, I want to exclude from builds some "prj-dev" branch. And there we have: stages: - build - linuxbuild job: tags: - win2008build stage: build script: # something complex job1: tags: - linux stage: linuxbuild script: # something else complex I googled and found a solution like: stages: - build - linuxbuild job: tags: - win2008build branches: except: - *dev-only But it seems that our pipelines are quite different, the tags are not git tags, they are pipeline tags. So, I'm considering rather to use a config like: stages: - build - linuxbuild job: tags: - win2008build except: branches: - *dev-only ...which would mean "build as usual, but not my branch". There are complications in trying it both ways, I'm pretty sure someone should know the recipe for sure. So, if you please, -- how do I exclude my dev branch without changing the pipelines, config only? Is it possible at all?
All you need to do is use except in the gitlab-ci.yml file and add your branches directly below, like this:

```yaml
mybuild:
  stage: test
  image: somedockerimage
  script:
    - some script running
  except:
    - branch-name
```

This is working on my project without problems.
GitLab
51,324,550
22
I spend my day doing this: Read an issue on a Gitlab-powered issue tracker, Fix the issue, Commit and push to the same Gitlab-powered Git server, Mark the issue as closed. To remove the 4th step, how can I close the issue automatically when committing?
Commit and push using this syntax:

```sh
git commit -m "Sort more efficiently" -m "Closes #843"
git push
```

This will commit and close the issue. Note that, unlike GitHub, a single -m will not work here. A cross-reference to the closing commit will then appear on the issue page.

References:

- https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically
- How to commit a change with both "message" and "description" from the command line?
GitLab
44,838,967
22
I want to use 2 accounts on the GitLab website, each account with a different SSH key. I generated the keys successfully and added them to the ~/.ssh folder. I created a ~/.ssh/config file and can use one of them; it works well. I can also swap between the two keys by editing the ~/.ssh/config file.

The problem is: I want to use them at the same time, but all the tutorials I found talk about different hosts :/ Actually, my two accounts are on the same host. How can I edit the ~/.ssh/config file to accept two accounts for the same host?

Note: my two accounts are username1 and username2, and the repo URL looks like: [email protected]:username1/test-project.git

My current ~/.ssh/config file:

```
Host gitlab.com-username1
  HostName gitlab.com
  User git
  IdentityFile ~/.ssh/id_rsa

Host gitlab.com-username2
  HostName gitlab.com
  User git
  IdentityFile ~/.ssh/id_rsa_username2
```

Update 1:

1) When I use one key in the ~/.ssh/config file, everything works perfectly (but it's very boring to update it every time I want to change the user).

2) When I use these lines:

```sh
ssh -T [email protected]
ssh -T [email protected]
```

they work well and return a welcoming message.

From 1) and 2), I think the problem is definitely in the ~/.ssh/config file, specifically in the Host value.

Update 2 (the solution): the fix was to edit the .git/config file from

```
[remote "origin"]
    url = [email protected]:username1/test-project.git
```

to

```
[remote "origin"]
    url = [email protected]:username1/test-project.git
```

and do the same for username2.
You have got a complete ssh configuration. First of all, check that it works:

```sh
ssh -T [email protected]
ssh -T [email protected]
```

Both should succeed. If not, the keys are not set up correctly; verify that the keys are registered on GitLab for the respective users.

If it works, move on to your git repository and open .git/config. It will have some part like:

```
[remote "origin"]
    url = [email protected]:username1/test-project.git
```

Replace it with:

```
[remote "origin"]
    url = [email protected]:username1/test-project.git
```

(or gitlab.com-username2 if you want to connect as username2). Then it should allow you to push.
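When cloning new repositories you can use the host alias directly, and it can help to pin the commit identity per repository too; a sketch (the email addresses are placeholders):

```sh
# Clone through the per-account alias so the right key is used
git clone [email protected]:username2/test-project.git

# Inside the repo, set the identity that should appear on commits
git config user.name "username2"
git config user.email "[email protected]"
```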
GitLab
37,895,592
22
I am trying to get composer to download a library from my repository on GitLab; however, it does not have a composer.json file in it, so I'm not sure if this is possible.

```json
"require": {
    "username/repository-name"
},
"repositories": [{
    "type": "package",
    "package": {
        "version": "dev-master",
        "name": "username/repository-name",
        "source": {
            "url": "https://gitlab.com/username/repository.git",
            "type": "git",
            "reference": "master"
        }
    }
}],
```
I found the answer, and it works for me, here (the last answer, not the accepted answer): Using Composer and Private Repository on GIthub using VCS on Build Server

This is how I made it work:

```json
"repositories": [
    {
        "type": "package",
        "package": {
            "name": "username/repository",
            "version": "0.1.0",
            "type": "package",
            "source": {
                "url": "[email protected]:username/repository.git",
                "type": "git",
                "reference": "master"
            }
        }
    }
],
"require": {
    "username/repository": "*"
},
```
GitLab
34,781,422
22
I'm using gitlab-ci-multi-runner with docker containers. Everything is going fine, but docker containers don't keep the composer cache so in every run composer downloads dependencies again and again, which takes a lot of time. Is there any way to configure gitlab-ci-runner docker container to keep the composer cache or mount a volume on each run where the composer cache is kept?
You can change the composer cache path by exporting the COMPOSER_CACHE_DIR environment variable in your runner configuration file, and then adding a volume in the [runners.docker] section to match it.

If you run gitlab-runner as root or with sudo, then your configuration file is located at /etc/gitlab-runner/config.toml. Otherwise it's located at $HOME/.gitlab-runner/config.toml.

```toml
# config.toml
[[runners]]
  name = "Generic Docker Runner"
  ...
  environment = ["COMPOSER_CACHE_DIR=/cache"]
  executor = "docker"
  [runners.docker]
    ...
    volumes = ["/var/cache:/cache:rw"]
    cache_dir = "/cache"
```
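You can confirm inside a job that the directory is actually picked up and persists; a small sketch (the job name is hypothetical):

```yaml
verify-cache:
  script:
    # Composer honours COMPOSER_CACHE_DIR; this should print /cache
    - echo "COMPOSER_CACHE_DIR=$COMPOSER_CACHE_DIR"
    # Contents should survive across builds thanks to the host volume
    - ls -la /cache
```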
GitLab
33,479,574
22
I'm not seeing an "Accept Merge Request" button in gitlab despite having "Developer" level access. Instead there is this message: Ready to be merged automatically Ask someone with write access to this repository to merge this request. According to the documentation, users with "Developer" access have the ability to "manage merge requests", but this doesn't seem possible in this case. I have two "Developer" level users that are seeing this problem, one of which pushed the project to the gitlab instance to begin with. I assume he must have write access? Version information below GitLab 7.14.3 GitLab Shell 2.6.5 GitLab API v3 Ruby 2.1.6p336 Rails 4.1.11 Please let me know if any more info is required.
Developers can accept merge requests. However, it depends on how the project is configured, too. Developers can accept merge requests when:

- The merge target is not a protected branch.
- The merge target is a protected branch, if an owner/maintainer has checked the 'Developers can push' checkbox on the protected branch setting.

If a developer is seeing the message you describe, it is probably because of a protected branch with the 'Developers can push' box unchecked.
GitLab
32,738,461
22
I am filled with chagrin having to ask this, but I can't figure out how to add users in GitLab. I get to the screen where it allows me to add new members as follows: From my Group page -> click the 'members' icon on the toolbar along the left edge -> click 'Add Members' expando button -> enter username in 'Find existing member by name' or 'People' fields and get no results. The 'People' field auto-populates with a bunch of names I don't recognize. But won't let me find a user who has registered by username or actual name. Am I missing something?
What version? These instructions are for GitLab Community Edition on Ubuntu; the deb was named gitlab_7.4.3-omnibus.5.1.0.ci-1_amd64:

1. Login.
2. Click the gears icon (top right) to enter the Admin area.
3. Click the Groups link (top center) to enter the Groups page.
4. Click the name of the group you wish to extend ("Mygroup").
5. Look at the right side of the screen; if you have the right privs you should see "Add user(s) to the group:".
6. Type the first few letters of the user name into the box; a drop-down should appear so you can select the user.

HTH
GitLab
28,709,411
22
With version 7.4 GitLab changed the behaviour of protected branches in new projects. In every new project the default branch, e.g. master, is a protected branch, meaning developers are not able to push to it. In my company a lot of developers work on the default/master branch and are now struggling when starting a new project. My question: is there a property in the UI or in gitlab.rb to restore the pre-7.4 behaviour and not protect the default branch?
I'm not sure if there is a default param, but per project you can mark master as unprotected: in your project, go to Settings -> Protected branches and unprotect master.

Update: the GitLab team published a post related to your question: https://about.gitlab.com/2014/11/26/keeping-your-code-protected/
GitLab
26,932,503
22
I set up a new GitLab on CentOS at /opt/gitlab-6.9.2-0/apps/gitlab/ and created a new repository under the continuous-delivery group. The full path is /opt/gitlab-6.9.2-0/apps/gitlab/gitlab-satellites/continuous-delivery/cd-test. There is only one file under this path, README.txt. What I am trying to achieve is to create a new file when somebody pushes changes to the server. Below is what I have done on the server:

1. Created post-update and update files under .git/hooks/; each file creates a new file using echo "text" >> file_name.
2. chmod them to 775.

When I push changes from my local machine to the server, no file is created. So, I would like to know what I have to do to fix this problem.

Update 1: I added post-receive and post-update to the repositories path as VonC suggested:

```
[root@git-cd hooks]# pwd
/opt/gitlab-6.9.2-0/apps/gitlab/repositories/continuous-delivery/cd-test.git/hooks
[root@git-cd hooks]# ll
total 48
-rwxrwxr-x. 1 git git  452 Jun 10 06:01 applypatch-msg.sample
-rwxrwxr-x. 1 git git  896 Jun 10 06:01 commit-msg.sample
-rwxrwxr-x. 1 git git   44 Jun 11 00:37 post-receive
-rwxrwxr-x. 1 git git   41 Jun 11 00:38 post-update
-rwxrwxr-x. 1 git git  189 Jun 10 06:01 post-update.sample
-rwxrwxr-x. 1 git git  398 Jun 10 06:01 pre-applypatch.sample
-rwxrwxr-x. 1 git git 1642 Jun 10 06:01 pre-commit.sample
-rwxrwxr-x. 1 git git 1281 Jun 10 06:01 prepare-commit-msg.sample
-rwxrwxr-x. 1 git git 1352 Jun 10 06:01 pre-push.sample
-rwxrwxr-x. 1 git git 4972 Jun 10 06:01 pre-rebase.sample
lrwxrwxrwx. 1 git git   57 Jun 10 06:01 update -> /opt/gitlab-6.9.2-0/apps/gitlab/gitlab-shell/hooks/update
-rwxrwxr-x. 1 git git 3611 Jun 10 06:01 update.sample
```

Both files contain a script that appends a line to an existing file, "post-receive-2" >> /var/log/hooks_test.log. I then pushed changes from my local machine to the server, but it still doesn't append the text.

Update 2: the script in post-receive was wrong; it was missing echo. After I added echo (echo "post-receive-2" >> /var/log/hooks_test.log), it works as expected!
That would be because those satellite repos aren't the ones you push to, so their hooks aren't triggered when you would think (i.e., not when someone is pushing to the GitLab server).

PR 6185 introduced the architecture overview documentation:

/home/git/gitlab-satellites - checked out repositories for merge requests and file editing from the web UI. This can be treated as a temporary files directory. The satellite repository is used by the web interface for editing repositories and the wiki, which is also a git repository.

You should add your hook in the bare repos, ~git/repositories.

Or (update Q4 2014, from GitLab 7.5+, Nov 2014), you can use custom hooks (instead of webhooks), as mentioned below by Doka. Custom git hooks must be configured on the filesystem of the GitLab server. Only GitLab server administrators will be able to complete these tasks. Please explore webhooks as an option if you do not have filesystem access.

1. On the GitLab server, navigate to the project's repository directory. For a manual install the path is usually /home/git/repositories/<group>/<project>.git. For Omnibus installs the path is usually /var/opt/gitlab/git-data/repositories/<group>/<project>.git.
2. Create a new directory in this location called custom_hooks.
3. Inside the new custom_hooks directory, create a file with a name matching the hook type. For a pre-receive hook the file name should be pre-receive with no extension.
4. Make the hook file executable and make sure it's owned by git.
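For completeness, a minimal post-receive hook along the lines of what the question attempts (the log path is the question's own):

```sh
#!/bin/sh
# post-receive runs after refs are updated on the server side;
# it receives "old new ref" lines on stdin, which we ignore here
echo "post-receive ran at $(date)" >> /var/log/hooks_test.log
```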
GitLab
24,154,384
22
Currently we are working with GitHub and we are actually quite happy with it. But the costs will grow more and more in the near future. Now we've started evaluating other git solutions and stumbled over GitLab, and I have to say, it looks very interesting for us. I've seen that there is a wiki feature similar to GitHub's. But one important thing is described nowhere... The only thing I've found is this two-year-old entry https://groups.google.com/forum/#!msg/gitlabhq/YSM_Il9yk04/_-ybpN4BekYJ

Does anybody know if there is any news on that matter? It looks like it is possible, but how? Is there any manual or howto that could help me? Thanks a lot!
GitHub wikis and GitLab wikis are both just Git repositories containing text files, so you can just pull from one and push to the other.

Go to any page on your GitHub wiki and click the Clone URL button. You'll get a URL like https://github.com/Homebrew/homebrew.wiki.git. Clone it to your computer:

```sh
git clone https://github.com/Homebrew/homebrew.wiki.git
cd homebrew.wiki
```

Then, on your GitLab wiki, click the Git Access tab, find the URL in the instructions (in the first line under the Clone Your Wiki heading), and push to that URL:

```sh
git push https://gitlab.com/adambrenecki/test-project.wiki.git
```

If you can't find the URLs, they should be roughly the same as on this page, with the usernames/repo names changed as appropriate.
GitLab
21,992,151
22
How I can set default reviewers in GitLab Premium? In Settings → General I have only Merge request (MR) approvals, not reviewers.
You can use "Default description template for merge requests" either via Settings->Merge Requests or via file in .gitlab/merge_request_templates to do it via a workaround. (Doc) In the template you can use the chat code /assign_reviewer @user1 @user2 @user3 to automatically assign user1, user2 and user3 as reviewers when creating a new MR.
GitLab
68,195,758
21
I use Gitlab for doing Continuous Integration and Development and all of a sudden I get this error message "There has been a runner system failure, please try again" There's no real error message or error code. I've tried restarting the gitlab runner, using gitlab-runner restart, I've done a reboot of the server its running on but I keep getting this error message on Gitlab whenever I push a code change.
After a couple of hours, I realized the issue was that the server GitLab Runner is running on had no space left. I logged into the server in question and looked at the runner's log using the following command:

journalctl -u gitlab-runner

It showed me the following:

May 21 08:20:41 gitlab-runner[18936]: Checking for jobs... received job=178911 repo_url=https://.......git runner=f540b942
May 21 08:20:41 gitlab-runner-01 gitlab-runner[18936]: WARNING: Failed to process runner builds=0 error=open /tmp/trace543210445: no space left on device executor=docker runner=f540b942

To fix this issue I ran the docker container prune command, which clears out stopped containers. Alternatively you could use docker system prune, which removes all unused objects. See https://linuxize.com/post/how-to-remove-docker-images-containers-volumes-and-networks/ for more information about those docker commands. Afterwards, I no longer got the error on GitLab when pushing changes.
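The diagnosis and cleanup boil down to a few standard commands (a sketch; the -f flag skips the confirmation prompt):

df -h                        # check free disk space on the runner host
journalctl -u gitlab-runner  # inspect the runner's logs
docker container prune -f    # remove stopped containers
docker system prune -f       # or: remove all unused containers, networks and dangling images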
GitLab
61,939,202
21
I'm currently trying to get Traefik to use multiple routers and services on a single container, which isn't working, and I don't know if this is intended at all.

Why? Specifically, I'm using a GitLab Omnibus container and wanted to use / access multiple services inside the Omnibus container, since GitLab provides more than just "the GitLab website" with it.

What did I try? I simply tried adding another router to my docker-compose file via labels.

This is what I have:

labels:
  - "traefik.http.routers.gitlab.rule=Host(`gitlab.example.com`)"
  - "traefik.http.services.gitlab.loadbalancer.server.port=80"

This is what I want:

labels:
  - "traefik.http.routers.gitlab.rule=Host(`gitlab.example.com`)"
  - "traefik.http.services.gitlab.loadbalancer.server.port=80"
  - "traefik.http.routers.registry.rule=Host(`registry.gitlab.example.com`)"
  - "traefik.http.services.registry.loadbalancer.server.port=5000"

This doesn't work, since Traefik probably gets confused about which router should route to which service, and I couldn't find a mechanism that tells Traefik exactly which router goes to which service in a case like this. Is this even possible, or am I just missing a little bit of Traefik magic?
I found the solution to my question. There's indeed a little bit I missed:

traefik.http.routers.myRouter.service=myService

With this label I can point a router to a specific service, and can thereby add multiple services to one container:

labels:
  - "traefik.http.routers.gitlab.rule=Host(`gitlab.example.com`)"
  - "traefik.http.routers.gitlab.service=gitlab"
  - "traefik.http.services.gitlab.loadbalancer.server.port=80"
  - "traefik.http.routers.registry.rule=Host(`registry.gitlab.example.com`)"
  - "traefik.http.routers.registry.service=registry"
  - "traefik.http.services.registry.loadbalancer.server.port=5000"

Here each router is pointed to a specific service explicitly, which normally happens implicitly.
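For context, a minimal docker-compose sketch of where these labels live (the service name and image tag are assumptions, not taken from the original question):

services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    labels:
      - "traefik.http.routers.gitlab.rule=Host(`gitlab.example.com`)"
      - "traefik.http.routers.gitlab.service=gitlab"
      - "traefik.http.services.gitlab.loadbalancer.server.port=80"
      - "traefik.http.routers.registry.rule=Host(`registry.gitlab.example.com`)"
      - "traefik.http.routers.registry.service=registry"
      - "traefik.http.services.registry.loadbalancer.server.port=5000"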
GitLab
59,856,722
21
I have a merge request, and a source branch is already bound to it. Now I've pushed another branch and want to change the merge request to point to the new branch. Is that possible with GitLab CE? If yes, how? In essence, I want to use "fast-forward merge" as the merge method without being forced to force-push to the source branch.
No; according to https://gitlab.com/gitlab-org/gitlab-foss/-/issues/47020 this is unfortunately not possible. The statement is from last year, but it seems there have been no changes in supporting this so far. To avoid losing the discussion completely, you can link the old MR in the new MR. With that you at least have some indirect relation to the discussion.
GitLab
56,491,774
21
Our previous GitLab-based CI/CD used an authenticated curl request to a specific REST API endpoint to trigger the redeployment of an updated container to our service. If you use something similar for your Kubernetes-based deployment, this question is for you.

More background: We run a production site / app (Ghost blog based) on an Azure AKS cluster. Right now we manually push our updated containers to a private ACR (Azure Container Registry) and then update from the command line with kubectl. That being said, we previously used Docker Cloud for our orchestration and fully integrated redeploying our production / staging services using GitLab CI. That GitLab CI integration is the goal, and the 'why' behind this question.

My question: Since we previously used Docker Cloud (doh, should have gone K8s from the start), how should we handle the fact that GitLab CI was able to make use of secrets created by the Docker Cloud CLI and then authenticate with the Docker Cloud API to trigger actions on our nodes (i.e. redeploy with new containers etc.)?

While I believe we can build a container (to be used by our GitLab CI runner) that contains kubectl and the Azure CLI, I know that Kubernetes also has a REST API similar to Docker Cloud's, which can be found here (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster); specifically, the section that talks about connecting WITHOUT kubectl appears to be relevant (as does the piece about the HTTP REST API).

My question to anyone who is connecting to Azure (or potentially another managed Kubernetes service): How does your CI/CD server authenticate with your Kubernetes service provider's management server, and how do you currently trigger an update / redeployment of an updated container / service? If you have used the Kubernetes HTTP REST API to redeploy a service, your thoughts are particularly valuable!

Kubernetes resources I am reviewing:
How should I manage deployments with kubernetes
Kubernetes Deployments

Will update as I work through the process.
Creating the integration

I had the same problem of how to integrate GitLab CI/CD with my Azure AKS Kubernetes cluster. I created this question because I was getting an error when I tried to add my Kubernetes cluster info into GitLab.

How to integrate them:

Inside GitLab, go to the "Operations" > "Kubernetes" menu.
Click on the "Add Kubernetes cluster" button at the top of the page.
You will have to fill in some form fields. To get the content for these fields, connect to your Azure account from the CLI (you need the Azure CLI installed on your PC) using the az login command, and then execute this other command to get the Kubernetes cluster credentials:

az aks get-credentials --resource-group <resource-group-name> --name <kubernetes-cluster-name>

The previous command will create a ~/.kube/config file; open this file. The content of the fields that you have to fill in the GitLab "Add Kubernetes cluster" form is all inside this .kube/config file.

These are the fields:

Kubernetes cluster name: it's the name of your cluster on Azure; it's in the .kube/config file too.
API URL: it's the URL in the server field of the .kube/config file.
CA Certificate: it's the certificate-authority-data field of the .kube/config file, but you will have to base64-decode it. After you decode it, it must look something like this:

-----BEGIN CERTIFICATE-----
... some base64 strings here ...
-----END CERTIFICATE-----

Token: it's the string of hexadecimal chars in the token field of the .kube/config file (it might also need to be base64-decoded?). You need to use a token belonging to an account with cluster-admin privileges, so GitLab can use it for authenticating and installing stuff on the cluster. The easiest way to achieve this is by creating a new account for GitLab: create a YAML file with the service account definition (an example can be seen here, under "Create a gitlab service account in the default namespace") and apply it to your cluster by means of kubectl apply -f serviceaccount.yml.
Project namespace (optional, unique): I leave it empty; I don't know yet for what or where this namespace can be used.

Click "Save" and it's done. Your GitLab project should now be connected to your Kubernetes cluster.

Deploy

In your deploy job (in the pipeline), you'll need some environment variables to access your cluster using the kubectl command. Here is a list of all the variables available:
https://docs.gitlab.com/ee/user/project/clusters/index.html#deployment-variables

To have these variables injected into your deploy job, there are some conditions:

You must have added the Kubernetes cluster correctly to your GitLab project, via the "Operations" > "Kubernetes" menu and the steps I described above.
Your job must be a "deployment job". In GitLab CI, to be considered a deployment job, your job definition (in your .gitlab-ci.yml) must have an environment key (take a look at line 31 in this example), and the environment name must match the name you used in the "Operations" > "Environments" menu.

Here is an example of a .gitlab-ci.yml with three stages:

Build: builds a docker image and pushes it to the GitLab private registry
Test: doesn't do anything yet, just runs exit 0, to be changed later
Deploy: downloads a stable version of kubectl, copies the .kube/config file to be able to run kubectl commands against the cluster, and executes kubectl cluster-info to make sure it is working.

In my project I haven't finished writing my deploy script to really execute a deploy, but this kubectl cluster-info command is executing fine.
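As a sketch of that service-account step, a cluster-admin account for GitLab could be defined like this (the account name and namespace are illustrative assumptions, modeled on the linked example):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Apply it with kubectl apply -f serviceaccount.yml, then read the token from the service account's secret.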
Tip: to take a look at all the environment variables and their values (Jenkins has a page with this view; GitLab CI doesn't), you can execute the env command in the script of your deploy stage. It helps a lot when debugging a job.
GitLab
50,749,095
21
I forked a project to a group, but there is no option to delete that forked project. I saw the Danger Zone in GitHub. Is there any option available to delete a forked project from GitLab?
Log in with the Master role in the repository.
Go to the forked project.
Then, from the left panel, go to Settings => General, like this:
Click Expand on the Advanced settings panel.
Go to the bottom of the page and click Remove project.
Then type your project name and click Confirm.
I hope this is useful.
GitLab
50,737,564
21
I am currently keeping my project in GitLab and Heroku. What I want to do is: as soon as I open a merge request from my feature branch (let's call it crud-on-spaghetti), automatically run the tests on this branch (npm test basically, using Mocha/Chai), and after they succeed, merge crud-on-spaghetti into master, commit it and push it to origin/master (which is the remote on GitLab), and after that run git push heroku master (basically, push it to the master branch on Heroku, where my app is stored).

I have read several articles on GitLab CI and I think this is more suitable for me (rather than Heroku CI, because I do not have DEV and PROD instances). So, as of now, I do this manually. And this is my .gitlab-ci.yml file now (which is not committed/pushed yet):

stages:
  - test
  - deploy

test_for_illegal_bugs:
  stage: test
  script:
    - npm test

deploy_to_dev:
  stage: deploy
  only:
    - origin master
  script:
    - git commit
    - git push origin master
    - git pull heroku master --rebase
    - git push heroku master

Hence, my question is: what exactly do I need to write in .gitlab-ci.yml in order to automate all these "manipulations" (above)?

PS. And another (theoretical) follow-up question: how is the GitLab CI runner triggered? For instance, if I want it to trigger upon a merge request into master, do I do that using only: ... in .gitlab-ci.yml?
As of 2023 the keywords only and except have been deprecated in favor of rules (see docs).

In newer versions

Using rules: to restrict the execution of a job to a merge request targeting master:

rules:
  - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'

For older versions

Restrict stages to merge requests: to have your test stage executed only when a merge request (MR) is opened, use:

only:
  - merge_requests

According to the GitLab docs, you can further restrict this to MRs with a certain target branch, e.g. only MRs for master:

only:
  - merge_requests
except:
  variables:
    - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME != "master"

This adds an exception for all target branches that are not master.

Restrict stages to branches: as already mentioned by @marcolz, this is achieved by:

only:
  - master

to only execute the stage for pushes to the master branch.
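Putting it together, a minimal complete job using the newer rules: syntax might look like this (the job name and script line are placeholders, not from the original answer):

test:
  stage: test
  script:
    - npm test
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"'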
GitLab
44,162,500
21
I'm using GitLab and SonarQube, together with the SonarQube plugin SVG Badges. To represent the SonarQube state on GitLab, I have something like this in my README.md file:

[![coverage](https://sonar.domain.com/api/badges/measure?key=com.domain:projectname&metric=coverage)](https://sonar.domain.com/component_measures/metric/coverage/list?id=de.domain:projectname)

This works perfectly: my badge is shown, the link works, everything is fine.

Is there some way to build something like:

[![coverage](https://sonar.domain.com/api/badges/measure?key={MYDOMAIN}:{THIS}&metric=coverage)](https://sonar.domain.com/component_measures/metric/coverage/list?id={MYDOMAIN}:{THIS})

I want to provide a skeleton that every developer can just copy and paste into their README.md file, with the variables filled into the README automatically, using something like .gitlab-ci.yml.

I also tried the permanent GitLab variables mentioned here, but that wasn't working either!

[![coverage](https://sonar.domain.com/api/badges/measure?key=com.mydomain:$CI_PROJECT_NAME&metric=coverage)](https://sonar.domain.com/component_measures/metric/coverage/list?id={MYDOMAIN}:$CI_PROJECT_NAME)

Does anyone have an idea?
The variables in https://gitlab.com/help/ci/variables/README.md are present only in a CI environment (i.e. a job), so you can't use them in the Markdown viewer when displaying the file. That's a great idea for a feature proposal, though; I opened one: https://gitlab.com/gitlab-org/gitlab-ce/issues/32255. Feel free to chime in.

What you could do is add a placeholder where you want those variables to go and then create a job which seds them in:

update_readme:
  script:
    - echo $CI_PROJECT_NAME  # Sanity check
    - sed -ie "s/{THIS}/$CI_PROJECT_NAME/g" README.md

Note the use of double quotes (") and not single quotes ('). Using double quotes will interpolate $CI_PROJECT_NAME, while single quotes would retain its literal value.
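For illustration, the skeleton every developer pastes into their README.md could then reuse the badge URL from the question with {THIS} as the placeholder (a sketch, not a tested setup):

[![coverage](https://sonar.domain.com/api/badges/measure?key=com.mydomain:{THIS}&metric=coverage)](https://sonar.domain.com/component_measures/metric/coverage/list?id=com.mydomain:{THIS})

Note that the sed job only rewrites the checked-out copy in the job workspace; committing the substituted README.md back to the repository would be a separate step.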
GitLab
43,743,141
21
I'm using GitLab CI for a project, and the first step of the process is npm install. I cache node_modules for quicker runs of the same job later on, and also define it as a build artifact in order to use it in later stages.

However, even though I cache node_modules and it's up to date, calling npm install each time the install_packages job runs takes a long time, since the command goes through all of package.json and checks for updates of packages and such (I assume).

Is there any way to only run npm install in the install_packages job depending on some condition? More specifically (what I think would be the best solution): only when package.json has been changed since the last build?

Below is the relevant part of my .gitlab-ci.yml file:

image: node:6.9.1

stages:
  - install
  - prepare
  - deploy

install_packages:
  stage: install
  script:
    - npm prune
    - npm install
  cache:
    key: ${CI_BUILD_REF_NAME}
    paths:
      - node_modules/
  artifacts:
    paths:
      - node_modules/
  only:
    - master
    - develop

build_and_test:
  stage: prepare
  script:
    #do_stuff...

deploy_production:
  stage: deploy
  #do_stuff...

deploy_staging:
  stage: deploy
  #do_stuff...
Just use the only:changes flag (doc). The job will be:

install_packages:
  stage: install
  script:
    - npm prune
    - npm install
  cache:
    key: ${CI_COMMIT_REF_NAME}
    paths:
      - node_modules/
  artifacts:
    paths:
      - node_modules/
  only:
    refs:
      - master
      - develop
    changes:
      - package.json

One more point: have you set up the cache the right way? Read this:
https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching
https://docs.gitlab.com/ee/ci/caching/
GitLab
40,615,533
21
I can't find out how to access, in a build script, variables provided via the gitlab-ci.yml file. I have tried to declare variables in two ways:

Private variables in the web interface of GitLab CI
Variable overrides/appending in config.toml

I try to access them in my gitlab-ci.yml file's commands like this:

msbuild ci.msbuild [...] /p:Configuration=Release;NuGetOutputDir="$PACKAGE_SOURCE"

where $PACKAGE_SOURCE is the desired variable (PACKAGE_SOURCE), but it does not work (the variable does not seem to be replaced). Executing the same command manually works just as expected (replacing the variable name with its content).

Is there some other syntax required that I am not aware of? I have tried:

$PACKAGE_SOURCE
$(PACKAGE_SOURCE)
${PACKAGE_SOURCE}

PS: Verifying the runner raises no issues, if this matters.
I presume you are using Windows for your runner? I was having the same issue myself and couldn't even get the following to work:

script:
  - echo $MySecret

However, reading the GitLab documentation, it has an entry on the syntax of environment variables in job scripts:

To access environment variables, use the syntax for your Runner's shell

Which makes sense, as most of the examples given are for bash runners. For my Windows runner, it uses %variable%. I changed my script to the following, which worked for me (confirmed by watching the build output):

script:
  - echo %MySecret%

If you're using PowerShell for your runner, the syntax would be $env:MySecret.
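Applied to the msbuild invocation from the question, the fix on a Windows cmd-shell runner would look roughly like this (a sketch; the msbuild arguments are copied from the question and may need adjusting for your project):

script:
  - msbuild ci.msbuild /p:Configuration=Release;NuGetOutputDir="%PACKAGE_SOURCE%"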
GitLab
31,561,355
21