question (string, 11 to 28.2k chars) | answer (string, 26 to 27.7k chars) | tag (130 classes) | question_id (int64) | score (int64)
---|---|---|---|---|
I'm having a few problems breaking out an MsBuild package+deploy command into two separate commands. (I need to do this to pass additional parameters to MsDeploy).
The command that works fine looks like this:
msbuild "src\Solution.sln"
/P:Configuration=Deploy-Staging
/P:DeployOnBuild=True
/P:DeployTarget=MSDeployPublish
/P:MsDeployServiceUrl=https://192.168.0.1:8172/MsDeploy.axd
/P:DeployIISAppPath=staging.website.com
/P:AllowUntrustedCertificate=True
/P:MSDeployPublishMethod=WmSvc
/P:CreatePackageOnPublish=True
/P:UserName=staging-deploy
/P:Password=xyz
The separated packaging command looks like this:
msbuild "src\Solution.sln"
/P:Configuration=Deploy-Staging
/P:DeployOnBuild=True
/P:DeployTarget=Package
/P:_PackageTempDir=C:\temp\web
which works fine. But then the MsDeploy portion:
msdeploy
-verb:sync
-allowUntrusted
-usechecksum
-source:manifest=
'src\WebProject\obj\Deploy-Staging\Package\WebProject.SourceManifest.xml'
-dest:auto,ComputerName=
'https://192.168.0.1:8172/MsDeploy.axd?site=staging.website.com',
username='staging-deploy',password='xyz',authType='basic',includeAcls='false'
-enableRule:DoNotDeleteRule
fails, with the following error in WmSvc.log
wmsvc.exe Error: 0 : Attempted to perform an unauthorized operation.
setAcl/C:\temp\web (Read)
ProcessId=15784
ThreadId=31
DateTime=2011-03-30T14:57:02.4867689Z
Timestamp=3802908721815
wmsvc.exe Error: 0 : Not authorized.
Details: No rule was found that could authorize user 'staging-deploy',
provider 'setAcl', operation 'Read', path 'C:\temp\web'.
(and several more Read/Write operations)
Something is clearly going wrong with the paths it's trying to access (as it works fine with the other method) - I'm not sure it's even trying to use the iisApp targeting correctly, and at the moment I don't think the correct web.config's will be deployed either.
| I've got this fixed now - I needed a different command to the one the automatically generated .cmd file was using, but comparing the two allowed me to fix it up (thanks @Vishal R. Joshi)
The differences I needed were:
basic authentication
allow untrusted certificates
?site=staging.website.com on the end of the MsDeploy.axd path, as with my original command
override the IIS Web App name that is set in the params file
enable the do not delete rule
The winning command is as follows:
msdeploy
-verb:sync
-allowUntrusted
-source:package='src\WebProject\obj\Deploy-Staging\Package\WebProject.zip'
-dest:auto,ComputerName=
'https://192.168.0.1:8172/MsDeploy.axd?site=staging.website.com',
username='staging-deploy',password='xyz',authType='basic',includeAcls='false'
-setParamFile:
"src\WebProject\obj\Deploy-Staging\Package\WebProject.SetParameters.xml"
-setParam:name='IIS Web Application Name',value='staging.website.com'
-enableRule:DoNotDeleteRule
-disableLink:AppPoolExtension -disableLink:ContentExtension
-disableLink:CertificateExtension
Hope this helps someone!
| TeamCity | 5,488,164 | 12 |
I have a build configuration which deploys my code to a machine. Depending on which machine I am deploying to (e.g. dev/uat/prod), I need to run as a different user.
Rather than hardcoding the username and password in the build files (not really possible as they change regularly for security reasons) I would like to be able to type them in at the point I run the build. I would envisage the "Run Custom Build" in TeamCity would have this option but I can't see anywhere to input that information.
Is there any way to do this (short of remoting into the build agent and changing the user which the build agent runs as)?
Thanks
| The RunAs plugin combined with TeamCity 7's new Typed Parameters will let you make the password a "typed" parameter.
Then, when it's entered at the Run screen, it will not be visible in the build history.
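For illustration, that amounts to giving the password parameter a spec roughly like the following (the label text here is just a made-up example):
password display='prompt' label='RunAs password'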
EDIT: Much later, as covered in the comments: You probably don't want to do this. Consider having separate pools which run as different users, and parameters to specify what builds are supported by what pools.
| TeamCity | 5,223,055 | 12 |
I vaguely remember reading "something" "somewhere" about using Trace.WriteLine over Console.Out.WriteLine in nUnit possibly in the context of reSharper or TeamCity but I cannot remember the details.
Therefore the question is either in the context of nUnit running separately or within reSharper/TeamCity is there any benefit of using one over the other, what are the differences if any and what would you personally use?
Currently my standpoint is Trace.WriteLine, not only because I vaguely remember something which I could have dreamt up, but because I feel that tracing in a unit test is more of a diagnostics task than an output task.
| Personally, I am not keen on embedding tracing in unit tests (using either method you mention). If a unit test requires this, it is most likely a sign that your unit test is too complex. If you need to trace logic through the unit test, you should use asserts throughout the test to programmatically check that the expected behaviour is occurring, removing the need for the textual tracing output.
However, you need to be pragmatic - it is useful to do this sometimes. Using either method (or something else like Debug.WriteLine) is fine, but which one you use does give you some flexibility.
If you have a lot of tests that output tracing, you can get a lot of tracing output when you run all of your tests in a single run. Within NUnit, you can filter this in the options page:
The four options do the following:
Standard Output: Captures all output written to Console.Out.
Error Output: Captures all output written to Console.Error.
Trace Output: Captures all output written to Trace or Debug.
Log Output: Captures output written to a log4net log. NUnit captures all output at the Error level or above unless another level is specified for the DefaultLogThreshold setting in the configuration file for the test assembly or project.
By turning off these options, you can individually disable the output of tracing sent to the four different logging methods, enabling you to filter your test tracing.
I am not aware of any similar setting in ReSharper's test runner.
One thing also worth considering is that the text output can have side effects. I recently encountered NUnit crashing because some output contained characters that were illegal in an XML file - NUnit produces one as part of our autobuild.
EDIT:
@Bronumski: The only real difference I can see of using one method over another is the way the output is consumed.
Certain tools will pick up Debug tracing (eg. DebugView) but not Console output. Also, you can disable Trace output at runtime via configuration (in app.config), but not Console output. This will only matter though if you have to decorate real (ie. not test) code with tracing for your tests - logging lots of text at runtime can be costly and so is beneficial if it can be turned off unless you really need it to diagnose something.
Additionally, when using NUnit, you can selectively turn them off independently of each other if you have too much logging to wade through.
| TeamCity | 3,872,608 | 12 |
I have multiple svn roots configured in TeamCity. They all point to the same repository, but different paths (branches). All branches return the same value for revision. I want the branch specific revision numbers.
Here is an excerpt from the build log after I've dumped all the defined properties:
vcsroot.3_0_11__SP6_.url = https://svn.devlan.local/Enigma/branch/release/3.0.11/
vcsroot.trunk.url = https://svn.devlan.local/Enigma/trunk/
system.build.vcs.number.trunk = 9602
system.build.vcs.number.3_0_11__SP6_ = 9602
Clearly different locations in the svn tree, but same revision number.
How can I get branch specific revision numbers?
| You just need to make multiple VCS Roots in your Administration settings and apply each one to the appropriate build. For instance, if
svn://196.168.0.1/software
is your SVN repository, then you might have VCS Roots for each of the following projects:
svn://196.168.0.1/software/agent/trunk
svn://196.168.0.1/software/server/trunk
svn://196.168.0.1/software/database/trunk
"Branch-specific" revisions is sort of a misnomer, but each of those VCS Roots will use the branch's most recent repository revision number in its build.vcs.number.
| TeamCity | 2,882,953 | 12 |
I have a .NET project with a Rake build script. Rake calls msbuild.exe to do the actual compilation. When I configure a TeamCity 5.0 build using the Rake runner, compilation errors are not recognized as such by TC. When a compilation error occurs:
The build does abort and is flagged as a failure;
The log overview does not contain the compilation error message. I have to go to Build Log -> All Messages to see the failure;
The compilation failure is not reported via email. The {COMPILATION_ERRORS} placeholder in my email notification template is replaced with a blank string.
What do I have to do to get TC to recognize the compilation errors?
| The answer, as shown in this thread on the TeamCity support forum, is to tell MSBuild to use a special TeamCity log listener using the "/l" switch:
msbuild /l:JetBrains.BuildServer.MSBuildLoggers.MSBuildLogger,<path to dll>
The dll ships in the TeamCity agent directory: {agent}/plugins/dotnetplugin/bin/JetBrains.BuildServer.MSBuildLoggers.dll
| TeamCity | 1,883,753 | 12 |
I've been reading through the TeamCity 4.x documentation, and I am confused what the difference between a server side checkout and an agent side checkout is, as mentioned in this snippet from their help section:
Exclude checkout rules will only speed up server-side checkouts. Agent-side checkouts emulate the exclude checkout rules by checking out all the root directories mentioned as include rules and deleting the excluded directories. So, exclude checkout rules should generally be avoided for the agent-side checkout.
What is the difference between a server-side checkout and an agent-side checkout?
| Ok, here is the answer from Pavel Sher (a JB guy) :
The main reason why server side checkout exists - is to simplify administration overhead.
With server side checkout you need to install VCS client software on the server only (applicable to Perforce, Mercurial, TFS, Clearcase, VSS). Network access to VCS repository can also be opened to the server only. So if you want to control who has access to your sources repositories it is probably better to use server side checkout.
As a side effect in some cases server side checkout can lower load produced on VCS repositories especially if clean checkout is performed often. This is because clean patches are cached by server. However this is environment specific, probably in some cases agent side checkout will work better.
Exclude rules also are better processed with server side checkout because usually agent side checkout is just an update and with most VCSes there is no way to exclude some directories during update operation.
From the other hand because agent side checkout is just an update or checkout it creates necessary administration directories (like .svn, CVS), so it allows you to communicate with repository from the build: commit changes and so on. With server side checkout such directories won't be created.
| TeamCity | 1,799,309 | 12 |
We have been using Teamcity for some time for the Continous Integration in the project. Now we want to have some kind of hardware in the room that shows everyone that a build was broken. I've seen mentions to lava lamps and rabbits that can do this, but couldn't see any examples for Teamcity.
Does anyone have a good suggestion on what to buy and how to integrate with Teamcity?
Thanks
| Teamcity has a buildbunny plugin for integration with a Nabaztag (I wouldn't have recommended a Nabaztag some time ago but they are saved now).
(image: http://www.agimatec.de/blog/wp-content/uploads/2008/07/nabaztag-speech.jpg)
If you are a team of Linux geeks, you may prefer the tux droid plugin.
(source: waltercedric.com)
Or maybe you could just use a computer display with the team-piazza plugin (for something "a la" mozilla, see http://isthetreegreen.com/)
(image: http://team-piazza.googlecode.com/svn/wiki/screenshot-success.png)
For everything else (lava lamps, ambient orb, build wallboard, LCD monitor, etc), I guess you'll need some hacking. I'd like to see lava lamp support as this is my preferred extreme feedback device (it's funny to race against the wax to fix the build). So if you go this way, let me know :)
| TeamCity | 1,773,457 | 12 |
Is it possible to pin a build in Teamcity programmatically/automatically?
I want to pin a build if a Deploy-build is successfull.
| Just found out that it's possible through the REST API.
For example, I can send a PUT request like this:
http://teamcityserver:81/httpAuth/app/rest/builds/id:688/pin/
and then the build with id 688 (teamcity.build.id) will be pinned.
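As a minimal sketch of doing this from a PowerShell build step (basic HTTP auth via the /httpAuth/ prefix; the user name and password here are placeholders):
$pass = ConvertTo-SecureString "xyz" -AsPlainText -Force  # placeholder credentials
$cred = New-Object System.Management.Automation.PSCredential("tcuser", $pass)
Invoke-WebRequest -Method Put -Uri "http://teamcityserver:81/httpAuth/app/rest/builds/id:688/pin/" -Credential $cred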
| TeamCity | 6,545,710 | 11 |
I have a TeamCity project with the following build configurations:
Gather dependencies (expensive)
Build
Test
Deploy
Say I can tell whether I need to run the expensive first step by looking at whether some file, deps.txt, has changed.
Here's what I want to do:
I want to trigger builds on all changes in version control.
If deps.txt has changed, I want to run builds 1, then 2, then 3, then 4.
If deps.txt has not changed, I want to run builds 2 then 3 then 4.
I tried putting triggers on build configurations like this:
VCS trigger on no checkins, unless +:deps.txt
VCS tigger on all checkins, unless -:deps.txt
Snapshot dependency on 2, trigger when 2 finishes building
Snapshot dependency on 3, trigger when 3 finishes building
but if a commit includes changes to deps.txt and other files, then configurations 1 and 2 trigger at the same time, meaning that configuration 2 will fail.
Is there an easy way to do this in TeamCity?
| You could combine 1 into 2, and then for the build step of 1 which gathers dependencies, write a custom script which uses the teamcity.build.changedFiles.file property (see TeamCity docs) to check if deps.txt has actually changed or not, and then either gather dependencies or not. The rest of the build steps from 2 would then proceed as normal.
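As a rough sketch of such a script (a PowerShell build step; it assumes the property is referenced as shown - it may need the system. prefix depending on how it is exposed - and that each line of the file has the form path:CHANGE_TYPE:revision):
$changedFiles = Get-Content "%system.teamcity.build.changedFiles.file%"  # one line per changed file
if ($changedFiles -match '^deps\.txt:') {
    Write-Host "deps.txt changed - gathering dependencies"
    # run the expensive dependency gathering here
} else {
    Write-Host "deps.txt unchanged - skipping dependency gathering"
}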
| TeamCity | 45,971,745 | 11 |
I'm trying to restore NuGet packages for a .NET Core solution using NuGet Installer TeamCity build step. The 'MSBuild auto-detection' chooses MSBuild v4.0 instead of v15.0 which is required for .NET Core projects:
[15:41:53][restore] Starting NuGet.exe 4.1.0.2450 from C:\TeamCity\buildAgent\tools\NuGet.CommandLine.4.1.0\tools\NuGet.exe
[15:41:53][restore] MSBuild auto-detection: using msbuild version '4.0' from 'C:\Windows\Microsoft.NET\Framework64\v4.0.30319'.
[15:41:53][restore] Nothing to do. None of the projects in this solution specify any packages for NuGet to restore.
[15:41:53][restore] Process exited with code 0
This leads to the compilation error in the 'MSBuild' TeamCity step that runs after the package restoring:
Assets file 'C:\TeamCity\...\MyProj\obj\project.assets.json' not found.
Run a NuGet package restore to generate this file.
For the 'MSBuild' TeamCity step I choose the MSBuildTools version manually as described in this SO answer:
But I didn't manage to find the similar setting for the 'NuGet Installer' step. Am I missing something?
| I managed to overcome this by specifying the -MSBuildPath command line parameter:
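As an illustration only (the exact MSBuild 15 install location depends on what is installed on the agent), the extra parameter passed to the NuGet step looks something like:
-MSBuildPath "C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin"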
| TeamCity | 44,522,321 | 11 |
This error occurs sometimes; usually this step works fine, but in about 10% of cases it fails with the message below.
The NuGet installer step is the first build step, and "clean checkout" is enabled in TeamCity, so there shouldn't be any process that uses the file.
[restore] The process cannot access the file 'C:\BuildAgent\work...\packages\Microsoft.Bcl.Build.1.0.21\Microsoft.Bcl.Build.1.0.21.nupkg' because it is being used by another process.
[restore] Process exited with code 1
| Root cause of "my file is being used by another process"
There are several things you can do, depending on what is the root cause:
You use the parallel build feature of MSBuild: according to his blog, it seems to not work well with NuGet package restore, because it can run a parallel package restore for each project (and if several projects require Microsoft.Bcl.Build... Boom!). Either disable parallel build or do the package restore outside of the solution build (in a command line step). By the way, it's not good practice to do the NuGet restore with the solution MSBuild.
There is another process: Use Process Monitor/Explorer to find out which process holds the lock on the file. It could be an antivirus, a file indexing tool, or an additional build agent instance which keeps running in the background. As the lock is not permanent, and 1 out of 10 is quite a lot for this kind of issue, I would not bet that this is the root cause.
There is no other process: it's probably a bug in the TeamCity NuGet installer itself. Then you could replace the step with a command line step that calls nuget restore and check if that fixes your problem.
Diagnose it yourself with a NuGet plugin of your own flavor:
The NuGet plugin code is available here. That means you can compile a version of your own, with additional debugging information for when this kind of problem happens (a list of the processes locking the file, for example). Or, even more useful, you can add a retry in case the file is locked.
EDIT: For those who use Teamcity on Unix...
I've tried to install TeamCity on Unix and make a build on Mono with the NuGet plugin. By the way, the NuGet plugin does not work at all on Unix (it's not supported by JetBrains). I even tried to fix it, but it was easier to just replace it with a command line step.
| TeamCity | 35,128,196 | 11 |
Basically I want to specify "all files that end with Test.dll", also known as *.Test.dll. *.Test.dll doesn't work, presumably because it matches only files in the current working directory.
However, I didn't have any luck with **\*Test.dll either. For some reason I had to use **\bin\**\*Test.dll for it to find any test assemblies it could run.
The TeamCity 7 documentation for MSTest doesn't say anything about wildcards, as far as I can tell. Can someone help me understand wildcards when specifying test assemblies for the MSTest runner in TeamCity 7?
Is it possible to specify files matching a certain file name pattern, but in whatever directory?
| According to the TeamCity documention on Wildcards **\*Test.dll should have worked. So either it's a bug or the forward slash versus backward slash issue is significant.
| TeamCity | 13,084,822 | 11 |
Using TeamCity, I've set up several builds in a project. Most of the time I want to run each build as a standalone. However, sometimes I want to execute several builds with the same set of parameters. The builds all use the same template, so all of their parameters could, theoretically, be supplied by a single build.
I can't find anything in the documentation that says this is possible, but it seems like it should be. (searching for "execute builds from another build in teamcity" gives me plenty of documentation on build dependencies, but not what I'm looking for)
I know I can manually queue up all of my builds, but that would require re-entering the same parameters each time.
Does TeamCity support build steps that execute other TeamCity builds? If so, How?
| I achieve this by calling the TeamCity REST API:
Add a new step at the end of your build, using Command Line runner
Call curl with an XML build-queue request:
curl -X POST -H "Authorization: Bearer %TeamCityToken%"
--header "Content-Type:application/xml"
-d "<build>
<buildType id='Remote Deploy'/>
<properties>
<property name='tag' value='%NewVersion%'/>
</properties>
</build>"
http://teamcity.example.com/app/rest/buildQueue
You will need to change:
TeamCityToken to your access token, refer to this page to create one: https://www.jetbrains.com/help/teamcity/rest/teamcity-rest-api-documentation.html#REST+Authentication
Build type id "Remote Deploy" to your build type id.
The properties to whatever you need.
And, of course, the TeamCity URL.
| TeamCity | 37,416,065 | 11 |
I've updated my app from DNX, ASP.NET 5 RC1 to ASP.NET Core 1.0 RC2.
Locally it builds and runs fine.
On the build server, I don't have Visual Studio installed, and the build fails with:
error MSB4019: The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\DotNet\Microsoft.DotNet.Props" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk.
I did install the: .NET Core SDK for Windows.
Trying to install the VS 2015 tooling preview fails with:
What would be the correct setup to build .NET Core 1.0 RC2 app on the build server without having to install Visual Studio 2015?
Note: The build box (TeamCity 9) builds/runs tests fine for .NET 4.5 and DNX.
| https://learn.microsoft.com/en-us/dotnet/articles/core/windows-prerequisites#issues
Issues
You may be blocked from installing the .NET Core Tooling Preview 2 for Visual Studio 2015 installer due to a temporary bug. To workaround it, run the installer from the commandline with the SKIP_VSU_CHECK=1 argument, as you see in the example below.
DotNetCore.1.0.0-VS2015Tools.Preview2.exe SKIP_VSU_CHECK=1
| TeamCity | 37,326,569 | 11 |
For an old project I support, I've been performing some modernization. That has included various things: bumping the .NET Framework up to 4.6, and other upgrades. One of the things we have some leeway to do is make syntactic upgrades, provided we don't change business logic.
We've also recently installed Visual Studio 2015, and the latest and greatest ReSharper, which revealed that "String Interpolation" is now something we can do in our code. For those who don't know, string interpolation is syntactic sugar over string.Format calls as below:
// Normal, pre-C#6 formatting:
var foo = string.Format("Some string {0}", bar);
// C#6 String Interpolation
var foo = $"Some string {bar}";
This is really useful because it makes the messages a lot easier to read, and usually by taking up fewer characters.
...Yet, TeamCity seems to disagree. When I pushed the code up in a commit, I got the following error:
Directory\SomeFile.cs: error CS1056: Unexpected character '$' [C:\ProjectDirectory\Project.Core\Project.Core.csproj]
On the surface, it seems like a pre-C#6 builder of some sort is being hit, because this is a new feature to C#6.
Here's what I have observed that is why I theorize that this is what's going on:
The build configuration is pointing to C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe. This gives me pause, because presumably .NET 4.6 is installed on our build agent.
Upon trying to directly install .NET 4.6 from the Microsoft installer, though, it fails because .NET 4.6 is already installed on the build agent.
Our build configuration's compile step is giving me pause as well. The first two lines that note the version of the Build Engine and framework respectively are [exec] Microsoft (R) Build Engine version 4.6.1055.0 [exec] [Microsoft .NET Framework, version 4.0.30319.42000] The build engine is apparently v4.6, but the .NET framework is 4.0!? Am I reading that right?
Lastly, one of the final lines of the build log: [NAnt output] External Program Failed: C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe (return code was 1) That is not 4.6...is it?
Question: Twofold. This first question might be stupid, but am I actually compiling against .NET 4.6? Secondly, in what way can I get the String Interpolation syntax in C#6 to actually build, if I am pointing to .NET 4.6?
I'm clearly missing something about all of this, I'm not sure A) how many things I'm missing, or B) what exactly I should do about them.
| I feel vindicated; the funny path/version was in fact indicative of the true problem: the MSBuild.exe that was being run was not the one installed by VS2015.
As I read from here, the version of MSBuild.exe that is found in C:\Windows\Microsoft.NET\Framework\v4.0.30319 is actually a pre-C#6 version; for some reason, installing Visual Studio 2015 does not alter that installation!
Instead you should point your MSBuild.exe call to the copy stored at C:\Program Files (x86)\MSBuild\14.0\Bin to be compiling against the latest C# version, as installed by VS2015.
| TeamCity | 34,364,251 | 11 |
We're using TeamCity as our CI and I've started getting this error message. We've obviously updated System.Net.Http recently, which now needs a new version of NuGet. How do I get TeamCity to find the new NuGet version? I've tried installing VS2015 and updating the NuGet package manager through there. I've tried pointing directly to the command line nuget.exe (don't know if that's been updated to v3?)
[restore] The 'System.Net.Http 4.0.0' package requires NuGet client version '3.0' or above, but the current NuGet version is '2.8.60717.93'.
[restore] Process exited with code 1
Do I just have to wait till MS pushes the new NuGet version?
Thanks
| On your TeamCity server you can configure the NuGet versions available to your build agents.
Go to Administration -> Integrations -> NuGet
From this screen you can click Fetch NuGet and retrieve the latest version. Then you should be able to specify that version on your build step.
| TeamCity | 32,279,970 | 11 |
Given there are master and dev git branches, a git repository is hosted on the Github and TeamCity 9.0.1 installed as a CI server.
The teamcity build project is configured to use github repository as a VCS root with refs/heads/master set as a default branch.
The desired behavior is to run auto-merge from master to dev when the build is successful.
So I add an Automatic merge build feature as specified here with the following settings:
Watch builds in branches => Branch filter: +:master
Merge into branch: dev
Merge commit message: TEAMCITY: Automatic merge branch master into dev
Perform merge if: build is successful
Merge policy: use fast-forward merge if possible
After pressing Run - the build is green, no errors are shown in the Build Log, but totally nothing was merged as desired!
What's wrong and where can I find the debug information about build features execution?
| The thing I really needed was to create a dedicated teamcity project (called Integration) which first handles commits in both master and dev branches. It was achieved by configuring a VCS Root for Integration project with refs/heads/dev specified as a default branch and +:refs/heads/master specified in a branches specification section.
The project has an automatic merge build feature configured with settings similar to specified in the question (branch filter: +:refs/heads/master, merge into branch <default>).
That is the way I solved it.
| TeamCity | 30,141,426 | 11 |
We have automated the builds of our current project by using TeamCity/Command Line Tools. To be sure to catch as much potential issues as possible, we have set the project to use the static analyser for each build.
Several 3rd-party classes were flagged by the analyser so we excluded the dubious classes by flagging them with:
-w -Xanalyzer -analyzer-disable-checker
Everything works as expected when compiled in Xcode (tested with 4.6.3 and 5.0.1).
But when compiled on the TeamCity server, we're getting the following error for each excluded 3rd-party file:
error: __PIC__ level differs in PCH file vs. current file
error: __PIC__ level differs in PCH file vs. current file
2 errors generated.
The error goes away if we remove the -Xanalyzer -analyzer-disable-checker tags (but of course in this case we get the analyser warnings back).
The same error occurs if we compile using AppCode which makes me thinking this is somehow related to the command line tools, both AppCode and the TeamCity server using them to compile the builds.
The TeamCity server uses Xcode 4's command line tools and I've tried AppCode with both Xcode 4's and 5's.
When trying with AppCode using Xcode 5's command line tools the error differs slightly (once again, one for each excluded class):
error reading 'pic'
no analyzer checkers are associated with '-mrelocation-model'
So, the question: does anyone have any idea how to get rid of this error while suppressing the analyser warnings for specific classes when using command line tools (if command line tools are indeed at fault here)?
| I just ran into this issue and assume it is a bug with Clang. I think I found a workaround though.
Try replacing this
-w -Xanalyzer -analyzer-disable-checker
with this ridiculously long line (keep scrolling to the right to see it all):
-w -Xanalyzer -analyzer-disable-checker -Xanalyzer alpha -Xanalyzer -analyzer-disable-checker -Xanalyzer core -Xanalyzer -analyzer-disable-checker -Xanalyzer cplusplus -Xanalyzer -analyzer-disable-checker -Xanalyzer deadcode -Xanalyzer -analyzer-disable-checker -Xanalyzer debug -Xanalyzer -analyzer-disable-checker -Xanalyzer llvm -Xanalyzer -analyzer-disable-checker -Xanalyzer osx -Xanalyzer -analyzer-disable-checker -Xanalyzer security -Xanalyzer -analyzer-disable-checker -Xanalyzer unix -Xanalyzer -analyzer-disable-checker -Xanalyzer insecureAPI
OK, so here is how I got to that. It looks like Clang has a hierarchy of "Static Analyzer Checkers" and you can disable them individually or by group.
As an example the DeadStore checker is "deadcode.DeadStores" so you can disable it like this:
-Xanalyzer -analyzer-disable-checker -Xanalyzer deadcode.DeadStores
Alternatively you can disable ALL deadcode related checkers by specifying just "deadcode" like this:
-Xanalyzer -analyzer-disable-checker -Xanalyzer deadcode
You can get a list of all the checkers with this command:
clang -cc1 -analyzer-checker-help
It currently outputs the following:
OVERVIEW: Clang Static Analyzer Checkers List
USAGE: -analyzer-checker <CHECKER or PACKAGE,...>
CHECKERS:
alpha.core.BoolAssignment Warn about assigning non-{0,1} values to Boolean variables
alpha.core.CastSize Check when casting a malloc'ed type T, whether the size is a multiple of the size of T
alpha.core.CastToStruct Check for cast from non-struct pointer to struct pointer
alpha.core.FixedAddr Check for assignment of a fixed address to a pointer
alpha.core.PointerArithm Check for pointer arithmetic on locations other than array elements
alpha.core.PointerSub Check for pointer subtractions on two pointers pointing to different memory chunks
alpha.core.SizeofPtr Warn about unintended use of sizeof() on pointer expressions
alpha.cplusplus.NewDeleteLeaks Check for memory leaks. Traces memory managed by new/delete.
alpha.cplusplus.VirtualCall Check virtual function calls during construction or destruction
alpha.deadcode.IdempotentOperations
Warn about idempotent operations
alpha.deadcode.UnreachableCode Check unreachable code
alpha.osx.cocoa.Dealloc Warn about Objective-C classes that lack a correct implementation of -dealloc
alpha.osx.cocoa.DirectIvarAssignment
Check for direct assignments to instance variables
alpha.osx.cocoa.DirectIvarAssignmentForAnnotatedFunctions
Check for direct assignments to instance variables in the methods annotated with objc_no_direct_instance_variable_assignment
alpha.osx.cocoa.InstanceVariableInvalidation
Check that the invalidatable instance variables are invalidated in the methods annotated with objc_instance_variable_invalidator
alpha.osx.cocoa.MissingInvalidationMethod
Check that the invalidation methods are present in classes that contain invalidatable instance variables
alpha.osx.cocoa.MissingSuperCall
Warn about Objective-C methods that lack a necessary call to super
alpha.security.ArrayBound Warn about buffer overflows (older checker)
alpha.security.ArrayBoundV2 Warn about buffer overflows (newer checker)
alpha.security.MallocOverflow Check for overflows in the arguments to malloc()
alpha.security.ReturnPtrRange Check for an out-of-bound pointer being returned to callers
alpha.security.taint.TaintPropagation
Generate taint information used by other checkers
alpha.unix.Chroot Check improper use of chroot
alpha.unix.MallocWithAnnotations
Check for memory leaks, double free, and use-after-free problems. Traces memory managed by malloc()/free(). Assumes that all user-defined functions which might free a pointer are annotated.
alpha.unix.PthreadLock Simple lock -> unlock checker
alpha.unix.SimpleStream Check for misuses of stream APIs
alpha.unix.Stream Check stream handling functions
alpha.unix.cstring.BufferOverlap
Checks for overlap in two buffer arguments
alpha.unix.cstring.NotNullTerminated
Check for arguments which are not null-terminating strings
alpha.unix.cstring.OutOfBounds Check for out-of-bounds access in string functions
core.CallAndMessage Check for logical errors for function calls and Objective-C message expressions (e.g., uninitialized arguments, null function pointers)
core.DivideZero Check for division by zero
core.DynamicTypePropagation Generate dynamic type information
core.NonNullParamChecker Check for null pointers passed as arguments to a function whose arguments are references or marked with the 'nonnull' attribute
core.NullDereference Check for dereferences of null pointers
core.StackAddressEscape Check that addresses to stack memory do not escape the function
core.UndefinedBinaryOperatorResult
Check for undefined results of binary operators
core.VLASize Check for declarations of VLA of undefined or zero size
core.builtin.BuiltinFunctions Evaluate compiler builtin functions (e.g., alloca())
core.builtin.NoReturnFunctions Evaluate "panic" functions that are known to not return to the caller
core.uninitialized.ArraySubscript
Check for uninitialized values used as array subscripts
core.uninitialized.Assign Check for assigning uninitialized values
core.uninitialized.Branch Check for uninitialized values used as branch conditions
core.uninitialized.CapturedBlockVariable
Check for blocks that capture uninitialized values
core.uninitialized.UndefReturn Check for uninitialized values being returned to the caller
cplusplus.NewDelete Check for double-free and use-after-free problems. Traces memory managed by new/delete.
deadcode.DeadStores Check for values stored to variables that are never read afterwards
debug.ConfigDumper Dump config table
debug.DumpCFG Display Control-Flow Graphs
debug.DumpCallGraph Display Call Graph
debug.DumpCalls Print calls as they are traversed by the engine
debug.DumpDominators Print the dominance tree for a given CFG
debug.DumpLiveVars Print results of live variable analysis
debug.DumpTraversal Print branch conditions as they are traversed by the engine
debug.ExprInspection Check the analyzer's understanding of expressions
debug.Stats Emit warnings with analyzer statistics
debug.TaintTest Mark tainted symbols as such.
debug.ViewCFG View Control-Flow Graphs using GraphViz
debug.ViewCallGraph View Call Graph using GraphViz
llvm.Conventions Check code for LLVM codebase conventions
osx.API Check for proper uses of various Apple APIs
osx.SecKeychainAPI Check for proper uses of Secure Keychain APIs
osx.cocoa.AtSync Check for nil pointers used as mutexes for @synchronized
osx.cocoa.ClassRelease Check for sending 'retain', 'release', or 'autorelease' directly to a Class
osx.cocoa.IncompatibleMethodTypes
Warn about Objective-C method signatures with type incompatibilities
osx.cocoa.Loops Improved modeling of loops using Cocoa collection types
osx.cocoa.NSAutoreleasePool Warn for suboptimal uses of NSAutoreleasePool in Objective-C GC mode
osx.cocoa.NSError Check usage of NSError** parameters
osx.cocoa.NilArg Check for prohibited nil arguments to ObjC method calls
osx.cocoa.NonNilReturnValue Model the APIs that are guaranteed to return a non-nil value
osx.cocoa.RetainCount Check for leaks and improper reference count management
osx.cocoa.SelfInit Check that 'self' is properly initialized inside an initializer method
osx.cocoa.UnusedIvars Warn about private ivars that are never used
osx.cocoa.VariadicMethodTypes Check for passing non-Objective-C types to variadic collection initialization methods that expect only Objective-C types
osx.coreFoundation.CFError Check usage of CFErrorRef* parameters
osx.coreFoundation.CFNumber Check for proper uses of CFNumberCreate
osx.coreFoundation.CFRetainRelease
Check for null arguments to CFRetain/CFRelease/CFMakeCollectable
osx.coreFoundation.containers.OutOfBounds
Checks for index out-of-bounds when using 'CFArray' API
osx.coreFoundation.containers.PointerSizedValues
Warns if 'CFArray', 'CFDictionary', 'CFSet' are created with non-pointer-size values
security.FloatLoopCounter Warn on using a floating point value as a loop counter (CERT: FLP30-C, FLP30-CPP)
security.insecureAPI.UncheckedReturn
Warn on uses of functions whose return values must be always checked
security.insecureAPI.getpw Warn on uses of the 'getpw' function
security.insecureAPI.gets Warn on uses of the 'gets' function
security.insecureAPI.mkstemp Warn when 'mkstemp' is passed fewer than 6 X's in the format string
security.insecureAPI.mktemp Warn on uses of the 'mktemp' function
security.insecureAPI.rand Warn on uses of the 'rand', 'random', and related functions
security.insecureAPI.strcpy Warn on uses of the 'strcpy' and 'strcat' functions
security.insecureAPI.vfork Warn on uses of the 'vfork' function
unix.API Check calls to various UNIX/Posix functions
unix.Malloc Check for memory leaks, double free, and use-after-free problems. Traces memory managed by malloc()/free().
unix.MallocSizeof Check for dubious malloc arguments involving sizeof
unix.MismatchedDeallocator Check for mismatched deallocators.
unix.cstring.BadSizeArg Check the size argument passed into C string functions for common erroneous patterns
unix.cstring.NullArg Check for null pointers being passed as arguments to C string functions
The long command line I provided above in my answer disables all 9 top level checkers:
alpha, core, cplusplus, deadcode, debug, llvm, osx, security, and unix PLUS "insecureAPI" based on the comments below as it seems disabling security doesn't also disable security.insecureAPI.
Hopefully this is equivalent to not running the analyzer at all.
For more info see the Checker Developer Manual here: http://clang-analyzer.llvm.org/checker_dev_manual.html
| TeamCity | 19,863,242 | 11 |
| I have a workspace with a few projects that must be built as static libraries, and I have schemes with tests for them. I want to configure TeamCity to build and test each of those libraries, but it does not work, with the following error:
...
/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -workspace code/MyApplication/My Framework.xcworkspace -scheme One Of Tests TEST_AFTER_BUILD=YES clean build -configuration Debug -sdk iphonesimulator6.1
in directory: /Users/Me/TeamCity/buildAgent/work/d0f083d874fc6891
Build settings from command line:
SDKROOT = iphonesimulator6.1
TEST_AFTER_BUILD = YES
xcodebuild: error: Failed to build workspace My Framework with scheme One Of Tests.
Reason: Scheme "One Of Tests" is not configured for running.
Process exited with code 70
...
But at the same moment, when I clone my repository, cd into it and run command from above in terminal:
/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -workspace code/MyApplication/My Framework.xcworkspace -scheme One Of Tests TEST_AFTER_BUILD=YES clean build -configuration Debug -sdk iphonesimulator6.1
It succeeds: // UPDATE: It worked only for build schemes, not for tests
** BUILD SUCCEEDED **
So it's definitely wrong settings in TeamCity. What can I try to make it works?
P.S. Schemes for building libraries work fine. Only with tests throw errors.
| I found the solution. The problem was the poor support of SenTestingKit in the xcodebuild command. To make it work I had to go to the Edit Scheme menu and set up the Run step so the test scheme became runnable.
Thanks to this article for the solution. A few things were actually different: the Test After Build setting and the macros. In my case it runs the tests only with the YES option, and I did not have to write any macros. Maybe they fixed the issue that was described in the article.
| TeamCity | 17,110,633 | 11 |
Is it possible to get two builds to checkout on the same directory and how can this be done please?
Currently the two different builds are checking out to two different directories.
| You can accomplish this by taking control of the checkout directory locations.
First you need to define your checkout directory to something that can be known to both builds. In your build configuration, browse to Version Control Settings -> Checkout Settings. Change the Checkout Directory setting to Custom Path. You'll then be prompted to provide the directory to which you want to checkout your source. This can be anywhere you want**, as long as TeamCity has write privileges there.
Next, you need to modify the Checkout Rules (also on Version Control Settings) for each project such that they are targeting a folder relative to the root of the Checkout Directory. You can do this by setting the rule to +:%some.repo.path%=>/%some.sub.folder%. You could prescribe any subfolder you want there. We just checkout everything to the Checkout Directory root (=>/).
If both projects are referencing the same Checkout Directory, then this combination of setting should give you the control and flexibility that you're looking for.
** For our Checkout Directory we use the parameterized value %teamcity.agent.work.dir%\%system.teamcity.projectName%\%branch%. The first two parameters are TeamCity system parameters, and the the last is defined by us. On our system this resolves to G:\BuildAgent\work\$PROJECT\$BRANCH, which keeps everything tidy and predictable.
| TeamCity | 13,768,204 | 11 |
We have a TeamCity 7.1 installation that builds all branches from a GitHub repository.
GitHub has a notification hook back to TeamCity to trigger a build on check-in. We also have TeamCity polling GitHub every 120 seconds to check for changes (in case the server was offline when a change was checked in).
Our normal development follows a common pattern:
Create a branch from master
Commit to that branch until finished with a feature
When finished, pull from master to merge any changes and push to remote
Submit a GitHub pull request to allow the admins to merge into master
Everything is working swimmingly (after much searching to get the correct configuration settings) however...
The above process triggers several builds on TeamCity and I'd like to know whether they're all necessary. Typically we'll end up with:
A build for /refs/heads/branch-name
A build for /refs/pull/number/head
A build for /refs/pull/number/merge
Naturally the first build is the last check-in on the particular branch, and the second build is the pull request, but what is the third build for?
| The third build is actually the most valuable - it's the result of the pull request auto-merge (the merge that happens when you press the button at GitHub).
| TeamCity | 12,634,440 | 11 |
I have created a release configuration project in Teamcity 6.5 using the "SLN Runner" for VS 2008 solutions. My debug solution builds fine along with the PDB files - however I simply cannot get the thing to build in Release mode, plus it will insist on defaulting to x64 architecture.
I have tried the following:
Set proj file explicitly to Release mode
Set build parameters to send to MSBuild explicitly passing through /platform:anycpu and /configuration:release
I've noticed in the .sln.proj file that is generated that the following code appears (at first glance) to be incorrect and the configs are being set to Debug mode for both configurations?
<ItemGroup Condition=" ('$(Configuration)' == 'Debug') and ('$(Platform)' == 'Any CPU') ">
<BuildLevel0 Include="MySolution.csproj">
<Configuration>Debug</Configuration>
<Platform>AnyCPU</Platform>
</BuildLevel0>
</ItemGroup>
<ItemGroup Condition=" ('$(Configuration)' == 'Release') and ('$(Platform)' == 'Any CPU') ">
<BuildLevel0 Include="MySolution.csproj">
<Configuration>Debug</Configuration>
<Platform>AnyCPU</Platform>
</BuildLevel0>
</ItemGroup>
Any assistance appreciated:
| May sound stupid, but do all the projects in your solution contain an Any CPU platform configuration for Release?
This has caught us out a few times with some projects only pointing at x86 etc
| TeamCity | 7,067,477 | 11 |
Is there a way to specify which SVN revision to checkout in a TeamCity build?
If I attempt to change the SVN URL to include the revision using the @ notation, eg.
svn+ssh://svn/some/url@1234
then I get an error ("Unknown path kind").
I've searched all TeamCity documentation and can find nothing appropriate.
The background to this question is that I would like to run tests on a particular revision that for some reason was not done in the past (eg. the URL was not in TeamCity at the time).
| Yes, just hit the ellipses next to the "Run" button to trigger a custom build and choose the revision from the "Last change to include" list in the resultant screen. BUT - you can only choose from revisions which the build has previously run.
Unfortunately the only other option is to create a separate VCS root against a tag of the revision you want to run to do this. Not elegant, but it works.
| TeamCity | 6,910,782 | 11 |
The recommended way to run scripts is
powershell.exe -NonInteractive -Command " & some.ps1 "
However for example TeamCity PowerShell runner uses:
powershell.exe -NonInteractive -Command - < some.ps1
I have no idea what "- <" means and cannot find any information on the subject. Any help?
| Because powershell.exe is being invoked through the Windows shell, it is the same as if you were on a normal command prompt (cmd.exe). In that situation < pipes a file to the standard input (stdin) of the previous command. The help for powershell.exe states that if the value of -Command is simply -, the command text is read from standard input.
Here's a more self-documenting demonstration of < in cmd.exe:
processSomeFile.exe outputFileName.ext < inputFile.ext
| TeamCity | 6,816,923 | 11 |
I have a build that I know will fail randomly, around 5% of the time. This is due to an external resource that I have no control over.
Setting a "Retry on Fail" build trigger is easy enough, except that it doesn't allow me to specify a number of retries before it stops trying all together.
Is there a way to have TeamCity retry a build on fail, only n number of times?
Thanks!
| This feature was finally added in version 7.1, See
http://youtrack.jetbrains.com/issue/TW-5165
and
http://confluence.jetbrains.net/display/TCD7/What%27s+New+in+TeamCity+7.1
| TeamCity | 5,384,534 | 11 |
I have a build in TeamCity which runs against a project file names Web.csproj (inside a "Web" folder in the root) and targets "Package". It runs just fine and I get a nice Web\obj\Debug\Package folder with all the expected content.
I then have a second build with an artifact dependency on the above path which is intended to run the deploy command. However, no matter what I do I always get a "Failed to download artifact dependency" error message followed by "No files matched for pattern "Web/obj/Debug/Package"". Even if I set the artifacts path to just ** and try to pull everything from the root, it fails. Looking on the server, there are clearly files in the working directory.
Does anyone have any guidance for troubleshooting this?
| For the sake of completeness, the answer was that I hadn't defined an artefact path in the first build. Without specifying the output to save from this build, it won't be available in dependent builds.
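For illustration, with the package layout from the question, the artifact paths setting of the packaging build could contain a rule along these lines (the target directory name is arbitrary):
Web\obj\Debug\Package => Package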
| TeamCity | 4,093,854 | 11 |
We're setting up a TeamCity build server for continuous integration. To keep things clean, we don't want to install Visual Studio on the build server. I'm wondering how we can build Silverlight 4 apps without it?
I'd expect there to be a Silverlight 4 SDK which installs separately from VS - but so far I haven't found one. There's the Silverlight 4 tools for VS 2010, but that refuses to install without VS. I also found a link to a Silverlight 4 SDK RC, but I assume there should be an RTM version by now :).
If we have to, we can manually copy a few files from a VS machine to the build server. But we don't want to do a full VS install, since it will make the CI environment too different from production.
| Visit the Silverlight Tools topic on MSDN, scroll down to toward the bottom you will find a section on the Silverlight 4 SDK which includes a link to an independent install for the SDK alone. This install should not require VS and is what you need for a CI machine.
| TeamCity | 3,971,733 | 11 |
I want to set up each TeamCity agent to use a local repository (maven.repo.local) in the agent's working directory (agent.work.dir). Is it possible to configure maven properties to use TeamCity properties in this way?
| Enter -Dmaven.repo.local=%system.agent.work.dir%/.m2 for the setting Runner: Maven2 / JVM command line parameters
| TeamCity | 2,238,477 | 11 |
We have some unreliable tests - unreliable because of environmental reasons.
We'd like to see a history of which tests have failed the most often, so we can drill into why and fix the environment issue that causes that particular failure or class of failure.
Is this possible in TeamCity 6.0.3?
We know about individual test history (although that page is really hard to remember how to find!), but that pre-supposes we already know what we're actually trying to find out.
| If you go to the "Current problems" tab for a project, there is a link like "tests failed within 120 hours" at the top. There is some statistics which may be relevant to what you're looking for.
UPDATE: In newer versions of TeamCity, this page is not available. But, there is a new Flaky tests tab, which shows information about tests which fail un-predictably, and this page includes test failure counters.
| TeamCity | 5,975,527 | 10 |
I run MSTest to test WPF application (Coded UI Test) on a VM using Teamcity. I already installed test agent as interactive process but i keep getting this error in Teamcity log
Error calling Initialization method for test class Squarebit.Apms.Terminal.Wpf.Test.CodedUITest1: Microsoft.VisualStudio.TestTools.UITest.Extension.UITestException: To run tests that interact with the desktop, you must set up the test agent to run as an interactive process. For more information, see "How to: Set Up Your Test Agent to Run Tests That Interact with the Desktop" (http://go.microsoft.com/fwlink/?LinkId=255012)
If you are running the tests as part of your team build, you must also set up the build agent to run as an interactive process. For more information, see "How to: Configure and Run Scheduled Tests After Building Your Application" (http://go.microsoft.com/fwlink/?LinkId=254735)
at Microsoft.VisualStudio.TestTools.UITesting.Playback.Initialize()
at Microsoft.VisualStudio.TestTools.UITesting.CodedUITestExtensionExecution.BeforeTestInitialize(Object sender, BeforeTestInitializeEventArgs e)
at Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestExecution.RaiseBeforeTestInitialize(BeforeTestInitializeEventArgs args)
at Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestExecuter.RunInitializeMethod()
Can you help me resolve this problem or recommend some ways to run Coded UI Test using Teamcity?
| Coded UI tests (CUIT) can't run from a service account since they need access to the Desktop Windowing API set.
Please refer to the "Installing the TeamCity build agent" section in http://jake.ginnivan.net/teamcity-ui-test-agent/ to set up the TeamCity agent as a non-service account.
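As a sketch, that boils down to disabling the agent's Windows service and starting the agent from an interactive (auto-logon) session instead, e.g. from PowerShell (assuming a default C:\BuildAgent install path):
& "C:\BuildAgent\bin\agent.bat" start  # run from the interactive session, not as a service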
| TeamCity | 24,901,757 | 10 |
I have a TeamCity build set up which does nothing but run integration tests. Sadly, the tests are a tad unreliable. Most of them work fine, but a few intermittently fail from time to time.
I would dearly love to be able to get a graph of the most common test failures. Basically I want to know which tests fail most often.
I know that TC can show me pass/fail statistics for any single test. But I'm not going to go click on all 400+ tests to find out which ones fail most often!
If it's not possible to make TC show me this information, is there some interface that will enable me to download the data so I can process it myself?
You can get a count of how frequently tests fail from TeamCity by following the steps detailed in this link:
Navigate to: Projects -> (select project) -> Current Problems (tab) -> "View tests failed within the last 120 hours" (link at the right side of the page)
http://confluence.jetbrains.com/display/TCD7/Viewing+Tests+and+Configuration+Problems#ViewingTestsandConfigurationProblems-ViewingTestsFailedwithinLast120Hours
| TeamCity | 21,330,337 | 10 |
I am trying to use MSDeploy to deploy an MVC project to the server using TeamCity. When I do this on my computer in powershell, using the following command:
msbuild.exe .\mvc.csproj /p:PublishProfile=DevServer /p:VisualStudioVersion=11.0
/p:DeployOnBuild=True /p:Password=MyPassword /p:AllowUntrustedCertificate=true
It builds the project and deploys it to the server (info defined in the DevServer publish profile) perfectly. The output shows an MSDeployPublish section at the end, in which I see text like Starting Web deployment task from source... and then with rows telling me what files are updated, etc.
When I run this on TeamCity, using an MSBuild Build step, on the same file, with the same parameters (from the same working directory) it builds the project but does not publish it. Instead it has the regular output from a build process (CoreCompile, _CopyFilesMarkedCopyLocal, GetCopyToOutputDirectoryItems, CopyFilesToOutputDirectory) but then does not actually go and publish anything.
What changes to I need to make to the setup in TeamCity to get it to publish deploy in the same way that it works using MSBuild from my computer?
(TeamCity 7.1, MSBuild 4.0, WebDeploy 3.0, Visual Studio 12, IIS 7. Related to my previous question)
| We do our WebDeploys with a TeamCity MSBuild step configured as follows:
Build File Path: Server.csproj
Command Line Parameters:
/p:Configuration=%configuration%
/p:DeployOnBuild=True
/p:DeployTarget=MSDeployPublish
/p:MsDeployServiceUrl=https://%web.deploy.server%:8172/MsDeploy.axd
/p:DeployIisAppPath=%web.deploy.site%
/p:AllowUntrustedCertificate=True
/p:Username=
/p:AuthType=NTLM
We use integrated authentication; change as necessary to fit your scheme. The value of this, I think, is that it builds everything from scratch and doesn't rely on a pre-built package. From the gist you posted I noticed that you do some DB publishing, we don't use WebDeploy for that so I can't offer any guidance there. Hope this helps.
| TeamCity | 14,235,960 | 10 |
Hi
I installed TeamCity a long time ago, on my home computer.
I am trying to re-use it again now, but I forgot the admin username and password
Is there a default admin user name?
and how can I get the password?
Thanks
| From TeamCity 8 you can log in as a super user and change the password that way. You just need to use an empty username and the last occurrence of the "super user authentication token" found in the logs\teamcity-server.log file as your password.
Please see the following for more information:
TeamCity 8 - http://confluence.jetbrains.com/display/TCD8/Super+User
TeamCity 9 - http://confluence.jetbrains.com/display/TCD9/Super+User
| TeamCity | 4,057,891 | 10 |
| I recently took charge of a software product which had evolved in a rather unorganized way, and I have established a new project structure, a source code repository, issue tracking and a build system using NAnt and TeamCity. I'm at the point where every commit to one of the major branches gets compiled, tested and built into a setup.
Always building and shipping full setups seems wrong to me and I'd like to establish some kind of automated patch building, but I have no idea how to do that. Do you have any suggestions how I could do that or where I could find some information on the topic? Google was no help so far.
Some more details on my current setup:
Repository:
- git:
-- 2 major branches: development and master
Build system:
- teamcity
- 2 configurations: one for building each branch
- build consists of only one build step:
-- nant runner: nant script is part of the repository and contains the following targets: clean, init, compile, test, deploy, build_setup (using inno setup)
I guess I'll have to split the NAnt script into pieces and use different build steps to somehow compare the new build artifacts to older ones and create a patch containing the updated files. Am I on the right track, and if so, does anyone know a good example or tutorial on how to set up TeamCity?
| Unless what you have is a massive multi-megabyte end-user application, generating patches (which I assume you want to be minimal) is a daunting task, since you'll have to provide patches from each previous version to the most up-to-date one.
Alternatively, you can invest in autoupdate infrastructure, so that an app will update itself whenever a new version is released.
As for building setups for each commit, I personally don't think this is necessary unless you're continuously testing the setup program itself. Rather, a complete build should be triggered manually, whenever it's time to release.
| TeamCity | 6,993,086 | 10 |
Is there a way to configure TeamCity to ignore some tests? I need to run these tests only locally; when they are running in TeamCity, they must be ignored.
I'm using nunit.
This could be a directive, attribute, etc.
| You can do this by adding test categories to your tests.
[Category("LocalOnly")]
[Test]
public void MyLocalTest()
{
// Code omitted for brevity
}
You can then add that category to the NUnit runner's 'NUnit categories exclude:' field in the TeamCity build step.
NUnit categories exclude: LocalOnly
| TeamCity | 33,876,192 | 10 |
I am trying to pass the branch names from TeamCity to OctopusDeploy so that we can easily track which branch a deployment came from.
To do this I want to append the branch name onto the version number (or the nuget package built with octopack) so that I can display this in the OctopusDeploy UI.
This works fine, except that we are using git-flow so some of our branches contain slashes which causes octopack to fail (as file names cannot contain slashes):
+:refs/heads/(feature/*)
+:refs/heads/(release/*)
+:refs/heads/(hotfix/*)
Is there any way to replace the slashes with something else in TeamCity without changing the way we name our branches?
| Using a build script you can interact with the build process and specify a custom build number where you can replace the slashes. For more details you can check the TeamCity docs.
Here you can find a C# example of how to alter the build number.
For example, in order to mangle the build number you can add CommonAssemblyInfo.cs with content like this (extracted from the above link):
$ww = ([Math]::Floor([DateTime]::Now.DayOfYear/7)+1)
Write-Host "##teamcity[buildNumber '%major.minor%.$ww.%build.counter%']"
$fileLocation = Join-Path -Path "%teamcity.build.checkoutDir%" -ChildPath "\SourceDir\AssemblyInfo.cs"
$oldValue = "AssemblyFileVersion\(""(\d+)\.\d+\.\d+\.\d+""\)"
$newValue = [string]::Concat("AssemblyFileVersion(""%major.minor%.", $ww, ".%build.counter%", """)")
(get-content $fileLocation) | foreach-object {$_ -replace $oldValue, $newValue} | set-content $fileLocation
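Applying the same idea directly to the slash problem: below is a small sketch (mine, not from the linked article) of an early build step that rewrites the build number with the slashes replaced. It assumes Python is available on the agent and that %teamcity.build.branch% resolves to the branch name in your setup; the version prefix is just an example.
import sys

# Hypothetical early build step: make the branch name file-system safe
# before OctoPack uses the build number for the package name.
branch = "%teamcity.build.branch%"      # substituted by TeamCity before the step runs
counter = "%build.counter%"             # also a TeamCity parameter reference

safe_branch = branch.replace("/", "-")  # e.g. feature/login -> feature-login

# Writing this service message to stdout tells TeamCity to use the new build number.
sys.stdout.write("##teamcity[buildNumber '1.0.{0}-{1}']\n".format(counter, safe_branch))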
| TeamCity | 33,419,119 | 10 |
I have a TeamCity server with its Nuget feed enabled. I would like to manually add some third-party nupkg files to it. Is it possible to do so?
| You can add nupkg to a private feed either by using the out-of-the-box TeamCity "NuGet Publish" runner type step or by using the NuGet exe.
Out of the Box NuGet Publish: configure a build step with runner type "NuGet Publish". Under NuGet settings, provide the location to your .nupkg file(s) relative to checkout directory. Also supply the API key and the package source (URL to your private NuGet feed). Then run this build step and it should publish your package. It might be better to have preceding steps that rename the package to avoid confusion.
Command-line NuGet.exe: configure a build step with runner type "Command Line". Select "Executable with Parameters" under the Run option. Input the path to NuGet.exe under "command executable" and add the following parameters under "command parameters": push {Path-to-package}{Package-Name}.nupkg {API-KEY} -Source {URL-to-Private-Feed}
| TeamCity | 28,321,036 | 10 |
We installed karma with the teamcity reporter on our build server. It was running unit tests through Chrome, Firefox, and IE and everything was working great. Then yesterday I noticed that Chrome was failing to report. IE and Firefox were still connecting and running all the unit tests but for some reason karma is not able to open a connection with Chrome so it times out after 60 seconds and the build step fails.
What's really strange is that I can log in to the build server and run this from the command line with no problems. The tests run (and they are very fast).
karma start --reporters teamcity --single-run --log-level error --browsers=IE,Firefox,Chrome
Here's the build log from teamcity. Does anyone have a clue what's going on? As you can see firefox and ie report fine but Chrome just falls on its face every time. I appreciate any help you might be able to offer.
[16:16:39][Step 2/5] DEBUG [config]: Loading config C:\TCBuildConf\01-OpSuiteDev\02-codebase\Website\OpSuite.MobileWeb\Client\unit_tests\karma.conf.js
[16:16:39][Step 2/5] DEBUG [config]: autoWatch set to false, because of singleRun
[16:16:39][Step 2/5] DEBUG [plugin]: Loading karma-* from C:\Users\administrator.OPSUITE\AppData\Roaming\npm\node_modules
[16:16:39][Step 2/5] DEBUG [plugin]: Loading plugin C:\Users\administrator.OPSUITE\AppData\Roaming\npm\node_modules/karma-chrome-launcher.
[16:16:39][Step 2/5] DEBUG [plugin]: Loading plugin C:\Users\administrator.OPSUITE\AppData\Roaming\npm\node_modules/karma-firefox-launcher.
[16:16:39][Step 2/5] DEBUG [plugin]: Loading plugin C:\Users\administrator.OPSUITE\AppData\Roaming\npm\node_modules/karma-ie-launcher.
[16:16:39][Step 2/5] DEBUG [plugin]: Loading plugin C:\Users\administrator.OPSUITE\AppData\Roaming\npm\node_modules/karma-jasmine.
[16:16:39][Step 2/5] DEBUG [plugin]: Loading plugin C:\Users\administrator.OPSUITE\AppData\Roaming\npm\node_modules/karma-teamcity-reporter.
[16:16:39][Step 2/5] INFO [karma]: Karma v0.12.16 server started at http://localhost:7357/
[16:16:39][Step 2/5] INFO [launcher]: Starting browser IE
[16:16:39][Step 2/5] DEBUG [temp-dir]: Creating temp dir at C:\TeamCity\buildAgent\temp\buildTmp\karma-43795558
[16:16:39][Step 2/5] DEBUG [launcher]: C:\Program Files\Internet Explorer\iexplore.exe -extoff http://localhost:7357/?id=43795558
[16:16:39][Step 2/5] INFO [launcher]: Starting browser Firefox
[16:16:39][Step 2/5] DEBUG [temp-dir]: Creating temp dir at C:\TeamCity\buildAgent\temp\buildTmp\karma-44455821
[16:16:39][Step 2/5] DEBUG [launcher]: C:\Program Files (x86)\Mozilla Firefox\firefox.exe http://localhost:7357/?id=44455821 -profile C:\TeamCity\buildAgent\temp\buildTmp\karma-44455821 -no-remote
[16:16:39][Step 2/5] INFO [launcher]: Starting browser Chrome
[16:16:39][Step 2/5] DEBUG [temp-dir]: Creating temp dir at C:\TeamCity\buildAgent\temp\buildTmp\karma-28976911
[16:16:39][Step 2/5] DEBUG [launcher]: C:\Program Files (x86)\Google\Chrome\Application\chrome.exe --user-data-dir=C:\TeamCity\buildAgent\temp\buildTmp\karma-28976911 --no-default-browser-check --no-first-run --disable-default-apps --disable-popup-blocking --disable-translate http://localhost:7357/?id=28976911
[16:16:39][Step 2/5] DEBUG [watcher]: Resolved files:
[16:16:42][Step 2/5] DEBUG [karma]: A browser has connected on socket WNLvjHxwETp4In9uP7A6
[16:16:42][Step 2/5] INFO [IE 11.0.0 (Windows 7)]: Connected on socket WNLvjHxwETp4In9uP7A6 with id 43795558
[16:16:42][Step 2/5] DEBUG [launcher]: IE (id 43795558) captured in 2.943 secs
[16:16:42][Step 2/5] DEBUG [launcher]: Killed extra IE process 652
[16:16:42][Step 2/5] DEBUG [launcher]: Process IE exited with code 0
[16:16:42][Step 2/5] DEBUG [temp-dir]: Cleaning temp dir C:\TeamCity\buildAgent\temp\buildTmp\karma-43795558
[16:16:59][Step 2/5] DEBUG [karma]: A browser has connected on socket 4O99QMVW24pEsJBGP7A7
[16:16:59][Step 2/5] INFO [Firefox 32.0.0 (Windows 7)]: Connected on socket 4O99QMVW24pEsJBGP7A7 with id 44455821
[16:16:59][Step 2/5] DEBUG [launcher]: Firefox (id 44455821) captured in 20.586 secs
[16:17:00][Step 2/5] DEBUG [launcher]: Process Firefox exited with code 0
[16:17:00][Step 2/5] DEBUG [temp-dir]: Cleaning temp dir C:\TeamCity\buildAgent\temp\buildTmp\karma-44455821
[16:17:39][Step 2/5] WARN [launcher]: Chrome have not captured in 60000 ms, killing.
[16:17:39][Step 2/5] DEBUG [launcher]: Process Chrome exited with code 0
[16:17:39][Step 2/5] DEBUG [temp-dir]: Cleaning temp dir C:\TeamCity\buildAgent\temp\buildTmp\karma-28976911
[16:17:39][Step 2/5] INFO [launcher]: Trying to start Chrome again (1/2).
[16:17:39][Step 2/5] DEBUG [launcher]: Restarting Chrome
[16:17:39][Step 2/5] DEBUG [temp-dir]: Creating temp dir at C:\TeamCity\buildAgent\temp\buildTmp\karma-28976911
[16:17:39][Step 2/5] DEBUG [launcher]: C:\Program Files (x86)\Google\Chrome\Application\chrome.exe --user-data-dir=C:\TeamCity\buildAgent\temp\buildTmp\karma-28976911 --no-default-browser-check --no-first-run --disable-default-apps --disable-popup-blocking --disable-translate http://localhost:7357/?id=28976911
[16:18:39][Step 2/5] WARN [launcher]: Chrome have not captured in 60000 ms, killing.
[16:18:39][Step 2/5] DEBUG [launcher]: Process Chrome exited with code 0
[16:18:39][Step 2/5] DEBUG [temp-dir]: Cleaning temp dir C:\TeamCity\buildAgent\temp\buildTmp\karma-28976911
[16:18:40][Step 2/5] INFO [launcher]: Trying to start Chrome again (2/2).
[16:18:40][Step 2/5] DEBUG [launcher]: Restarting Chrome
[16:18:40][Step 2/5] DEBUG [temp-dir]: Creating temp dir at C:\TeamCity\buildAgent\temp\buildTmp\karma-28976911
[16:18:40][Step 2/5] DEBUG [launcher]: C:\Program Files (x86)\Google\Chrome\Application\chrome.exe --user-data-dir=C:\TeamCity\buildAgent\temp\buildTmp\karma-28976911 --no-default-browser-check --no-first-run --disable-default-apps --disable-popup-blocking --disable-translate http://localhost:7357/?id=28976911
[16:19:40][Step 2/5] WARN [launcher]: Chrome have not captured in 60000 ms, killing.
[16:19:40][Step 2/5] DEBUG [launcher]: Process Chrome exited with code 0
[16:19:40][Step 2/5] DEBUG [temp-dir]: Cleaning temp dir C:\TeamCity\buildAgent\temp\buildTmp\karma-28976911
[16:19:40][Step 2/5] Process exited with code 1
| This is a Chrome bug with the latest release of Chrome.
The problem seems to be with launching Chrome from a service.
The Chromium team has fixed the issue and it will be included in the next release, which is expected around November 20, 2014.
issue tracker
| TeamCity | 26,431,494 | 10 |
I am building my app in teamcity with xcodebuild command line tools. I am looking for a way to suppress or make the output less verbose but still show errors or failures if they happen. The build log becomes very large and the browser has a hard time loading it.
Are there optional parameters I can pass in or a way to stream it to a log file?
| It's not possible. However, you can make the log more readable with xctool or xcpretty - I'm not sure whether the size is also reduced, but it probably is.
| TeamCity | 26,068,730 | 10 |
I have a batch file that I use to copy a folder and its contents to a new location; it also creates the folder name based on the date and time (and this works):
SET TODAY=%DATE:/=-%
SET NOW=%TIME::=-%
XCOPY /S /Y "C:\BuildAgent\temp\buildTmp" "C:\Automation Results\%TODAY%_%NOW%\"
I added a new Configuration Step to my Team City setup, to include this batch file. The build step is a Command Line - Custom Script:
But this has an adverse effect on the TC Agent Requirements and I cannot start my TC builds:
This issue seems to be related to TC Implicit Requirements:
http://confluence.jetbrains.com/display/TCD8/Agent+Requirements
"Implicit Requirements
Any reference (name in %-signs) to an unknown parameter is considered an "implicit requirement". That means that the build will only run on the agent which provides the parameters named. Otherwise, the parameter should be made available for the build configuration by defining it on the build configuration or project levels."
How can I get around this TC conflict with % symbol which I need in my batch file?
| Use %% instead of %
SET TODAY=%%DATE:/=-%%
SET NOW=%%TIME::=-%%
XCOPY /S /Y "C:\BuildAgent\temp\buildTmp" "C:\Automation Results\%%TODAY%%_%%NOW%%\"
This will ensure the variables are treated as batch file variables instead of TeamCity variables.
| TeamCity | 23,886,583 | 10 |
I'm new to TeamCity and I don't know how to run SQL scripts with it.
Is it simply a matter of selecting the path of those scripts in a Command Line build runner?
I'm pretty lost.
Regards.
| In a command line build step:
Command executable: c:\Program Files\Microsoft SQL Server\100\Tools\Binn\sqlcmd.exe
Command parameters: -S <server> -i <path_to_file> <== Note: that's a capital -S!
You may need to change the 100 to something else, depending on the version of the SQL Server tools that you have installed on the build agent.
| TeamCity | 21,555,038 | 10 |
At work we've added SQL Database projects to our VS 2010 project as a way of keeping control of changes in stored procedures and schema changes. Unfortunately, it is now breaking the build on our TeamCity CI server.
Is there a way to tell TeamCity not to build these projects or will I have to accept defeat and install Visual Studio 2010 on the TeamCity CI server?
| Option 1
Make a copy of your current .sln and remove the SQL db projects; then point TeamCity at this sln instead.
Option 2
Make a new Build Configuration (you have Debug, Release and you could add DebugCI as an example) and tick the projects you want compiled in this configuration. Then in the build step setup, type DebugCI into the Configuration box: (it's Debug in this screenshot but you get the idea)
| TeamCity | 15,158,116 | 10 |
I am very new to TeamCity and currently have a problem with an incompatible agent:
Unmet requirements:
DotNetFramework4.5_x86 exists
Does anyone know how to fix this? Do I have to add a reference to .NET 4.5 somewhere?
Any advice appreciated.
| You have an agent requirement that DotNetFramework4.5_x86 exists, but on this agent it doesn't. If the requirement is required, you need to install .NET on that agent machine. TeamCity has detected that .NET is not installed on this machine so your build cannot run.
If the requirement is incorrect and not needed by your build, it can be removed by going under:
Edit build Configuration > Agent Requirements
Then in the table of agent requirements you will see:
DotNetFramework4.5_x86 exists
And there is a button to delete this requirement. Once you delete the requirement, the agent will appear under 'Compatible Agents'.
| TeamCity | 13,312,796 | 10 |
I use the TeamCity (7.0) REST API to allow developers to trigger custom builds. I add the build to the queue like this:
http://teamcity/httpAuth/action.html?add2Queue=[buildTypeId]&name=[propName]&value=[propValue]
My question is how I best can track the progress of the build just triggered. The REST call does not return any info about build ID assigned to the build, so even if I poll the list of builds (running/finished) I will not know if one of them is the one I triggered. There could potentially be several builds for the same buildTypeId in the queue, so I need a way to separate out the one I am after.
I read somewhere a suggestion that you could add a build property with a unique value to each build you put in the queue, and then later poll the build list and look for one with that exact property value. I have however not found a way of listing the properties for the builds, so I am still stuck. This REST call does not provide information about properties:
http://teamcity/httpAuth/app/rest/builds/?locator=buildType:[buildTypeId]
Any suggestions on how to solve this? I would ideally like to know if the build is in the queue, if it is running, and when it's done I would like to get the status. The most important thing, however, is to know when it is done and what status it has.
| After some further investigation I came up with a solution for this which seems to work fine:
I found out that even though you did not get any information about the custom build properties using the "/builds/?locator=buildType:x" call, you could extract the build ID for each one of the builds in that list and then do another REST call to get more details about one specific build. The rest call looks like this:
http://teamcity/httpAuth/app/rest/builds/id:{0}
The response from this call will give you a "build object" which contains a list of build properties, among other things.
My solution for tracking the build progress was then like this:
When a build is added to the TeamCity queue, I first add a property to the URL called "BuildIdentifier". The value is just a GUID. I pass this identifier back to the client application, and then the client starts polling the server, asking for the status of the build with this specific identifier. The server then goes through some steps to identify the current stage of the build:
1: Check if the build is running. I get the list of running builds with the call "/builds?locator=running:true", iterate through the builds and use the build ID to query the REST API for details. I then go through the details for each running build looking for a build with a matching "BuildIdentifier" property to the one I received from the client. If there is a match in one of the running builds I send a response with a message that the build is running at x percent (PercentageComplete property of the build object) to the client who is tracking the progress. If a match is not found I move on to step 2.
2: Check if it is finished: First get the latest build list using the "/builds/?locator=buildType:x" call. Then do the same thing as in step 1, and extract the X latest builds from the list (I chose 5). In order to limit the number of REST calls I set an assumption that the build would be in the latest 5 builds if it was finished. I then look for a match on the BuildIdentifier, and if I get one I return the status of the build (FAILED, SUCCESS, etc.).
3: If there was no match for the BuildIdentifier in step 1 or 2 I can assume that the build is in the queue, so I return that as the current status.
On the client side I poll the server for the status every x seconds as long as the status is saying that the build is either in the queue, or running.
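For illustration, here is a rough Python sketch of that server-side check using the requests library. The build type id and credentials are placeholders, and the exact XML attribute and element names (status, percentageComplete, the property entries) may need adjusting to your TeamCity version:
import requests
import xml.etree.ElementTree as ET

BASE = "http://teamcity/httpAuth/app/rest"
AUTH = ("user", "password")  # placeholder credentials

def has_identifier(build_id, identifier):
    # Fetch the details of one build and look for our BuildIdentifier property.
    xml = requests.get("{0}/builds/id:{1}".format(BASE, build_id), auth=AUTH).text
    root = ET.fromstring(xml)
    return any(p.get("name") == "BuildIdentifier" and p.get("value") == identifier
               for p in root.iter("property"))

def build_status(identifier, build_type="MyBuildType"):
    # Step 1: look through the running builds.
    running = ET.fromstring(requests.get(BASE + "/builds?locator=running:true", auth=AUTH).text)
    for build in running.iter("build"):
        if has_identifier(build.get("id"), identifier):
            return "RUNNING {0}%".format(build.get("percentageComplete"))
    # Step 2: look through the latest finished builds (only the last five, as above).
    finished = ET.fromstring(requests.get(BASE + "/builds/?locator=buildType:" + build_type, auth=AUTH).text)
    for build in list(finished.iter("build"))[:5]:
        if has_identifier(build.get("id"), identifier):
            return build.get("status")  # e.g. SUCCESS or FAILURE
    # Step 3: no match anywhere, so assume the build is still queued.
    return "QUEUED"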
Hope this solution could be helpful if there is someone else with the same problem out there! I would think that tracking the progress of a triggered build is a pretty common task if you use the TeamCity REST API.
| TeamCity | 11,966,674 | 10 |
As per Teamcity REST API
We can use the following to get XML Data
curl -v --basic --user USERNAME:PASSWORD --request POST "http://teamcity:8111/httpAuth/app/rest/users/" --data @data.xml --header "Content-Type: application/xml"
Can we do the same for JSON ?
curl -v --basic --user USERNAME:PASSWORD --request POST "http://teamcity:8111/httpAuth/app/rest/users/" --data @data.json --header "Content-Type: application/json"
Both return:
HTTP/1.1 200 OK
Date: Sun, 05 Aug 2012 02:18:36 GMT
Server: Apache-Coyote/1.1
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: no-cache
Cache-Control: no-store
Content-Type: application/xml
Thus, the Content-Type is xml.
How can I get a JSON response?
| You need to set the Accept header, not the Content-type header:
curl -v --basic --user USERNAME:PASSWORD --request POST "http://teamcity:8111/httpAuth/app/rest/users/" --data @data.json --header "Accept: application/json"
| TeamCity | 11,813,495 | 10 |
We use jetBrains TeamCity continuous integration server for builds.
We've got tens of different projects in TeamCity, and want to see one big picture across them in terms of their development quality, to find out which projects are lacking quality and in which sense. We use metrics such as unit test coverage, cyclomatic complexity \ maintainability index, duplicates, defect rates, etc...
We collect metrics to TeamCity from test tools, either:
automatically if supported by TeamCity as standard metrics (e.g. NCover coverage).
manually, extracting them when running test tools and providing them to TeamCity using service messages: ##teamcity[buildStatisticValue key='<valueTypeKey>' value='<value>']
So we got them in TeamCity and can see them on per project charts. We can even get them out of TeamCity by REST protocol in XML or JSON format.
Our goal is to see the overall picture across ALL projects. Here are 2 examples of tables that we want to see:
projects in rows, time (weeks) in columns, and values of one chosen metric in inside cells.
projects in rows, all metrics in columns, values of the metrics in inside cells for a specific point in time (e.g. latest).
Or it could be a 2-dimensional chart with a similar approach.
So, the question is:
Is there an existing dashboard tool that can show the described tables and/or charts? Either a separate application tightly integrated with TeamCity, or a plugin for TeamCity?
Thanks!
| This question is pretty similar to another one I just answered.
The answer is to use SonarQube.
| TeamCity | 11,156,335 | 10 |
We have a TeamCity instance with a variety of projects and build configurations on it, with no security set up at present. Although it's OK for most of the projects to be publicly visible, we'd like to set up a couple of projects that are only visible to certain users.
Because there are many public projects already set up on the server, across a variety of teams, we'd like to avoid setting up restrictions on everything - that is, we'd rather use "deny access to project Z" than "allow access to project A, allow access to project B, ..., allow access to project Y".
How can I restrict access to these projects without affecting the public projects?
| In case anyone still needs an answer, this can be done by TeamCity itself.
Go to Administration -> Groups -> 'Create new group'. For example, public
Assign roles to this group. You can choose 'Grant role in selected projects' radio button and choose those public projects and click Assign button.
| TeamCity | 10,537,931 | 10 |
I'm migrating a continuous integration system from TeamCity to Jenkins. We have a single svn repository for all our projects like this:
project/dev_db_build (folder)
project/module1 (folder)
project/module2 (folder)
projets/pom.xml
For building the db on the CI server I use the url project/dev_db_build and can poll this url to trigger builds when there are changes.
For building the application I use the url project/. So if I poll it and there are changes to dev_db_build, the application build should be ignored and instead triggered after the db build succeeds.
In TeamCity I used "Trigger patterns" for this. But in Jenkins there are so many triggering plugins https://wiki.jenkins-ci.org/display/JENKINS/Plugins#Plugins-Buildtriggers - I looked into some of them and have not found a suitable one.
| Ideally, you should use a post-commit hook as suggested by @Mike, rather than polling. Otherwise, when configuring the Jenkins job, under 'Source Code Management' with 'Subversion' selected, there is an advanced button. Clicking this reveals a number of options, including 'Excluded Regions'
If set, and Jenkins is set to poll for changes, Jenkins will ignore
any files and/or folders in this list when determining if a build
needs to be triggered. Each exclusion uses regular expression pattern
matching, and must be separated by a new line.
/trunk/myapp/src/main/web/.*.html
/trunk/myapp/src/main/web/.*.jpeg
/trunk/myapp/src/main/web/.*.gif
The example above illustrates that if only html/jpeg/gif files have
been committed to the SCM a build will not occur. More information on
regular expressions can be found here.
In your case, you would set 'Excluded Regions' to something like
/project/dev_db_build/.*
| TeamCity | 8,037,954 | 10 |
How can I automate executing a batch file from TeamCity? Can I create a TC build configuration and have the TC agent build that and automatically run the specified batch file?
EDIT: batch script.
echo off
echo Do you want to deploy xxxx to DerServ(yn):
set /p input=
if "%input%" == "y" goto :1
if NOT "%input%" == "y" goto :2
:1
SET MSBUILD="C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe"
%MSBUILD% xxxxx.defaultTeamCity.msbuild /target:projBuild
goto end
:2
ECHO Exiting...
goto end
:end
pause
Error message:
[12:25:12]: 'projBuild' is not recognized as an internal or external command,[12:25:12]: operable program or batch file.[12:25:13]: Build finished
| Yes, you can do it using Command Line runner.
| TeamCity | 7,490,213 | 10 |
I'm setting up TeamCity for Continuous Integration and (hopefully) Continuous Deployment. Some of the build steps will involve private files, e.g.
.snk files for strong naming .NET assemblies
password/token files for publishing artifacts (for example to NuGet or CodePlex)
Since these files contain private data I don't want to put them into the (publicly accessible) source control system.
I'm setting up http://teamcity.codebetter.com for AutoFixture so I don't have physical access to the server. I was hoping for a feature that would let me upload such files, but can't find anything of the kind.
What would be the most appropriate solution?
| TeamCity supports multiple VCS roots, so you could just add an extra VCS root with these private files.
Obviously this would require that the second repository is private - but that is what you want any way. Having those files in source control is a great thing.
| TeamCity | 6,721,000 | 10 |
I am trying to set up TeamCity behind nginx. I'd like https://public.address.com/teamcity/... to redirect to http://127.0.0.1:8111/..., but even though nginx does this successfully, the login page comes back with references that look like this:
<script type="text/javascript" src="/res/-8762791360234593415.js?v=1305815890782"></script>
Obviously, this won't do, and fiddling with the rootURL setting (Server URL: in Server Configuration) doesn't make any difference.
How do I run TeamCity behind a proxy under a non-root URL?
FWIW, here's the relevant portion of my nginx config:
location /teamcity/ {
proxy_pass http://127.0.0.1:8111/;
proxy_redirect http://127.0.0.1:8111/ https://$host/teamcity/;
}
| I did this using the standard Teamcity Windows installer, and presumably it would work on any platform.
Change Teamcity Location
As per a comment by a JetBrains employee:
To change TeamCity address from http://server/ to http://server/teamcity/, rename the <TeamCity home>\webapps\ROOT directory to <TeamCity home>\webapps\teamcity.
Note also that you'll need to rename this directory every time you upgrade Teamcity.
Proxy configuration
The nginx config then looks something like:
location /teamcity/ {
proxy_pass http://teamcity-server.domain.com/teamcity/;
}
Or you can use Apache (I switched to Apache due to authentication requirements I had):
<Location /teamcity>
ProxyPass http://teamcity-server.domain.com/teamcity
ProxyPassReverse http://teamcity-server.domain.com/teamcity
</Location>
Redirect old URL
I also created a new <Teamcity home>\webapps\ROOT, and put an index.jsp file into it, which redirects to the new URL so old links continue to work (eg, if someone goes to http://teamcity-server.domain.com it redirects to http://teamcity-server.domain.com/teamcity):
<!DOCTYPE html>
<html>
<head>
<title>TeamCity</title>
<meta http-equiv="refresh" content="0;url=/teamcity/overview.html"/>
</head>
<body>
<!-- no content -->
</body>
</html>
You could also do the redirect in nginx/apache, but doing it on the TeamCity server means if someone goes to the old URL directly on the TeamCity web server (instead of via your proxy) they'll still get correctly redirected (instead of a 404).
| TeamCity | 6,071,426 | 10 |
We are using TFS 2010 for source control and project management, and TeamCity 6.0 for performing builds and build reporting (CI and daily deployments for testers). Setting up TFS source labeling in TeamCity to match the build number was very straightforward, but I cannot find a way to link this back to TFS Build Explorer.
We want to link these to be able to assign bugs to particular builds through TFS for the daily tester deployment builds.
| I don't know if you can, at least without some heavy VSX work or direct manipulation of the database, get the TeamCity builds to show up in the TFS Build Explorer.
However, the "Found in Build:" drop down on in the bug workitem is a populated by a global list which you can add to pro grammatically using http://blogs.microsoft.co.il/blogs/shair/archive/2010/03/08/tfs-api-part-23-create-global-list-xml-way.aspx .
| TeamCity | 4,889,410 | 10 |
Is it possible to Trigger an exe to run on a failed build? Can you do this within Team City?
| If you specifically want the failed builds, you can set up the dependent build as Eric said, and have that secondary buildscript use the REST API to pull up a list of the failed builds for the actual project.
If the latest build is in that failed builds list, then tell the build script to run the executable. If not, then you're all done!
http://confluence.jetbrains.net/display/TW/REST+API+Plugin
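As a rough illustration of that idea (the server address, credentials, build configuration id and exe path below are all placeholders), the secondary build step could do something like this in Python:
import subprocess
import xml.etree.ElementTree as ET
import requests

# Ask TeamCity for the most recent finished build of the configuration we care about.
resp = requests.get(
    "http://teamcity/httpAuth/app/rest/builds/?locator=buildType:MyProject_Main,count:1",
    auth=("user", "password"),
)
latest = ET.fromstring(resp.text).find("build")

# Only run the executable when that build failed.
if latest is not None and latest.get("status") == "FAILURE":
    subprocess.call([r"C:\tools\on_failure.exe"])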
| TeamCity | 4,671,888 | 10 |
I'm setting up a nightly build for continuous integration within TeamCity. I wanted to create an artifact dependency on the last complete build. But I notice that I have no artifacts created in my project. I'm building using MS build.
| You need to specify what files you are interested in TeamCity saving as artifacts. This is done by changing the Artifact Path under General Settings when you edit the configuration.
http://confluence.jetbrains.net/display/TCD5/Build+Artifact
| TeamCity | 4,277,348 | 10 |
I have a recurring problem with TeamCity. At my company I have installed TeamCity three different times and successfully connected it to some kind of SVN repo.
But after a while I always get the same error: unable to access localhost, i.e. TeamCity's login page (I start the browser and it can't find localhost).
I have tried to find a solution but with no success; I also tried to get TeamCity to stop working (to find out what is causing the problem), but also without success.
The tricky part is that I don't know why it happens and I have no clue how to fix it. It just happens suddenly. The logs do not tell me anything and all the services/ports/etc. are working properly. It just, out of the blue, loses its start-page connection.
I run TeamCity on a Win 2008 Server R2. So, does anyone have a clue or some ideas that might help me to fix this?
| It's solved. No valid information was to be found in any logs. The silly answer was that, for some reason, port 80, that Tomcat uses, was blocked by a "System:4"-process. After some research it turned out to be the AD.
Never would have suspected that since it happened all of a sudden. That is, it worked for a couple of weeks then just one day - nothing.
You'll have to edit
TeamCity\conf\server.xml
TeamCity\buildAgent\conf\buildAgent.properties
to use another port instead of the default port.
| TeamCity | 3,195,384 | 10 |
I'm new to Continuous Integration. I want advice on which tool I should start with. I see that these are the biggest tools right now: CruiseControl.NET, TeamCity and Visual Studio Team System.
I'm using these tools: Visual Studio 2010, Mercurial, NAnt, NUnit.
| I would recommend TeamCity - free for up to three agents, 20 projects and 20 users, runs a variety of builders (NAnt included) and can parse NUnit results (Hudson can do all this too I believe, however I have no used it, so I can't speak from experience).
Having worked with TFS, TeamCity, Bamboo and CC.NET, I can say that TC was the easiest to get up and running, the simplest to deploy multiple remote agents, get insight into builds, and integrated seamlessly with jabber, email, visual studio, windows task tray etc. Just felt good.
| TeamCity | 2,726,996 | 10 |
Any ideas?
| With IIS 7.5 you can use Application Request Routing to route requests at teamcity.server.domain.com:80 to Tomcat at server.domain.com:81. I would consider this approach superior since the Tomcat Connector seems a bit flaky under WS2008 x64.
Jon Alb has a good writeup on how to configure TeamCity plus IIS on WS2008:
Part1
Part2
Additionally, you need to ensure that your DNS can resolve teamcity.server.domain.com to server.domain.com. My IIS server needed an ipconfig /registerdns to update its DNS entry correctly. Correctly means in this case creating a Domain entry in the domain.com lookup zone for server; a simple A-Record does not suffice. In that domain, you need to create a CNAME record for *, so any subdomain will be redirected to server.domain.com
A big problem I ran into is that IIS 7.5 seems to no longer correctly write the applicationHost.config file, so the port number won't end up being persisted. This will result in a nasty 400.0 Bad Request error because the MAX_FORWARDS limit will be reached (the request is routed in circles).
To fix this, add the following to C:\Windows\System32\inetsrv\config:
<webFarms>
<webFarm name="teamcity" enabled="true" adminUserName="" adminPassword="[enc:AesProvider:2blZ7roifGTktpn8zBBuVQ==:enc]" primaryServer="">
<server address="localhost" enabled="true">
<applicationRequestRouting httpPort="YOURPORTHERE!!!" />
</server>
<applicationRequestRouting>
<loadBalancing algorithm="WeightedRoundRobin" />
<protocol reverseRewriteHostInResponseHeaders="true" />
</applicationRequestRouting>
</webFarm>
</webFarms>
Edit: If you are running other sites and getting a 404, besides following Part 2 you need to create a dummy site to catch the hostname, as Ian Patrick Hughes' answer below states.
| TeamCity | 858,790 | 10 |
I'm trying to use basic HTTP authentication in Python. I am using the Requests library:
import requests
from requests.auth import HTTPBasicAuth

auth = requests.post('http://' + hostname, auth=HTTPBasicAuth(user, password))
request = requests.get('http://' + hostname + '/rest/applications')
Response from the auth variable:
<<class 'requests.cookies.RequestsCookieJar'>[<Cookie JSESSIONID=cb10906c6219c07f887dff5312fb for appdynamics/controller>]>
200
CaseInsensitiveDict({'content-encoding': 'gzip', 'x-powered-by': 'JSP/2.2', 'transfer-encoding': 'chunked', 'set-cookie': 'JSESSIONID=cb10906c6219c07f887dff5312fb; Path=/controller; HttpOnly', 'expires': 'Wed, 05 Nov 2014 19:03:37 GMT', 'server': 'nginx/1.1.19', 'connection': 'keep-alive', 'pragma': 'no-cache', 'cache-control': 'max-age=78000', 'date': 'Tue, 04 Nov 2014 21:23:37 GMT', 'content-type': 'text/html;charset=ISO-8859-1'})
But when I try to get data from a different location, I'm getting an HTTP Status 401 error:
<<class 'requests.cookies.RequestsCookieJar'>[]>
401
CaseInsensitiveDict({'content-length': '1073', 'x-powered-by': 'Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition 3.1.2.2 Java/Oracle Corporation/1.7)', 'expires': 'Thu, 01 Jan 1970 00:00:00 UTC', 'server': 'nginx/1.1.19', 'connection': 'keep-alive', 'pragma': 'No-cache', 'cache-control': 'no-cache', 'date': 'Tue, 04 Nov 2014 21:23:37 GMT', 'content-type': 'text/html', 'www-authenticate': 'Basic realm="controller_realm"'})
As far as I understand, in the second request session parameters are not substituted.
| You need to use a session object and send the authentication with each request. The session will also track cookies for you:
import requests

session = requests.Session()
session.auth = (user, password)
auth = session.post('http://' + hostname)
response = session.get('http://' + hostname + '/rest/applications')
| AppDynamics | 26,745,462 | 143 |
I want to use Serilog in an Azure Function v4 (.net 6) (the logs should be sent to Datadog). For this I have installed the following nuget packages:
<PackageReference Include="Serilog" Version="2.10.0" />
<PackageReference Include="Serilog.Extensions.Logging" Version="3.1.0" />
<PackageReference Include="Serilog.Formatting.Compact" Version="1.1.0" />
<PackageReference Include="Serilog.Sinks.Console" Version="4.0.1" />
<PackageReference Include="Serilog.Sinks.Datadog.Logs" Version="0.3.5" />
Below is the configuration in the Startup.cs class:
public override void Configure(IFunctionsHostBuilder builder)
{
builder.Services.AddHttpClient();
//... adding services etc.
Log.Logger = new LoggerConfiguration()
.MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
.MinimumLevel.Override("Worker", LogEventLevel.Warning)
.MinimumLevel.Override("Host", LogEventLevel.Warning)
.MinimumLevel.Override("System", LogEventLevel.Error)
.MinimumLevel.Override("Function", LogEventLevel.Error)
.MinimumLevel.Override("Azure.Storage.Blobs", LogEventLevel.Error)
.MinimumLevel.Override("Azure.Core", LogEventLevel.Error)
.Enrich.WithProperty("Application", "Comatic.KrediScan.AzureFunctions")
.Enrich.FromLogContext()
.WriteTo.DatadogLogs("XXXXXXXXXXX", configuration: new DatadogConfiguration() { Url = "https://http-intake.logs.datadoghq.eu" }, logLevel: LogEventLevel.Debug)
.WriteTo.Console()
.CreateLogger();
builder.Services.AddSingleton<ILoggerProvider>(sp => new SerilogLoggerProvider(Log.Logger, true));
builder.Services.AddLogging(lb =>
{
//lb.ClearProviders(); //--> if used nothing works...
lb.AddSerilog(Log.Logger, true);
});
Basically logging works, but all log statements are written twice (with a few milliseconds difference, Datadog and Console).
Obviously I am doing something fundamentally wrong with the configuration. I don't use appsettings.json; the configuration of Serilog takes place exclusively in the code. I have scoured the entire internet and read just about every article on Serilog and Azure Functions. On Stackoverflow I also read virtually every question about it and tried all the answers. Unfortunately, so far without success.
SO-Questions for example:
Use Serilog with Azure Log Stream
How do I use Serilog with Azure WebJobs?
Serilog enricher Dependency Injection with Azure Functions
https://github.com/hgmauri/sample-azure-functions/blob/main/src/Sample.AzureFunctions.DotNet31/Startup.cs
Is there any example for setting up Serilog with Azure Functions v4 / .net 6?
Thanks a lot for the help!
Michael Hachen
| Got it! After replacing all ILogger with ILogger<T> and removing the line builder.Services.AddSingleton<ILoggerProvider>(sp => new SerilogLoggerProvider(Log.Logger, true)); everything worked as expected.
Startup.cs
Log.Logger = new LoggerConfiguration()
.MinimumLevel.Override("Microsoft", LogEventLevel.Warning)
.MinimumLevel.Override("Worker", LogEventLevel.Warning)
.MinimumLevel.Override("Host", LogEventLevel.Warning)
.MinimumLevel.Override("System", LogEventLevel.Error)
.MinimumLevel.Override("Function", LogEventLevel.Error)
.MinimumLevel.Override("Azure.Storage.Blobs", LogEventLevel.Error)
.MinimumLevel.Override("Azure.Core", LogEventLevel.Error)
.Enrich.WithProperty("Application", $"xxxxx.AzureFunctions.{builder.GetContext().EnvironmentName}")
.Enrich.FromLogContext()
.Enrich.WithExceptionDetails(new DestructuringOptionsBuilder()
.WithDefaultDestructurers()
.WithDestructurers(new[] { new SqlExceptionDestructurer() }))
.WriteTo.Seq(builder.GetContext().EnvironmentName.Equals("Development", StringComparison.OrdinalIgnoreCase) ? "http://localhost:5341/" : "https://xxxxxx.xx:5341/", LogEventLevel.Verbose)
.WriteTo.Console(theme: SystemConsoleTheme.Literate)
.CreateLogger();
builder.Services.AddLogging(lb =>
{
lb.AddSerilog(Log.Logger, true);
});
| Datadog | 71,034,036 | 19 |
Like this one:
(screenshot of a graph with two Y-axes)
If yes, how do I create one?
From all documentation I've read so far, it doesn't seem to support it. But I don't see anyone confirming that it's not supported anywhere.
| 2016
Confirmed on IRC (#datadog on freenode) that:
Datadog doesn't support multiple Y-axis at this time.
2020: Now it is supported. See James' answer below.
| Datadog | 37,010,163 | 16 |
Does anyone know how to integrate Spring boot metrics with datadog?
Datadog is a cloud-scale monitoring service for IT.
It allows users to easily visualize their data using a lot of charts and graphs.
I have a spring boot application that is using dropwizard metrics to populate a lot of information about all methods I annotated with @Timed.
On the other hand I'm deploying my application in heroku so I can't install a Datadog agent.
I want to know if there is a way to automatically integrate spring boot metric system reporting with datadog.
| I've finally found a Dropwizard module that integrates this library with Datadog: metrics-datadog
I've created a Spring configuration class that creates and initializes this Reporter using properties of my YAML.
Just insert this dependency in your pom:
<!-- Send metrics to Datadog -->
<dependency>
<groupId>org.coursera</groupId>
<artifactId>dropwizard-metrics-datadog</artifactId>
<version>1.1.3</version>
</dependency>
Add this configuration to your YAML:
yourapp:
metrics:
apiKey: <your API key>
host: <your host>
period: 10
enabled: true
and add this configuration class to your project:
/**
* This bean will create and configure a DatadogReporter that will be in charge of sending
* all the metrics collected by Spring Boot actuator system to Datadog.
*
* @see https://www.datadoghq.com/
* @author jfcorugedo
*
*/
@Configuration
@ConfigurationProperties("yourapp.metrics")
public class DatadogReporterConfig {
private static final Logger LOGGER = LoggerFactory.getLogger(DatadogReporterConfig.class);
/** Datadog API key used to authenticate every request to Datadog API */
private String apiKey;
/** Logical name associated to all the events send by this application */
private String host;
/** Time, in seconds, between every call to Datadog API. The lower this value the more information will be send to Datadog */
private long period;
/** This flag enables or disables the datadog reporter */
private boolean enabled = false;
@Bean
@Autowired
public DatadogReporter datadogReporter(MetricRegistry registry) {
DatadogReporter reporter = null;
if(enabled) {
reporter = enableDatadogMetrics(registry);
} else {
if(LOGGER.isWarnEnabled()) {
LOGGER.info("Datadog reporter is disabled. To turn on this feature just set 'rJavaServer.metrics.enabled:true' in your config file (property or YAML)");
}
}
return reporter;
}
private DatadogReporter enableDatadogMetrics(MetricRegistry registry) {
if(LOGGER.isInfoEnabled()) {
LOGGER.info("Initializing Datadog reporter using [ host: {}, period(seconds):{}, api-key:{} ]", getHost(), getPeriod(), getApiKey());
}
EnumSet<Expansion> expansions = DatadogReporter.Expansion.ALL;
HttpTransport httpTransport = new HttpTransport
.Builder()
.withApiKey(getApiKey())
.build();
DatadogReporter reporter = DatadogReporter.forRegistry(registry)
.withHost(getHost())
.withTransport(httpTransport)
.withExpansions(expansions)
.build();
reporter.start(getPeriod(), TimeUnit.SECONDS);
if(LOGGER.isInfoEnabled()) {
LOGGER.info("Datadog reporter successfully initialized");
}
return reporter;
}
/**
* @return Datadog API key used to authenticate every request to Datadog API
*/
public String getApiKey() {
return apiKey;
}
/**
* @param apiKey Datadog API key used to authenticate every request to Datadog API
*/
public void setApiKey(String apiKey) {
this.apiKey = apiKey;
}
/**
* @return Logical name associated to all the events send by this application
*/
public String getHost() {
return host;
}
/**
* @param host Logical name associated to all the events send by this application
*/
public void setHost(String host) {
this.host = host;
}
/**
* @return Time, in seconds, between every call to Datadog API. The lower this value the more information will be send to Datadog
*/
public long getPeriod() {
return period;
}
/**
* @param period Time, in seconds, between every call to Datadog API. The lower this value the more information will be send to Datadog
*/
public void setPeriod(long period) {
this.period = period;
}
/**
* @return true if DatadogReporter is enabled in this application
*/
public boolean isEnabled() {
return enabled;
}
/**
* This flag enables or disables the datadog reporter.
* This flag is only read during initialization, subsequent changes on this value will no take effect
* @param enabled
*/
public void setEnabled(boolean enabled) {
this.enabled = enabled;
}
}
| Datadog | 34,398,692 | 15 |
Splunk has transaction command which can produce duration between logs grouped by id:
2020-01-01 12:12 event=START id=1
2020-01-01 12:13 event=STOP id=1
as it is described on
Query for calculating duration between two different logs in Splunk
Splunk - duration between two different messages by guid
transaction time between events
How to calculate duration between events in Datadog?
| You can use group queries to create transactions that will automatically calculate the duration. This screenshot is an example of logs grouped into transactions by CartId.
| Datadog | 62,782,910 | 14 |
I have a timeseries graph in a time board that displays data for one metric that has multiple tags called "page". The graph has one line for each tag and I'm running functions on the values, so the query for my data is "ewma_5(avg:client.load_time{env:prod}) by {page}". This query means the tooltip values when I hover on the graph are things like "ewma_5(avg:client.load_time{env:prod})".
I want to know if there is any way to use the alias function with the tag value in it, so something like "alias": "{page}"?
| I asked DataDog support, and apparently as of January 2020 this is not possible, but is a feature request in their backlog. I know this is not a great answer to the question but if I hear that this changes, I will update my answer.
| Datadog | 57,918,254 | 12 |
I have a metric which has a tag with lots of different values (the value is a file name). How can I create a query that determines how many different values of that tag exist on a metric?
For example if 4 metrics are received during a time frame, with the following tags "file_name:dir/file1", "file_name:dir/file2", "file_name:dir/file3", "file_name:dir/file1"
I want the query to return the value 3, since of all the metrics received during this timeframe there were 3 distinct values for the file_name tag.
| Either of the count_not_null() or count_nonzero() functions should get you where you want.
If you graph your metric, grouped by your tag, and then apply one of those functions, it should return the count of unique tag values under that tag key. So in your case:
count_not_null(sum:your.metric.name{*} by {file_name})
And it works with multiple group-by tags too, so if you had separate tags for file_name and directory then you could use this same approach to graph the count of unique combinations of these tag values, or the count of unique combinations of directory+file_name:
count_not_null(your.metric.name{*} by {file_name,directory})
| Datadog | 61,108,009 | 11 |
Using JQ I would like to take a complex JSON object that includes JSON embedded as strings and then turn it all into a valid string I can easily embed in other JSON objects.
For example, lets say I have this json object:
{
"region": "CA",
"waf_rule_tags": "{\"RULEID:942100\":[\"application-multi\",\"language-multi\",\"platform-multi\",\"attack-sqli\",\"OWASP_CRS/WEB_ATTACK/SQL_INJECTION\",\"WASCTC/WASC-19\",\"OWASP_TOP_10/A1\",\"OWASP_AppSensor/CIE1\",\"PCI/6.5.2\"]}"
}
I need to turn this all into the following string:
"{\"region\": \"CA\",\"waf_rule_tags\": \"{\\\"RULEID:942100\\\":[\\\"application-multi\\\",\\\"language-multi\\\",\\\"platform-multi\\\",\\\"attack-sqli\\\",\\\"OWASP_CRS/WEB_ATTACK/SQL_INJECTION\\\",\\\"WASCTC/WASC-19\\\",\\\"OWASP_TOP_10/A1\\\",\\\"OWASP_AppSensor/CIE1\\\",\\\"PCI/6.5.2\\\"]}\"}"
That way I can take this string and insert it exactly under the text field of another JSON object to create the following.
{
"title": "12345-accesslogs",
"text": "{\"region\": \"CA\",\"waf_rule_tags\": \"{\\\"RULEID:942100\\\":[\\\"application-multi\\\",\\\"language-multi\\\",\\\"platform-multi\\\",\\\"attack-sqli\\\",\\\"OWASP_CRS/WEB_ATTACK/SQL_INJECTION\\\",\\\"WASCTC/WASC-19\\\",\\\"OWASP_TOP_10/A1\\\",\\\"OWASP_AppSensor/CIE1\\\",\\\"PCI/6.5.2\\\"]}\"}",
"priority": "normal",
"tags": ["environment:test"],
"alert_type": "info"
}
| In brief, tostring is your friend.
Assuming that your original JSON object is in a file named object.json, and that the template is in template.json, you could write:
jq --argfile object object.json '.text = ($object | tostring)' template.json
Needless to say, there are numerous variations on this theme, e.g.
jq -n 'input | input + {text: tostring}' \
object.json template.json
or more compactly if slightly more obscurely:
jq 'input + {text: tostring}' object.json template.json
| Datadog | 61,492,210 | 11 |
DataDog is so useless in its querying and its intuitiveness ... I'm looking for a custom exception in the stack trace. I found individual log entries in the last 18 hours that contain my exception class name, but attempting to write a log query that will find me all the occurrences is returning nothing. E.g.:
environment:prod @thrown.extendedStackTrace:UserDoesNotExistException
I'd like to include more words in the query, but even reducing down to a single word fails to find anything. I've looked at their documentation, which is zero help.
| The following worked for me (where my stacktrace is in a stack_trace attribute) after reading the doco and trial and error:
@stack_trace:*the?quick?brown?fox*
i.e. to search on a phrase (multiple words), don't use quotes (so leading/trailing wildcards work) and replace spaces with ?
| Datadog | 66,639,586 | 10 |
I'm trying to learn how to use docker and am having some troubles. I'm using a docker-compose.yaml file for running a python script that connects to a mysql container and I'm trying to use ddtrace to send traces to datadog. I'm using the following image from this github page from datadog
ddagent:
image: datadog/docker-dd-agent
environment:
- DD_BIND_HOST=0.0.0.0
- DD_API_KEY=invalid_key_but_this_is_fine
ports:
- "127.0.0.1:8126:8126"
And my docker-compose.yaml looks like
version: "3"
services:
ddtrace-test:
build: .
volumes:
- ".:/app"
links:
- ddagent
ddagent:
image: datadog/docker-dd-agent
environment:
- DD_BIND_HOST=0.0.0.0
- DD_API_KEY=<my key>
ports:
- "127.0.0.1:8126:8126"
So then I'm running the command docker-compose run --rm ddtrace-test python test.py, where test.py looks like
from ddtrace import tracer
@tracer.wrap('test', 'test')
def foo():
print('running foo')
foo()
And when I run the command, I'm returned with
Starting service---reprocess_ddagent_1 ... done
foo
cannot send spans to localhost:8126: [Errno 99] Cannot assign requested address
I'm not sure what this error means. When I use my key and run locally instead of in a Docker image, it works fine. What could be going wrong here?
| Containers are about isolation, so inside a container "localhost" means the container itself; that's why ddtrace-test cannot find ddagent. You have two ways to fix that:
Put network_mode: host in ddtrace-test so it will bind to the host's network interface, skipping network isolation
Change ddtrace-test to use the "ddagent" host instead of localhost, since in docker-compose services can be reached using their names (see the sketch after this list)
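For the second option, a minimal sketch of what test.py could look like - the tracer.configure(hostname=..., port=...) call is from the ddtrace 0.x-era API, so treat it as an assumption and check the docs for your ddtrace version:
from ddtrace import tracer

# Point the tracer at the "ddagent" service from docker-compose
# instead of the default localhost:8126.
tracer.configure(hostname="ddagent", port=8126)

@tracer.wrap('test', 'test')
def foo():
    print('running foo')

foo()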
| Datadog | 52,390,678 | 10 |
The use case is this:
I've several java applications running which all have to interact with different (each one has a specific target) elasticsearch indices. For instance an application A uses the indices A,B,C of ElasticSearch to query and update. Application B uses indices A,C,D(say).
Some common interface is required which can manage all these data streams. Currently I'm evaluating Kafka and fluentd for this purpose.
Can someone explain which will be better suited for this situation. I've looked at features of both Kafka and Fluentd and I don't really understand the difference it would make here.
Thanks a lot.
| kafka provides publish/subscribe messaging as a distributed commit log. Usually you install kafka on each host where you need to produce some data to be forwarded somewhere else and all those hosts will together form a cluster. The good thing here is that if for some reason network connectivity becomes unstable or goes down, your application can continue to produce data/logs and they won't be lost. Whereas if your application directly sends logs to some remote centralized logging host, you might lose some logs during the time the network goes down.
fluentd is a centralized log collector which is commonly installed on one host (or more if you need horizontal scaling). It connects to remote data sources, applies filtering and sends unified log data to remote data sinks.
From the fluentd docs, you can see that fluentd can consume data from kafka and produce data towards kafka as well. This alone should hint that fluentd and kafka are on different layers since the former uses the latter.
It would be more logical to compare fluentd and logstash actually. As far as fluentd is concerned, kafka is just another data source and/or data sink, but they are different beasts altogether.
If you want the best of both worlds, use kafka as input/output data pipes from/to your apps and fluentd (or logstash) as your centralized logging system reading from those kafka topics.
If you want to read more on the topic, you can read about how fluentd and kafka complement each other very well; they are not competing against each other.
| Fluentd | 35,144,835 | 34 |
I am trying to find a way in the Fluent Bit config to tell/enforce ES to store plain JSON formatted logs (the log bit below comes from docker stdout/stderr) in a structured way - please see the image at the bottom for a better explanation. For example, apart from (or along with) storing the log as a plain JSON entry under the log field, I would like to store each property individually as shown in red.
The documentation for Filters and Parsers is really poor and not clear. On top of that the forward input doesn't have a "parser" option. I tried the json/docker/regex parsers but no luck. My regex is here if I have to use regex. Currently using ES (7.1), Fluent Bit (1.1.3) and Kibana (7.1) - not Kubernetes.
If anyone can direct me to an example or provide one, it would be much appreciated.
Thanks
{
"_index": "hello",
"_type": "logs",
"_id": "T631e2sBChSKEuJw-HO4",
"_version": 1,
"_score": null,
"_source": {
"@timestamp": "2019-06-21T21:34:02.000Z",
"tag": "php",
"container_id": "53154cf4d4e8d7ecf31bdb6bc4a25fdf2f37156edc6b859ba0ddfa9c0ab1715b",
"container_name": "/hello_php_1",
"source": "stderr",
"log": "{\"time_local\":\"2019-06-21T21:34:02+0000\",\"client_ip\":\"-\",\"remote_addr\":\"192.168.192.3\",\"remote_user\":\"\",\"request\":\"GET / HTTP/1.1\",\"status\":\"200\",\"body_bytes_sent\":\"0\",\"request_time\":\"0.001\",\"http_referrer\":\"-\",\"http_user_agent\":\"curl/7.38.0\",\"request_id\":\"91835d61520d289952b7e9b8f658e64f\"}"
},
"fields": {
"@timestamp": [
"2019-06-21T21:34:02.000Z"
]
},
"sort": [
1561152842000
]
}
Thanks
conf
[SERVICE]
Flush 5
Daemon Off
Log_Level debug
Parsers_File parsers.conf
[INPUT]
Name forward
Listen 0.0.0.0
Port 24224
[OUTPUT]
Name es
Match hello_*
Host elasticsearch
Port 9200
Index hello
Type logs
Include_Tag_Key On
Tag_Key tag
| Solution is as follows.
[SERVICE]
Flush 5
Daemon Off
Log_Level debug
Parsers_File parsers.conf
[INPUT]
Name forward
storage.type filesystem
Listen my_fluent_bit_service
Port 24224
[FILTER]
Name parser
Parser docker
Match hello_*
Key_Name log
Reserve_Data On
Preserve_Key On
[OUTPUT]
Name es
Host my_elasticsearch_service
Port 9200
Match hello_*
Index hello
Type logs
Include_Tag_Key On
Tag_Key tag
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
Time_Keep On
# Command | Decoder | Field | Optional Action
# =============|==================|=================
Decode_Field_As escaped_utf8 log do_next
Decode_Field_As json log
| Fluentd | 56,841,754 | 17 |
After reading the following post from 12factor, I have come up with a question and I'd like to check how you guys handle this.
Basically, an app should write directly to stdout/stderr. Is there any way to redirect these streams directly to fluentd (not bound to rsyslog/syslog)? As I become more aware of fluentd, I believe it would be a great tool for log aggregation from multiple apps/platforms.
The main reasoning for this is, if the app is cross-platform, rsyslog/syslog may not be available, and as I understand, using logging frameworks (which need the required configuration for them to work) would be a violation of the 12factor.
Thanks!
| You need to configure your process manager to use fluentd.
"Twelve-factor app processes should [...] rely on the operating system’s process manager (such as Upstart, a distributed process manager on a cloud platform, or a tool like Foreman in development) to manage output streams [...]."
Basically, the idea is that log redirection is a concern of the process manager. Upstart, for example, usually relies on logger, which has an option (-u) to write to a Unix Domain Socket. In turn, you can configure fluentd to use that same socket as an input stream.
Fluentd supports a lot of input streams (they call them data sources), which should provide a solution for just about any environment & process manager you might be using (which we need to know in order to provide a more complete solution).
| Fluentd | 28,730,462 | 15 |
I can't get Loki to connect to AWS S3 using docker-compose. Logs are visible in Grafana but the S3 bucket remains empty.
The s3 bucket is public and I have an IAM role attached to allow s3:FullAccess.
I updated loki to v2.0.0 and changed the period to 24h but it made no difference. There are no errors in the loki logs.
Here are the selected lines from docker logs (loki):
msg="Starting Loki" version="(version=master-4e661cd, branch=master, revision=4e661cde)"
caller=server.go:225 http=[::]:3100 grpc=[::]:9095 msg="server listening on addresses"
caller=worker.go:65 msg="no address specified, not starting worker"
msg="cleaning up mapped rules directory" path=/loki/tmprules
msg=initialising module=memberlist-kv
msg=initialising module=store
msg=initialising module=server
msg=initialising module=ring
msg="value is nil" key=collectors/ring index=1
msg=initialising module=ingester
msg="not loading tokens from file, tokens file path is empty"
msg="instance not found in ring, adding with no tokens" ring=ingester
msg="auto-joining cluster after timeout" ring=ingester
msg=initialising module=table-manager
msg=initialising module=distributor
msg=initialising module=ingester-querier
msg=initialising module=ruler
msg="ruler up and running"
msg="Loki started"
msg="synching tables" expected_tables=132
Here is my loki.config:
auth_enabled: false
server:
http_listen_port: 3100
distributor:
ring:
kvstore:
store: memberlist
ingester:
lifecycler:
ring:
kvstore:
store: memberlist
replication_factor: 1
final_sleep: 0s
chunk_idle_period: 5m
chunk_retain_period: 30s
schema_config:
configs:
- from: 2020-10-27
store: boltdb-shipper
object_store: s3
schema: v11
index:
prefix: index_
period: 24h
storage_config:
boltdb_shipper:
active_index_directory: /loki/index
cache_location: /loki/index_cache
resync_interval: 5s
shared_store: s3
aws:
s3: s3://AKIARE3@us-east-1/mydomain.com.docker.loki.logs
s3forcepathstyle: true
limits_config:
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h
Here is docker-compose.yaml
version: "3.8"
networks:
traefik:
external: true
volumes:
data:
services:
fluentd:
image: grafana/fluent-plugin-loki:master
command:
- "fluentd"
- "-v"
- "-p"
- "/fluentd/plugins"
environment:
LOKI_URL: http://loki:3100
LOKI_USERNAME:
LOKI_PASSWORD:
container_name: "fluentd"
restart: always
ports:
- '24224:24224'
networks:
- traefik
volumes:
- type: bind
source: ./config/fluent.conf
target: /fluentd/etc/fluent.conf
logging:
options:
tag: docker.monitoring
loki:
image: grafana/loki:master
container_name: "loki"
restart: always
networks:
- traefik
volumes:
- type: volume
source: data
target: /loki
ports:
- 3100
volumes:
- type: bind
source: ./config/s3.loki.conf
target: /loki/etc/loki.conf
depends_on:
- fluentd
| I finally worked this out. It requires a compactor, but Loki gives no warning about it. Best practice is to create an AWS S3 bucket without any public access. Next, create an IAM user with programmatic access only. Create an access policy that gives full access only to the bucket you created, and attach the policy to the user's permissions. You do not need to attach a policy to the bucket itself. If there is a "/" in your key, make sure you escape it as %2F in the URL, otherwise you will get an auth error. Note that this config is for Loki v2.0.0, which was released yesterday.
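For reference, a minimal bucket-scoped policy could look something like the sketch below; the bucket name is a placeholder and this is illustrative rather than the exact policy I attached:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::mydomain.com.docker.loki.logs",
        "arn:aws:s3:::mydomain.com.docker.loki.logs/*"
      ]
    }
  ]
}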
Here are my complete working docker-compose and loki config files. I put them on an external network to enable prometheus monitoring.
Here is my docker-compose.yaml:
version: "3.8"
networks:
appnet:
external: true
volumes:
loki_data:
services:
fluentd:
container_name: "fluentd"
image: grafana/fluent-plugin-loki:master
command:
- "fluentd"
- "-v"
- "-p"
- "/fluentd/plugins"
environment:
LOKI_URL: http://loki:3100
LOKI_USERNAME:
LOKI_PASSWORD:
restart: always
ports:
- '24224:24224'
networks:
- appnet
volumes:
- type: bind
source: ./config/fluent.conf
target: /fluentd/etc/fluent.conf
loki:
container_name: "loki"
image: grafana/loki:2.0.0
restart: always
networks:
- appnet
ports:
- 3100
volumes:
- type: volume
source: loki_data
target: /data
- type: bind
source: ./config/s3-loki-bolt-conf.yml
target: /etc/loki/local-config.yaml
command: -config.file=/etc/loki/local-config.yaml
depends_on:
- fluentd
Here is my loki config in prometheus/config/s3-loki-bolt-conf.yml. You can name this anything you want but keep the target file name as above as it is the loki default config file.
auth_enabled: false
ingester:
chunk_idle_period: 3m
chunk_block_size: 262144
chunk_retain_period: 1m
max_transfer_retries: 0
lifecycler:
ring:
kvstore:
store: inmemory
replication_factor: 1
limits_config:
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h
compactor:
working_directory: /loki/boltdb-shipper-compactor
shared_store: aws
schema_config:
configs:
- from: 2020-07-01
store: boltdb-shipper
object_store: aws
schema: v11
index:
prefix: loki_index_
period: 24h
server:
http_listen_port: 3100
storage_config:
aws:
s3: s3://ACCESS_KEY:SECRET_ACCESS_KEY@us-west-1/mydomain.com.docker.loki.logs
boltdb_shipper:
active_index_directory: /loki/index
shared_store: s3
cache_location: /loki/boltdb-cache
chunk_store_config:
max_look_back_period: 0s
table_manager:
retention_deletes_enabled: false
retention_period: 0s
| Fluentd | 64,432,617 | 14 |
I was wondering how to use env vars in the Fluentd config, I tried:
<match **>
type elasticsearch
logstash_format true
logstash_prefix $ENV_VAR
host ***
port ***
include_tag_key true
tag_key _key
</match>
but it doesn't work, any idea?
| EDIT:
Here is a much better solution:
If you pass the "--use-v1-config" option to Fluentd, this is possible with the "#{ENV['env_var_name']}" syntax, like this:
<match foobar.**> # ENV["FOO"] is foobar
type elasticsearch
logstash_prefix "#{ENV['FOO']}"
logstash_format true
include_tag_key true
tag_key _key
host ****
port ****
</match>
The old, kludgey answer is below.
Install fluent-plugin-record-reformer and fluent-plugin-forest
Update your config as follows.
<match hello.world>
type record_reformer
tag ${ENV["FOO"]}.${tag_prefix[-1]} # adding the env variable as a tag prefix
</match>
<match foobar.**> # ENV["FOO"] is foobar
type forest
subtype elasticsearch
<template>
logstash_prefix ${tag_parts[0]}
logstash_format true
include_tag_key true
tag_key _key
host ****
port ****
</template>
</match>
In particular, do NOT use <match **> there. That would catch all events and will lead to behaviors that are hard to debug.
| Fluentd | 27,233,761 | 13 |
I have source:
<source>
@type tail
tag service
path /tmp/l.log
format json
read_from_head true
</source>
I would like to make several filters on it and match it to several outputs:
<source>
@type tail
tag service.pi2
path /tmp/out.log
format json
read_from_head true
</source>
<source>
@type tail
tag service.data
path /tmp/out.log
format json
read_from_head true
</source>
<filter service.data>
# some filtering
</filter>
<filter service.pi2>
# some filtering
</filter>
<match service.data>
@type file
path /tmp/out/data
</match>
<match service.pi2>
@type file
path /tmp/out/pi
</match>
So far, to make everything work I have had to duplicate the source with different tags. Can I make it work from one source definition?
| You can try using the copy and relabel plugins to achieve this. An example configuration looks like this.
# One source
<source>
@type tail
tag service
path /tmp/l.log
format json
read_from_head true
</source>
# Now copy the source events to 2 labels
<match service>
@type copy
<store>
@type relabel
@label @data
</store>
<store>
@type relabel
@label @pi2
</store>
</match>
# @data label: perform the desired filtering and write to file
<label @data>
<filter service>
...
</filter>
<match service>
@type file
path /tmp/out/data
</match>
</label>
# @pi2 label: perform the desired filtering and write to file
<label @pi2>
<filter service>
...
</filter>
<match service>
@type file
path /tmp/out/pi
</match>
</label>
This Routing examples article has few more ways to do it by re-writing tag etc., but for me I like working with labels and above looks simple.
I have tested above config and it works fine. Let me know your thoughts :).
| Fluentd | 53,960,655 | 11 |
I'm a bit confused about how to set up error reporting in Kubernetes so errors are visible in Google Cloud Console / Stackdriver "Error Reporting".
According to the documentation
https://cloud.google.com/error-reporting/docs/setting-up-on-compute-engine
we need to enable fluentd's "forward input plugin" and then send exception data from our apps. I think this approach would have worked if we had set up fluentd ourselves, but it's already pre-installed on every node, in a pod that just runs the gcr.io/google_containers/fluentd-gcp Docker image.
How do we enable the forward input on those pods and make sure that the port is available to every pod on the nodes? We also need to make sure this config is used by default when we add more nodes to our cluster.
Any help would be appreciated; maybe I'm looking at all this from the wrong angle?
| The basic idea is to start a separate pod that receives structured logs over TCP and forwards it to Cloud Logging, similar to a locally-running fluentd agent. See below for the steps I used.
(Unfortunately, the logging support that is built into Docker and Kubernetes cannot be used - it just forwards individual lines of text from stdout/stderr as separate log entries which prevents Error Reporting from seeing complete stack traces.)
Create a docker image for a fluentd forwarder using a Dockerfile as follows:
FROM gcr.io/google_containers/fluentd-gcp:1.18
COPY fluentd-forwarder.conf /etc/google-fluentd/google-fluentd.conf
Where fluentd-forwarder.conf contains the following:
<source>
type forward
port 24224
</source>
<match **>
type google_cloud
buffer_chunk_limit 2M
buffer_queue_limit 24
flush_interval 5s
max_retry_wait 30
disable_retry_limit
</match>
Then build and push the image:
$ docker build -t gcr.io/###your project id###/fluentd-forwarder:v1 .
$ gcloud docker push gcr.io/###your project id###/fluentd-forwarder:v1
You need a replication controller (fluentd-forwarder-controller.yaml):
apiVersion: v1
kind: ReplicationController
metadata:
name: fluentd-forwarder
spec:
replicas: 1
template:
metadata:
name: fluentd-forwarder
labels:
app: fluentd-forwarder
spec:
containers:
- name: fluentd-forwarder
image: gcr.io/###your project id###/fluentd-forwarder:v1
env:
- name: FLUENTD_ARGS
value: -qq
ports:
- containerPort: 24224
You also need a service (fluentd-forwarder-service.yaml):
apiVersion: v1
kind: Service
metadata:
name: fluentd-forwarder
spec:
selector:
app: fluentd-forwarder
ports:
- protocol: TCP
port: 24224
Then create the replication controller and service:
$ kubectl create -f fluentd-forwarder-controller.yaml
$ kubectl create -f fluentd-forwarder-service.yaml
Finally, in your application, instead of using 'localhost' and 24224 to connect to the fluentd agent as described on https://cloud.google.com/error-reporting/docs/setting-up-on-compute-engine, use the values of the environment variables FLUENTD_FORWARDER_SERVICE_HOST and FLUENTD_FORWARDER_SERVICE_PORT.
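As a minimal sketch of the application side (assuming the fluent-logger Python package and the Error Reporting payload format from the Stackdriver docs; the tag and service names are hypothetical):
import os
import traceback

from fluent import sender

logger = sender.FluentSender(
    'myapp',  # hypothetical tag prefix
    host=os.environ['FLUENTD_FORWARDER_SERVICE_HOST'],
    port=int(os.environ['FLUENTD_FORWARDER_SERVICE_PORT']),
)

def report_exception(exc):
    # Error Reporting parses the stack trace out of the 'message' field
    logger.emit('errors', {
        'message': ''.join(traceback.format_exception(type(exc), exc, exc.__traceback__)),
        'serviceContext': {'service': 'myapp', 'version': '1.0'},
    })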
| Fluentd | 36,379,572 | 10 |
I am trying to send data to graphite carbon-cache process on port 2003 using
Ubuntu terminal:
echo "test.average 4 `date +%s`" | nc -q0 127.0.0.1 2003
Node.js:
var socket = net.createConnection(2003, "127.0.0.1", function() {
socket.write("test.average "+assigned_tot+"\n");
socket.end();
});
It works fine when I send data using the terminal command on my Ubuntu machine. However, I am not sure how to send the timestamp in Unix epoch format from Node.js.
Graphite understands metrics in this format: metric_path value timestamp
Thanks!
| The native JavaScript Date system works in milliseconds as opposed to seconds, but otherwise, it is the same "epoch time" as in UNIX.
You can round down the fractions of a second and get the UNIX epoch by doing:
Math.floor(+new Date() / 1000)
Update: As Guillermo points out, an alternate syntax may be more readable:
Math.floor(new Date().getTime() / 1000)
The + in the first example is a JavaScript quirk that forces evaluation as a number, which has the same effect of converting to milliseconds. The second version does this explicitly.
| Graphite | 25,250,551 | 103 |
I am trying to run statsd/graphite which uses django 1.6.
While accessing the graphite URL, I get a django module error:
File "/opt/graphite/webapp/graphite/urls.py", line 15, in
from django.conf.urls.defaults import *
ImportError: No module named defaults
However, I cannot find the defaults package inside /Library/Python/2.7/site-packages/django/conf/urls/.
Please help me fix this issue.
| django.conf.urls.defaults has been removed in Django 1.6. If the problem was in your own code, you would fix it by changing the import to
from django.conf.urls import patterns, url, include
However, in your case the problem is in a third party app, graphite. The issue has been fixed in graphite's master branch and version 0.9.14+.
In Django 1.8+ you can remove patterns from the import, and use a list of url()s instead.
from django.conf.urls import url, include
| Graphite | 19,962,736 | 99 |
I just installed graphite/statsd for production use. I'm really happy with it, but one of my co-workers asked me if there was a way to make it look prettier. Honestly, I can't say that I haven't wondered the same.
Are there alternatives to the Graphite UI that do a better job rendering data, perhaps using one of the awesome frontend graphing libraries and http push?
| Try Grafana
It has a very nice UI and advanced dashboard and graph editing features. Very simple to install.
| Graphite | 10,527,401 | 91 |
I'm playing with grafana and I want to create a panel where I compare data from one app server against the average of all the others except that one. Something like:
apps.machine1.someMetric
averageSeries(apps.*.not(machine1).someMetric)
Can that be done? How?
| Sounds like you want to filter a seriesList; you can do that inclusively using the 'grep' function or exclusively using the 'exclude' function
exclude(apps.machine*.someMetric,"machine1")
and pass that into averageSeries
averageSeries(exclude(apps.machine*.someMetric,"machine1"))
You can read more about those functions here:
http://graphite.readthedocs.io/en/latest/functions.html#graphite.render.functions.exclude
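The inclusive variant with grep is analogous; the regex below is only illustrative:
averageSeries(grep(apps.machine*.someMetric, "machine[2-9]"))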
| Graphite | 34,214,149 | 24 |
What will I need to use Etsy's StatsD in a Windows environment? My intention is to create a .NET client to use StatsD.
| I have statsd+graphite running in my Windows environment using the C# client NStatsD.
Here are my notes for getting the Linux VM setup:
Note: I know enough Linux to be dangerous but am otherwise a noob and could be doing something unwittingly horrible.
Install Ubuntu Server 12.04. I used VirtualBox for dev and then later EC2 for prod.
Download graphite-fabric to your home folder. This is a script that will download, compile and install graphite and statsd. It expects a clean box and uses nginx for the web server.
sudo apt-get install git
git clone git://github.com/gingerlime/graphite-fabric.git
cd graphite-fabric/
Install prereq's for fabric
sudo apt-get install python-setuptools
The next steps are a download, compile and install which can take some time. It is worthwhile setting a keep alive on any putty ssh session before continuing.
Now install as per gingerlime's instructions in the README.md - including the requirements section.
Install statsd as per gingerlime's instructions.
Reboot
Execute netstat -nulp and observe 8125 is in use to confirm statsd is listening.
Check that carbon is running: tail /opt/graphite/storage/log/carbon-cache/carbon-cache-a/listener.log. If it isn't, try sudo /etc/init.d/carbon start
Now you have your server running, try throwing some counters at it with the NStatsD client.
Timezone fix:
This will fix graphite to graph times in your local zone
cd /opt/graphite/webapp/graphite
sudo cp local_settings.py.example local_settings.py
sudo chown www-data:www-data local_settings.py (check with ls -l that permissions look right)
sudo pico local_settings.py and set TIME_ZONE to something like Australia/Sydney. Discover what timezones you can use in /usr/share/zoneinfo/
Save and restart the box (not sure how to make it pick up the change without restart)
EC2 Notes
root is disabled on EC2. Fabric prompts for a root password which you don't have. Use the -i keyfile argument with fab to give it your ssh keyfile instead.
VirtualBox Notes
VBoxVMService was handy to automatically run the VM as a service in my Windows dev environment.
| Graphite | 5,436,606 | 22 |
We have a metric that we increment every time a user performs a certain action on our website, but the graphs don't seem to be accurate.
So going off this hunch, we inspected carbon's updates.log and discovered that the action had happened over 4 thousand times today (using grep and wc), but according to the Integral result of the graph it returned only 220ish.
What could be the cause of this? Data is being reported to statsd using the statsd php library, and calling statsd::increment('metric'); and as stated above, the log confirms that 4,000+ updates to this key happened today.
We are using:
graphite 0.9.6 with statsD (etsy)
| After some research through the documentation, and some conversations with others, I've found the problem - and the solution.
The way the whisper file format is designed, it expects you (or your application) to publish updates no faster than the minimum interval in your storage-schemas.conf file. This file is used to configure how much data retention you have at different time interval resolutions.
My storage-schemas.conf file was set with a minimum retention time of 1 minute. The default StatsD daemon (from etsy) is designed to update to carbon (the graphite daemon) every 10 seconds. The reason this is a problem is: over a 60 second period StatsD reports 6 times, each write overwrites the last one (in that 60 second interval, because you're updating faster than once per minute). This produces really weird results on your graph because the last 10 seconds in a minute could be completely dead and report a 0 for the activity during that period, which results in completely nuking all of the data you had written for that minute.
To fix this, I had to re-configure my storage-schemas.conf file to store data at a maximum resolution of 10 seconds, so every update from StatsD would be saved in the whisper database without being overwritten.
Etsy published the storage-schemas.conf configuration that they were using for their installation of carbon, which looks like this:
[stats]
priority = 110
pattern = ^stats\..*
retentions = 10:2160,60:10080,600:262974
This has a 10 second minimum retention time, and stores 6 hours worth of them. However, due to my next problem, I extended the retention periods significantly.
As I let this data collect for a few days, I noticed that it still looked off (and was under reporting). This was due to 2 problems.
1. StatsD (older versions) only reported an average number of events per second for each 10 second reporting period. This means, if you incremented a key 100 times in 1 second and 0 times for the next 9 seconds, at the end of the 10th second statsD would report 10 to graphite, instead of 100 (100/10 = 10). This failed to report the total number of events for a 10 second period (obviously). Newer versions of statsD fix this problem, as they introduced the stats_counts bucket, which logs the total # of events per metric for each 10 second period (so instead of reporting 10 in the previous example, it reports 100). After I upgraded StatsD, I noticed that the last 6 hours of data looked great, but as I looked beyond the last 6 hours things looked weird, and the next reason is why:
2. As graphite stores data, it moves data from high precision retention to lower precision retention. This means, using the etsy storage-schemas.conf example, after 6 hours of 10 second precision, data was moved to 60 second (1 minute) precision. In order to move 6 data points from 10s to 60s precision, graphite does an average of the 6 data points. So it'd take the total value of the oldest 6 data points, and divide it by 6. This gives an average # of events per 10 seconds for that 60 second period (and not the total # of events, which is what we care about specifically). This is just how graphite is designed, and for some cases it might be useful, but in our case, it's not what we wanted. To "fix" this problem, I increased our 10 second precision retention time to 60 days. Beyond 60 days, I store the minutely and 10-minutely precisions, but they're essentially there for no reason, as that data isn't as useful to us.
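One more note, hedged because it depends on the Graphite/StatsD versions involved: more recent Graphite releases also let you control this roll-up behaviour directly in storage-aggregation.conf, so counter-style metrics are summed rather than averaged when they move to a coarser archive. A sketch (the patterns are illustrative):
[stats_counts]
pattern = ^stats_counts\..*
xFilesFactor = 0
aggregationMethod = sum

[default]
pattern = .*
xFilesFactor = 0.3
aggregationMethod = average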
I hope this helps someone, I know it annoyed me for a few days - and I know there isn't a huge community of people that are using this stack of software for this purpose, so it took a bit of research to really figure out what was going on and how to get a result that I wanted.
| Graphite | 7,099,197 | 21 |
I'm using graphite to collect data, and I'd like to retrieve the total count of certain events over a period of time. Say, number of logins per week.
However, I just need the total number, and don't need to see how it evolves over time.
When I use something like from=-1w&target=summarize(stats.events.login.success,"1w")&format=json then I still get two datapoints, and not one.
Is there a way to get a single datapoint from the summarize function? or use a different function to return a single datapoint value?
| The problem here is that summarize doesn't align to the from field by default.
summarize(seriesList, intervalString, func='sum', alignToFrom=False)
If you do
from=-1w&target=summarize(stats.events.login.success,"1w","sum",true)&format=json
you should get just one datapoint. What it's doing right now is aligning your buckets to dates that don't fit within the week range starting from your from parameter, so you end up with 2 buckets. From the graphite docs on summarize:
By default, buckets are calculated by rounding to the nearest interval.
This works well for intervals smaller than a day. For example, 22:32
will end up in the bucket 22:00-23:00 when the interval=1hour.
Passing alignToFrom=true will instead create buckets starting at the
from time. In this case, the bucket for 22:32 depends on the from
time. If from=6:30 then the 1hour bucket for 22:32 is 22:30-23:30.
| Graphite | 13,589,350 | 20 |
I have a counter that measures the number of items sold every 10 minutes.
I currently use this to track the cumulative number of items:
alias(integral(app.items_sold), 'Today')
And it looks like this:
Now, what I want to do to show how well we were are doing TODAY vs best, avg (or may median) worst day we've had for the past say 90 days.
I tried something like this:
alias(integral(maxSeries(timeStack(app.items_sold, '1d', 0, 90))),'Max')
alias(integral(averageSeries(timeStack(app.items_sold, '1d', 0,90))), 'Avg')
alias(integral(minSeries(timeStack(app.items_sold, '1d',0, 90))), 'Min')
which looks great, but actually shows the cumulative sum of the per-interval max, avg and min across the stacked series, not the totals for the best/average/worst day.
Can anyone suggest a way to achieve what I'm looking for?
i.e. determine what the best (and worst and median) day was for the past 90 days and plot that. Can it be done using purely Graphite functions?
Thanks.
| The answer was to just flip the nesting of the function calls: apply integral first, then maxSeries/averageSeries/minSeries on the integrated series.
Thanks to turner on the [email protected] board for the answer
alias(maxSeries(integral(timeStack(app.items_sold, '1d', 0, 90))),'Max')
alias(averageSeries(integral(timeStack(app.items_sold, '1d', 0,90))), 'Avg')
alias(minSeries(integral(timeStack(app.items_sold, '1d',0, 90))), 'Min')
| Graphite | 29,264,515 | 19 |
I'm using statsD to report measurements and Graphite to display them; statsD sends a tick every time I get a message. This works great, except in the situation when statsD has to restart for whatever reason. Then I get huge holes in my graphs, since statsD is now no longer sending '0' every 10 seconds for periods when I didn't get any messages.
I'm reporting for various different message types and queues, and sometimes I don't get a message for a particular queue for a long time.
Is there any existing way to 'fill-in' the missing data with a default value I specify (in my case this would be 0)?
I thought about sending a '0' count for a given metric so that statsD starts sending 0's for it, but I don't always know the set of metrics I'll be reporting in advance.
| Check out the function transformNull that Graphite provides. e.g.
transformNull(stats.timers.deploys.all.duration.total.mean, 0)
This will map sections with null data to 0.
| Graphite | 13,736,898 | 16 |
I have configured Graphite to monitor my application metrics, and Zabbix to monitor my servers' CPU and other metrics.
Now I want to pass some critical Graphite metrics to Zabbix to add triggers for them.
So I want to do something like
$ whisper get prefix1.prefix2.metricName
> 155
Is it possible?
P.S. I know about the Graphite-API project; I don't want to install an extra app.
| You can use the whisper-fetch program which is provided in the whisper installation package.
Use it like this:
whisper-fetch /path/to/dot.wsp
Or to get e.g. data from the last 5 minutes:
whisper-fetch --from=$(date +%s -d "-5 min") /path/to/dot.wsp
Defaults will result in output like this:
1482318960 21.187000
1482319020 None
1482319080 21.187000
1482319140 None
1482319200 21.187000
You can change it to json using the --json option.
| Graphite | 25,651,902 | 15 |
I use statsd for measuring stats and Graphite for displaying these. Anyway, I would like to do a more sophisticated analysis in statistical software, to find out the relations between various variables.
In order to do this, I need the "raw" data, which is usually displayed in Graphite as colored lines. Is it possible to get the data in CSV format? Data sampled at 1 entry per 10 seconds would be perfect, and that's statsd's default behavior, I think.
| Yes. And it is straightforward.
Server: graphite.example.com
Metric: Graphite.system.data.ip-10-0-0-1.load
As you might be aware, Graphite has a URL API.
graphite.example.com/render/?target=Graphite.system.data.ip-10-0-0-1.load
Returns the line graph. To get the data as CSV (or JSON), append &format=csv (or &format=json):
graphite.example.com/render/?target=Graphite.system.data.ip-10-0-0-1.load&format=csv
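From there it is straightforward to pull the data into an analysis environment. A minimal sketch (hypothetical host and metric name, assuming the requests library is available):
import csv
import requests

url = 'http://graphite.example.com/render/'
params = {
    'target': 'Graphite.system.data.ip-10-0-0-1.load',
    'from': '-1d',
    'format': 'json',
}
series = requests.get(url, params=params).json()

# Graphite returns a list of {"target": ..., "datapoints": [[value, timestamp], ...]}
with open('metric.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['metric', 'timestamp', 'value'])
    for s in series:
        for value, timestamp in s['datapoints']:
            writer.writerow([s['target'], timestamp, value])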
| Graphite | 21,331,778 | 14 |
I'm trying to find my way around Graphite. I'm having a problem getting graphs to render at a precision finer than one minute. I have already set the refresh time to 1 second, the display time to a relative -5 minutes, and the retention to:
retentions = 1s:21d
The graph is updated every second, but the precision is still one minute. How can I change this?
| First, I assume the pattern matches appropriately for the retention. For example:
[default_1s_for_21days]
pattern = .*
retentions = 1s:21d
Second, make sure you restart carbon after you modify the storage-schemas.conf file. If you have existing metrics (existing .wsp files) that you need to keep and you'd like them to adopt this schema you need to run whisper-resize.py on the .wsp. If you don't need to keep existing data then you can just delete the .wsp files and restart carbon-cache.py.
Third, verify the settings by looking at some whisper data by running whisper-info.py against a .wsp file. Find the .wsp file for one of your metrics in /graphite/storage/whisper/ and validate the settings. Run:
whisper-info.py my_metric_data.wsp
I'm curious if the 1s precision for that long (21 days) is causing trouble (e.g. causing aggregation), but you should see it if that is the case by checking the .wsp file using whisper-info.py. Anyway, good to confirm that the storage precision is correct and rule it out.
Lastly, and this is probably the problem, check the graphite web caching. Make sure the graphite web app isn't caching for 60 seconds (which is the default). Go to /[graphite_location]/webapp/graphite/settings.py and modify the DEFAULT_CACHE_DURATION.
So, in settings.py, change it to 1 from 60. Like so:
DEFAULT_CACHE_DURATION = 1
| Graphite | 17,045,549 | 14 |
I am sending Graphite the time spent in Garbage Collection (getting this from the JVM via JMX). This is a counter that increases. Is there a way to have Graphite graph the change every minute, so I can see a graph that shows time spent in GC by minute?
| You should be able to turn the counter into a rate with the derivative function, then use the summarize function to roll it up into the time frame that you're after.
&target=summarize(derivative(java.gc_time), "1min") # time spent per minute
derivative(seriesList)
This is the opposite of the integral function. This is useful for taking a
running totalmetric and showing how many requests per minute were handled.
&target=derivative(company.server.application01.ifconfig.TXPackets)
Each time you run ifconfig, the RX and TXPackets are higher (assuming there is network traffic.)
By applying the derivative function, you can get an idea of the packets per minute sent or received, even though you’re only recording the total.
summarize(seriesList, intervalString, func='sum', alignToFrom=False)
Summarize the data into interval buckets of a certain size.
By default, the contents of each interval bucket are summed together.
This is useful for counters where each increment represents a discrete event and
retrieving a “per X” value requires summing all the events in that interval.
Source: http://graphite.readthedocs.org/en/0.9.10/functions.html
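One caveat, which is an assumption about your setup rather than something the docs above state: if the counter can reset (for example when the JVM restarts), a plain derivative will produce a large negative spike at the reset. nonNegativeDerivative is the usual substitute in that case:
&target=summarize(nonNegativeDerivative(java.gc_time), "1min", "sum") # GC time per minute, ignoring counter resets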
| Graphite | 12,009,481 | 14 |
When setting up graphite I accidentally set the retention to 1800 days not 180 days.
'10s:6h,10min:1800d'
From what I understand, changing the retention now won't clean up the old data. I am unsure of how to do this without destroying all the data we have and starting again.
| You have to use the whisper-resize.py command. Note that every metric is saved in a .wsp file, so if you want to change the retention policy of all metrics you will have to use a command along the lines of this gist:
find ./ -type f -name '*.wsp' -exec whisper-resize.py --nobackup {} 10s:6h 10min:180d \;
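Note that resizing only fixes files that already exist; metrics created later still follow storage-schemas.conf, so presumably you also want to correct the schema itself and restart carbon-cache. The section name and pattern below are placeholders; only the retentions line reflects the intended fix:
[whatever]
pattern = ^stats\.whatever\..*
retentions = 10s:6h,10min:180d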
| Graphite | 30,106,553 | 12 |
My current retention rule is like so:
[whatever]
priority = 110
pattern = ^stats\.whatever\..*
retentions = 60:10080,600:262974
If I understand correctly, this will save 7 days of 1-minute data and 5 years of ten-minute data.
I have been sending data to graphite for the last couple of hours and I can see a graph of this data, but only for ranges less than 7 hours.
Is this caused by my retention rule?
thanks in advance.
| I had this same problem. After you change your retention rules, you need to restart carbon-cache.py. If you want to keep the data you have you need to run whisper-resize.py on your whisper files (.wsp).
This link should help too:
https://answers.launchpad.net/graphite/+question/140289
However in that link, the parameters passed to whisper-resize.py are in the wrong order. It should be whisper-resize.py <file> <retention rate>
Here's a helpful command for resizing:
find /opt/graphite/storage/whisper -type f -name "*.wsp" -exec whisper-resize.py {} <retention rate> \;
Adjust it as needed.
| Graphite | 10,820,119 | 12 |
I'm trying to set up graphite to work with grafana in docker, based on this project: https://github.com/kamon-io/docker-grafana-graphite
and when I run my Dockerfile I get a 403 Forbidden error from nginx.
My nginx configuration is almost the same as the project's. I run my dockerfiles on a server and test them on my Windows machine, so the configurations are not exactly the same ... for example I have:
server {
listen 80 default_server;
server_name _;
location / {
root /src/grafana/dist;
index index.html;
}
location /graphite/ {
    proxy_pass http://myserver:8000/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header Host $host;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
add_header Access-Control-Allow-Origin "*";
add_header Access-Control-Allow-Methods "GET, OPTIONS";
add_header Access-Control-Allow-Headers "origin, authorization, accept";
}
But I still keep getting 403 Forbidden. Checking the nginx error log shows:
directory index of "/src/grafana/dist/" is forbidden
Stopping and running it again, it says:
directory index of "/src/grafana/dist/" is forbidden
I'm very new to nginx ... was wondering if there's something in the configurations that I'm misunderstanding.
Thanks in advance.
| That's because the request is hitting the first location block and nginx cannot find (or read) an index file in /src/grafana/dist, so it falls back to a directory listing, which is forbidden by default.
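It is worth checking that the Grafana build output actually exists at /src/grafana/dist inside the container (it should contain index.html) and that the nginx worker user can read it. A location block along these lines, a sketch based on the paths in the question rather than a verified fix, also tends to behave better for a single-page app like Grafana:
location / {
    root      /src/grafana/dist;
    index     index.html;
    try_files $uri $uri/ /index.html;
}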
| Graphite | 27,303,967 | 11 |
I want to label series by hostname + metric name. I know I can use aliasByNode(1) to do the first part and aliasByMetric() to do the second. Any ideas how I can merge those two functions for a single metric?
| aliasByNode can take multiple arguments.
aliasByNode(apps.fakesite.web_server_01.counters.requests.count, 2,5)
returns web_server_01.count.
The Grafana query editor for Graphite does not support this but if you toggle edit mode then you can edit the raw query. After editing it, you can toggle back.
| Graphite | 38,281,290 | 11 |
Here's the display for a stat for the past 24 hours (in Graphite Composer):
Here's the display for a stat for the "past 14 days":
Not much difference there. I cannot convince Graphite to display any data for any period beyond the past 24 hours.
Here are the relevant entries from storage-schemas.conf (I'm using StatsD):
[stats]
pattern = ^stats.*
retentions = 10:2160,60:10080,600:262974
[stats_counts]
pattern = ^stats_counts.*
retentions = 10:2160,60:10080,600:262974
and my storage-aggregation.conf:
[min]
pattern = \.min$
xFilesFactor = 0
aggregationMethod = min
[max]
pattern = \.max$
xFilesFactor = 0
aggregationMethod = max
[sum]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum
[default_average]
pattern = .*
xFilesFactor = 0
aggregationMethod = average
I have five or so days of data captured so far. What am I missing?
EDITED to add:
I guess I should mention that I started out with the default storage-schemas.conf and only yesterday rebuilt my whisper database files to match the above configuration. I don't think this should be relevant, but there it is.
UPDATED:
I'm using 0.9.10 of graphite-web and whisper, from PyPI, released in May 2012.
| Well, this is what I get for not pasting the entire configuration. Here's what it actually looked like:
[carbon]
pattern = ^carbon\.
retentions = 60:90d
[default_1min_for_1day]
pattern = .*
retentions = 60s:1d
[stats]
pattern = ^stats.*
retentions = 10:2160,60:10080,600:262974
[stats_counts]
pattern = ^stats_counts.*
retentions = 10:2160,60:10080,600:262974
Of course, the [default_1min_for_1day] section was matching first, ahead of the other two, and so I was only getting data for the past 24 hours. Moving the catch-all to the end of the file seems to have addressed the issue.
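For anyone else hitting this: the first matching section wins, so the reordered file presumably ends up like the sketch below (you may also want ^stats\. rather than ^stats.* in the [stats] pattern so it doesn't swallow stats_counts). Note too that .wsp files created under the old ordering keep their original retention until they are deleted or run through whisper-resize.py.
[carbon]
pattern = ^carbon\.
retentions = 60:90d

[stats]
pattern = ^stats.*
retentions = 10:2160,60:10080,600:262974

[stats_counts]
pattern = ^stats_counts.*
retentions = 10:2160,60:10080,600:262974

[default_1min_for_1day]
pattern = .*
retentions = 60s:1d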
| Graphite | 15,030,063 | 11 |
If I ask for this data:
https://graphite.it.daliaresearch.com/render?from=-2hours&until=now&target=my.key&format=json
I get, among other datapoints, this one:
[
2867588,
1398790800
]
If I ask for this data:
https://graphite.it.daliaresearch.com/render?from=-10hours&until=now&target=my.key&format=json
The datapoint looks like this:
[
null,
1398790800
]
Why is this datapoint being nullified when I choose a wider time range?
Update
I'm seeing that for a chosen date range smaller than 7 hours the resolution of the datapoints is one every 10 seconds, and when the chosen range is 7 hours or bigger the resolution drops to one datapoint every minute, and continues in that direction as the range grows (one datapoint every 10 minutes, and so on).
So when the resolution is one datapoint every 10 seconds the data is there; when the resolution is every minute or more, the datapoint has no value :/
I'm sending a data point every hour, so maybe it is a conflict between the resolution configuration and me sending only one datapoint per hour.
| There are several things happening here, but basically the problem is that you have misconfigured graphite (or at least, configured it in a way that makes it do things that you aren't expecting!)
Specifically, you should set xFilesFactor = 0.0 in your storage-aggregation.conf file. Since you are new at this, you probably just want this (mine is in /opt/graphite/conf/storage-aggregation.conf):
[default]
pattern = .*
xFilesFactor = 0.0
aggregationMethod = average
The graphite docs describe xFilesFactor like this:
xFilesFactor should be a floating point number between 0 and 1, and specifies what fraction of the previous retention level’s slots must have non-null values in order to aggregate to a non-null value. The default is 0.5.
But wait! This won't change existing statistics! These aggregation settings are set once per metric at the time the metric is created. Since you are new at this, the easy way out is to just go to your whisper directory and delete the prior data and start over:
cd /opt/graphite/storage/whisper/my/
rm key.wsp
Your root whisper directory may be different depending on platform, etc. After removing the data files, graphite should recreate them automatically upon the next metric write, and they should get your updated settings (don't forget to restart carbon-cache after changing your storage-aggregation settings).
Alternatively, if you need to keep your old data you will need to run whisper-resize.py against your whisper (.wsp) data files with --xFilesFactor=0.0 and also likely all of your retention settings from storage-schemas.conf (also viewable with whisper-info.py)
Finally, I should add that the reason you get non-null data in your first query, but null data in your second is because graphite will try to pick the best available retention period from which to serve your request based on the time window you requested. For the smaller window, graphite is deciding that it can serve your request using the highest precision data (i.e., non aggregated) and so you are seeing your raw metrics. For the longer time window, graphite is finding that the high precision, non-aggregated data is not available for the entire window -- these periods are configured in storage-schemas.conf -- so it skips to the next highest-precision data set available (i.e. first aggregation tier) and returns only aggregated data. Because your aggregation config is writing null data, you are therefore seeing null metrics! So fix the aggregation, and you should fix the null data problem. But remember that graphite never combines aggregation tiers in a single request/response, so anytime you see differences between results from the same query when all you are changing is the from / to params, the problem is pretty much always due to aggregation configs.
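For completeness, the resize of an existing file would look roughly like the line below; the path and retention arguments are placeholders and must match whatever your storage-schemas.conf actually defines for that metric (which you can read back with whisper-info.py):
whisper-resize.py /opt/graphite/storage/whisper/my/key.wsp --xFilesFactor=0.0 10s:60d 1min:90d 10min:5y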
| Graphite | 23,372,235 | 10 |
I am having difficulties understanding the integration of Graphite and Kibana 3 to monitor logs and system vitals. I am referring to the figure in the log management system described here.
Why must we use StatsD and graphite, when count and simple statistics are now supported by kibana - Elasticsearch combination?
In case, we decide to use both graphite and kibana, How do we integrate it into a single Dashboard?
Is there a tutorial to integrate Dashboards (kibana and graphitos/graph explorer/orion/pencil)?
Thanks in advance.
| Why statsd-graphite:
Statsd and Graphite can help you visualize anything, not just logs and system vitals. It is very straightforward with the statsd-graphite stack to measure, say, the number of users that hovered over the bottom left of your site for more than 10 seconds.
Because there is no in-between logging involved, the scalability that graphite provides is unparalleled from an IO point of view. Also consider the fact that statsd talks UDP, so collecting 300K metrics per minute is a breeze.
You don't have to log something in order to see it.
Integration:
As clearly shown in the architectural diagram you shared, you can filter the stats that you want to visualize and have them forwarded to statsd. This is in parallel with kibana visualizing directly from logstash-elasticsearch. Going redundant with the data is an easier approach if you want to view both Graphite and Kibana data over Graphite, since the webapp would not query elasticsearch directly.
Vimeo's Graph Explorer is something you might want to look into. It queries elasticsearch.
Updates:
Not that Logstash catn't do it, but it isn't 'designed' for that role, whereas statsd et al, are.
I have been wondering if we have a simpler query language.
The inherent scheme of organization in graphite is tree-like and hence the searches do-not/ can-not yeild results from a different subtree. This makes it not-so-suitable for cross-dimensional searches. GE is the simplest, given you want the power.
Graph Explorer's flow-
Graph Explorer addresses this by adding tags to the metrics and integrating it with elasticsearch. So what GE actually does is that-
One time- It connects to your Graphite front-end, makes API calls to retrieve all metrics.
It then 'converts' the old style proto 1 metrics (A.B.C) into tag-based proto 2 metrics (host=A.app=B.username=C).
This is then exported to ES which maintains an index.
When you query GE front-end, it connects to ES to understand what you want.
GE then queries the Graphite-API, and delivers the results in GE front-end.
Moreover, does graph explorer assume we are using diamond for collection?
No.
How does it compare to pencil, orion and graphiti?
These are on-surface optimizations to visualization. They-
change the look and feel of the graphs.
make querying the API easier.
allow a better monitoring flow.
They DO NOT change the way you store or search the information. GE, embeds itself 'deeper' into the metric data and hence has a real edge over how you query metrics. (Cross dimensional search)
Heads up-
GE's metric-importing plug-in is far from perfect. It successfully imported 300 out of my 1000 metrics. It is also heavier to render, and the front-end eats more network bandwidth (because of the hoverable, zoomable features).
Update-
Grafana is out.
| Graphite | 20,040,373 | 10 |
We need to collect timeseries information on multiple servers and business processes, and are considering graphite. It seems good if we want to display the raw data. But what if we want to do BI on this data and run custom queries? Does graphite allow that, or alternatively can I instruct graphite to store the data in postgres?
| Graphite definitely allows you to query your data, both graphically and as csv or json. The queries in graphite aren't done with a language like sql. They're done with functions that apply to one metric at a time. Each metric is its own database, which is just a series of time, value pairs.
The most common thing you're likely to want is to summarize data over different time periods. Here's an example of what the url would look like for a graph where the data is summarized daily for a week:
http://graphite.example.com/render/?width=586&height=308&_salt=1355992522.674&target=summarize(stats_counts.mystat.subname%2C%20'1day')&from=-7days
If you wanted to get back raw data (csv or json) instead of a graph, you would just add format=csv or format=json to the url. And if you're looking at the data through graphite's web interface you'd just be putting the following in to view the same graph.
summarize(stats_counts.mystat.subname, '1day')
Most of the querying of data you do will at first be in the graphite composer, which is just a web interface that lets you click on the metrics you want to add to the graph, and apply the various functions to them.
As for adding the data to Postgres, you're probably not going to want to do that to query it. The data isn't really structured in a way that's great for relational databases.
| Graphite | 13,919,478 | 10 |
I have a String, and I would like to reverse it. For example, I am writing an AngularDart filter that reverses a string. It's just for demonstration purposes, but it made me wonder how I would reverse a string.
Example:
Hello, world
should turn into:
dlrow ,olleH
I should also consider strings with Unicode characters. For example: 'Ame\u{301}lie'
What's an easy way to reverse a string, even if it contains Unicode characters like that?
| The question is not well defined. Reversing arbitrary strings does not make sense and will lead to broken output. The first (surmountable) obstacle is Utf-16. Dart strings are encoded as Utf-16 and reversing just the code-units leads to invalid strings:
var input = "Music \u{1d11e} for the win"; // Music 𝄞 for the win
print(input.split('').reversed.join()); // niw eht rof
The split function explicitly warns against this problem (with an example):
Splitting with an empty string pattern ('') splits at UTF-16 code unit boundaries and not at rune boundaries[.]
There is an easy fix for this: instead of reversing the individual code-units one can reverse the runes:
var input = "Music \u{1d11e} for the win"; // Music 𝄞 for the win
print(new String.fromCharCodes(input.runes.toList().reversed)); // niw eht rof 𝄞 cisuM
But that's not all. Runes, too, can have a specific order. This second obstacle is much harder to solve. A simple example:
var input = 'Ame\u{301}lie'; // Amélie
print(new String.fromCharCodes(input.runes.toList().reversed)); // eiĺemA
Note that the accent is on the wrong character.
There are probably other languages that are even more sensitive to the order of individual runes.
If the input has severe restrictions (for example being Ascii, or Iso Latin 1) then reversing strings is technically possible. However, I haven't yet seen a single use-case where this operation made sense.
Using this question as an example for showing that strings have List-like operations is not a good idea, either. Except for a few use-cases, strings have to be treated with respect to a specific language, and with highly complex methods that have language-specific knowledge.
In particular native English speakers have to pay attention: strings can rarely be handled as if they were lists of single characters. In almost every other language this will lead to buggy programs. (And don't get me started on toLowerCase and toUpperCase ...).
| Split | 21,521,729 | 47 |
I've got a column people$food that has entries like chocolate or apple-orange-strawberry.
I want to split people$food by - and get the first entry from the split.
In python, the solution would be food.split('-')[0], but I can't find an equivalent for R.
| If you need to extract the first (or nth) entry from each split, use:
word <- c('apple-orange-strawberry','chocolate')
sapply(strsplit(word,"-"), `[`, 1)
#[1] "apple" "chocolate"
Or, faster and more explicitly:
vapply(strsplit(word,"-"), `[`, 1, FUN.VALUE=character(1))
#[1] "apple" "chocolate"
Both bits of code will cope well with selecting whichever value in the split list, and will deal with cases that are outside the range:
vapply(strsplit(word,"-"), `[`, 2, FUN.VALUE=character(1))
#[1] "orange" NA
| Split | 33,683,862 | 47 |
I've been trying to create a waveform image and I'm getting the raw data from the .wav file using song = wave.open() and song.readframes(1), which returns:
b'\x00\x00\x00\x00\x00\x00'
How can I split this into three separate parts, e.g. b'\x00\x00', b'\x00\x00', b'\x00\x00'? Each frame consists of 3 parts (each 2 bytes wide), and I need the value of each individual part to be able to build a waveform.
| You can use slicing on byte objects:
>>> value = b'\x00\x01\x00\x02\x00\x03'
>>> value[:2]
b'\x00\x01'
>>> value[2:4]
b'\x00\x02'
>>> value[-2:]
b'\x00\x03'
When handling these frames, however, you probably also want to know about memoryview() objects; these let you interpret the bytes as C datatypes without any extra work on your part, simply by casting a 'view' on the underlying bytes:
>>> mv = memoryview(value).cast('H')
>>> mv[0], mv[1], mv[2]
256, 512, 768
The mv object is now a memory view interpreting every 2 bytes as an unsigned short; so it now has length 3 and each index is an integer value, based on the underlying bytes.
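Since each frame here is a fixed-width group of samples, the struct module is another natural fit. A small sketch, assuming three little-endian 16-bit samples per frame and that they are signed (typical for WAV PCM; use 'H' instead of 'h' if yours really are unsigned):
import struct

frame = b'\x00\x00\x00\x00\x00\x00'
# '<3h' = little-endian, three signed 16-bit integers
a, b, c = struct.unpack('<3h', frame)
print(a, b, c)  # 0 0 0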
| Split | 20,024,490 | 47 |
I would like that, after splitting the window (C-x 3 or C-x 2), the cursor automatically moves to the newly opened window (the one other than the current). How can I achieve this behavior?
| You can switch between windows with C-x o. As for doing that automatically, I don't think there is a built-in command for it, though you could bind the split keys to a small wrapper that splits and then calls other-window.
| Split | 6,464,738 | 47 |
I'm trying to write the Haskell function 'splitEvery' in Python. Here is it's definition:
splitEvery :: Int -> [e] -> [[e]]
@'splitEvery' n@ splits a list into length-n pieces. The last
piece will be shorter if @n@ does not evenly divide the length of
the list.
The basic version of this works fine, but I want a version that works with generator expressions, lists, and iterators. And, if there is a generator as an input it should return a generator as an output!
Tests
# should not enter infinite loop with generators or lists
splitEvery(10, itertools.count())
splitEvery(10, range(1000))
# last piece must be shorter if n does not evenly divide
assert splitEvery(5, range(9)) == [[0, 1, 2, 3, 4], [5, 6, 7, 8]]
# should give same correct results with generators
tmp = itertools.islice(itertools.count(), 10)
assert list(splitEvery(5, tmp)) == [[0, 1, 2, 3, 4], [5, 6, 7, 8]]
Current Implementation
Here is the code I currently have but it doesn't work with a simple list.
def splitEvery_1(n, iterable):
res = list(itertools.islice(iterable, n))
while len(res) != 0:
yield res
res = list(itertools.islice(iterable, n))
This one doesn't work with a generator expression (thanks to jellybean for fixing it):
def splitEvery_2(n, iterable):
return [iterable[i:i+n] for i in range(0, len(iterable), n)]
There has to be a simple piece of code that does the splitting. I know I could just have different functions but it seems like it should be and easy thing to do. I'm probably getting stuck on an unimportant problem but it's really bugging me.
It is similar to grouper from http://docs.python.org/library/itertools.html#itertools.groupby but I don't want it to fill extra values.
def grouper(n, iterable, fillvalue=None):
"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
args = [iter(iterable)] * n
return izip_longest(fillvalue=fillvalue, *args)
It does mention a method that truncates the last value. This isn't what I want either.
The left-to-right evaluation order of the iterables is guaranteed. This makes possible an idiom for clustering a data series into n-length groups using izip(*[iter(s)]*n).
list(izip(*[iter(range(9))]*5)) == [[0, 1, 2, 3, 4]]
# should be [[0, 1, 2, 3, 4], [5, 6, 7, 8]]
| from itertools import islice
def split_every(n, iterable):
i = iter(iterable)
piece = list(islice(i, n))
while piece:
yield piece
piece = list(islice(i, n))
Some tests:
>>> list(split_every(5, range(9)))
[[0, 1, 2, 3, 4], [5, 6, 7, 8]]
>>> list(split_every(3, (x**2 for x in range(20))))
[[0, 1, 4], [9, 16, 25], [36, 49, 64], [81, 100, 121], [144, 169, 196], [225, 256, 289], [324, 361]]
>>> [''.join(s) for s in split_every(6, 'Hello world')]
['Hello ', 'world']
>>> list(split_every(100, []))
[]
| Split | 1,915,170 | 47 |