Q:
"Double free or corruption" error inside this function?
Below is my function. It runs correctly once, then when it is called a second time it causes an error telling me "double free or corruption". I tried adding the +1 inside the malloc() as other posts have suggested, even though I am not storing null-terminated strings but arrays of integers. It did not help.
I am very confused at this point. I don't understand why at the end of the function the pointer that was free()'d doesn't go out of scope, or if it does, then how it can be considered a double-free when I malloc()'d after free()ing it the last time it was used.
int getCount(int number) {
    int totalUniqueDigits = 0;
    bool* allDigits = (bool*)malloc(10 * sizeof(bool));
    do {
        int currentDigit = number % 10;
        number /= 10;
        allDigits[currentDigit] = true;
    } while (number > 0);
    for (int i = 0; i < 10; i += 2) {
        if (allDigits[i] == true) {
            totalUniqueDigits++;
        }
    }
    free(allDigits); /* This is where the problem is, but only the second time the function is called. */
    allDigits = NULL;
    return totalUniqueDigits;
}
A:
If number is negative, then
currentDigit = number % 10;
will be negative also (or zero if divisible by 10). This is a somewhat awkward (IMO) definition of the modulus operator.
If currentDigit is negative, then
allDigits[currentDigit] = true;
will write out of bounds. On most systems, writing to allDigits[-1] would overwrite information used to manage memory. This might not directly crash your program, but using malloc later could have that effect.
The solution of course is to either use abs or add 10 to currentDigit if it is negative.
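For example, a minimal sketch of the fix inside the loop (abs needs <stdlib.h>; this assumes C99 truncating division, so number % 10 is between -9 and 0 for negative input):
int currentDigit = abs(number % 10); /* always 0..9, so the index stays in bounds */
Note that if the function is meant to handle negative input at all, the loop condition while (number > 0) also has to become while (number != 0), otherwise only one digit of a negative number is processed.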
| {
"pile_set_name": "StackExchange"
} |
Q:
Android Add Textview in java file, not XML
I need to be able to add a TextView to my app with code, not using the XML or graphical interface...
I have an ImageView, but cannot figure out how to modify it to get a TextView (I'm new to Android)...
Here is my imageview code:
//Add view using Java Code
ImageView imageView = new ImageView(AndroidAddViewActivity.this);
imageView.setImageResource(R.drawable.icon);
LayoutParams imageViewLayoutParams
= new LayoutParams(LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT);
imageView.setLayoutParams(imageViewLayoutParams);
mainLayout.addView(imageView);
A:
TextView textView = new TextView(this);
textView.setText("the text");
LayoutParams textViewLayoutParams = new LayoutParams(LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT);
textView.setLayoutParams(textViewLayoutParams);
mainLayout.addView(textView);
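Note that this assumes the same context as your ImageView snippet: this must be a Context (e.g. AndroidAddViewActivity.this), and mainLayout is a ViewGroup you already hold a reference to (the same one you added the ImageView to).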
| {
"pile_set_name": "StackExchange"
} |
Q:
Install and use aspnet_merge.exe without installing the Windows SDK?
I used web site deployment projects but I discovered that the aspnet_merge.exe utility is not on my build server. This prevents me from being able to build. In order to get this utility I have to install the Windows SDK, which comes as an ISO file and is over 1 GB. I do not want to install this entire thing when all I need is that one assembly. But I am not sure if that file depends on anything else in that installer. I also do not understand why a web tool is buried in the Windows SDK. I would prefer to have it included in some web tools installer.
Has anyone just copied this assembly to the FrameworkSDKDir and just made it work that way?
Related: Using Visual Studio 2008 Web Deployment projects - getting an error finding aspnet_merge.exe
A:
I downloaded the "web" version of the SDK because the setup is only 500KB and it prompts you for which components to install and only downloads and installs the ones you choose. I unchecked everything except for ".NET Development Tools". It then downloaded and installed about 250MB worth of stuff, including aspnet_merge.exe and sgen.exe
You can download the winsdk_web.exe setup for Win 7 and .NET 3.5 SP1 here.
| {
"pile_set_name": "StackExchange"
} |
Q:
Windows Server 2008 intermediate CA has incorrect AIA URL
We have a two tier ADCS PKI and our intermediate CA has the URL for the AIA ending in (1) (ie: http://pki.example.com/certenroll/certificate(1).crt) which of course doesn't exist. The URL template in the CA extension properties is correct so I think the last time the certificate was issued there was already a file with the same name so it added (1) to the file name. How do I "reissue" the certificate so that the AIA URL gets updated?
CertUtil -GetReg output:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\example-Issuing-CA\CACertPublicationURLs:
CACertPublicationURLs REG_MULTI_SZ =
0: 1:C:\Windows\system32\CertSrv\CertEnroll\%1_%3%4.crt
CSURL_SERVERPUBLISH -- 1
1: 2:ldap:///CN=%7,CN=AIA,CN=Public Key Services,CN=Services,%6%11
CSURL_ADDTOCERTCDP -- 2
2: 2:http://pki.example.com/CertEnroll/%1_%3%4.crt
CSURL_ADDTOCERTCDP -- 2
CertUtil: -getreg command completed successfully.
A:
Thanks to @CryptoGuy for the answer: my Root CA had issued two certificates for my issuing CA, and that is why one of the certs had a (1) tacked on the end.
| {
"pile_set_name": "StackExchange"
} |
Q:
preg_match_all question (how to limit scope without a separate preg_match call)
I have some data similar to this:
aaa1|aaa2|ZZZ|aaa3|aaa4|aaa5|ZZZ|aaa6|aaa7
I want to match all "aaa[0-9]" BETWEEN "ZZZ" (not the ones outside).
So I have some PHP code:
$string = "aaa1aaa2zzzzaaa3aaa4aaa5zzzzaaa6aaa7";
preg_match_all("/zzzz.*(aaa[0-9]).*zzzz/", $string, $matches, PREG_SET_ORDER);
print_r($matches);
But it only outputs:
Array
(
[0] => Array
(
[0] => zzzzaaa3aaa4aaa5zzzz
[1] => aaa5
)
)
I want "aaa3", "aaa4" in addition to "aaa5".
Is there a way to do this with 1 call to preg_match_all()?
A:
Check that the string occurs before one delimiter string (zzzz) but not before two delimiter strings:
$string = "aaa1aaa2zzzzaaa3aaa4aaa5zzzzaaa6aaa7";
preg_match_all("/aaa[0-9](?=.*?zzzz)(?!(?>.*?zzzz).*?zzzz)/", $string, $matches, PREG_SET_ORDER);
print_r($matches);
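The lookahead (?=.*?zzzz) keeps only matches that still have a zzzz somewhere after them, and the negative lookahead (?!(?>.*?zzzz).*?zzzz) throws away matches that have two or more zzzz after them (i.e. the ones before the first delimiter). For the sample string above this should return aaa3, aaa4 and aaa5.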
| {
"pile_set_name": "StackExchange"
} |
Q:
Play 2.1 with Guice 3.0 - Access not available outside Controller classes
I am trying to learn how Guice works with the Play 2.1 framework. I have a service to which I need access outside the service package. I have placed the code below in my Global file:
protected Injector configure() {
    injector = Guice.createInjector(new AbstractModule() {
        @Override
        protected void configure() {
            bind(MyService.class).to(MyServiceImpl.class).in(Singleton.class);
        }
    });
    return injector;
}

@Override
public <A> A getControllerInstance(Class<A> clazz) throws Exception {
    return injector.getInstance(clazz);
}
Inside the controller class I am able to get to my object by doing the below, and everything seems to be fine:
@Inject
MyService serviceObj
But elsewhere outside the controller the same object appears to be null. For example I have a core module which takes care of talking to the service. The controller classes hands out the job to the core module. I need to be able to get hold of this MyService obj in the core module classes.
What am I missing here guys?
Thanks
Karthik
A:
I figured out a way to do this.
In my configure method I had to use this
protected Injector configure() {
    injector = Guice.createInjector(new AbstractModule() {
        @Override
        protected void configure() {
            requestStaticInjection(TheClassThatNeedsMyService.class);
        }
    });
    return injector;
}
And in my TheClassThatNeedsMyService I had to just do
@Inject MyService serviceObj;
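Worth noting: Guice's requestStaticInjection() injects static members only, so for that field to actually be populated it typically has to be declared static, e.g. @Inject static MyService serviceObj;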
Just for reference this is how my Service class looks like
@ImplementedBy(MyServiceImpl.class)
public interface MyService{
...
}
@Singleton
public class MyServiceImpl implements MyService{
...
}
Now I am able to access my service object wherever I want in my application. Hope it helps someone.
Thanks
Karthik
| {
"pile_set_name": "StackExchange"
} |
Q:
How to show Plot Legends?
How do I make PlotLegends work in the code below? I have tried the "Expressions" option and other string options. Only the True option gives one legend, but not all of the legends.
term = Table[Exp[3*Exp[v] - 3 - k*v] /. v -> Log[n*k/3], {n, 0.7, 1.3, 0.3}]
Plot[term, {k, 0, 10}, PlotLegends -> {"0.7", "1", "1.3"}]
A:
Plot[Evaluate@term, {k, 0, 10}, PlotLegends -> {"0.7", "1", "1.3"}]
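Plot holds its first argument unevaluated, so without Evaluate it does not see term as a list of three separate functions and cannot attach one legend label per curve; Evaluate@term forces the list to be visible to Plot, so each curve gets its own style and legend entry.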
| {
"pile_set_name": "StackExchange"
} |
Q:
Can't pull image from docker repo
I'm trying sudo docker pull ubuntu on Docker 1.13.1 on Ubuntu 16.04.2 LTS and getting error:
Error response from daemon: manifest for ubuntu:latest not found
I don't use proxy so I have no idea where manifest got lost.
A:
Well, it just started working 3 hours later without any intervention, so I conclude that the problem was caused by a Docker Hub incident.
| {
"pile_set_name": "StackExchange"
} |
Q:
Pip not installing during virtualenv venv
I have created several virtual environments using virtualenv venv for the past few weeks, but it suddenly stopped working. When I do a deeper check, the installation stops at the pip stage. I have tried virtualenv venv and virtualenv venv -v. I have also tried virtualenv venv --no-pip, which further confirms that the issue lies with pip.
Looking in links: file:///C:/users/mong%20chang%20hsi/appdata/local/programs/python/python36/lib/site-packages/virtualenv_support
Collecting setuptools
Collecting pip
...Installing setuptools, pip, wheel...done.
Traceback (most recent call last):
File "C:\Users\Mong Chang Hsi\AppData\Local\Programs\Python\Python36\Scripts\virtualenv-script.py", line 11, in <module>
load_entry_point('virtualenv==16.6.2', 'console_scripts', 'virtualenv')()
File "c:\users\mong chang hsi\appdata\local\programs\python\python36\lib\site-packages\virtualenv.py", line 867, in main
symlink=options.symlink,
File "c:\users\mong chang hsi\appdata\local\programs\python\python36\lib\site-packages\virtualenv.py", line 1159, in create_environment
install_wheel(to_install, py_executable, search_dirs, download=download)
File "c:\users\mong chang hsi\appdata\local\programs\python\python36\lib\site-packages\virtualenv.py", line 1009, in install_wheel
_install_wheel_with_search_dir(download, project_names, py_executable, search_dirs)
File "c:\users\mong chang hsi\appdata\local\programs\python\python36\lib\site-packages\virtualenv.py", line 1096, in _install_wheel_with_search_dir
call_subprocess(cmd, show_stdout=False, extra_env=env, stdin=script)
File "c:\users\mong chang hsi\appdata\local\programs\python\python36\lib\site-packages\virtualenv.py", line 934, in call_subprocess
line = stdout.readline()
KeyboardInterrupt
A:
Try doing this python3 -m venv name_of_environment
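The venv module ships with Python 3 itself (3.3+), so it does not rely on the separately installed virtualenv package that is failing here. On Windows the equivalent command is usually python -m venv name_of_environment (or py -m venv name_of_environment).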
| {
"pile_set_name": "StackExchange"
} |
Q:
Factoring WHERE clauses in IQueryable
I recovered some spaghetti code and I have to refactor it. I do not want a method with over 200 lines; to me that is not object-oriented programming. I am trying to think the question through, and I would like to have your suggestions.
This is my code:
Line 18
if (searchCriteria.EventReference != null)
{
    query = query.Search(x => x.EventReference, searchCriteria.EventReference);
}
if (searchCriteria.PendingEvent == false)
{
    query = query.Where(x => x.EventStatusId != EventStatus.Pending);
}
if (searchCriteria.VerifiedEvent == false)
{
    query = query.Where(x => x.EventStatusId != EventStatus.Verified);
}
if (searchCriteria.CanceledEvent == false)
{
    query = query.Where(x => x.EventStatusId != EventStatus.Canceled);
}
Line 237
if (searchCriteria.RemitterId != null)
{
    query = query.Where(x => x.Trade.RemitterId == searchCriteria.RemitterId);
}
A:
This one seems to be overkill to me (but I guess it's the polymorphism that appears in the comments), but anyway, there it is:
We start with an interface:
public interface IQueryFilter
{
IQueryable<Whatever> Filter(IQueryable<Whatever> query, SearchCriteria searchCriteria);
}
Then implement the common property:
public abstract class AQueryFilter<T> : IQueryFilter
{
public AQueryFilter(Func<SearchCriteria, T> criteria)
{
Criteria = criteria;
}
protected Func<SearchCriteria, T> Criteria { get; }
public abstract IQueryable<Whatever> Filter(IQueryable<Whatever> query, SearchCriteria searchCriteria);
}
And finally, all the specific stuff:
public class WhereEventStatusQueryFilter : AQueryFilter<bool>
{
private EventStatus _toTest;
public WhereEventStatusQueryFilter(Func<SearchCriteria, bool> criteria, EventStatus toTest)
: base(criteria)
{
_toTest = toTest;
}
public override IQueryable<Whatever> Filter(IQueryable<Whatever> query, SearchCriteria searchCriteria)
{
return (Criteria(searchCriteria) ? query : query.Where(x => x.EventStatusId != _toTest));
}
}
public class SearchQueryFilter : AQueryFilter<object>
{
Func<Whatever, object> _searchFor;
public SearchQueryFilter(Func<SearchCriteria, object> criteria, Func<Whatever, object> searchFor)
: base(criteria)
{
_searchFor = searchFor;
}
public override IQueryable<Whatever> Filter(IQueryable<Whatever> query, SearchCriteria searchCriteria)
{
return (Criteria(searchCriteria) == null ? query : query.Search(x => _searchFor(x), Criteria(searchCriteria)));
}
}
public class WhereEqualQueryFilter : AQueryFilter<object>
{
Func<Whatever, object> _searchFor;
public WhereEqualQueryFilter(Func<SearchCriteria, object> criteria, Func<Whatever, object> searchFor)
: base(criteria)
{
_searchFor = searchFor;
}
public override IQueryable<Whatever> Filter(IQueryable<Whatever> query, SearchCriteria searchCriteria)
{
return (Criteria(searchCriteria) == null ? query : query.Where(x => _searchFor(x) == Criteria(searchCriteria)));
}
}
Usage:
var filters = new IQueryFilter[]
{
new WhereEventStatusQueryFilter(x => x.PendingEvent, EventStatus.Pending),
new WhereEventStatusQueryFilter(x => x.VerifiedEvent, EventStatus.Verified),
new SearchQueryFilter(x => x.EventReference, x => x.EventReference),
new WhereEqualQueryFilter(x => x.RemitterId, x => x.Trade.RemitterId),
...
};
foreach (var filter in filters)
query = filter.Filter(query, searchCriteria);
But this solution hides a lot of the logic, and if anyone wants to add something they have to read all the previous filter classes to know whether there is already one that can get the job done or whether they have to write another one.
| {
"pile_set_name": "StackExchange"
} |
Q:
FullCalendar not loading events from json after first load except in IE
I have been testing out some code and I need to load day view with my own json data.
$('#day-calendar').fullCalendar('removeEvents');
// logic goes here to create dayViewEvents
$('#day-calendar').fullCalendar('addEventSource', eval(dayViewEvents));
This code gets called several times. The events render perfectly in IE, but in Chrome, FireFox and Safari the items do not show except for on the initial load.
Here is a sample of the json
[{"id":"T111","title":"OLIVER DOUGLAS","allDay":false,"start":"2010-12-27T08:00:00","end":"2010-12-27T08:00:00","url":null,"className":"myClass","editable":false,"source":null,"description":null,"eventType":"A Task"},
{"id":"T345","title":"EB DAWSON","allDay":false,"start":"2010-12-27T09:00:00","end":"2010-12-27T10:00:00","url":null,"className":"myRedClass","editable":false,"source":null,"description":null,"eventType":"A Call"}]
I tried rerender, refetch but nothing works. In IE it works every time with the above code.
One other thing - I called 'clientEvents' and they are in the calendar.
Thanks,
Paul
A:
I probably should have done this as a comment but I wanted to make it readable.
The calendar is working fine, it was my goof in the javascript. I had :
$('#day-calendar').fullCalendar('gotoDate', dt.getYear(), dt.getMonth(), dt.getDate());
instead of :
$('#day-calendar').fullCalendar('gotoDate', dt.getFullYear(), dt.getMonth(), dt.getDate());
Apparently IE returns 2010 from getYear and all other browsers return 110. The key is to always use getFullYear and not getYear, which is deprecated. Oh well.
| {
"pile_set_name": "StackExchange"
} |
Q:
compare a character with an unicode in Qt
if (lineEditText[i] == 'đ' || lineEditText[i] == 'Đ')
lineEditText.replace(i, 1, "d");
I want to compare the character at index (i) of the QString with a Unicode character as above, but it does not work. So how can I compare it?
A:
Build QStrings out of characters and use them for comparison:
if (lineEditText[i] == QString("đ")[0] || lineEditText[i] == QString("Đ")[0])
or use wide character literals so they got correctly cast to QChar:
if (lineEditText[i] == L'đ' || lineEditText[i] == L'Đ')
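Alternatively, if the problem is the encoding of the literals in your source file, you can compare against the code points directly (đ is U+0111 and Đ is U+0110):
if (lineEditText[i] == QChar(0x0111) || lineEditText[i] == QChar(0x0110))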
| {
"pile_set_name": "StackExchange"
} |
Q:
how to use a variable as object?
This is the function in the class that I use:
public function content($search) {
    $this->dbconnection();
    $search = mysql_real_escape_string($search);
    $q = mysql_query("SELECT title FROM xx WHERE title LIKE '%$search%'");
    $s = mysql_fetch_array($q);
    foreach ($s as $key => $val) {
        $$key = $val;
    }
}
and on index.php file
$cl= new Class;
$test = $cl->content('cloud');
And the question is:
how can I echo the variable which I set inside the foreach loop?
$test->title; // this is not working for sure, but it explains what I am trying to do.
A:
Return the array as an object:
public function content($search)
{
    $this->dbconnection();
    $search = mysql_real_escape_string($search);
    $q = mysql_query("SELECT title FROM xx WHERE title LIKE '%$search%'");
    $s = mysql_fetch_assoc($q);
    return (object)$s;
}
Then in your code:
$cl= new Class;
$content = $cl->content('cloud');
echo $content->title;
Alternatively, you could fetch the row as an object by using mysql_fetch_object
| {
"pile_set_name": "StackExchange"
} |
Q:
Unable to figure out why firebase hosting is not using http 2
I've set up a fresh hosting project (not using any custom domain at the moment) and split up some of my js files expecting them to be served via http2 (as described in firebase blog posts it should be enabled by default?) However protocol still shows up as http/1.1. Am I missing something? Do I need to add entry in my config files to force http2?
DEMO: https://asimetriq-com.firebaseapp.com/
A:
Works for me, see attached screenshot.
It may mean that you have some transparent proxy that does not support HTTP/2 in the network hops from your client to the server.
Also, from time to time, browsers may downgrade the protocol they are using in order to collect statistics about protocol performance and compare them.
| {
"pile_set_name": "StackExchange"
} |
Q:
Fill all options of an Enum into Nullable array
I have an Enum that needs to go into an Array that is Nullable:
StatusType?[] statusTypes = null; //Array to fill
I tried to fill the statusTypes like this:
statusTypes = (StatusType?[])Enum.GetValues(typeof(StatusType))
I receive the following error:
System.InvalidCastException: 'Unable to cast object of type 'x.Entities.Enums.StatusType[]' to type 'System.Nullable`1[x.Entities.Enums.StatusType][]'.'
How can I fill the Enum?[] with Enum[]? I'm sure it is an easy fix, but I have been stumped on it for a bit now. Thank you.
A:
Use LINQ's Cast<T>() extension method and provide the target type:
statusTypes = Enum.GetValues(typeof(StatusType)).Cast<StatusType?>().ToArray();
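The direct cast fails because StatusType[] and StatusType?[] are arrays of different element types (Nullable<StatusType> is a distinct value type, and array covariance only applies to reference types), so each element has to be converted individually, which Cast<StatusType?>() does before ToArray() builds the new array.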
| {
"pile_set_name": "StackExchange"
} |
Q:
How to include the full property name using a custom htmlhelper from within an editor template
I have a registration form into which I am trying to place an Html.EditorFor().
EG:
@ModelType RegisterViewModel
<othermarkup />
<div>
@Html.LabelFor(Function(m) m.ConfirmPassword, New With {.class = "required"})
<div>
@Html.PasswordFor(Function(m) m.ConfirmPassword)
@Html.ValidationMessageFor(Function(m) m.ConfirmPassword)
</div>
</div>
<div>
@Html.EditorFor(Function(m) m.AlertSettings)
</div>
<othermarkup />
The RegisterViewModel contains a property called AlertSettings (of type AlertSettingsViewModel) and the Editor Template for AlertSettingsViewModel.vbhtml looks something like this:
@ModelType AlertSettingsViewModel
<othermarkup />
<div class="panel-heading">Personalized Alert Settings</div>
<div class="panel-body">
<div class="col-sm-6">
<div>
<label for="Disciplines">Disciplines</label>
@Html.ValidationMessageFor(Function(m) m.Disciplines)
@Html.CheckBoxListFor(Function(m) m.Disciplines, SelectLists.Disciplines())
</div>
<div class="visible-xs"><br /></div>
</div>
</div>
<othermarkup />
The Disciplines property of AlertSettingsViewModel is an Integer array.
I'm using a custom HtmlHelper to generate the checkbox list. It is composed of the following two methods (the first calls the second):
''' <summary>
''' Returns a checkbox for each of the provided <paramref name="items"/>.
''' </summary>
<System.Runtime.CompilerServices.Extension>
Public Function CheckBoxListFor(Of TModel, TValue)(htmlHelper As HtmlHelper(Of TModel), expression As Expression(Of Func(Of TModel, TValue)), items As IEnumerable(Of SelectListItem), Optional htmlAttributes As Object = Nothing) As MvcHtmlString
Dim listName = ExpressionHelper.GetExpressionText(expression)
Dim metaData = ModelMetadata.FromLambdaExpression(expression, htmlHelper.ViewData)
items = GetCheckboxListWithDefaultValues(metaData.Model, items)
Return htmlHelper.CheckBoxList(listName, items, htmlAttributes)
End Function
''' <summary>
''' Returns a checkbox for each of the provided <paramref name="items"/>.
''' </summary>
<System.Runtime.CompilerServices.Extension>
Public Function CheckBoxList(htmlHelper As HtmlHelper, listName As String, items As IEnumerable(Of SelectListItem), Optional htmlAttributes As Object = Nothing) As MvcHtmlString
Dim Result As String = ""
For Each item In items
Dim div = New TagBuilder("div")
div.MergeAttributes(New RouteValueDictionary(htmlAttributes), True)
div.MergeAttribute("class", "checkbox") ' default class
Dim label = New TagBuilder("label")
Dim cb = New TagBuilder("input")
cb.MergeAttribute("type", "checkbox")
cb.MergeAttribute("name", listName)
cb.MergeAttribute("value", If(item.Value, item.Text))
If item.Selected Then
cb.MergeAttribute("checked", "checked")
End If
label.InnerHtml = cb.ToString(TagRenderMode.SelfClosing) & item.Text
div.InnerHtml = label.ToString()
Result &= div.ToString()
Next
Return New MvcHtmlString(Result)
End Function
The problem I am having is that when the editor template is called, the eventual call to ExpressionHelper.GetExpressionText(expression) returns only the immediate property name (eg. "Disciplines") instead of the full name in context (eg. "AlertSettings.Disciplines").
How can I get the full name instead of just the immediate name? This is my first attempt at using editor templates and I'm having some difficulty finding examples for this particular use case.
A:
I was able to rectify the issue by replacing:
Dim listName = ExpressionHelper.GetExpressionText(expression)
with:
Dim listName = htmlHelper.NameFor(expression).ToString
In the CheckboxListFor method.
Hopefully this will be of some use to someone else.
| {
"pile_set_name": "StackExchange"
} |
Q:
Discharging capacitors with a multi-meter in diode mode
Recently I encountered this "trick" where you use a multi-meter in diode test mode to discharge a large electrolytic capacitor. I feel that this cannot be a good practice, but the people who were doing insisted that they do it all the time and have never damaged their Fluke meters in the process.
The voltage in the system is max of around 50V and the total capacitance is around 1600 uF. The exact process is that you put the multi-meter in diode test mode, then connect positive to negative and negative to positive. The meter beeps continuously until the voltage is near zero.
So my question is, is this an industry practice? Can it be considered safe for quality multi-meters such as Fluke?
I definitely wouldn't consider it to be safe for cheap meters. But for all I know, Fluke knows their customers will do this and designs the meters accordingly.
A:
It's hard to say if it will damage the meter without having a schematic.
Most DMMs in diode test mode act as a current source (500uA-1mA). Connecting the charged cap "in reverse" simply makes the current source circuit act as a current limiter.
If the cap is charged at a voltage which is higher than the compliance of the current source, the latter may get damaged. Otherwise it may get away with it.
I don't think the compliance of the current source is anywhere near 50V though. Since most diode testers cannot light a blue LED, probably it is around 3-4V.
Of course a quality meter will have other protection circuits which may kick in when you apply a higher voltage, but then you are probably stressing them. Fluke DMMs are quite robust; they might survive a 240Vac line applied to the voltage input terminals without dying, even when the DMM is switched to non-voltage functions.
EDIT (to reinforce my last statements)
A good DMM, like say my Fluke 87V, will have the protection ratings specified. For example look at this table from the user manual (emphasis mine):
So it appears that, for that specific model, you are safe up to 1000Vrms.
Of course this doesn't mean that doing that does no harm to the DMM whatsoever. It depends which kind of circuitry provides the protection.
For example, if MOVs are used, they absorb the energy of the overloading pulse and get a little bit of damage themselves. More energy, more damage. So they get "used up" in the long run.
If, as another example, TVS are used, which are Zener diodes basically, they won't be damaged if the overload pulse is low energy (depending on their characteristics), but they heat up, and repeated overloads may get them damaged if they have not the time to cool down or reach thermal equilibrium at a safe temperature.
A DMM could have quite a complex protection circuitry, comprising MOVs, TVS diodes, resistors, PTCs, etc. As I said initially, without knowing the circuit specs, it is quite hard to say if you are completely safe, or you are slowly degrading the performance of the protection circuitry.
This is quite important for CAT-III or CAT-IV instruments that can actually be used on high energy equipment. So I wouldn't want to use those DMMs on high energy circuits unless I was sure that your cap discharging strategy wasn't detrimental to the protection cirtuitry.
If you use them only on low energy stuff, ... meh! ..., you may not care if after a zillion discharges the DMM blows up, provided the user doesn't!
| {
"pile_set_name": "StackExchange"
} |
Q:
How to use Url Helper in splited js/coffee files
When I write javascript code in ASP.NET MVC views, I can use @Url.Content() to generate a reference path.
Recently, I've been playing with CoffeeScript and using Mindscape Web Workbench to generate js files in Visual Studio.
However, when I split all my js into multiple files, I can't use URL helpers, so I must hard-code URL paths like '/Dashboard/User/12' into the coffee files.
Is there any workaround so that I can use URL helpers in split js/coffee files?
A:
You can use HTML5 data-* attributes on HTML elements and access them from your js file.
Html
<li class='elem' data-url='example.com'>something</li>
Javascript
$('.elem').data('url') // return example.com
That is the idea of unobtrusive JavaScript: put the needed information in the HTML document without putting JavaScript code in it.
If you're using HTML helpers and want to use data-* attributes, check this out.
| {
"pile_set_name": "StackExchange"
} |
Q:
Decomposing HTML to link text and target
Given an HTML link like
<a href="urltxt" class="someclass" close="true">texttxt</a>
how can I isolate the url and the text?
Updates
I'm using Beautiful Soup, and am unable to figure out how to do that.
I did
soup = BeautifulSoup.BeautifulSoup(urllib.urlopen(url))
links = soup.findAll('a')
for link in links:
    print "link content:", link.content, " and attr:", link.attrs
i get
link content: None and attr: [(u'href', u'_redirectGeneric.asp?genericURL=/root /support.asp')] ...
...
Why am i missing the content?
edit: elaborated on 'stuck' as advised :)
A:
Use Beautiful Soup. Doing it yourself is harder than it looks, you'll be better off using a tried and tested module.
EDIT:
I think you want:
soup = BeautifulSoup.BeautifulSoup(urllib.urlopen(url).read())
By the way, it's a bad idea to try opening the URL there, as if it goes wrong it could get ugly.
EDIT 2:
This should show you all the links in a page:
import urlparse, urllib
from BeautifulSoup import BeautifulSoup

url = "http://www.example.com/index.html"
source = urllib.urlopen(url).read()
soup = BeautifulSoup(source)

for item in soup.findAll('a'):
    try:
        link = urlparse.urlparse(item['href'].lower())
    except:
        # Not a valid link
        pass
    else:
        print link
A:
Here's a code example, showing getting the attributes and contents of the links:
soup = BeautifulSoup.BeautifulSoup(urllib.urlopen(url))
for link in soup.findAll('a'):
    print link.attrs, link.contents
A:
Looks like you have two issues there:
link.contents, not link.content
attrs is a dictionary, not a string. It holds key value pairs for each attribute in an HTML element. link.attrs['href'] will get you what you appear to be looking for, but you'd want to wrap that in a check in case you come across an a tag without an href attribute.
| {
"pile_set_name": "StackExchange"
} |
Q:
Difference between Facade and Mediator Design pattern?
What is the difference between the Facade and Mediator design patterns? I want to understand which design pattern to choose between these two in which scenario. I was going through the following links and found both the same in terms of use case.
Facade design pattern : http://www.tutorialspoint.com/design_pattern/facade_pattern.htm
Mediator design pattern :
http://www.java2s.com/Tutorial/Java/0460__Design-Pattern/CoordinatingYourObjectswiththeMediatorPatterns.htm
I have confusion in following code segment which looks similar in both the design patterns.
Facade class:
public class ShapeMaker {
    private Shape circle;
    private Shape rectangle;
    private Shape square;

    public ShapeMaker() {
        circle = new Circle();
        rectangle = new Rectangle();
        square = new Square();
    }

    public void drawCircle(){
        circle.draw();
    }
    public void drawRectangle(){
        rectangle.draw();
    }
    public void drawSquare(){
        square.draw();
    }
}
Mediator class :
public class Mediator {
    Welcome welcome;
    Browse browse;
    Purchase purchase;
    Exit exit;

    public Mediator() {
        welcome = new Welcome(this);
        browse = new Browse(this);
        purchase = new Purchase(this);
        exit = new Exit(this);
    }

    public void handle(String state) {
        if (state.equals("welcome.shop")) {
            browse.execute();
        } else if (state.equals("shop.purchase")) {
            purchase.execute();
        } else if (state.equals("purchase.exit")) {
            exit.execute();
        }
    }
}
A:
The facade exposes existing functionality and the mediator adds to the existing functionality.
If you look at the facade example, you will see that you are not adding any new functionality, just giving the current objects a new perspective. For example, Circle already exists and you are just abstracting off circle, using the method drawCircle.
If you look at your mediator class, you see that the method handle() provides additional functionality by checking the state. If you were to take out the conditions, you would have a facade pattern, since the additional functionality is gone.
A:
The facade pattern gives you a simple interface which interacts with a set of coherent classes. For example, a remote control for your house which controls all kinds of equipment in your house would be a facade. You just interact with the remote control, and the remote control figures out which device should respond and what signal to send.
The mediator pattern takes care of communication between two objects, without the two objects needing to have a reference to each other directly. A real life example is sending a letter: you post your letter and the postal service picks it up and makes sure that it will be delivered to the recipient, without you telling them what route they should take. That is what the mediator does.
Your examples however sound more like a creational pattern (looks like a factory) and a behavioral pattern (state pattern). I understand your confusion.
A:
I have confusion in following code segment which looks similar in both the design patterns.
I think you're seeing the composition aspects of both patterns.
Facade links to various existing classes of a subsystem to add some typical functionality that simplifies use of the subsystem. In the example code you cited, ShapeMaker provides services that facilitate making shapes.
Mediator links to various colleagues that have to collaborate, so as to minimize the knowledge the colleagues have about each other. Minimizing knowledge has the side effect of reducing coupling between colleagues (they only know the mediator) and increasing their cohesion (they generally have less to worry about since they don't know about the bigger picture).
In both patterns, the centralized class assumes responsibility for the complexity of dealing with the classes it is linked to.
Here are the basic patterns in UML from the Gang of Four:
| {
"pile_set_name": "StackExchange"
} |
Q:
manually calling click() on a button, can I pass any parameters?
I am manually calling .click() on a button on a page in my jquery/javascript code.
I need to pass a parameter to click that I can then read on the function that responds to the click event.
is this possible?
A:
You need to invoke .trigger(). You can pass any number of arguments there.
$('element').trigger('click', [arg1, arg2, ...]);
These extra parameters are then passed into the event handler:
$('element').bind('click', function(event, arg1, arg2, ...) {
});
Reference: .trigger()
| {
"pile_set_name": "StackExchange"
} |
Q:
Matlab figure print with no axes box
I'm trying to print a figure with the mapping toolbox.
When I print my figure, it always shows a black axes box, although it is not visible in the Matlab figure itself.
This code reproduces the problem:
f = figure;
f.Position = [f.Position(1:2) 765 421];
ax = axesm('MapProjection','robinson',...
'MapLatLimit',[-90 90],'MapLonLimit',[-180 180],....
'Frame','on','Grid','on');
ax.XColor = 'w';
ax.YColor = 'w';
tightmap
print('test','-dpng','-r150')
This is my test.png file with the black axes box:
This is a screenshot from my Matlab figure:
EDIT: adding a box off removed the top and right line
EDIT2: adding a ax.Visible = false; worked
A:
I figured it out.
Adding an ax.Visible = false; did it
| {
"pile_set_name": "StackExchange"
} |
Q:
Javascript Date for the Second Monday of the month
I am working with a group that meets the second Monday of the month, and they want their site to reflect the NEXT meeting date. I have the script to show this month's second Monday, but I am having trouble with the if/else statement. I need it to reflect the next upcoming event and not just this month's date. For example, this month's event date was Aug 13 2012, which is past the current date (Aug 21 2012). I would like it to move to the next available date, Sept 10 2012. Below is the code I have so far.
<script type="text/javascript">
Date.prototype.x = function () {
var d = new Date (this.getFullYear(), this.getMonth(), 1, 0, 0, 0)
d.setDate (d.getDate() + 15 - d.getDay())
return d
}
Date.prototype.getSecondMonday = function () {
var d = new Date (this.getFullYear(), 1, 1, 0, 0, 0)
d.setMonth(this.getMonth()+1)
d.setDate (d.getDate() + 15 - d.getDay())
return d
}
var today = new Date()
var todayDate = today.toDateString()
if (Date.prototype.x>todayDate)
{
document.write (new Date().x().toDateString());
}
else
{
document.write (new Date().getSecondMonday().toDateString());
}
</script>
A:
If the date of the second Monday of the current month is less than the current date,
call the function on the first of the next month.
Date.prototype.nextSecondMonday= function(){
var temp= new Date(this), d= temp.getDate(), n= 1;
while(temp.getDay()!= 1) temp.setDate(++n);
temp.setDate(n+7);
if(d>temp.getDate()){
temp.setMonth(temp.getMonth()+1, 1);
return temp.nextSecondMonday();
}
return temp.toLocaleDateString();
}
/* tests
var x= new Date(2012, 7, 22);
x.nextSecondMonday()
Monday, September 10, 2012
var x= new Date(2012, 7, 12);
x.nextSecondMonday()
Monday, August 13, 2012
*/
| {
"pile_set_name": "StackExchange"
} |
Q:
Change the parameters from a method
I am new to programming in Java and I hope I have chosen the right title.
First my Code:
public class main
{
public static void main(String args[])
{
SysOutSleep sos = new SysOutSleep("Test", 450, 3 );//set the value
Thread t = new Thread(sos);
t.start();
//here i want to change the parameters from sos
//they should be something like that ("Test2", 390, 1)
//and after that i start the thread again with the new parameters
t.start();
}
}
So how can I change them? Thank you in advance :)
A:
You can't start the same Thread instance twice, which means you'll have to create a new Thread:
SysOutSleep sos = new SysOutSleep("Test", 450, 3);
Thread t = new Thread(sos);
t.start();
sos = new SysOutSleep("Test2", 390, 1);
t = new Thread(sos);
t.start();
| {
"pile_set_name": "StackExchange"
} |
Q:
How to Customize an individual Field in Symfony2
This is my ArticleType.php
class ArticleType extends AbstractType
{
/**
* @param FormBuilderInterface $builder
* @param array $options
*/
public function buildForm(FormBuilderInterface $builder, array $options)
{
$builder
->add('title')
->add('category')
->add('tags')
->add('cover')
->add('is_recommend', null, array('attr'=>array('require' => true)))
->add('description')
;
}
how to customize filed "cover" in twig? this is my code in twig
<div class="col-xs-12">
<!-- PAGE CONTENT BEGINS -->
{% form_theme edit_form with 'AppsAdminBundle:Form:fields.html.twig'%}
{{ form_start(edit_form) }}
{{ form_errors(edit_form) }}
{% block _article_cover_widget %}
<div class="text_widget">
{{ block('form_widget_simple') }}
<a href="">上传文件</a>
</div>
{% endblock %}
{{ form_end(edit_form) }}
</div><!-- /.col -->
I want to customize the field cover in this Twig template, but I don't know why it does not work. I hope to get help, thanks a lot!
A:
Check docs to customize form rendering.
If you want to manage each field rendering separately, you can do it this way too:
<div class="col-xs-12">
<!-- PAGE CONTENT BEGINS -->
{% form_theme edit_form with 'AppsAdminBundle:Form:fields.html.twig'%}
{{ form_start(edit_form) }}
{{ form_errors(edit_form) }}
{{ form_row(edit_form.title) }}
{{ form_row(edit_form.category) }}
{{ form_row(edit_form.tags) }}
<div class="text_widget">
{{ form_row(edit_form.cover) }}
<a href="">上传文件</a>
</div>
{{ form_row(edit_form.is_recommend) }}
{{ form_row(edit_form.description) }}
{{ form_end(edit_form) }}
</div>
or, simply:
<div class="col-xs-12">
<!-- PAGE CONTENT BEGINS -->
{% form_theme edit_form with 'AppsAdminBundle:Form:fields.html.twig'%}
<div class="text_widget">
{{ form_row(edit_form.cover) }}
<a href="">上传文件</a>
</div>
{{ form_rest(edit_form) }}
{{ form_end(edit_form) }}
</div>
| {
"pile_set_name": "StackExchange"
} |
Q:
Migrating a git-svn branch to a git branch
My project (which is rather large, at ~2 million lines of code and tens of thousands of commits) is currently switching from git-svn to git. Many users have git-svn branches with history that would be nice to have in the new pure git repository. The basic scenario is like this
The old git-svn repo:
A-B-C-D-E-F (master)
I have a branch on this repository that has merged master in several times, eg:
G-H---I-J-K (feature)
/ /
A-B-C-D-E-F (master)
I want to move this branch to the new pure-git repository and maintain my history. To make things more complex, the directory structure of the pure-git repository and svn repository is slightly different. Specifically, the base directory structure in this repository has two directories, e.g:
foo/
bar/
In the new git repository, bar/ has been moved to a new repository.
How can I move this branch to the new repository and end up with something like this in the pure-git repo?
G'-H'----I'-J'-K' (feature)
/ /
A'-B'-C'-D'-E'-F' (master)
I thought the following would work:
From the feature branch on the git-svn repo:
git filter-branch -f --index-filter "git rm -rf --cached --ignore-unmatch bar" B..HEAD
Which should remove all modifications to the bar directory, which is non-existent in the new repo.
Then, add the git-svn repo as a remote for the pure git repo and do this from the pure-git repo:
git checkout -b B' feature
git rebase --preserve-merges --onto feature remotes/old_git_svn_repo/master remotes/old_git_svn_repo/feature
Unfortunately, this doesn't seem to work. I'm still required to manually resolve all of the merge-conflicts that I've already resolved in my feature branch. Is there a way to do what I want?
A:
Just as you did, the first thing you should run is a filter-branch --index-filter to fix the tree structure of your branch. But now you need to fix the parent-relationships. Depending on how many merges there have been, there are two options:
Do it manually: have a look at git help replace. git replace allows you to replace one commit with another. Do this for every merge-base. For example:
git replace B B'
git replace E E'
Beware that if you don't run filter-branch on master..feature, B might already have a different SHA.
After this step, you will need to run filter-branch again to make the replacements permanent. I would recommend doing the replacements first and then running the --index-filter. This way you get it all in one run.
Do it with a --parent-filter. Have a look at the --parent-filter argument for filter-branch. It will allow you to specify a script to rewrite parent relationships. That script can then use the git-svn-id: that should be recorded in every ported commit message to find the matching commits in the new master.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to make custom widget/component in flutter?
Let's say I want a container with this style: rounded shape and with a border.
Should I create a theme for Container?
Or should I create my custom widget/component?
My main concern here is not to repeat everything, so I'm thinking about these 2 possibilities.
Which one more recommended?
Kind Regards
And why people down voted my question. I really don't know :(
A:
You have to create your own widget class, which extends a Widget.
It can be StatelessWidget
class MyWidget extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    // ... return your container here
  }
}
or StatefulWidget
class MyWidget extends StatefulWidget {
  MyWidget(this.child);
  final Widget child;

  @override
  State<StatefulWidget> createState() => _MyWidgetState();
}

class _MyWidgetState extends State<MyWidget> {
  @override
  Widget build(BuildContext context) {
    // ... return your container here, e.g.
    return Container(child: widget.child);
  }
}
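For the rounded shape with a border from the question, the build method could return something like this (a minimal sketch; the widget name, colors, radius and padding are placeholder values, and the usual material import is assumed):

class RoundedBorderBox extends StatelessWidget {
  RoundedBorderBox({this.child});
  final Widget child;

  @override
  Widget build(BuildContext context) {
    // Rounded container with a border; tweak the values as needed.
    return Container(
      padding: EdgeInsets.all(8.0),
      decoration: BoxDecoration(
        border: Border.all(color: Colors.grey),
        borderRadius: BorderRadius.circular(12.0),
      ),
      child: child,
    );
  }
}

You can then reuse it anywhere, e.g. RoundedBorderBox(child: Text('hello')).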
| {
"pile_set_name": "StackExchange"
} |
Q:
Why does Git not default to "origin master"?
I tried many times doing a
git pull
and it says the branch is not specified. So the answer in How do you get git to always pull from a specific branch? seems to work.
But I wonder, why doesn't Git default to master branch? If nothing is specified, doesn't it make sense to mean the master branch? Is Git trying to live up to its name - "a git" = an unpleasant person, so that when you ask what time it is, it always tells you, "since you didn't say if you want to know the local time or Greenwich Mean Time, abort."
A:
git tries to use sensible defaults for git pull based on the branch that you're currently on. If you get the error you refer to from git pull while you're on master, that means you haven't configured an upstream branch for master. In many situations this will be configured already - for example, when you clone from a repository, git clone will set up the origin remote to point to that repository and set up master with origin/master as upstream, so git pull will Just Work™.
However, if you want to do that configuration by hand, you can do so with:
git branch --set-upstream master origin/master
... although as of git 1.8.0 you should use --set-upstream-to, since the former usage is now deprecated due to being confusingly error-prone. So, for git 1.8.0 and later you should do:
git branch --set-upstream-to origin/master
Or you could likewise set up the appropriate configuration when pushing with:
git push -u origin master
... and git pull will do what you want.
A:
As mentioned above, git clone sets all the defaults appropriately.
I've found pushing with the -u option to be the easiest to set things up with an existent repo.
git push -u origin master
You can check what has been remotely configured using
git remote -v
See git help remote for more information on this.
| {
"pile_set_name": "StackExchange"
} |
Q:
The description about sorting in Model/View Qt document maybe wrong?
In Qt document online Model/View Programming, it's said that If your model is sortable, i.e, if it reimplements the QAbstractItemModel::sort() function, both QTableView and QTreeView provide an API that allows you to sort your model data programmatically. In addition, you can enable interactive sorting (i.e. allowing the users to sort the data by clicking the view's headers), by connecting the QHeaderView::sortIndicatorChanged() signal to the QTableView::sortByColumn() slot or the QTreeView::sortByColumn() slot, respectively.
However, firstly the QTableView::sortByColumn() is not a slot, so one cannot connect a signal to it; secondly, the code of QTableView::sortByColumn() is something like
d->header->setSortIndicator(column, order);
//If sorting is not enabled, force to sort now.
if (!d->sortingEnabled)
d->model->sort(column, order);
and QHeaderView::setSortIndicator() function emit sortIndicatorChanged(logicalIndex, order). But if one uses function setSortingEnabled(true), the signal sortIndicatorChanged(logicalIndex, order) can also be emitted automatically by the view header when click the header column of the view.
So maybe the right way is to make a slot to receive the signal sortIndicatorChanged(logicalIndex, order) and, in the slot, call the overridden virtual function sort() of the model?
A:
Sort the tree view by clicking a column.
Set the view so that it can be sorted by clicking the "header":
treeView_->setSortingEnabled(true);
Connect the header signal to a slot made by you.
connect(headerView, SIGNAL(sortIndicatorChanged(int, Qt::SortOrder)),
treeModel_, SLOT(sortByColumn(int, Qt::SortOrder)));
In the slot, call the sort() virtual function of the model. sort() is a virtual function of QAbstractItemModel, and one should override it.
void TreeModel::sortByColumn(int column, Qt::SortOrder order)
{
sort(column, order);
}
Override the sort() function as your model should do.
emit dataChanged(QModelIndex(), QModelIndex()); from a model to update the whole tree view.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is there a "Code Complete" book for Ruby?
I began reading the book "Code Complete" 2nd edition, but stopped reading when I noticed most of the solutions were easily solvable in Ruby with Ruby idioms. Is there a similar book for Ruby?
Here's the version that I started reading:
http://www.amazon.com/Code-Complete-Practical-Handbook-Construction/dp/0735619670
A:
Pick that book back up and start where you left off. As someone who read the first edition and these days writes Ruby for a living, I can confidently say that the lessons of Code Complete are universal. The wisdom in that book about good code construction - quality naming, decoupling, how to structure a function, etc. - will stand any programmer in good stead. I still refer to my dog-eared first edition regularly.
As far as books on practices which pertain more specifically to Ruby: first, every programmer should have a copy of "The Ruby Way, 2nd Edition", by Hal Fulton, on his or her desk. Second, while I haven't read it yet, I have heard good things about "Ruby Best Practices".
A:
For such an old programming language (well, maybe not "Lisp" old, but Ruby is older than Java, after all) and a community that fanatic about code quality, style and beauty, the Ruby book market is surprisingly light on "higher level" books.
There are several reasons for this: a lot of Rubyists are already seasoned Smalltalk or Lisp programmers when they pick up Ruby, so they don't need those books. And Ruby and Smalltalk are similar enough that you can just read a Smalltalk book instead. (For example, you can pretty much read Stéphane Ducasse's list of free Smalltalk books from top to bottom.) The same goes for Perl.
Until very recently, the best higher level Ruby book was basically Smalltalk Best Practice Patterns by Kent Beck. (There is a newer version of this book, called Implementation Patterns which uses Java instead of Smalltalk. However, since Kent's Java style is very Smalltalk-ish, it is essentially SBPP, 2nd Ed.) However, since then, several higher level books have been released or are in the works:
Design Patterns in Ruby by Russ Olsen,
Ruby Best Practices by Gregory T. Brown,
Refactoring: Ruby Edition by Jay Fields, Shane Harvie, Martin Fowler and Kent Beck (a re-interpretation of Fowler's book in Ruby) and
Rein Henrichs is (or at least was, as of last year) working on Ruby Best Practice Patterns.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is there a fast way to count occurrences of items in a matrix and save them in another matrix without using loops?
I have a time-series matrix X whose first column contains user ID and second column contains the item ID they used at different times:
X=[1 4
2 1
4 2
2 3
3 4
1 1
4 2
5 3
2 1
4 2
5 4];
I want to find out which user used which item how many times, and save it in a matrix Y. The rows of Y represent users in ascending order of ID, and the columns represent items in ascending order of ID:
Y=[1 0 0 1
2 0 1 0
0 0 0 1
0 3 0 0
0 0 1 1]
The code I use to find matrix Y uses 2 for loops which is unwieldy for my large data:
no_of_users = size(unique(X(:,1)),1);
no_of_items = size(unique(X(:,2)),1);
users = unique(X(:,1));
Y = zeros(no_of_users,no_of_items);
for a = 1:size(X,1)
    for b = 1:no_of_users
        if X(a,1) == users(b,1)
            Y(b,X(a,2)) = Y(b,X(a,2)) + 1;
        end
    end
end
A:
sparse creates a sparse matrix from row/column indices, conveniently accumulating the number of occurrences if you give a scalar value of 1. Just convert to a full matrix.
Y = full(sparse(X(:,1), X(:,2), 1))
Y =
1 0 0 1
2 0 1 0
0 0 0 1
0 3 0 0
0 0 1 1
But it's probably quicker to just use accumarray as suggested in the comments:
>> Y2 = accumarray(X, 1)
Y2 =
1 0 0 1
2 0 1 0
0 0 0 1
0 3 0 0
0 0 1 1
(In Octave, sparse seems to take about 50% longer than accumarray.)
| {
"pile_set_name": "StackExchange"
} |
Q:
Why aren't mini-dumps being created by Windows?
I have an x64-platform application running on Windows 8.1 x64... it crashes from time to time (it's a multibyte COM+ object hosted in a Windows service). I wanted to get the OS to write mini-dump files whenever an exception happens, so I set the following keys up in my registry:
Yet when a crash does occur, I see nothing in %LOCALAPPDATA%\CrashDumps. Why is this happening? Is it because the service is running under the Local System account?
A:
As it turns out, the dumps were being created. They were being created in C:\Windows\System32\%LOCALAPPDATA%\CrashDumps. This is because %LOCALAPPDATA% maps contextually under the context of a user account. If you use the Local System account for a service, this doesn't translate to anything...so it just appends to the default path of Local System which is C:\Windows\System32. Kind of a funny way to handle this case, M$...
| {
"pile_set_name": "StackExchange"
} |
Q:
what type of rendering engine is cycles
I don't know what type of rendering engine Cycles is. Is it path tracing, ray tracing, radiosity, ray casting, photon mapping? I would like to know.
A:
Cycles is a ray tracing renderer.
Can use different tracing strategies (under integrator option):
Path tracing: pure path tracer.
Branched path tracing: path tracer with branching on the first bounce.
It uses Monte Carlo for sampling, which means repeated random sampling.
Soon Metropolis Light Transport should also be available, which is a variant of Monte Carlo; it uses bidirectional path tracing and Metropolis sampling.
| {
"pile_set_name": "StackExchange"
} |
Q:
Rails Console "sharing" CTRL + C
When I open two Rails consoles and press CTRL+C in one of them, it will be sent to both consoles.
Why is that and how can I prevent this?
(Rails 4.2.0)
A:
Rails 4.1 introduced Spring, which speeds up the booting process of some Rails' components (like the console).
Each console is now trying to reach the spring server to check whether or not an existing Rails app is already running. If it finds one, it does a "warm run" as there's no need to boot the app.
Hitting Ctrl+C sends the SIGINT signal to Spring (and you can see ^C on all your terminals running a console connected to that server), but Spring ignores it to avoid killing the master server.
AFAICT from this analysis, there's nothing you can do.
| {
"pile_set_name": "StackExchange"
} |
Q:
access to attribute through attrs.val or attrs.$set(attname, val)
What is the difference between the 2 ways of setting attributes from a directive (or other places)?
(environment):
angular.module('module', [])
.directive('directive', [ function () {
return {
restrict: 'A',
scope: true,
link: function (scope, element, attrs) {
...
between:
attrs.skipWatchValue = true;
and
attrs.$set( 'skip-watch-value', true );
(it seems that the second one doesn't work at all now...)
A:
There is a single difference between the two syntaxes: writing attrs.$set( 'skip-watch-value', true ); will also modify the DOM element (you can see it by inspecting the element) and set the value, while attrs.skipWatchValue = true; will not modify the DOM element.
| {
"pile_set_name": "StackExchange"
} |
Q:
reloadData for a string
I have this in one view:
-(void)viewWillDisappear:(BOOL)animated
{
RootViewController *rVC = [[RootViewController alloc] initWithNibName:@"RootViewController" bundle:nil];
[rVC setMessage:[label text]];
NSLog(@"ihere - %@",rVC.message);
}
The NSLog returns the correct string. How would I reload the data in the RootViewController to update the string message there?
Doing this doesn't work in my RootViewController (which I go back to in the nav controller):
-(void)viewWillAppear
{ [[self message] reloadData]; }
because the message is just a string. Can somebody show me how to fix this please?
Hi, can someone else try to help me please?
In the viewWillAppear event, I need to call reloadData on an NSString. So I need to convert it somehow to an object before I can use reloadData on it.
A:
That's because NSString doesn't have a reloadData method.
And as it is immutable it wouldn't make sense if it did.
What you probably want to do is display your string in viewWillAppear and change the model property in the controller where it gets this from.
Delegation is the usual way to do this and I've written a couple of examples that might help you see what is happening;
DelegationExample
TableViewDelegation
| {
"pile_set_name": "StackExchange"
} |
Q:
prove a field is a divisible group
I saw a statement: every field of characteristic 0, with its underlying additive group structure, is divisible.
I met with troubles in checking the above statement.
Suppose $\operatorname{char}(F)=0$, for each $g\in F$ and for each $n$, how to find $h\in F$ such that $nh=g$
A:
The solution is:
$$h=(n 1)^{-1}\cdot g$$
Here "$1$" denotes the multiplicative identity of $F$, "$nx$" means "add $x$ to itself $n$ times" (as usual) and "$\cdot$" denotes the field multiplication. Note that when $\operatorname{char}(F)=0$ then $n 1$ is invertible in $F$ when $n$ is a positive natural.
The rest follows from the observation that for $x\in F$ and $n\in\mathbb{N}$ we have $nx=(n1)\cdot x$. And thus
$$nh=(n1)\cdot h=(n1)\cdot (n1)^{-1}\cdot g=g$$
| {
"pile_set_name": "StackExchange"
} |
Q:
Can't clone and commit with gitolite
I am having some trouble with git/gitolite on Windows Server 2003 (although I suspect the OS is not the source of the problem.)
If I do this:
git clone git@server:test.git
I can't clone:
Cloning into test...
git@server's password:
fatal: 'test.git' does not appear to be a git repository
fatal: The remote end hung up unexpectedly
If I do this:
git clone git@server:repositories/test.git
I can clone, but I can't commit:
git@server's password:
Counting objects: 3, done.
Writing objects: 100% (3/3), 229 bytes, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: ENV GL_RC not set
remote: BEGIN failed--compilation aborted at hooks/update line 20.
remote: error: hook declined to update refs/heads/master
To [email protected]:repositories/test.git
! [remote rejected] master -> master (hook declined)
error: failed to push some refs to 'git@server:repositories/test.git'
If I look at $REPO_BASE in my .gitolite.rc, I see:
# ------------------------------------------------------------------------------
# variables that should NOT be changed after the install step completes
# ------------------------------------------------------------------------------
$REPO_BASE="repositories";
Can anyone tell me what to do here? Reinstall gitolite? Change the variable? Is this not the cause of the problem?
A:
Josh,
The syntax you're using is only for users that did NOT perform the remote gitolite setup. (For this reason I recommend using the alternative admin setup described in the gitolite docs.) To clone repos as the admin user you have to use the git clone git@gitolite:repo.git syntax. You also have to use that syntax when adding remotes, but it will only work for your user.
Again, this can be avoided by using the admin/service based setup. http://sitaramc.github.com/gitolite/doc/1-INSTALL.html#_install_methods_and_deciding_which_one_to_use
| {
"pile_set_name": "StackExchange"
} |
Q:
How to set width of a custom validator
Does anybody know of a way to set the width of a custom validator so that the error message text will wrap if it exceeds the specified width?
I have a user control that contains a custom validator which the containing page can set the error message on based on specific validation results.
The user control sits within a table cell in a page.
If the message is very long it simply prints the entire message on a single line ignoring any column widths that are set.
Thanks for any insight.
EDIT:
I have tried setting the width property on the custom validator itself to no avail.
A:
If the user control is defined such that the error message appears inside a div, try to set the width of that div element.
If your control is in something like this
<asp:TableCell id="some1" runat="server">
<uc:yourControl id="uc1" runat="server" />
</asp:TableCell>
Modify the control to have a div or span element for the message.
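For example, a rough sketch (the 200px width is just a placeholder) where a fixed-width wrapper forces a long error message to wrap:
<asp:TableCell id="some1" runat="server">
    <div style="width: 200px;">
        <uc:yourControl id="uc1" runat="server" />
    </div>
</asp:TableCell>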
| {
"pile_set_name": "StackExchange"
} |
Q:
Own defined Layout , onDraw() method not getting called
I defined a class like that:
public class TestMyFrameLayout extends FrameLayout{
Paint mPaint;
public TestMyFrameLayout(Context context, AttributeSet attrs) {
super(context, attrs);
}
public TestMyFrameLayout(Context context) {
super(context);
mPaint = new Paint();
mPaint.setColor(Color.GREEN);
}
protected void onDraw(Canvas canvas) {
super.onDraw(canvas);
canvas.drawCircle(50f, 50f, 30, mPaint);
}
}
and called it like:
TestMyFrameLayout myFrameLayout = new TestMyFrameLayout(this);
LayoutParams myFrameLayoutParams = new LayoutParams(300,300);
myFrameLayout.setLayoutParams(myFrameLayoutParams);
setContentView(myFrameLayout);
But in fact the TestMyFrameLayout.onDraw(Canvas canvas) function is not getting called. Why?
A:
Solved. Add
this.setWillNotDraw(false);
in the constructor
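For reference, a sketch of the two-argument constructor with the fix applied (it also initializes mPaint, which the original attrs constructor leaves null):
public TestMyFrameLayout(Context context, AttributeSet attrs) {
    super(context, attrs);
    mPaint = new Paint();
    mPaint.setColor(Color.GREEN);
    // ViewGroups have willNotDraw enabled by default as an optimization,
    // so onDraw() is skipped unless this flag is cleared.
    this.setWillNotDraw(false);
}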
| {
"pile_set_name": "StackExchange"
} |
Q:
Perform javascript on html content from js file
This is a big headache.
I have a simple js file consisting of nothing but document.write() blocks of html content, that I need to perform jquery on.
To put things in perspective, the js file, included like this:
<script language="JavaScript" src="http://external-java-file.js"/>
dumps news in a batch of div's. The trouble is, I need to perform JavaScript on each separate article-div (to include it in a custom JavaScript "scroller"). Is there any way for me to first "hide" the entire block of news, and then add it to my scroller div by div?
Basically I have this from the js file:
<div class="newsContainer">
<div class="newsArticle">bla bla bla</div>
<div class="newsArticle">bla bla bla</div>
<div class="newsArticle">bla bla bla</div>
</div>
Since it's all coming from this stinking external js file through document.write, I can't access it with a seperate code block like this:
<script>
$(document).onload(function(){
$("div.newsContainer").css("display","none");
});
</script>
I'm pretty sure I'm at the end of the road, but I'd like to see if any smart minds have a genius solution.
A:
Try this:
HTML
<div id="news">
<script language="JavaScript" src="http://external-java-file.js"/>
</div>
JavaScript
$(function() {
$('#news .newsArticle').hide();
});
If the script is using document.write(), then you will be able to access its container, #news, once the page has loaded, and edit it that way.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to have a table column with only minutes and seconds
I have a database project in which I have to insert some music information. In one of my tables, I have a column in which I have to insert the track time of all the songs. For that, I was wondering if there is any function (similar to to_date()) that I can use in order to insert a minute:second format only.
I tried to use to_timestamp(). However, it always gives me an actual date, defaulting to the first day of the month in which I insert the data.
for example:
to_timestamp('9:10','MI:SS')
Result:
18-06-01, 00:9:10,0000000
PS: for the track time column, is it ok to define the datatype as TIMESTAMP?
A:
Oracle doesn't support a separate time data type.
I would suggest that you store the value as a number of seconds if you want to do arithmetic (such as adding up the values). If you just want to look at them, use a string format.
If you want to convert a number of seconds to minutes/seconds, you can use:
select floor(secs / 60) || ':' || lpad(mod(secs, 60), 2, '0')
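A small sketch of the idea, assuming a hypothetical tracks table with the length stored in a NUMBER column called track_secs:
CREATE TABLE tracks (
  title      VARCHAR2(100),
  track_secs NUMBER        -- store the raw number of seconds
);

SELECT title,
       floor(track_secs / 60) || ':' || lpad(mod(track_secs, 60), 2, '0') AS track_time
FROM tracks;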
| {
"pile_set_name": "StackExchange"
} |
Q:
CLOCK_MONOTONIC vs CLOCK_MONOTONIC_RAW truncated values
I am writing some test code where I need nanosecond resolution. When I use clock_gettime with CLOCK_MONOTONIC, I get a value I expect: 3327.874384321.
When I use clock_gettime with CLOCK_MONOTONIC_RAW, I get a value that I do not expect: 3327.875723000
I've run this in a loop, and ALL of the values returned have the nanosecond resolution "truncated": 000.
Output from uname -a: Linux raspberrypi 3.12.22+ #691 PREEMPT Wed Jun 18 18:29:58 BST 2014 armv6l GNU/Linux
Thoughts on what is happening? How to address?
I am currently considering disabling NTP so I can use CLOCK_MONOTONIC
A:
I think your conclusion that CLOCK_MONOTONIC_RAW is "truncated" is wrong. Rather, the resolution of the hardware clock source is probably just microseconds. The nonzero low digits you're seeing in CLOCK_MONOTONIC are because the timestamps from the hardware clock source are being scaled, per adjustments made via adjtime/NTP, to correct for imprecision in the hardware clock rate that would otherwise make it drift relative to real time.
To test this hypothesis, you should take a large number of timer samples with CLOCK_MONOTONIC and look for a pattern in the low digits. I suspect you'll find that all your timestamps differ by a multiple of some number of nanoseconds close to but not exactly 1000, e.g. maybe 995 or 1005 or so.
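A quick way to check this is a small C program that prints a few raw samples from both clocks so the low digits can be compared (a sketch; error handling omitted, and on older glibc you may need to link with -lrt):
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    for (int i = 0; i < 5; i++) {
        clock_gettime(CLOCK_MONOTONIC, &ts);
        printf("MONOTONIC      %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
        clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
        printf("MONOTONIC_RAW  %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
    }
    return 0;
}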
| {
"pile_set_name": "StackExchange"
} |
Q:
Ember Data 1.0 Error while loading route: TypeError: Cannot set property 'typeKey' of undefined
I am trying to visualize data from an internal Laravel PHP API. So my localhost:8000/api/orders outputs json that looks like the following:
[
{
"id": "2",
"topic": "yusdvhdsh",
"data": ""
},
{
"id": "3",
"topic": "praise",
"data": ""
}
]
Here is my app.js
App.Router.map(function() {
this.route('introduction');
this.route('orders');
this.route('faq');
});
App.Order = DS.Model.extend({
id: DS.attr('string')
, topic: DS.attr('string')
, data: DS.attr('string')
});
App.ApplicationAdapter = DS.RESTAdapter.extend({
namespace: 'api'
});
App.FaqRoute = Ember.Route.extend({
model: function() {
return this.store.find('order');
}
});
I defined the routes, the Order model, the FaqRoute and the adapter. I need to display a listing of the topics from this JSON data from localhost:8000/api/orders in the faq template that looks like the one below:
<script type="text/x-handlebars" data-template-name="faq">
<h5>faqs</h5>
{{#each model}}
{{topic}}
{{/each}}
</script>
But when I try to access localhost:8000/#/faq it does not display anything and I get the following error on my console:
"Assertion failed: Error while loading route: TypeError: Cannot set property 'typeKey' of undefined "
Let me know what I am doing wrong...
A:
Ember data expects the response like this
{
"orders": [
{
"id": "1",
"topic": "Rails is unagi",
"data": "Rails is unagi"
},
{
"id": "2",
"topic": "Omakase O_o",
"data": "Rails is unagi"
}]
}
Additionally you shouldn't define id on the class definition
App.Order = DS.Model.extend({
topic: DS.attr('string'),
data: DS.attr('string')
});
| {
"pile_set_name": "StackExchange"
} |
Q:
Cross-correlation of filtered random processes
I have a wide-sense-stationary (WSS) process $\{x(t)\}$ and two linear filters with impulse functions $h_1$ and $h_2$.
Let $\delta(\omega)$ be the power spectrum of $\{x(t)\}$ and $$H_1:\omega\mapsto H_1(\omega)$$ and $$H_2:\omega\mapsto H_2(\omega)$$ the transfer functions of the filters.
The outputs of the filters are denoted $$y_1(t)=(x \star h_1)(t)$$ and $$y_2(t)=(x \star h_2)(t),$$ where $\star$ denotes the convolution.
How can we compute the correlation of $\{y_1(t)\}$ and $\{y_2(t)\}$ and when are these two random variable uncorrelated?
A:
As an addition to Dilip's answer I'll show you how to derive that result:
$$\begin{align}R_{y_1,y_2}(\tau)&=E[y_1(t+\tau)y_2(t)]\\&=E\left[\int_{-\infty}^{\infty}x(\alpha)h_1(t+\tau-\alpha)d\alpha\int_{-\infty}^{\infty}x(\beta)h_2(t-\beta)d\beta\right]\\&=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}E[x(\alpha)x(\beta)]h_1(t+\tau-\alpha)h_2(t-\beta)d\alpha d\beta\\&=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}R_x(\alpha-\beta)h_1(t+\tau-\alpha)h_2(t-\beta)d\alpha d\beta\\&\stackrel{\gamma=\alpha-\beta}{=}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}R_x(\gamma)h_1(t+\tau-\alpha)h_2(t-\alpha+\gamma)d\alpha d\gamma\\&\stackrel{\zeta=\alpha-t}{=}\int_{-\infty}^{\infty}\underbrace{\int_{-\infty}^{\infty}R_x(\gamma)h_2(\gamma-\zeta)d\gamma}_{(R_x\star h_2^-)(\zeta)}\; h_1(\tau-\zeta) d\zeta\\&=(R_x\star h_1\star h_2^-)(\tau)\qquad\qquad\qquad (1)\end{align}$$
with $h_2^-(t)=h_2(-t)$.
The cross-spectral density $S_{y_1,y_2}(\omega)$ is the Fourier transform of the cross-correlation function:
$$S_{y_1,y_2}(j\omega)=S_x(j\omega)H_1(j\omega)H_2^*(j\omega)\tag{2}$$
where $S_x(j\omega)$ is the power spectral density of $x(t)$, and $H_1(j\omega)$ and $H_2(j\omega)$ are the frequency responses of the two filters.
Using $(2)$ it is straightforward to define a condition on $H_1(j\omega)$ and $H_2(j\omega)$ such that the cross-spectral density, and, consequently, the cross-correlation function become zero.
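For example, if the two filters have no overlapping passband on the support of $S_x(j\omega)$, i.e.
$$S_x(j\omega)H_1(j\omega)H_2^*(j\omega)=0\quad\text{for all }\omega,$$
then $(2)$ gives $S_{y_1,y_2}(j\omega)\equiv 0$, and hence $R_{y_1,y_2}(\tau)=0$ for all $\tau$.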
| {
"pile_set_name": "StackExchange"
} |
Q:
Silverlight Toolkit - Where is the AutoCompleteBox?
I just added the Silverlight 4 toolkit to my project via NuGet. (NuGet package "Silverlight Toolkit - All")
I can't find the AutoCompleteBox anywhere in the dlls added to my project. Where is it?
Things I've tried:
I cracked open all the dlls it added
to my project, and I don't see
AutoCompleteBox in any of them.
Looking at the Silverlight Toolkit
discussions, I don't see anyone mentioning its removal.
Looking at the Silverlight Toolkit
changeset history, I don't see it
being mentioned as removed.
I browsed the source, and I do see AutoCompleteBox in there, but it gets compiled into the dll System.Windows.Controls.Input.dll; when I add the package via NuGet, I don't get that dll, instead I get System.Windows.Controls.Input.Toolkit.dll
Where is the AutoCompleteBox in the Silverlight Toolkit?
A:
It's not in the Silverlight Toolkit any more. As of Silverlight 4 it moved to the SDK. You should be able to drag the AutoCompleteBox from your toolbox to the designer and VS will add the reference to System.Windows.Controls.Input.dll for you.
| {
"pile_set_name": "StackExchange"
} |
Q:
how to delete group from expandable list view
I am trying to get the primary key "_id" from the selected group using code
groupPositionID is a global variable initialized like this
and IngredientListGroup_cursor is a group cursor which fetches the group data for expanable list view.
String groupPositionID=null;
Cursor IngredientListGroup_cursor;
in the oncreate code:
public void onCreate(Bundle savedInstanceState) {
IngredientListGroup_cursor=helper.GetIngredientsList();
}
ExpandableIngredietnsList.setOnGroupClickListener(new OnGroupClickListener(){
@Override
public boolean onGroupClick(
ExpandableListView paramExpandableListView,
View paramView, int paramInt, long paramLong) {
// TODO Auto-generated method stub
groupPositionID=IngredientListGroup_cursor.getString(0);
Toast toast = Toast.makeText(getBaseContext(),groupPositionID ,Toast.LENGTH_LONG);
toast.show();
return false;
}
});
this is my SQLhelper function for deleting the selected group from database.
public Cursor GetIngredientsList(){
return(getReadableDatabase().rawQuery("SELECT _id,Ingredient_name FROM tblIngredients",null));
}
public Cursor DeleteIngredientsList_Item(String index){
String[] args={index};
return(getReadableDatabase().rawQuery("DELETE FROM tblIngredients WHERE _id=?",args));
}
The problem I am facing is that although I am getting the id for the selected group from the cursor (I am able to see it as a toast), I am not able to delete it from the context menu.
On selecting delete from the context menu, the following code should execute properly:
public boolean onOptionsItemSelected(MenuItem IngredientItem){
if(IngredientItem.getItemId()==R.id.addIngredient){
Intent i= new Intent(Ingredients_List.this,Ingredients_Add.class);
startActivity(i);
return(true);
}
else if (IngredientItem.getItemId()==R.id.deleteIngredient) {
if(groupPositionID!=null){
helper.DeleteIngredientsList_Item(groupPositionID);
return(true);
}
else{
Toast.makeText(getBaseContext(), "Select The Ingredient You want to delete", Toast.LENGTH_LONG).show();
}
}
return (super.onOptionsItemSelected(IngredientItem));
}
I am getting the selected primary key id value from the "groupPositionID" variable here; the correct value is also passed to the database raw query, yet I am still not getting what I expect.
Please help; I am new to both Android and Java.
sdk information(although not required)
android:versionCode="1"
android:versionName="1.0"
android:minSdkVersion="8"
Thanks in Advance
A:
public void DeleteIngredientsList_Item(long index){
String[] args = new String[]{Long.toString(index)};
Log.v("I am in", "SQLHelper");
getWritableDatabase().execSQL("DELETE FROM tblIngredients WHERE _id=?",args);
}
I modified this delete code in my SQL helper.
My function was earlier returning a Cursor, which is why it was not working: the query was never actually getting executed.
The rest was OK. :)
| {
"pile_set_name": "StackExchange"
} |
Q:
TypeError: Cannot read property 'state' of undefined react
I have been working on a todo list in react, and ran into a problem I've been having trouble figuring out. I managed to get the todo list to dynamically update my array state, but having a little problem with displaying it. Here is the code: (https://codesandbox.io/s/friendly-curran-fldtg?fontsize=14). At the top I give a brief explanation of what everything does. The problem area is at the bottom, this line of code:
const DisplayTasks = () => {
const { tasksarray } = this.state;
return (
<ol>
{tasksarray.map(eachTask => (
<li>{eachTask}</li>
))}
</ol>
);
};
I'm able to get the app working the way I want it by deleting the DisplayTask component and putting the ol code in render() like so:
render() {
const { tasksarray } = this.state;
return (
<div>
<GetTask task={this.inputTask} />
<ol>
{tasksarray.map(eachTask => (
<li>{eachTask}</li>
))}
</ol>
</div>
);
}
}
But I'm wondering why I get TypeError Cannot read property 'state' of undefined when I try to do it as a component, and if there's a way to make it work as a component rather than directly in render()?
A:
You can't access this.state in a functional component.
In React, the parent component can pass data to the child using props, like below:
const DisplayTasks = ({tasks}) => {
return (
<ol>
{tasks.map((task, index) => <li key={index}>{task}</li>)}
</ol>
);
};
render() {
return (
<div>
<GetTask task={this.inputTask} />
<DisplayTasks tasks={this.state.tasksarray} />
</div>
);
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Xcode won't detect device
I'm having a bit of an issue testing my app using my iPhone. Last week I upgraded to Xcode 4.3. I was able to test my apps on my iPhone (which was running iOS 4.3). Today I upgraded my iPhone to iOS 5.1 and now Xcode won't detect my iPhone (btw the SDK is 5.1).
I'm not too sure what to do, I've even tried changing the Deployment target in Xcode back down to 4.3, but still nothing.
A:
Make sure the device is "Enabled for development" under Organizer in Xcode. I've seen cases, where Xcode doesn't recognise the device, because it wasn't setup to be a development device.
| {
"pile_set_name": "StackExchange"
} |
Q:
Bulleted list with vertical lines
I want to add different positions within a company in a similar fashion to what LinkedIn uses, i.e. the vertical line connecting the dots of the bullet list. How can this be done?
Edit: now with a minimal working example after doing some research:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{calc}
\newcommand{\linkedlist}[1]{
\begin{tikzpicture}[remember picture]%
\node (#1) [gray,circle,fill,inner sep=1.5pt]{};
\end{tikzpicture}%
}
\begin{document}
\begin{itemize}
\item[\linkedlist{a}] test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph.
\item[\linkedlist{b}] test
\item[\linkedlist{c}] test
\end{itemize}
\begin{tikzpicture}[remember picture,overlay]
\draw[gray] ($(a)!0.1!(b)$) -- ($(a)!0.9!(b)$);
\draw[gray] ($(b)!0.2!(c)$) -- ($(b)!0.8!(c)$);
\end{tikzpicture}
\end{document}
The problem currently is that I need to add the lines manually in a second overlay picture and that I specify the "gap" between the dot and the start/end of the line using a percentage instead of a fixed value (e.g. 2pt). This causes the gaps to be of different size depending on the length of the paragraph.
A:
Final solution (edit3)
Now, the code should work with page breaks in lists as well. Hope you like it :) If anything is unclear or you want some explanation about parts, just let me know!
\documentclass{article}
\usepackage{tikz,tikzpagenodes}
\usetikzlibrary{calc}
\usepackage{refcount}
\newcounter{mylist} % new counter for amount of lists
\newcounter{mycnt}[mylist] % create new item counter
\newcounter{mytmp}[mylist] % tmp counter needed for checking before/after current item
\newcommand{\drawoptionsconn}{gray, shorten <= .5mm, shorten >= .5mm, thick}
\newcommand{\drawoptionsshort}{gray, shorten <= .5mm, shorten >= -1mm, thick}
\newcommand{\myitem}{% Modified `\item` to update counter and save nodes
\stepcounter{mycnt}%
\item[\linkedlist{%
i\alph{mylist}\arabic{mycnt}}]%
\label{item-\alph{mylist}\arabic{mycnt}}%
\ifnum\value{mycnt}>1%
\ifnum\getpagerefnumber{item-\alph{mylist}\arabic{mytmp}}<\getpagerefnumber{item-\alph{mylist}\arabic{mycnt}}%
\begin{tikzpicture}[remember picture,overlay]%
\expandafter\draw\expandafter[\drawoptionsshort] (i\alph{mylist}\arabic{mycnt}) --
++(0,3mm) --
(i\alph{mylist}\arabic{mycnt} |- current page text area.north);% draw short line
\end{tikzpicture}%
\else%
\begin{tikzpicture}[remember picture,overlay]%
\expandafter\draw\expandafter[\drawoptionsconn] (i\alph{mylist}\arabic{mytmp}) -- (i\alph{mylist}\arabic{mycnt});% draw the connecting lines
\end{tikzpicture}%
\fi%
\fi%
\addtocounter{mytmp}{2}%
\IfRefUndefinedExpandable{item-\alph{mylist}\arabic{mytmp}}{}{% defined
\ifnum\getpagerefnumber{item-\alph{mylist}\arabic{mytmp}}>\getpagerefnumber{item-\alph{mylist}\arabic{mycnt}}%
\begin{tikzpicture}[remember picture,overlay]%
\expandafter\draw\expandafter[\drawoptionsshort] (i\alph{mylist}\arabic{mycnt}) --
++(0,-3mm) --
(i\alph{mylist}\arabic{mycnt} |- current page text area.south);% draw short line
\end{tikzpicture}%
\fi%
}%
\addtocounter{mytmp}{-1}%
}
\newcommand{\linkedlist}[1]{
\raisebox{0pt}[0pt][0pt]{\begin{tikzpicture}[remember picture]%
\node (#1) [gray,circle,fill,inner sep=1.5pt]{};
\end{tikzpicture}}%
}
\newenvironment{myitemize}{%
% Create new `myitemize` environment to keep track of the counters
\stepcounter{mylist}% increment list counter
\begin{itemize}
}{\end{itemize}%
}
\begin{document}
\rule[-32\baselineskip]{2pt}{32\baselineskip}
\begin{myitemize}
\myitem test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph.
\myitem test
\myitem test
\end{myitemize}
And a new list:
\begin{myitemize}
\myitem test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph.
\myitem test
\end{myitemize}
\end{document}
edit2: I managed to solve the problems @Tom mentioned by using \pgfmathsetmacro\result{int(\x-1)} and \result rather than \pgfmathparse{int(\x-1)} and \pgfmathresult. It looks like tikz uses \pgfmathparse for internal calculations that crashed the code. Using a name for it (\result), solves this issue. Also, I used the easier shorten syntax by @Ignasi.
TL;DR: Now thick works as it should, as well as the other options.
edit: Made the code work with many lists on many pages.
I used a list counter mylist and iterate over each node to plot the connecting line in the end of the newly created environment myitemize. This supports multiple lists on multiple pages.
Original solution:
works as long as you have a single list. If you need multiple lists, page breaking, etc., you need to expand the code.
I used a new counter to name and save the nodes automatically and iterate over them in the end.
A:
You can use the shorten option to fix a certain distance between the node's border and the end of the line. There's no need for calc:
\documentclass{article}
\usepackage{tikz}
%\usetikzlibrary{calc}
\newcommand{\linkedlist}[1]{
\begin{tikzpicture}[remember picture]%
\node (#1) [gray,circle,fill,inner sep=1.5pt]{};
\end{tikzpicture}%
}
\begin{document}
\begin{itemize}
\item[\linkedlist{a}] test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph.
\item[\linkedlist{b}] test
\item[\linkedlist{c}] test
\end{itemize}
\begin{tikzpicture}[remember picture,overlay]
\draw[gray, shorten >=1mm, shorten <=1mm] (a)-- (b);
\draw[gray, shorten >=1mm, shorten <=1mm] (b)-- (c);
\end{tikzpicture}
\end{document}
Update
If the itemize list is replaced by an enumerate list, it's possible to declare a new linkedlist environment which does all the work and uses regular \item commands.
It works only for one-level lists and it doesn't break across pages.
\documentclass{article}
\usepackage{tikz}
\newcommand{\linkeditem}[1]{
\begin{tikzpicture}[remember picture]%
\node (#1) [gray,circle,fill,inner sep=1.5pt]{};
\end{tikzpicture}%
}
\newenvironment{linkedlist}{%
\renewcommand{\theenumi}{\protect\linkeditem{\arabic{enumi}}}
\renewcommand{\labelenumi}{\theenumi}
\begin{enumerate}
}{ \end{enumerate} \begin{tikzpicture}[remember picture,overlay]
\ifnum\value{enumi}>1% Only if there are at least 2 bullet points
\foreach \x [remember=\x as \lastx (initially 1)]
in {2,...,\value{enumi}}{% iterate over them
\draw[gray, shorten >=1mm, shorten <=1mm] (\lastx) -- (\x);}% and draw the connecting lines
\fi
\end{tikzpicture}
}
\begin{document}
\begin{linkedlist}
\item test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph. test with a very long paragraph.
\item test
\item test
\end{linkedlist}
\end{document}
| {
"pile_set_name": "StackExchange"
} |
Q:
microprocessor processor cores physical logical
I try to understand below terms but still having confusion on it
microprocessor
processor
cores
processor cores
physical cores
logical cores
To my knowledge
microprocessor (CPU) == processor
Every system has only one processor, but we can have more cores.
But what is the difference between a processor and a core?
What are physical cores and logical cores?
Please explain it.
A:
Modern microprocessors may have multiple cores. Think of a core as a unit of computation: for example, the original Pentium processor had just one core in one chip, whereas these days it is possible to have multiple cores in one chip (like a Core i7); that is a multi-core processor.
Some people use "physical cores" for the number of actual cores in the chip, but there is another technology called multi-threading that allows multiple threads[1] to run in the same core pipeline, taking advantage of the duplication of processing units. Let me try to clarify: a Core i7 from Intel, for example, has (at least in some models) 4 cores inside the same chip, and every core can run 2 threads simultaneously. You can think of a "logical core" as a thread; it is a programming abstraction. So basically you can have 8 concurrent threads on a 4-core Core i7 with multi-threading.
Hope this helps to clarify; you can read more in depth on Wikipedia.
[1]http://en.wikipedia.org/wiki/Thread_%28computing%29
| {
"pile_set_name": "StackExchange"
} |
Q:
How to verify contract field changes in a truffle test
My contract is
contract SimpleContract {
uint storedData;
function set(uint x) public { storedData = x; }
function get() public returns (uint) { return storedData; }
}
This test doesn't work:
var SimpleContract = artifacts.require("SimpleContract");
contract('SimpleContract', function(accounts) {
let contract;
let owner;
let web3Contract;
before(async () => {
contract = await SimpleContract.deployed();
web3Contract = web3.eth.contract(contract.abi).at(contract.address);
owner = web3Contract._eth.coinbase;
});
it("test", async function() {
await contract.set.call(10);
let result = await contract.get.call();
assert.equal(result.toNumber(), 10, "updates the field");
});
});
I'm getting
AssertionError: fail: expected 0 to equal 10
Any idea?
A:
@Jitendra Kumar. Balla's reply is the correct one. I was calling set.call(10) instead of set.sendTransaction(10).
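For completeness, a minimal sketch of the fixed test (same setup as before; only the set line changes):
it("test", async function() {
  // sendTransaction actually mutates contract state; .call() only simulates the call
  await contract.set.sendTransaction(10);
  let result = await contract.get.call();
  assert.equal(result.toNumber(), 10, "updates the field");
});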
| {
"pile_set_name": "StackExchange"
} |
Q:
foreach loop 1000 items multiple times
In javascript I have an array of something like 500,000 items.
I want to send 1000 items to the server at a time, multiple times.
It seems that I need to use the slice function.
How can I send 1000 items each time, and on the last pass send the remaining items, even if there are fewer than 1000?
A:
var splicedItems = [];
while (yourBigArr.length > 0)
{
    // splice removes the first 1000 items (or fewer on the last pass) and returns them
    splicedItems.push(yourBigArr.splice(0, 1000));
}
splice is used to modify your array. In the above case we splice 1000 items on each pass, so the array shrinks by 1000 items each time. The start index is therefore always 0 and the number of items to remove is 1000 (as you stated); the final pass simply returns whatever is left, even if that is fewer than 1000 items.
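Since the question mentions the slice function, here is a non-destructive alternative sketch that leaves the original array intact (the sending step is only indicated by a comment):
for (var i = 0; i < yourBigArr.length; i += 1000) {
    var chunk = yourBigArr.slice(i, i + 1000);
    // send `chunk` to the server here; the last chunk may contain fewer than 1000 items
}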
| {
"pile_set_name": "StackExchange"
} |
Q:
SQLite Composite key error
Why am I getting an error with the following statement?
create table if not exists patient_to_prescs(
patient_id INTEGER NOT NULL REFERENCES patients_table(id)
presc_id INTEGER NOT NULL REFERENCES prescs_table(id)
PRIMARY KEY(patient_id, presc_id))
This is in android and the error is
near "prescs_id": syntax error(code 1)
A:
Use this, you are missing ",":
create table if not exists patient_to_prescs(
patient_id INTEGER NOT NULL REFERENCES patients_table(id),
presc_id INTEGER NOT NULL REFERENCES prescs_table(id),
PRIMARY KEY(patient_id, presc_id));
| {
"pile_set_name": "StackExchange"
} |
Q:
Text defining length, area and volume
I am looking for a geometry textbook that axiomatises concepts such as length, area and volume of objects in Euclidean space; for example, the surface area of a $2$-sphere in $3$-space. Such a text would then define length, area and volume, showing that the definitions satisfy the axioms, perhaps uniquely. It may even give different definitions that satisfy some common set of axioms. Is there such a book?
A:
These concepts are motivated through a measure-theoretic approach, not just for Euclidean space, but for more general regions as well. I would recommend researching the Lebesgue measure in a real analysis textbook.
| {
"pile_set_name": "StackExchange"
} |
Q:
spying on functions returned by a function sinon
I'm a bit new to Sinon and having some trouble with the scenario where I need to spy on not only a function, but the functions returned by the function. Specifically, I'm trying to mock the Azure Storage SDK and ensure that once I've created a queue service, that the methods returned by the queue service are also called. Here's the example:
// test.js
// Setup a Sinon sandbox for each test
test.beforeEach(async t => {
sandbox = Sinon.sandbox.create();
});
// Restore the sandbox after each test
test.afterEach(async t => {
sandbox.restore();
});
test.only('creates a message in the queue for the payment summary email', async t => {
// Create a spy on the mock
const queueServiceSpy = sandbox.spy(AzureStorageMock, 'createQueueService');
// Replace the library with the mock
const EmailUtil = Proxyquire('../../lib/email', {
'azure-storage': AzureStorageMock,
'@noCallThru': true
});
await EmailUtil.sendBusinessPaymentSummary();
// Expect that the `createQueueService` method was called
t.true(queueServiceSpy.calledOnce); // true
// Expect that the `createMessage` method returned by
// `createQueueService` is called
t.true(queueServiceSpy.createMessage.calledOnce); // undefined
});
Here's the mock:
const Sinon = require('sinon');
module.exports = {
createQueueService: () => {
return {
createQueueIfNotExists: (queueName) => {
return Promise.resolve(Sinon.spy());
},
createMessage: (queueName, message) => {
return Promise.resolve(Sinon.spy());
}
};
}
};
I'm able to confirm that the queueServiceSpy is called once, but I'm having trouble determining if the methods returned by that method are called (createMessage).
Is there a better way to set this up or am I just missing something?
Thanks!
A:
What you need to do is stub your service function to return a spy that you can then track calls to elsewhere. You can nest this arbitrarily deep (though I would strongly discourage that).
Something like:
const cb = sandbox.spy();
const queueServiceSpy = sandbox.stub(AzureStorageMock, 'createQueueService')
    .returns({ createMessage: cb }); // the returned object's createMessage IS the spy, so calls to it can be asserted on
const EmailUtil = Proxyquire('../../lib/email', {
'azure-storage': AzureStorageMock,
'@noCallThru': true
});
await EmailUtil.sendBusinessPaymentSummary();
// Expect that the `createQueueService` method was called
t.true(queueServiceSpy.calledOnce); // true
// Expect that the `createMessage` method returned by
// `createQueueService` is called
t.true(cb.calledOnce);
| {
"pile_set_name": "StackExchange"
} |
Q:
Print text between tags (inclusive) if certain text is found
I've got a task to extract data from several Apache servers. The task is to print out:
<Directory ...>
...
</Directory>
where +ExecCGI is located within. Let me give an example to illustrate. Assume that the Apache configuration file has numerous Directory sections as indicated below:
<Directory /var/www/site1/htdocs>
Options +ExecCGI
...
...
</Directory>
...
...
...
<Directory /var/www/site1/Promo>
Options -ExecCGI
...
...
</Directory>
From above, I would only like to get the following output:
<Directory /var/www/site1/htdocs>
Options +ExecCGI
...
...
</Directory>
I've searched the forums and have found posts where people have asked questions on how to print out a whole section between tags (I know how to do that), or to change certain text when found (again, I know how to do that).
I will be changing the +ExecCGI to -ExecCGI, but the changes need to go through a review process and hence this question so that I can pull this data out.
A:
perl -l -0777 -ne 'for (m{<Directory.*?</Directory>}gs) {print if /\+ExecCGI/}'
Or with GNU grep:
grep -zPo '(?s)<Directory(?:.(?!</Directory))*?\+ExecCGI.*?</Directory>'
| {
"pile_set_name": "StackExchange"
} |
Q:
Can you run multiple dry-runs concurrently during a TFS to VSTS migration?
Currently, one of our migrations has stalled on step one. I need do some testing and was thinking about kicking off another. A previously deleted migration took less than half the time of where I am now.
In case anyone was wondering how long these migrations take:
Previous Migration (17GB) took around 13 hours.
Now I'm at 20GB and on hour 24...
Thanks,
A:
You can run multiple dry-runs concurrently, but should be for different collections, also I believe the number of import requests that can come from a tenant in a day is capped to 5.
Regarding the migration time, it's depended on the actual size of collection database.
| {
"pile_set_name": "StackExchange"
} |
Q:
How hide line in wordpress comment section?
How can I hide the lines: [You may use these HTML tags and attributes: <a href="" title=""> <abbr title=""> <acronym title=""> <b> <blockquote cite=""> <cite> <code> <del datetime=""> <em> <i> <q cite=""> <strike> <strong>"]
in wordpress?
A:
Adding the following to your style sheet should work:
#respond form p.form-allowed-tags
{
display:none;
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Apache Thrift for just processing, not server
I hope I haven't misunderstood the Thrift concept, but from what I see in (example) questions like this, this framework is composed of different modular layers that can be enabled or disabled.
I'm mostly interested in the "IDL part" of Thrift, so that I can create a common interface between my C++ code and an external Javascript application. I would like to call C++ functions using JS, with binary data transmission, and I've already used the compiler for this.
But both my C++ (the server) and JS (client) application already exchange data using a C++ Webserver with Websockets support, it is not provided by Thrift.
So I was thinking to setup the following items:
In JS (already done):
TWebSocketTransport to send data to my "Websocket server" (with host ws://xxx.xxx.xxx.xxx)
TBinaryProtocol to encapsulate the data (using this JS implementation)
The compiled Thrift JS library with the correspondent C++ functions to call (done with the JS compiler)
In C++ (partial):
TBinaryProtocol to encode/decode the data
A TProcessor with handler to get the data from the client and process it
For now, the client is already able to send requests to my websocket server; I see them arriving in binary form and I just need Thrift to:
Decode the input
Call the appropriate C++ function
Encode the output
My webserver will send the response to the client. So no "Thrift server" is needed here. I see there is the TProcessor->process() function, I'm trying to use it when I receive the binary data but it needs an in/out TProtocol. No problem here... but in order to create the TBinaryProtocol I also need a TTransport! If no Thrift server is expected... what Transport should I use?
I tried to set TTransport to NULL in TBinaryProtocol constructor, but once I use it it gives nullptr exception.
Code is something like:
Init:
boost::shared_ptr<MySDKServiceHandler> handler(new MySDKServiceHandler());
thriftCommandProcessor = boost::shared_ptr<TProcessor>(new MySDKServiceProcessor(handler));
thriftInputProtocol = boost::shared_ptr<TBinaryProtocol>(new TBinaryProtocol(TTransport???));
thriftOutputProtocol = boost::shared_ptr<TBinaryProtocol>(new TBinaryProtocol(TTransport???));
When data arrives:
this->thriftInputProtocol->writeBinary(input); // exception here
this->thriftCommandProcessor->process(this->thriftInputProtocol, this->thriftOutputProtocol, NULL);
this->thriftOutputProtocol->readBinary(output);
A:
I've managed to do it using the following components:
// create the Processor using my compiled Thrift class (from IDL)
boost::shared_ptr<MySDKServiceHandler> handler(new MySDKServiceHandler());
thriftCommandProcessor = boost::shared_ptr<TProcessor>(new ThriftSDKServiceProcessor(handler));
// Transport is needed, I use the TMemoryBuffer so everything is kept in local memory
boost::shared_ptr<TTransport> transport(new apache::thrift::transport::TMemoryBuffer());
// my client/server data is based on binary protocol. I pass the transport to it
thriftProtocol = boost::shared_ptr<TProtocol>(new TBinaryProtocol(transport, 0, 0, false, false));
/* .... when the message arrives through my webserver */
void parseMessage(const byte* input, const int input_size, byte*& output, int& output_size)
{
// get the transports to write and read Thrift data
boost::shared_ptr<TTransport> iTr = this->thriftProtocol->getInputTransport();
boost::shared_ptr<TTransport> oTr = this->thriftProtocol->getOutputTransport();
// "transmit" my data to Thrift
iTr->write(input, input_size);
iTr->flush();
// make the Thrift work using the Processor
this->thriftCommandProcessor->process(this->thriftProtocol, NULL);
// the output transport (oTr) contains the called procedure result
output = new byte[MAX_SDK_WS_REPLYSIZE];
output_size = oTr->read(output, MAX_SDK_WS_REPLYSIZE);
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Django regex field - unique and without blank spaces
I have a ModelForm, in which I'm having a CharField, which is declared as unique in the Model.
But I have 2 problems:
If I fill in the form with a field having the same name I don't get an error message.
I'd like this field not to contain white spaces.
Is it possible to do that using a ModelForm?
A:
You can do something close to this:
from django import forms

class MyModelForm(forms.ModelForm):
# your field definitions go here
def clean_myuniquefield(self):
# strip all spaces
data = str(self.cleaned_data['myuniquefield']).replace(' ', '')
model = self._meta.model
# check if entry already exists
try:
obj = model.objects.get(myuniquefield=data)
except model.DoesNotExist:
return data
raise forms.ValidationError("Value already exists!")
| {
"pile_set_name": "StackExchange"
} |
Q:
Use jQuery to insert values into input fields?
In the code below, I would like to use jQuery to put the word "username" into the value field. Then, when the user selects the input box, the word "username" would disappear leaving an empty input field for the user to input his username.
Can this be done with jQuery?
<p class="login-username">
<input type="text" name="log" id="user_login" class="input" value="" size="20" tabindex="10">
</p>
A:
You can use HTML5 placeholder for that
<input type="text" placeholder="Username" />
And fix it for older browsers (that's the exact part you were asking about):
$('[placeholder]').focus(function() {
var input = $(this);
if (input.val() == input.attr('placeholder')) {
input.val('');
input.removeClass('placeholder');
}
}).blur(function() {
var input = $(this);
if (input.val() == '' || input.val() == input.attr('placeholder')) {
input.addClass('placeholder');
input.val(input.attr('placeholder'));
}
}).blur();
SOURCE
UPDATE
To alter the HTML via jQuery and add a placeholder to an input field you can do this
$("input").prop("placeholder", "username");
DEMO
| {
"pile_set_name": "StackExchange"
} |
Q:
Add node with different attribute in accordance with the condition
I'm trying to optimize my XSLT file to have more readable code and to avoid repetition.
In a part, I have a new element that must be added to the DOM (target is HTML) with different attributes, according to the condition.
ex:
<xsl:choose>
<xsl:when test="status = 0 or type = 2">
<img id="img_{$var1}_check" height="14" width="13" src="{$var1}.png" class="{$var1}" onclick="check({$var1})" alt="{$var1}" title="{$var1}"/>
</xsl:when>
<xsl:when test="status = 1 or type = 24">
<img id="img_{$var2}_check" height="14" width="13" src="{$var2}.png" class="{$var2}" onclick="check({$var2})" alt="{$var2}" title="{$var2}"/>
</xsl:when>
<xsl:when test="status = 2 or type = 4">
<img id="img_{$var3}_check" height="14" width="13" src="{$var3}.png" class="{$var3}" onclick="check({$var3})" alt="{$var3}" title="{$var3}"/>
</xsl:when>
<xsl:when test="status = 4 or type = 22">
<img id="img_{$var4}_check" height="14" width="13" src="{$var4}.png" class="{$var4}" onclick="check({$var4})" alt="{$var4}" title="{$var4}"/>
</xsl:when>
</xsl:choose>
Is there a way to not write the whole img element in each "when" ?
Regards
A:
What you could do is put your xsl:choose inside a variable and change the choose just to return either $var1, $var2, $var3 or $var4. Then write out the img element using the value of this new variable:
<xsl:variable name="imgvar">
<xsl:choose>
<xsl:when test="status = 0 or type = 2">
<xsl:value-of select="$var1" />
</xsl:when>
<xsl:when test="status = 1 or type = 24">
<xsl:value-of select="$var2" />
</xsl:when>
<xsl:when test="status = 2 or type = 4">
<xsl:value-of select="$var3" />
</xsl:when>
<xsl:when test="status = 4 or type = 22">
<xsl:value-of select="$var4" />
</xsl:when>
</xsl:choose>
</xsl:variable>
<img id="img_{$imgvar}_check" height="14" width="13" src="{$imgvar}.png" class="{$imgvar}" onclick="check({$imgvar})" alt="{$imgvar}" title="{$imgvar}"/>
I strongly suspect this is not what you want though, because it doesn't seem right to use the same variable in all of the attributes in your img element.
Another approach would be to use a named template, and just call this from the xsl:choose
<xsl:template name="img">
<xsl:param name="imgvar" />
<img id="img_{$imgvar}_check" height="14" width="13" src="{$imgvar}.png" class="{$imgvar}" onclick="check({$imgvar})" alt="{$imgvar}" title="{$imgvar}"/>
</xsl:template>
<xsl:choose>
<xsl:when test="status = 0 or type = 2">
<xsl:call-template name="img">
<xsl:with-param name="imgvar" select="$var1" />
</xsl:call-template>
</xsl:when>
<xsl:when test="status = 1 or type = 24">
<xsl:call-template name="img">
<xsl:with-param name="imgvar" select="$var2" />
</xsl:call-template>
</xsl:when>
<xsl:when test="status = 2 or type = 4">
<xsl:call-template name="img">
<xsl:with-param name="imgvar" select="$var3" />
</xsl:call-template>
</xsl:when>
<xsl:when test="status = 4 or type = 22">
<xsl:call-template name="img">
<xsl:with-param name="imgvar" select="$var4" />
</xsl:call-template>
</xsl:when>
</xsl:choose>
This can then obviously be extended to use extra parameters if needed.
Another way is to put the xsl:choose inside the img element and write out different attributes for each choice.
<img height="14" width="13">
<xsl:choose>
<xsl:when test="status = 0 or type = 2">
<xsl:attribute name="id">img_<xsl:value-of select="$var1" /></xsl:attribute>
<xsl:attribute name="src"><xsl:value-of select="$var1" />.png</xsl:attribute>
</xsl:when>
<xsl:when test="status = 1 or type = 24">
<xsl:attribute name="id">img_<xsl:value-of select="$var2" /></xsl:attribute>
<xsl:attribute name="src"><xsl:value-of select="$var2" />.png</xsl:attribute>
</xsl:when>
<xsl:when test="status = 2 or type = 4">
<xsl:attribute name="id">img_<xsl:value-of select="$var3" /></xsl:attribute>
<xsl:attribute name="src"><xsl:value-of select="$var3" />.png</xsl:attribute>
</xsl:when>
<xsl:when test="status = 4 or type = 22">
<xsl:attribute name="id">img_<xsl:value-of select="$var4" /></xsl:attribute>
<xsl:attribute name="src"><xsl:value-of select="$var4" />.png</xsl:attribute>
</xsl:when>
</xsl:choose>
</img>
I would say using xsl:call-template is the best bet here.
| {
"pile_set_name": "StackExchange"
} |
Q:
insert conditions with variable to select String only if the variable is not null
I need to write a function that returns a 'select' SQL statement as a String.
In the statement I want to include the conditions only if the corresponding variables are not null.
public String getFilterCondition(Group group) {
String sendMail= group.getEmail();
String phone= group.getPhone();
String gender= group.getGender();
return "select * from member where sendMail='"+sendMail+"' and phone='"+phone+"' and gender='"+gender+"'" ;
}
The conditions with the variables should be included in the statement only if they are not null.
How can I do that in a short way?
thanks!
A:
You can try something like this:
public String getFilterCondition(Group group) {
String sendMail = group.getEmail();
String phone = group.getPhone();
String gender = group.getGender();
String concat = "where ";
String query = "select * from member ";
if (sendMail != null) {
query += concat + "sendMail='" + sendMail + "' ";
concat = "and ";
}
if (phone != null) {
query += concat + "phone='" + phone + "'";
concat = "and ";
}
if (gender != null) {
query += concat + "gender='" + gender + "'";
}
return query;
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Are there quicker and less error prone methods for finding determinant?
Edit: I don't think this is a duplicate. That question is generally about efficient algorithms for calculating the determinant. While my question is about practical tips and tricks which can be used when doing this sort of thing by hand.
During one of the questions I was solving I came across the following determinant:
$\left|\matrix{t-1&3&0&-3\\2&t+6&0&-13\\0&3&t-1&-3\\1&4&0&t-8}\right|$
I solved it directly by expanding from the third column and got the result $(t-1)^4$.
However, along the way I made a couple of calculation errors and finding them took me a while. This got me thinking that maybe there are some tips and tricks I could use to calculate this in a faster, less error-prone way, or at least to verify in some way that I got the right answer.
Any tips that could interest me?
A:
If you put the matrix in triangular form you can take the determinant as the product of the diagonal entries. (Keep track of how each row operation changes the determinant: swapping two rows flips the sign, scaling a row by $c$ scales the determinant by $c$, and adding a multiple of one row to another leaves it unchanged.)
| {
"pile_set_name": "StackExchange"
} |
Q:
Changing jobs but ended up in same city after moving away for a month. Can I deduct both ways?
I'm referring to the moving expenses deductions listed here: http://www.irs.gov/publications/p521/
I lived in City A for 1.5 years, briefly moved to City B for about a month (never established residency or anything), and am now moving back to City A for a new job (and a new apartment). I have two sets of moving expenses now, and I'm wondering how I should handle these deductions.
A:
You have two moves, and you must pass the distance test and the time test.
From the same publication 521:
Distance test:
Your move will meet the distance test if your new main job location is
at least 50 miles farther from your former home than your old main job
location was from your former home. For example, if your old main job
location was 3 miles from your former home, your new main job location
must be at least 53 miles from that former home. You can use Worksheet
1 to see if you meet this test.
Time Test:
Time Test for Employees
If you are an employee, you must work full time for at least 39 weeks
during the first 12 months after you arrive in the general area of
your new job location (39-week test). Full-time employment depends on
what is usual for your type of work in your area.
For purposes of this test, the following four rules apply.
You count only your full-time work as an employee, not any work you do as a self-employed person.
You do not have to work for the same employer for all 39 weeks.
You do not have to work 39 weeks in a row.
You must work full time within the same general commuting area for all 39 weeks.
Let us assume:
you live in City A and work 5 miles away.
You thought you had a job 60 miles from City A so you moved to city B.
It fell apart after a month
you have another job
you moved back to City A.
The law doesn't care about the distance between the new home and the new job, only the distance between the new job and the old home compared to the old home and the old job.
Move #1: you fail the time test
Move #2: you might eventually pass the time test if you work 39 of 52 weeks, but what about the distance test. If you had a new job that was close to city B you could eventually claim Move #1 if you hadn't moved.
Combining both:
If Job #3 was more than 50 miles farther from house #1 when compared to job #1 and house #1; you might have a case to claim some expenses, assuming you eventually worked 39 of 52 weeks. The problem is if the distance isn't far enough you run the risk of the IRS rejecting the entire thing.
They would be concerned that somebody could spend a few bucks on a short duration cheap move to city B and then claim all the expenses for a move that was essentially across the street. You would have a hard time proving that the move to City B was intended to be permanent: you never established residency.
| {
"pile_set_name": "StackExchange"
} |
Q:
Malfunction SOQL Query
I have the below Data Categories
Group1
Category1
Category2
Group2
Now, the query which I am trying to execute is below; it should get all the articles in the published state linked with Group2, but the query is giving a malfunction error in Workbench. Any help?
SELECT Title FROM KnowledgeArticleVersion WHERE PublishStatus='online' WITH DATA CATEGORY Group2__c
A:
The syntax of the data category selection in a WITH DATA CATEGORY
clause in a SOQL query includes a category group name to use as a
filter, the filter selector, and the name of the category to use for
filtering.
If you want to select all the articles in a group, use the ABOVE_OR_BELOW filter selector like this: SELECT Title FROM KnowledgeArticleVersion WHERE PublishStatus='online' WITH DATA CATEGORY Group1__c ABOVE_OR_BELOW Category1__c
See Filtering Selectors for a list of valid selectors.
In your case, I believe there is no category under group2, so your query should look like this: SELECT Title FROM KnowledgeArticleVersion WHERE PublishStatus='online' WITH DATA CATEGORY Group2__c AT All__c
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I parse HL7 message starting with FHS
How can I parse a HL7 message starting with FHS to XML:
FHS|^~\&|Medical-Objects|Eli MOA Test Cap
BHS|^~\&|Medical-Objects|Eli MOA Test Cap
MSH|^~\&|MODemoSoftware|Eli MOA Test Cap^51675B57-9C95-4278-B52E-3FE5EEB6B3EE^GUID|||20121127180300|Eli MOA Test Cap (Capricorn)|ORU^R01|201211271803520050|P|2.3.1|||||||en
PID|1|HB117056|ABC123^^^MODemo^MC~401114835T^^^^PEN~401114835T||TEST^Patient||20010101|F||4^Non-indigenous|10/102 Wises Road^^Maroochydore^^4558||0754566000
PV1|1||AE\R\HBH^^^HBH&Medical Objects Demo Hospital&MODemoSoftware|||||0000000Y^REFERRING^Provider^^^DR^^^AUSHICPR^L^^^UPIN|UP3123000QW^CONSULTING^Provider^^^DR^^^AUSHICPR^L^^^UPIN
ORC|RE|589113676^MODemoSoftware|589113676^Eli MOA Test Cap^51675B57-9C95-4278-B52E-3FE5EEB6B3EE^GUID||IP||^^^20121127^^URGENT|||||0000000Y^REFERRING^Provider^^^DR^^^AUSHICPR^L^^^UPIN
A:
First of all, your message has two starting segments (FHS and also the MSH), so it may be recognized as two messages.
Unfortunately, with the basic HAPI library this is not possible, as HAPI does not know the FHS segment. When you use the HAPI TestPanel you'll see the result quite clearly:
When you switch to the XML View - HAPI was able to convert the ORU message (starting with the MSH), but the first line (FHS) is still there.
Solution A: (IF you cannot modify the source HL7) Parse the "FHS" yourself into the XML format you want. And then you can use HAPI to convert the rest for you.
Solution B: Change the HL7 file and add your segments at the end. Then HAPI converts it.
Example HL7:
MSH|^~\&|MODemoSoftware|Eli MOA Test Cap^51675B57-9C95-4278-B52E-3FE5EEB6B3EE^GUID|||20121127180300|Eli MOA Test Cap (Capricorn)|ORU^R01|201211271803520050|P|2.3.1|||||||en
PID|1|HB117056|ABC123^^^MODemo^MC~401114835T^^^^PEN~401114835T||TEST^Patient||20010101|F||4^Non-indigenous|10/102 Wises Road^^Maroochydore^^4558||0754566000
PV1|1||AE\R\HBH^^^HBH&Medical Objects Demo Hospital&MODemoSoftware|||||0000000Y^REFERRING^Provider^^^DR^^^AUSHICPR^L^^^UPIN|UP3123000QW^CONSULTING^Provider^^^DR^^^AUSHICPR^L^^^UPIN
ORC|RE|589113676^MODemoSoftware|589113676^Eli MOA Test Cap^51675B57-9C95-4278-B52E-3FE5EEB6B3EE^GUID||IP||^^^20121127^^URGENT|||||0000000Y^REFERRING^Provider^^^DR^^^AUSHICPR^L^^^UPIN
FHS|Medical-Objects|Eli MOA Test Cap
BHS|Medical-Objects|Eli MOA Test Cap
XML Result:
<?xml version="1.0" encoding="UTF-8"?>
<ORU_R01 xmlns="urn:hl7-org:v2xml">
<MSH>
<MSH.1>|</MSH.1>
<MSH.2>^~\&</MSH.2>
<MSH.3>
<HD.1>MODemoSoftware</HD.1>
</MSH.3>
<MSH.4>
<HD.1>Eli MOA Test Cap</HD.1>
<HD.2>51675B57-9C95-4278-B52E-3FE5EEB6B3EE</HD.2>
<HD.3>GUID</HD.3>
</MSH.4>
<MSH.7>
<TS.1>20121127180300</TS.1>
</MSH.7>
<MSH.8>Eli MOA Test Cap (Capricorn)</MSH.8>
<MSH.9>
<MSG.1>ORU</MSG.1>
<MSG.2>R01</MSG.2>
</MSH.9>
<MSH.10>201211271803520050</MSH.10>
<MSH.11>
<PT.1>P</PT.1>
</MSH.11>
<MSH.12>
<VID.1>2.3.1</VID.1>
</MSH.12>
<MSH.19>
<CE.1>en</CE.1>
</MSH.19>
</MSH>
<ORU_R01.PIDPD1NK1NTEPV1PV2ORCOBRNTEOBXNTECTI>
<ORU_R01.PIDPD1NK1NTEPV1PV2>
<PID>
<PID.1>1</PID.1>
<PID.2>
<CX.1>HB117056</CX.1>
</PID.2>
<PID.3>
<CX.1>ABC123</CX.1>
<CX.4>
<HD.1>MODemo</HD.1>
</CX.4>
<CX.5>MC</CX.5>
</PID.3>
<PID.3>
<CX.1>401114835T</CX.1>
<CX.5>PEN</CX.5>
</PID.3>
<PID.3>
<CX.1>401114835T</CX.1>
</PID.3>
<PID.5>
<XPN.1>
<FN.1>TEST</FN.1>
</XPN.1>
<XPN.2>Patient</XPN.2>
</PID.5>
<PID.7>
<TS.1>20010101</TS.1>
</PID.7>
<PID.8>F</PID.8>
<PID.10>
<CE.1>4</CE.1>
<CE.2>Non-indigenous</CE.2>
</PID.10>
<PID.11>
<XAD.1>10/102 Wises Road</XAD.1>
<XAD.3>Maroochydore</XAD.3>
<XAD.5>4558</XAD.5>
</PID.11>
<PID.13>
<XTN.1>0754566000</XTN.1>
</PID.13>
</PID>
<ORU_R01.PV1PV2>
<PV1>
<PV1.1>1</PV1.1>
<PV1.3>
<PL.1>AE~HBH</PL.1>
<PL.4>
<HD.1>HBH</HD.1>
<HD.2>Medical Objects Demo Hospital</HD.2>
<HD.3>MODemoSoftware</HD.3>
</PL.4>
</PV1.3>
<PV1.8>
<XCN.1>0000000Y</XCN.1>
<XCN.2>
<FN.1>REFERRING</FN.1>
</XCN.2>
<XCN.3>Provider</XCN.3>
<XCN.6>DR</XCN.6>
<XCN.9>
<HD.1>AUSHICPR</HD.1>
</XCN.9>
<XCN.10>L</XCN.10>
<XCN.13>UPIN</XCN.13>
</PV1.8>
<PV1.9>
<XCN.1>UP3123000QW</XCN.1>
<XCN.2>
<FN.1>CONSULTING</FN.1>
</XCN.2>
<XCN.3>Provider</XCN.3>
<XCN.6>DR</XCN.6>
<XCN.9>
<HD.1>AUSHICPR</HD.1>
</XCN.9>
<XCN.10>L</XCN.10>
<XCN.13>UPIN</XCN.13>
</PV1.9>
</PV1>
</ORU_R01.PV1PV2>
</ORU_R01.PIDPD1NK1NTEPV1PV2>
<ORU_R01.ORCOBRNTEOBXNTECTI>
<ORC>
<ORC.1>RE</ORC.1>
<ORC.2>
<EI.1>589113676</EI.1>
<EI.2>MODemoSoftware</EI.2>
</ORC.2>
<ORC.3>
<EI.1>589113676</EI.1>
<EI.2>Eli MOA Test Cap</EI.2>
<EI.3>51675B57-9C95-4278-B52E-3FE5EEB6B3EE</EI.3>
<EI.4>GUID</EI.4>
</ORC.3>
<ORC.5>IP</ORC.5>
<ORC.7>
<TQ.4>
<TS.1>20121127</TS.1>
</TQ.4>
<TQ.6>URGENT</TQ.6>
</ORC.7>
<ORC.12>
<XCN.1>0000000Y</XCN.1>
<XCN.2>
<FN.1>REFERRING</FN.1>
</XCN.2>
<XCN.3>Provider</XCN.3>
<XCN.6>DR</XCN.6>
<XCN.9>
<HD.1>AUSHICPR</HD.1>
</XCN.9>
<XCN.10>L</XCN.10>
<XCN.13>UPIN</XCN.13>
</ORC.12>
</ORC>
<FHS>
<FHS.1>|</FHS.1>
<FHS.2>Medical-Objects</FHS.2>
<FHS.3>Eli MOA Test Cap</FHS.3>
</FHS>
<BHS>
<BHS.1>|</BHS.1>
<BHS.2>Medical-Objects</BHS.2>
<BHS.3>Eli MOA Test Cap</BHS.3>
</BHS>
</ORU_R01.ORCOBRNTEOBXNTECTI>
</ORU_R01.PIDPD1NK1NTEPV1PV2ORCOBRNTEOBXNTECTI>
</ORU_R01>
| {
"pile_set_name": "StackExchange"
} |
Q:
Add to an image
Hello. Please tell me how to solve the following task:
1) There is an image.jpg with buttons drawn on it.
2) This image needs to become the background of the HTML document, and on the spot of each drawn button a DIV must be placed, positioned so that it does not drift when the image is resized.
In other words, how do I make the image fluid (responsive) so that the added DIV elements do not shift when the image size changes?
A:
I suggest first computing all of the image's proportions, i.e. working out, as percentages, how far the button is from the bottom and from the right/left, as well as its width/height. Now, when the width changes, the image will shrink and the buttons will shrink along with it.
.background {
position:absolute;
left:0;
top:0;
right:0;
left:0;
z-index: -1;
}
.background .image {display:inline-block;position:relative;width:100%;}
.background .image img {display:inline-block;width:100%;pointer-events: none;}
.background .image .button {
position: absolute;
background-color: transparent;
bottom: 12%;
width: 37.8%;
height: 13.8%;
border-radius: 10px;
cursor: pointer;
}
.background .image .button.b0 {left: 6.9%;}
.background .image .button.b1 {right: 7.9%;}
<div class="background">
<div class="image">
<img src="https://i.stack.imgur.com/z24fy.jpg">
<div class="button b0"></div>
<div class="button b1"></div>
</div>
</div>
| {
"pile_set_name": "StackExchange"
} |
Q:
when code segment, data segment or created when compiling a c program?
I am trying to understand the compilation process of a C program. The pre-processed program is given to the compiler (to create an object file). The compiler will check for compilation errors. But somewhere I read that the code segment and data segment are created by the compiler, which places the corresponding entries into those segments. Is this correct?
How will the compiler create the segments in memory, since we haven't started running the program? Can anyone please let me know exactly what the compiler does?
A:
This is quite simple.
The code segment is for instructions and the data segment is for global and static variables.
It follows that, by the end of compilation and linking, the compiler knows the size of both the code segment and the data segment, and this is exactly the amount of memory required to load your program/library.
It's not actual memory allocation - that happens at runtime when the program is loaded.
But the point is that the processor's instruction pointer should not get outside the code segment, and this makes the length of the code segment quite important.
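As a minimal illustrative sketch (the variable names are made up), this shows which segment each kind of object typically ends up in; after compiling you can inspect the section sizes with a tool such as size:
#include <cstdio>

int counter = 42;            // initialized global -> data segment
static char buffer[1024];    // zero-initialized static -> bss segment
const char *msg = "hello";   // the string literal lives in a read-only data section

// the machine code of main() itself goes into the code (text) segment
int main() {
    int local = 7;           // local variable -> stack, only exists at runtime
    std::printf("%d %s %d %zu\n", counter, msg, local, sizeof(buffer));
    return 0;
}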
| {
"pile_set_name": "StackExchange"
} |
Q:
Linux : pthread_cond_signal() is not working inside a Signal Handler()
Summary
-----------
1. In main() I wait on pthread_cond_wait().
2. In the signal handler I wake main() using pthread_cond_signal().
3. But main() is not coming out of pthread_cond_wait().
What is wrong here? help me out.
#include <stdio.h>
myclass *myObj = NULL;
In main I am trying to wait for a signal:
int main()
{
myObj = new myclass;
/* do something */
myObj->gotoWait(); <=== Wait blocked forever.
/* do clean up here */
return 0;
}
Signal handler sending a signal to main thread:
static void signalHandler(int sig, siginfo_t *siginfo, void *context)
{
myObj->wakeFromWait();
}
Actual class implementing the waiting for and sending of signals.
What is wrong here?
void myclass::gotoWait()
{
pthread_mutex_lock(&mtx);
pthread_cond_wait(&cnd, &mtx);
pthread_mutex_unlock(&mtx);
}
void myclass::wakeFromWait()
{
pthread_mutex_lock(&mtx);
pthread_cond_signal(&cnd);
pthread_mutex_unlock(&mtx);
}
A:
In a signal handler only a very limited number of syscalls (async-signal-safe functions) are allowed.
see man 7 signal
http://man7.org/linux/man-pages/man7/signal.7.html
My suggestion, to be on the safe side, is the so-called "self-pipe trick".
http://man7.org/tlpi/code/online/diff/altio/self_pipe.c.html
You could start a thread which runs a select loop on the self-pipe and calls your appropriate handler.
What is wrong in your code? You are locking a mutex inside the signal handler, and pthread_mutex_lock() is not async-signal-safe.
EDIT: Here is a guide for signals:
http://beej.us/guide/bgipc/output/html/multipage/signals.html
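A rough sketch of the self-pipe idea described above (names are invented and error handling is omitted): the signal handler only performs an async-signal-safe write(), and an ordinary thread that reads from the pipe is the one that touches the mutex and condition variable:
#include <pthread.h>
#include <signal.h>
#include <unistd.h>

static int pipefd[2];                 // created once at startup with pipe(pipefd)
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cnd = PTHREAD_COND_INITIALIZER;

// Signal handler: only an async-signal-safe call, no locking here.
static void handler(int sig)
{
    (void)sig;
    char b = 1;
    write(pipefd[1], &b, 1);
}

// Ordinary worker thread: a safe place for mutex/condvar operations.
static void *pipeWatcher(void *arg)
{
    (void)arg;
    char b;
    while (read(pipefd[0], &b, 1) == 1) {
        pthread_mutex_lock(&mtx);
        pthread_cond_signal(&cnd);    // wakes whoever is blocked in pthread_cond_wait()
        pthread_mutex_unlock(&mtx);
    }
    return 0;
}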
| {
"pile_set_name": "StackExchange"
} |
Q:
How to fix a class method function from a module giving a variable not defined error
So I am trying to create a simpler logging module, but I keep getting errors that make no sense.
The module is located in %appdata%/python/site-packages/loggingLocal/__init__.py
The modules code is as follows:
class Logger:
def __init__(self):
pass
@classmethod
def llog(cls, file, typeM, message):
llog_f = open(file, "a")
llog_f.write("\n" + typeM + ": " + message)
llog_f.close
The code that I am using to utilize the module is as follows:
import loggingLocal.__init__
logOb = Logger()
lfile = "logs/log.txt"
logOb.llog(lfile, "test", "testing testing 1 2 3")
I expect the file in logs/log.txt to contain test: testing testing 1 2 3 , but I get an error: Undefined variable 'Logger' on line three. This makes no sense as I am assigning logOb to the Logger class, not a variable.
I would like to note that I know I am not doing things in the most efficient way, but thats not what I am here for.
A:
When you import a module, you get only the module's name added to your local namespace, not all of the contents of the module (though you can get that if you specifically ask for it).
So if you do import loggingLocal (the __init__ part is not needed, more about it below), you'll only get the name loggingLocal in your main module's namespace. To access the Logger class within it, you need to use loggingLocal.Logger.
Or you can specifically state specific names that you want to copy from the imported module into your own namespace, using the alternative import syntax from some_module import some_name. In your case, you'd probably want from loggingLocal import Logger. You can give multiple names if you want, or you can give a * instead of any names, in which case you'll get either whatever names are listed in the module's __all__ attribute, or every global variable in the other module that has a name that doesn't start with an underscore.
There are some other small errors in your code. One I alluded to above, where you're naming the __init__ part of the module name in your import statement. That's not needed. __init__.py is a filename used for the module that makes up the root part of a package. That is, foo/__init__.py becomes the foo module. If you don't need a package for some other reason (e.g. because you also have foo/bar.py and foo/baz.py, which can be imported as foo.bar and foo.baz), you probably shouldn't use that structure. Instead, just rename your loggingLocal/__init__.py file to loggingLocal.py and get rid of the subdirectory.
The other issue is in your implementation of Logger. When you try to close the file you've created in llog, you're not actually succeeding. You only reference the close method, but never actually call it (you probably wanted to do llog_f.close()). I'd recommend using a with statement to handle the opening and closing of the file instead:
with open(file, "a") as llog_f:
llog_f.write("\n" + typeM + ": " + message)
This code will automatically close the file at the end of the indented block that follows the with statement. It will even close the file if there's an exception that causes the block to exit in an abnormal way.
| {
"pile_set_name": "StackExchange"
} |
Q:
navbar-nav links move in different resolutions/media querys?
I'm having an issue: when my website is viewed at resolutions with a width of 768-889px, the links appear as follows;
I know that I can resolve this by using a media query; however, I don't know which one.
I know that it's within the tablet media query, as demonstrated below with the min and max width.
I have changed .navbar-nav > li (font-size, padding) and also .navbar .navbar-nav (font-size, padding, position: absolute, justified, etc.).
/* tablets */
@media (max-width: 991px) and (min-width: 768px) {
.slider-size {
height: auto;
}
.slider-size > img {
width: 80%;
}
.navbar-nav > li {
font-size: 14px;
}
.navbar .navbar-nav {
display: inline-block;
float: none;
vertical-align: top;
}
}
HTML as requested..
<div id="container">
<header class="clearfix">
<div class="navbar navbar-default">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
<i class="glyphicon glyphicon-resize-vertical" style="font-size: 16px;color:#04fa00"></i>
</button>
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" rel="home" href="#">
<img style="max-width:100px; margin-top: -16px;"
src="/images/mainlogo.png">
</a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav navbar-right">
<li><a href="index.html">HOME</a></li>
<li><a href="about.html">ABOUT</a></li>
<li><a href="services.html">SERVICES</a></li>
<li><a href="testimonals.php">TESTIMONALS</a></li>
<li><a href="gallery.php">GALLERY</a></li>
<li><a href="contact.php">CONTACT</a></li>
<li><a href="admin.php">ADMIN</a></li>
</ul>
</div><!-- /.navbar-collapse -->
</div><!-- /.navbar navbar-default -->
</header>
Any help would be appreciated. I know it's something simple.
A:
Try this:
It's very simple!
Remove all that CSS and use this:
a {
padding: 20px 10px 20px 10px !important;
}
.item {
text-align: center;
width: auto;
}
.bs-example {
margin: 10px;
}
.slider-size {
width: 100%;
}
.carousel {
width: 80%;
margin: 0 auto; /* center your carousel if other than 100% */
}
/* Mobile */
@media (max-width: 767px) {
.slider-size {
height: 250px;
}
.slider-size > img {
width: 80%;
}
.navbar-default .navbar-nav > li > a {
line-height: 25px;
}
}
/* tablets */
@media (max-width: 991px) and (min-width: 768px) {
.slider-size {
height: auto;
}
.slider-size > img {
width: 80%;
}
.navbar-default .navbar-nav > li > a {
padding: 30px 30px 30px 30px !important;
}
}
/* laptops */
@media (max-width: 1023px) and (min-width: 992px) {
.slider-size {
height: 200px;
}
.slider-size > img {
width: 80%;
}
}
/* desktops */
@media (min-width: 1024px) {
.slider-size {
height: 200px;
}
.slider-size > img {
width: 60%;
}
.navbar-default .navbar-nav > li > a {
line-height: 25px;
}
}
Hope your problem will be solved !
| {
"pile_set_name": "StackExchange"
} |
Q:
Regular expression syntax for hours
I need a regular expression for hours in PHP; I am using ereg. I need it to accept 1-23 without leading zeros.
^([1-9])|([1][0-9])|([2][0-3])$
That's what I am using, but I cannot find where the mistake is.
A:
Alternations (|) apply to everything in the surrounding group or globally if not in a group. So in your pattern, the ^ only applies to the first pattern and the $ only applies to the last pattern. In other words, your pattern matches any string which begins with a digit from 1 to 9, contains a 1 followed by a digit from 0 to 9, or ends with a 2 followed by a digit from 0 to 3.
Try putting the different options in one group:
^([1-9]|1[0-9]|2[0-3])$
Also note, 24-hour time starts at 00:00, so your pattern should look more like this:
^(1?[0-9]|2[0-3])$
Or this, if you need the hour to be 2 digits:
^([01][0-9]|2[0-3])$
| {
"pile_set_name": "StackExchange"
} |
Q:
expanding regex in Python
My program takes in a regex for describing a set of devices. For example,
--device=dev{01,02}{nyc}.hukka.com
should expand to dev01nyc.hukka.com and dev02nyc.hukka.com
How can I use the re module in Python to expand the user provided regex to complete strings that I can use? I am using Python 2.4.
A:
If we re.split on the braces, we get:
In [7]: re.split(r'\{(.*?)\}',userstring)
Out[7]: ['--device=dev', '01,02', '', 'nyc', '.hukka.com']
Every other item in the list came from inside braces, which we next need to split on commas:
In [8]: [ part.split(',') if i%2 else [part] for i,part in enumerate(re.split(r'\{(.*?)\}',userstring)) ]
Out[8]: [['--device=dev'], ['01', '02'], [''], ['nyc'], ['.hukka.com']]
Now we can use itertools.product to enumerate the possibilities:
import re
import itertools
userstring = '--device=dev{01,02}{nyc}.hukka.com'
for x in itertools.product(*[ part.split(',') if i%2 else [part] for i,part in
enumerate(re.split(r'\{(.*?)\}',userstring)) ]):
print(''.join(x))
yields
--device=dev01nyc.hukka.com
--device=dev02nyc.hukka.com
| {
"pile_set_name": "StackExchange"
} |
Q:
Select all Table/View Names with each table Row Count in Teredata
I have been stuck into a question.
The question is: I want to get all table names with their row counts from Teradata.
I have this query which gives me all View Name from a specific Schema.
I ] SELECT TableName FROM dbc.tables WHERE tablekind='V' AND databasename='SCHEMA' order by TableName;
& I have this query which gives me row count for a specific Table/View in Schema.
II ] SELECT COUNT(*) as RowsNum FROM SCHEMA.TABLE_NAME;
Now, can anyone tell me what to do to get the result from Query I (TableName) and put it into Query II (TABLE_NAME)?
You help will be appreciated.
Thanks in advance,
Vrinda
A:
This is an SP to collect row counts from all tables within a database; it's very basic, with no error checking, etc.
It shows a cursor and dynamic SQL using dbc.SysExecSQL or EXECUTE IMMEDIATE:
CREATE SET TABLE RowCounts
(
DatabaseName VARCHAR(30) CHARACTER SET LATIN NOT CASESPECIFIC,
TableName VARCHAR(30) CHARACTER SET LATIN NOT CASESPECIFIC,
RowCount BIGINT,
COllectTimeStamp TIMESTAMP(2))
PRIMARY INDEX ( DatabaseName ,TableName )
;
REPLACE PROCEDURE GetRowCounts(IN DBName VARCHAR(30))
BEGIN
DECLARE SqlTxt VARCHAR(500);
FOR cur AS
SELECT
TRIM(DatabaseName) AS DBName,
TRIM(TableName) AS TabName
FROM dbc.Tables
WHERE DatabaseName = :DBName
AND TableKind = 'T'
DO
SET SqlTxt =
'INSERT INTO RowCounts ' ||
'SELECT ' ||
'''' || cur.DBName || '''' || ',' ||
'''' || cur.TabName || '''' || ',' ||
'CAST(COUNT(*) AS BIGINT)' || ',' ||
'CURRENT_TIMESTAMP(2) ' ||
'FROM ' || cur.DBName ||
'.' || cur.TabName || ';';
--CALL dbc.sysexecsql(:SqlTxt);
EXECUTE IMMEDIATE sqlTxt;
END FOR;
END;
If you can't create a table or SP you might use a VOLATILE TABLE (as DrBailey suggested) and run the INSERTs returned by following query:
SELECT
'INSERT INTO RowCounts ' ||
'SELECT ' ||
'''' || DatabaseName || '''' || ',' ||
'''' || TableName || '''' || ',' ||
'CAST(COUNT(*) AS BIGINT)' || ',' ||
'CURRENT_TIMESTAMP(2) ' ||
'FROM ' || DatabaseName ||
'.' || TableName || ';'
FROM dbc.tablesV
WHERE tablekind='V'
AND databasename='schema'
ORDER BY TableName;
But a routine like this might already exist on your system; you might ask your DBA. If it doesn't have to be 100% accurate, this info might also be extracted from collected statistics.
| {
"pile_set_name": "StackExchange"
} |
Q:
Objective-C Fastest Way to find Closest NSDate in NSArray
I pass in an NSDate to the following function and want to find the NSDate in the array that is closest in time to the passed in value.
Please note that I do not want to know if the array contains the exact date like this post:
Find NSDate in sorted NSArray
I want to know which date in the array is nearest in time to my reference date.
I must do this frequently. The following works but is slow - how can I speed things up?
// Get the XYZ data nearest to the passed time
- (eciXYZ)eciDataForTime:(NSDate*)time {
// Iterate our list and find the nearest time
float nearestTime = 86400; // Maximum size of dataset ( 2 days )
int index = 0; // Track the index
int smallestDifferenceIndex = index; // Track the index with the smallest difference
NSDate *lastListDate; // Track the closest list date
for ( index = 0 ; index < [self.time count]-1 ; index++ ) {
NSDate *listDate = [self.time objectAtIndex:index]; // Get the date - Time is an NSMutableArray of NSDates
// NSTimeInterval is specified in seconds; it yields sub-millisecond precision over a range of 10,000 years.
NSTimeInterval timeDifferenceBetweenDates = [listDate timeIntervalSinceDate:time];
if ( timeDifferenceBetweenDates < nearestTime && timeDifferenceBetweenDates > 0 ) {
nearestTime = timeDifferenceBetweenDates; // Update the tracker
smallestDifferenceIndex = index; // Update the smallest difference tracker
lastListDate = listDate; // Capture the closest date match
//NSLog(@"Time: %f %@",timeDifferenceBetweenDates,listDate);
}
}
A:
Edit: I was under the mistaken impression that NSMutableOrderedSet would automatically maintain order. In the past I probably had used a subclass to achieve this effect. There is no benefit to using it over NSArray unless you want set semantics.
Keeping a collection sorted is a good way to keep searches fast. Use NSOrderedSet or NSMutableOrderedSet instead of an array object if you can; otherwise you will have to keep the array sorted when you add to it, or, if it is only created once, just sort it at that point.
Also you can enumerate any collection (that conforms to the protocol) faster by using NSFastEnumeration.
example:
// for an NSOrdered set or NSArray of NSDates
for (NSDate* date in self.times) {
// do something with date
// cleaner, shorter code too
}
because your collection is sorted you now will be able to tell what the closest date is without having to iterate the entire collection (most of the time).
// searching for date closest to Tuesday
[Sunday] <- start search here
[Monday] <- one day before
[Thursday] <- two days after, we now know that Monday is closest
[Friday] <- never have to visit
[Saturday] <- never have to visit
As pointed out by @davecom you can search faster using a binary search. Normally you would achieve this using either CFArrayBSearchValues or the indexOfObject:inSortedRange:options:usingComparator: method on NSArray (it assumes the array is sorted so beware) and passing NSBinarySearchingOptions to the options parameter. In your case this won't work because you don't know the exact object or value you are looking for. You would have to roll your own binary search algorithm.
If this is not fast enough for your purposes we may need more information on context. It may be the best idea to use a C array/C++ list/NSPointerArray of timestamps instead. I feel like your biggest slowdown here is the Objective-C overhead, especially for the dates. If you don't use these as actual date objects anywhere then surely you would be better off using timestamps.
| {
"pile_set_name": "StackExchange"
} |
Q:
Connect Raspberry Pi2 to Windows 10 Shared folder
I have already shared a folder on my Windows 10 computer at this address: 192.168.1.179/Toshiba. I would like to connect my Raspberry Pi 2 to that folder so I can access all my files from the Raspberry Pi. The IP address of my Raspberry Pi 2 is 192.168.1.132.
I don't need a password to connect to the Windows shared folder.
I tried to use this command inside Raspberry Pi 2:
sudo mount -t cifs //192.168.1.179/Toshiba mount-point
I already have a folder called mount-point inside my Raspberry Pi 2.
Error
mount error(13): Permission denied
I can access to that folder if I use smbclient, I just write
smbclient //192.168.1.179/Toshiba
And the terminal shows me
Enter ismael's password:
Domain=[SALÓN] OS=[Windows 10 Pro 10240] Server=[Windows 10 Pro 6.3]
smb: \>
If I type dir or ls, the prompt tells me:
NT_STATUS_ACCESS_DENIED listing \*
A:
I believe you need to specify the option
-o username=your_windows_username
I believe it will prompt you for a password, or you can specify it with password=your_password.
smbclient is only listing the share, which anyone can do, but accessing the files on the share is limited to people who have an account.
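For example, a complete mount command might look roughly like this (the share comes from the question; the username and password are placeholders for whatever your Windows account actually uses):
sudo mount -t cifs //192.168.1.179/Toshiba mount-point -o username=your_windows_username,password=your_password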
| {
"pile_set_name": "StackExchange"
} |
Q:
CloudWatch set unit for a custom metric
I have a CloudWatch dashboard with a set of widgets. All the widgets have graphs/line charts based on custom metrics. I defined these custom metrics from metric-filters being defined on the CloudWatch log group.
For every custom metric, I want to set the unit to, for example, milliseconds, seconds, hours etc. CloudWatch console somehow shows all the metric units to be counts only.
Can we not modify the CloudWatch metric unit to be different than count? If not possible from the console, is it possible through the API?
A:
Every datapoint has a unit and that unit is set when the datapoint is published. If unit is not set, it defaults to None.
You can't change the unit when graphing or when fetching the data via API, graphs and APIs simply return the unit that is set on datapoints. Also, CloudWatch won't scale your data based on unit. If you have a datapoint with a value of 1200 milliseconds for example and you request this metric in seconds you will get no data, CloudWatch won't scale your data and return 1.2 seconds as one might expect.
So it looks like CloudWatch Logs is publishing the data with the unit set to Count. I couldn't find a way to have it publish data with any other unit.
| {
"pile_set_name": "StackExchange"
} |
Q:
Codename one builds failing
I am trying to port my LWUIT app to Codename One.
I have used a JSON package (org.json.me) in the application. This package is actually part of the json jar and contains classes to manipulate JSON files.
The application was working fine when I used to make J2ME builds with LWUIT.
The application also works without any issues in the Codename One emulator.
When I try to send a J2ME build to the server by right-clicking the project and selecting 'Send J2ME Build', my application's build process crashes with some warnings.
Executing: javac -source 1.2 -target 1.2 -classpath C:\Users\Shai\AppData\Local\Temp\build925171746515355215xxx\tmpclasses;C:\Users\Shai\Desktop\j2me\midpapis.jar -d C:\Users\Shai\AppData\Local\Temp\build925171746515355215xxx\tmpclasses C:\Users\Shai\AppData\Local\Temp\build925171746515355215xxx\tmpsrc\GREStub.java Executing: java -jar C:\Users\Shai\Desktop\j2me\proguard.jar -injars . -libraryjars C:\Users\Shai\Desktop\j2me\midpapis.jar -outjars C:\Users\Shai\AppData\Local\Temp\build925171746515355215xxx\result\GRE.jar -target 1.3 -keep public class ** extends javax.microedition.midlet.MIDlet { public *; } -defaultpackage '' -printmapping C:\Users\Shai\AppData\Local\Temp\build925171746515355215xxx\result\obfuscation_mapping.txt -overloadaggressively -dontusemixedcaseclassnames -useuniqueclassmembernames -dontoptimize ProGuard, version 4.7
Reading program directory [C:\Users\Shai\AppData\Local\Temp\build925171746515355215xxx\tmpclasses]
Reading library jar [C:\Users\Shai\Desktop\j2me\midpapis.jar]
Warning: com.mycompany.myapp.GRE: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.GRE: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.GRE: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.GRE: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.GRE: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.GRE$8: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.GRE$8: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal: can't find referenced class org.json.me.JSONException
Warning: com.mycompany.myapp.Verbal: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal$4: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal$4: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal$4: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal$4: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal$8: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal$8: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal$9: can't find referenced class org.json.me.JSONObject
Warning: com.mycompany.myapp.Verbal$9: can't find referenced class org.json.me.JSONObject
Note: com.codename1.impl.midp.GameCanvasImplementation: can't find dynamically referenced class com.siemens.mp.game.Light
Note: com.codename1.impl.midp.GameCanvasImplementation: can't find dynamically referenced class com.motorola.phonebook.PhoneBookRecord
Note: com.codename1.impl.midp.GameCanvasImplementation: can't find dynamically referenced class com.nokia.mid.ui.FullCanvas
Note: com.codename1.impl.midp.GameCanvasImplementation: can't find dynamically referenced class net.rim.device.api.system.Application
Note: com.codename1.impl.midp.GameCanvasImplementation: can't find dynamically referenced class com.mot.iden.util.Base64
Note: com.codename1.impl.midp.GameCanvasImplementation: can't find dynamically referenced class mmpp.media.MediaPlayer
Note: there were 6 unresolved dynamic references to classes or interfaces.
You should check if you need to specify additional program jars.
Warning: there were 26 unresolved references to classes or interfaces.
You may need to specify additional library jars (using '-libraryjars').
Error: Please correct the above warnings first.
Now, I feel that the server is not finding my json package. But I need this build to succeed. I have used the classes of this jar a lot in my app, and I don't want to migrate to the built-in JSON parser, as I'd have to change my code a lot, which I wish to strictly avoid.
1) What can I do to resolve this?
2) Can we not use third-party jars in Codename One?
A:
You can't change the library classpath in Codename One. Everything must be part of the source directories in order to work properly.
Codename One has its own JSON parser; please read about it in the Codename One developer guide.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to update the data in an ng2-chartjs2 chart in Angular 2 and Node JS
I am using NodeJS, Angular2, and the ng2-chartjs2. Below I listed the relevant parts of my code that is rendering charts. The data is loaded into this.data from an API using a fixed date range. I would like to allow the user to select a date range and then update the chart. From here I know that you can call update() on the chart object to update the data in it, but I don't know how to get a hold of the chart object, since the component code never actually has a reference to it - it's done automagically when the template is rendered. Looking at the source code (line 13) I see that the author intended to make the object available. I contacted the author but haven't received a response yet and need to get moving. I have learned a lot about Angular2 but am no expert yet, so perhaps a deeper understanding of Angular2 makes this obvious. How can I either get access to the object to call update() on it, or do it some other clean way?
The template contains
<chart [options]="simple.options"></chart>
and the component typescript code contains
import { ChartComponent } from 'ng2-chartjs2';
...
@Component({
selector: 'home',
templateUrl: 'client/components/home/home.component.html',
styleUrls: ['client/components/home/home.component.css'],
directives: [DashboardLayoutComponent, CORE_DIRECTIVES, ChartComponent],
pipes: [AddCommasPipe],
})
...
setCurrentSimpleChart = (simpleType: number): void => {
this.simple.options = {
type: 'line',
options: this.globalOptions,
data: {
labels: this.data[simpleType].labels,
datasets: [{
label: this.titles[simpleType],
data: this.data[simpleType].data,
backgroundColor: 'rgba(255, 99, 132, 0.2)',
borderColor: 'rgba(255,99,132,1)',
borderWidth: 1
}],
},
};
...
}
Update: In case this helps anyone: I actually have two different charts on the page, so I googled around based on the accepted answer and found ViewChildren, and mapped them to different variables so I can update them both separately, with
[this.simpleChart, this.liftChart] = this.chartComponents.toArray().map(component => component.chart);
(Note also that this was using an rc of angular2 - since then directives, etc have been moved out of the components themselves.)
A:
You can hold reference to component by using ViewChild:
@ViewChild(ChartComponent) chartComp;
And then you can get chart object:
let chart = this.chartComp.chart;
Here is the corresponding plunker
| {
"pile_set_name": "StackExchange"
} |
Q:
How to access auto-generated github page?
I have auto-generated a github page according to the instructions here. Now I have a new branch called gh-pages.
I don't really get the next step: can I host the generated page on GitHub, or do I need to deploy it on my own web server?
Do I need to merge the page branch with my master branch?
Thanks.
A:
Found the answer here. To access the page go to username.github.com/projectname. Any change you make to the gh-pages branch will be reflected there.
| {
"pile_set_name": "StackExchange"
} |
Q:
How do you connect to SQL Server running on the host from a .NET app in a windows docker container
I have a Windows docker container and a simple test app that is attempting to connect with the SQL Server running on the host, but it is unable to connect.
I am able to ping the host from the container using "ping -4 hostmachinename", but the SqlConnection Open method fails.
To test I run these commands:
docker build --tag=heydocker .
docker run heydocker:latest
# Dockerfile:
FROM microsoft/dotnet-framework
WORKDIR /app
COPY . /app
EXPOSE 1433
CMD ["program.exe"]
This is my app code:
// program.cs:
using System;
using System.Data;
using System.Data.Common;
using System.Data.SqlClient;
namespace Test
{
public static class program
{
const string CONNECT = "Data Source=docker.for.win.localhost;Database=MG108RC3_All;User ID=sa;Pwd=password;Network Library=dbmssocn";
public static void Main()
{
Console.WriteLine( "Hello Docker" );
try
{
using ( var con = new SqlConnection( CONNECT ) )
{
con.Open();
Console.WriteLine( "Open SUCCESS" );
}
}
catch ( Exception ex )
{
Console.WriteLine( $"ERROR: {ex.Message}" );
}
}
}
}
I have tried omitting the EXPOSE in the Dockerfile, as well as using EXPOSE 1433:1433.
I have also tried using the actual machine name in the connection string, as well as host.docker.internal.
When using docker.for.win.localhost I get the exception: ERROR: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - No such host is known.)
When using the machine name I get: ERROR: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - The wait operation timed out.)
I have tried various options for --net, but since I am able to ping my host with the default network settings I am not confident that is the problem.
A:
OK, I'm an idiot. I assumed that since I could access my SQL server from another machine on the network, that should be sufficient. But my firewall is disabled for our corporate network, and the container is using a private network (with firewall enabled). Turning off the firewall for private networks allowed the connection to succeed.
Whew!
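If you prefer not to disable the firewall for private networks entirely, a narrower alternative (a sketch, not the exact rule used here) is to open only the SQL Server port for the private profile with an elevated PowerShell command such as:
New-NetFirewallRule -DisplayName "SQL Server 1433" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow -Profile Private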
| {
"pile_set_name": "StackExchange"
} |
Q:
Chef-Server 12 Getting me errors when the node is not time synced
I'm getting this error when the clock on my node is different than 15 minutes from my chef server.
Authentication Error:
Failed to authenticate to the chef server (http 401). enter code
here`The request failed because your clock has drifted by more than 15
minutes.
Syncing your clock to an NTP Time source should resolve the issue.
Does Someone know how to change that setting on the server or other method to avoid this check? Thanks.
A:
I solved this issue overriding a config on the server. I edited this file /etc/opscode/chef-server.rb
opscode_erchef['auth_skew'] = <your threshold here, in seconds>
E.g.: opscode_erchef['auth_skew'] = 43200 (a threshold of 12 hours)
With this workaround, the Web Console's last check-in metric will be wrong, but otherwise it works fine.
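Note that after editing /etc/opscode/chef-server.rb the change typically only takes effect after a reconfigure, roughly:
sudo chef-server-ctl reconfigure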
| {
"pile_set_name": "StackExchange"
} |
Q:
QT creating push buttons that add text to a text edit box
New to Qt, just playing around with it to see if it's something I will enjoy using; if so, I would like to go on and learn it in depth.
Struggling a bit with the button concept. I have created a button and a textedit area. I want to add a string of text into the textedit window when the button is pressed.
I can't seem to find anything on Google or the Qt wiki to achieve this. Can someone point me in the right direction so I can at least get started and have a play with this great tool?
A:
In Qt, signals and slots are used to communicate between objects. This should provide you with the necessary information to get you started.
A signal is emitted when a particular event occurs. Qt's widgets have many predefined signals, but we can always subclass widgets to add our own signals to them. A slot is a function that is called in response to a particular signal. Qt's widgets have many pre-defined slots, but it is common practice to subclass widgets and add your own slots so that you can handle the signals that you are interested in.
So, in your particular case you need to connect the QPushButton clicked() signal with your custom slot that does what is needed (add the text to the textarea):
QPushButton * btn = new QPushButton("Button", this);
connect(btn, SIGNAL(clicked()), this, SLOT(onBtnClicked()));
And we need to declare our slot in the header:
private slots:
void onBtnClicked();
And define it:
void MySpecialWidget::onBtnClicked()
{
// Do what is to be done
}
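For the original goal - appending a string to the text edit when the button is clicked - the slot body could look roughly like this, assuming the widget has a QTextEdit* member named textEdit (that member name is just an assumption for illustration):
void MySpecialWidget::onBtnClicked()
{
    // Append a line of text to the text edit each time the button is clicked
    textEdit->append("Button was clicked");
}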
If you have done everything correctly it should work... Otherwise have a look at the console to see if there are any messages looking like:
Object::connect: No such slot MySpecialWidget::onClick() in ...
or
Object::connect: No such signal ....
They should give you a hint about what is going on.
Finally I recommend to have a look at the broad set of Qt examples.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to disable a WinForms TreeView node checkbox?
I need to be able to disable some of the checkboxes in a TreeView control of a WinForms application, but there's no such functionality built-in to the standard TreeView control.
I am already using the TreeView.BeforeCheck event and cancel it if the node is disabled and that works perfectly fine.
I also change the ForeColor of the disabled nodes to GrayText.
Does anyone have a simple and robust solution?
A:
Since there's support in C++ we can resolve it using p/invoke.
Here's the setup for the p/invoke part, just make it available to the calling class.
// constants used to hide a checkbox
public const int TVIF_STATE = 0x8;
public const int TVIS_STATEIMAGEMASK = 0xF000;
public const int TV_FIRST = 0x1100;
public const int TVM_SETITEM = TV_FIRST + 63;
[DllImport("user32.dll")]
static extern IntPtr SendMessage(IntPtr hWnd, uint Msg, IntPtr wParam,
IntPtr lParam);
// struct used to set node properties
public struct TVITEM
{
public int mask;
public IntPtr hItem;
public int state;
public int stateMask;
[MarshalAs(UnmanagedType.LPTStr)]
public String lpszText;
public int cchTextMax;
public int iImage;
public int iSelectedImage;
public int cChildren;
public IntPtr lParam;
}
We want to determine this on a node-by-node basis. The easiest way to do that is in the DrawNode event. We have to set our tree as owner drawn in order for this event to fire, so be sure to set DrawMode to something other than the default setting.
this.tree.DrawMode = TreeViewDrawMode.OwnerDrawText;
this.tree.DrawNode += new DrawTreeNodeEventHandler(tree_DrawNode);
In your tree_DrawNode function, determine if the node being drawn is supposed to have a checkbox, and hide it when appropriate. Then set the DrawDefault property to true, since we don't want to worry about drawing all the other details.
void tree_DrawNode(object sender, DrawTreeNodeEventArgs e)
{
if (e.Node.Level == 1)
{
HideCheckBox(e.Node);
e.DrawDefault = true;
}
else
{
e.Graphics.DrawString(e.Node.Text, e.Node.TreeView.Font,
Brushes.Black, e.Node.Bounds.X, e.Node.Bounds.Y);
}
}
Lastly, the actual call to the function we defined:
private void HideCheckBox(TreeNode node)
{
TVITEM tvi = new TVITEM();
tvi.hItem = node.Handle;
tvi.mask = TVIF_STATE;
tvi.stateMask = TVIS_STATEIMAGEMASK;
tvi.state = 0;
IntPtr lparam = Marshal.AllocHGlobal(Marshal.SizeOf(tvi));
Marshal.StructureToPtr(tvi, lparam, false);
SendMessage(node.TreeView.Handle, TVM_SETITEM, IntPtr.Zero, lparam);
}
| {
"pile_set_name": "StackExchange"
} |
Q:
How to remove hyper link from html method
Please walk through the code below and help me remove the hyperlinks from the html() output.
I want the final output, as HTML, in the footercontent variable.
<!DOCTYPE html>
<html>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
var footercontent = $('footer').html();
alert(footercontent);
});
</script>
</head>
<body>
<footer>
<a href="#">Site Map</a> | <a href="#">Privacy statement</a> | <a href="#">Tutorials</a>
<p>Fotoer contetn 1</p>
<p>Footer content 2</p>
<p>Footer content 3</p>
<p>Footer content 4</p>
</footer>
</body>
</html>
A:
You can use jQuery's .replaceWith():
$("footer > a").replaceWith(function(){
return $( this ).contents();
});
fiddle
| {
"pile_set_name": "StackExchange"
} |
Q:
Using file find to grab source files
I am trying to use (as an exercise) the file find callback capability to filter source files out. It does not work so far and just grab everything under the sun.
could you point me out in the good direction ?
#!/usr/local/bin/perl
use strict;
use warnings;
use File::Find;
my @srcFiles;
my @srcExt = qw(cpp h py pl);
my @startDir = qw(.);
find( sub{
my @fields = split /'.'/, $File::Find::name;
push @srcFiles, $File::Find::name if grep $fields[-1], @srcExt;
},
@startDir);
print 'Source files found: ', @srcFiles;
Thanks
A:
Your problem is in your split
my @ar = split /'.'/, $File::Find::name;
(the quoted dot in the pattern is the problem)
You want to split the filename on a dot, so you tried '.'. But that does not escape the dot. Escape the character with a backslash, so the syntax should be:
my @ar = split /\./, $File::Find::name;
So your code is,
find( sub{
my @fields = split /\./, $File::Find::name;
push @srcFiles, $File::Find::name if grep $fields[-1], @srcExt;
},
@startDir);
Or else try something like the following; this is more effective than your code and it fixes some bugs, as @hobbs mentioned in a comment.
Join the extensions with pipes, then use a precompiled regex (qr) containing a non-capturing group, and check the filename against it inside the subroutine.
#!/usr/local/bin/perl
use strict;
use warnings;
use File::Find;
my @srcFiles;
my @srcExt = qw(cpp h py pl);
my $match = join("|",@srcExt);
$match = qr/\.(?:$match)$/;
my @startDir = qw(.);
find(sub
{
my $s = $File::Find::name;
push(@srcFiles,$s) if($s =~m/$match/);
}
, @startDir);
print @srcFiles;
| {
"pile_set_name": "StackExchange"
} |
Q:
Program execution steps
I have a C++ program that works fine; however, it needs to run for a long time. While it is running, I could continue to develop some parts of it. If I recompile my program, this will replace the binary with a new one. Will this modify the behavior of the running program? Or are the process and the binary file two separate things once the program is launched?
More generally, what are the steps of a program execution?
A:
On Linux, the process uses memory mapping to map the text section of the executable file and shared libraries directly into the running process memory. So if you could overwrite the executable file, it would affect the running process. However, writing into a file that's mapped for execution is prohibited -- you get a "Text file busy" error.
However, you can still recompile the program. If the compiler (actually the linker) gets this error, it removes the old executable file and creates a new one. On Unix, if you remove a file that's in use, the file contents are not actually removed from the disk, only the reference from the directory entry is removed; the file isn't fully deleted until all references to it (directory entries, file descriptors and memory mappings) go away. So the running process continues to be mapped to the old, nameless file. You can see this with the following demonstration:
barmar@dev:~$ ls -li testsleep
229774 -rwxr-xr-x 1 barmar adm 4584 Apr 24 04:30 testsleep
barmar@dev:~$ ./testsleep &
[1] 17538
barmar@dev:~$ touch testsleep.c
barmar@dev:~$ make testsleep
cc testsleep.c -o testsleep
barmar@dev:~$ ls -li testsleep
229779 -rwxr-xr-x 1 barmar adm 4584 Apr 24 04:32 testsleep
The inode number changed from 229774 to 229779 when I recompiled the program while it was running, indicating that a new file was created.
| {
"pile_set_name": "StackExchange"
} |
Q:
switch statements
The problem is to write the paragraph 5 different times. Each paragraph has a different cage number and corresponding animal. So cage 1 has a lion and cage 2 has a tiger. The problem is I don't know how to combine both the cage number the different corresponding animal in the same paragraph.
I don't know how to enter the switch statement in the second line of the paragraph. I tried writing println("This cage holds a " + i); but Eclipse gave me an error. How do i incorporate both variables n and i in the same paragraph at the same time?
import acm.program.*;
public class ZooAnimals extends ConsoleProgram {
private static final int START = 1;
public void run(){
for (int n = START; n <=5; n++ ) {
println("This animal is in cage" + n);
println("This cage holds a " ); <---- type of animal goes in here.
println("Wild animals are very dangerous.");
}
for(int i = START; i<=5; i++) {
switch(i) {
case 1: println("lion");
case 2: println("tiger");
case 3: println("elephant");
case 4: println("snakes");
case 5: println("hippo");
}
}
}
}
A:
I would write a small method like so:
public String getAnimal(int cage)
{
switch(cage) {
case 1: return "lion";
case 2: return "tiger";
case 3: return "elephant";
case 4: return "snakes";
case 5: return "hippo";
default: return "Animal Not Found!";
}
}
I would then replace this code:
for (int n = START; n <=5; n++ ) {
println("This animal is in cage" + n);
println("This cage holds a " ); <-----------type of animal goes in here.
println("Wild animals are very dangerous.");
}
with this:
for (int n = START; n <=5; n++ ) {
println("This animal is in cage" + n);
println("This cage holds a " + getAnimal(n)); <-----------type of animal goes in here.
println("Wild animals are very dangerous.");
}
| {
"pile_set_name": "StackExchange"
} |
Q:
User Profile Picture not showing in SP2010 Feb2012 CU
Environment: SharePoint 2010 Enterprise Edition, February 2012 CU
We didn't synchronise profile pictures with our Active Directory so far, but decided to change that. We tested it in our dev environment, and it worked. We tried it in our prod environment, and it doesn't work.
What we did was
have a connection to ap.ourdomain, and a connection to all the other domains (uk.ourdomain, sa.ourdomain, am.ourdomain, ...)
Add the User Property mapping Picture (SP) <- thumbnailPhoto (AD) (for both connections)
ran an incremental sync, but later also full sync
ran Update-SPProfilePhotoStore -MySiteHostLocation mysite -CreateThumbnailsForImportedPhotos $true with the correct account (farm admin) (Note: mysite resolves to mysite.ap.ourdomain for us, for colleagues in Europe it resolves to mysite.eu.ourdomain, etc.)
Still, pictures don't show up for our users. We noticed that some pictures actually DO show up for users outside of ap.ourdomain who've got their photo in their MySite (so the PictureURL for colleagues in the US may be mysite.am.ourdomain/User Photos/Profile Pictures/....)
Is there any way to check if the photos are indeed imported from AD? Anything we did wrong?
Update: Checked in the FIM client, thumbnailPhoto is synced as seen in DS_FULLSYNC, but no photos show up in MOSS_EXPORT or mysite/User Photos/
Update 4 June 2012: Wanted to escalate it to external support today, and while preparing my email, I wanted to take screenshots of our setup. What can I say, the Picture<->thumbnailPhoto mapping was gone! It was still there last Friday (I looked at it in the late afternoon, and saw that mapping don't know how many times since we encountered this issue), but today there was no trace of it (no, nobody removed it). So we set it up again, ran a full sync, and this time it worked! I have no idea what happened, I can only say that it's working now as it should.
A:
Update 4 June 2012: Wanted to escalate it to external support today, and while preparing my email, I wanted to take screenshots of our setup. What can I say, the Picture<->thumbnailPhoto mapping was gone! It was still there last Friday (I looked at it in the late afternoon, and saw that mapping don't know how many times since we encountered this issue), but today there was no trace of it (no, nobody removed it). So we set it up again, ran a full sync, and this time it worked! I have no idea what happened, I can only say that it's working now as it should.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to execute cucumber feature file parallel
I have below feature files (Separate feature files) in src/test/resources/feature/ and I would like to run them in parallel. Like: One feature file has to execute in chrome and another one has to execute in firefox as mentioned @Tags name.
Feature: Refund item
@chrome
Scenario: Jeff returns a faulty microwave
Given Jeff has bought a microwave for $100
And he has a receipt
When he returns the microwave
Then Jeff should be refunded $100
Feature: Refund Money
@firefox
Scenario: Jeff returns the money
Given Jeff has bought a microwave for $100
And he has a receipt
When he returns the microwave
Then Jeff should be refunded $100
Can somebody assist me in achieving this? I'm using cucumber-java version 1.2.2, with AbstractTestNGCucumberTests as the runner. Also, let me know how I can create a Test Runner dynamically using feature files and make them run in parallel.
A:
Update: version 4.0.0 is available at the Maven central repository with a bunch of changes. For more details go here.
Update: version 2.2.0 is available at the Maven central repository.
You can use the open-source plugin cucumber-jvm-parallel-plugin, which has many advantages over existing solutions. It is available at the Maven repository:
<dependency>
<groupId>com.github.temyers</groupId>
<artifactId>cucumber-jvm-parallel-plugin</artifactId>
<version>2.1.0</version>
</dependency>
First you need to add this plugin with required configuration in your project pom file.
<plugin>
<groupId>com.github.temyers</groupId>
<artifactId>cucumber-jvm-parallel-plugin</artifactId>
<version>2.1.0</version>
<executions>
<execution>
<id>generateRunners</id>
<phase>generate-test-sources</phase>
<goals>
<goal>generateRunners</goal>
</goals>
<configuration>
<!-- Mandatory -->
<!-- comma separated list of package names to scan for glue code -->
<glue>foo, bar</glue>
<outputDirectory>${project.build.directory}/generated-test-sources/cucumber</outputDirectory>
<!-- The directory, which must be in the root of the runtime classpath, containing your feature files. -->
<featuresDirectory>src/test/resources/features/</featuresDirectory>
<!-- Directory where the cucumber report files shall be written -->
<cucumberOutputDir>target/cucumber-parallel</cucumberOutputDir>
<!-- comma separated list of output formats json,html,rerun.txt -->
<format>json</format>
<!-- CucumberOptions.strict property -->
<strict>true</strict>
<!-- CucumberOptions.monochrome property -->
<monochrome>true</monochrome>
<!-- The tags to run, maps to CucumberOptions.tags property you can pass ANDed tags like "@tag1","@tag2" and ORed tags like "@tag1,@tag2,@tag3" -->
<tags></tags>
<!-- If set to true, only feature files containing the required tags shall be generated. -->
<filterFeaturesByTags>false</filterFeaturesByTags>
<!-- Generate TestNG runners instead of default JUnit ones. -->
<useTestNG>false</useTestNG>
<!-- The naming scheme to use for the generated test classes. One of 'simple' or 'feature-title' -->
<namingScheme>simple</namingScheme>
<!-- The class naming pattern to use. Only required/used if naming scheme is 'pattern'.-->
<namingPattern>Parallel{c}IT</namingPattern>
<!-- One of [SCENARIO, FEATURE]. SCENARIO generates one runner per scenario. FEATURE generates a runner per feature. -->
<parallelScheme>SCENARIO</parallelScheme>
<!-- This is optional, required only if you want to specify a custom template for the generated sources (this is a relative path) -->
<customVmTemplate>src/test/resources/cucumber-custom-runner.vm</customVmTemplate>
</configuration>
</execution>
</executions>
</plugin>
Now add the plugin below, just after the plugin above; it will invoke the runner classes generated by that plugin:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.19</version>
<configuration>
<forkCount>5</forkCount>
<reuseForks>true</reuseForks>
<includes>
<include>**/*IT.class</include>
</includes>
</configuration>
</plugin>
The above two plugins will do the magic for running Cucumber tests in parallel (provided your machine also has the hardware to support it).
Choose <forkCount>n</forkCount> carefully: 'n' is directly proportional to 1) the available hardware support and 2) your available nodes, i.e. browser instances registered to the HUB.
One major and most important change is that your WebDriver class must be SHARED, and you should not call the driver.quit() method, as closing is taken care of by a shutdown hook.
import cucumber.api.Scenario;
import cucumber.api.java.After;
import cucumber.api.java.Before;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebDriverException;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.events.EventFiringWebDriver;
public class SharedDriver extends EventFiringWebDriver {
private static WebDriver REAL_DRIVER = null;
private static final Thread CLOSE_THREAD = new Thread() {
@Override
public void run() {
REAL_DRIVER.close();
}
};
static {
Runtime.getRuntime().addShutdownHook(CLOSE_THREAD);
}
public SharedDriver() {
super(CreateDriver());
}
public static WebDriver CreateDriver() {
WebDriver webDriver = REAL_DRIVER;
if (REAL_DRIVER == null)
webDriver = new FirefoxDriver();
setWebDriver(webDriver);
return webDriver;
}
public static void setWebDriver(WebDriver webDriver) {
REAL_DRIVER = webDriver;
}
public static WebDriver getWebDriver() {
return REAL_DRIVER;
}
@Override
public void close() {
if (Thread.currentThread() != CLOSE_THREAD) {
throw new UnsupportedOperationException("You shouldn't close this WebDriver. It's shared and will close when the JVM exits.");
}
super.close();
}
@Before
public void deleteAllCookies() {
manage().deleteAllCookies();
}
@After
public void embedScreenshot(Scenario scenario) {
try {
byte[] screenshot = getScreenshotAs(OutputType.BYTES);
scenario.embed(screenshot, "image/png");
} catch (WebDriverException somePlatformsDontSupportScreenshots) {
System.err.println(somePlatformsDontSupportScreenshots.getMessage());
}
}
}
If you want to execute more than 50 threads, i.e. the same number of browser instances registered to the HUB, the Hub will die if it doesn't get enough memory; therefore, to avoid this critical situation, you should start the hub with -DPOOL_MAX=512 (or larger), as stated in the grid2 documentation.
Really large (>50 node) Hub installations may need to increase the jetty threads by setting -DPOOL_MAX=512 (or larger) on the java command line.
java -jar selenium-server-standalone-<version>.jar -role hub -DPOOL_MAX=512
A:
Cucumber does not support parallel execution out of the box.
I've tried, but it is not friendly.
We have to use Maven's capability to invoke it in parallel. Refer to the link.
Also, there is a GitHub project which uses a custom plugin to execute in parallel.
Refer to cucumber-jvm-parallel-plugin.
A:
If all you are expecting is to be able to run multiple features in parallel, then you can try the following:
Duplicate the class AbstractTestNGCucumberTests in your test project and set the attribute parallel=true on the @DataProvider annotated method.
Since the default dataprovider-thread-count in TestNG is 10, and you have now instructed TestNG to run features in parallel, you should start seeing your feature files executed in parallel.
But I understand that Cucumber reporting is inherently not thread safe, so your reports may appear garbled.
| {
"pile_set_name": "StackExchange"
} |
Q:
R Sum every n rows across n columns
I have a data.frame that looks like this:
Geotype <- c(1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3)
Strategy <- c("Demand", "Strategy 1", "Strategy 2", "Strategy 3", "Strategy 4", "Strategy 5", "Strategy 6")
Year.1 <- c(1:21)
Year.2 <- c(1:21)
Year.3 <- c(1:21)
Year.4 <- c(1:21)
mydata <- data.frame(Geotype,Strategy,Year.1, Year.2, Year.3, Year.4)
I want to sum each Strategy for each Year.
This means I need to sum 6 rows down each column in the data frame and then skip the Demand row. I then want to repeat this for all columns (40 years).
I want the output data frame to look like this:
Geotype.output <- c(1, 2, 3)
Year.1.output <- c(27, 69, 111)
Year.2.output <- c(27, 69, 111)
Year.3.output <- c(27, 69, 111)
Year.4.output <- c(27, 69, 111)
output <- data.frame(Geotype.output,Year.1.output, Year.2.output, Year.3.output, Year.4.output)
Any suggestions on how to do this elegantly? I tried to hack a solution together using this, this and this, but I wasn't successful because I need to skip a row.
A:
You can try with base R aggregate function (to aggregate data by Geotype, using function sum as "unique value") but using a reduced data.frame (without the "Demand" rows and the Strategy column):
aggregate(.~Geotype, data=mydata[mydata$Strategy !="Demand", -2], FUN=sum)
# Geotype Year.1 Year.2 Year.3 Year.4
#1 1 27 27 27 27
#2 2 69 69 69 69
#3 3 111 111 111 111
A:
Using data.table:
library(data.table)
setDT(mydata)
output = mydata[Strategy != "Demand",
.(Year.1.output = sum (Year.1),
Year.2.output = sum (Year.2),
Year.3.output = sum (Year.3),
Year.4.output = sum (Year.4)),
by = Geotype]
# Geotype Year.1.output Year.2.output Year.3.output Year.4.output
# 1: 1 27 27 27 27
# 2: 2 69 69 69 69
# 3: 3 111 111 111 111
We can simplify this to deal more easily with many year columns by
setDT(mydata)[Strategy != "Demand",
lapply(.SD, sum),
by=Geotype,
.SDcols=grep("Year", names(mydata))]
A:
I would prefer getting my data in a long format like so:
library(dplyr)
library(tidyr)
library(reshape2)
mydata %>% gather(key, value, - Geotype, - Strategy) %>%
filter(Strategy!="Demand") %>% group_by(Geotype, key) %>%
summarize(sum = sum(value))
resulting in:
Geotype key sum
<dbl> <chr> <int>
1 1 Year.1 27
2 1 Year.2 27
3 1 Year.3 27
4 1 Year.4 27
5 2 Year.1 69
6 2 Year.2 69
7 2 Year.3 69
8 2 Year.4 69
9 3 Year.1 111
10 3 Year.2 111
11 3 Year.3 111
12 3 Year.4 111
Using spread:
mydata %>% gather(key, value, - Geotype, - Strategy) %>%
filter(Strategy!="Demand") %>% group_by(Geotype, key) %>%
summarize(sum = sum(value)) %>% spread(key, sum)
yields
Geotype Year.1 Year.2 Year.3 Year.4
* <dbl> <int> <int> <int> <int>
1 1 27 27 27 27
2 2 69 69 69 69
3 3 111 111 111 111
| {
"pile_set_name": "StackExchange"
} |
Q:
Meaning of "binge"?
What does it really mean? And what do binge-watching; binge-reading; binge-eating mean?
A:
Binge as a modifier for an activity indicates that the activity is performed both episodically and excessively.
A binge-drinker (which was the most common original use) is one who "goes on a binge". This implies that s/he gets very drunk, but not all the time. If it were all the time, it would be chronic or habitual.
| {
"pile_set_name": "StackExchange"
} |
Q:
Twenty-Fifth or 25th?
Possible Duplicate:
What is the best format to use when writing out dates?
In represent a time and date, which of the following is the most proper (did I even frame this question right?)
Tuesday, September 25, 2012
vs
Tuesday, September 25th, 2012
vs
Tuesday, September Twenty-Fifth, 2012
For whatever reason, this has sparked debate where I work.
A:
It isn't a question of any of them being proper. Different people follow different conventions. The important thing is that those working in any one organisation decide on a particular format and stick to it. My own practice, following the practice where I once worked, is to write 25 September 2012, putting the day of the week in front only if necessary. That seems to me to be clear and simple.
| {
"pile_set_name": "StackExchange"
} |
Q:
Failed to acquire global mutex lock with apache and mod_python
I have a web application that is being migrated from Ubuntu 14.04 to Ubuntu 16.04. I have followed all of the instructions that I normally would have done for setting this up in 14.04. The application runs fine...as long as I don't logout of my ssh session, I start to get these errors when I do:
(22)Invalid argument: Failed to acquire global mutex lock at index 7
I have mpm_prefork enabled, mpm_event is disabled. (My instructions do not mention mpm_worker, but it is disabled as well)
Apache is running as a local user (not www-data or root).
When I run ipcs -s I see several semaphore arrays for this user. If I log out of my ssh session and log back in, those semaphores are gone. Coincidentally, if I start apache without logging in as that user, it works perfectly fine until someone logs in as that user and logs out.
I have confirmed that ipcrm is not being called when the semaphores are removed.
A:
This could be related to systemd-logind, which has RemoveIPC=yes set by default in /etc/systemd/logind.conf. Try setting it to no.
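For example (a sketch of the change): in /etc/systemd/logind.conf set
[Login]
RemoveIPC=no
and then restart the login manager, e.g. with sudo systemctl restart systemd-logind, so the new setting takes effect.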
| {
"pile_set_name": "StackExchange"
} |
Q:
Select subset of rows from pandas DataFrame using entries from a separate partial MultiIndex
I have data in a pandas DataFrame with a MultiIndex. Let's call the labels of my MultiIndex "Run", "Trigger", and "Cluster". Separately, I have a set of pre-computed selection criteria that I receive as lists of passing entries (these tend to be sparse, so listing the passing indexes is the most space-efficient representation). The selection cuts may be only partially indexed, e.g. they may specify only "Run" values or ("Run", "Trigger") pairs.
How do I efficiently apply these cuts, ideally without having to inspect them to find their levels?
For example, consider the following data:
index = pandas.MultiIndex.from_product([[0,1,2],[0,1,2],[0,1]], names=['Run','Trigger','Cluster'])
df = pandas.DataFrame(np.random.rand(len(index),3), index=index, columns=['a','b','c'])
print(df)
a b c
Run Trigger Cluster
0 0 0 0.789090 0.776966 0.764152
1 0.196648 0.635954 0.479195
1 0 0.007268 0.675339 0.966958
1 0.055030 0.794982 0.660357
2 0 0.987798 0.907868 0.583545
1 0.114886 0.839434 0.070730
1 0 0 0.520827 0.626102 0.088976
1 0.377423 0.934224 0.404226
1 0 0.081669 0.485830 0.442296
1 0.620439 0.537927 0.406362
2 0 0.155784 0.243656 0.830895
1 0.734176 0.997579 0.226272
2 0 0 0.867951 0.353823 0.541483
1 0.615694 0.202370 0.229423
1 0 0.912423 0.239199 0.406443
1 0.188609 0.053396 0.222914
2 0 0.698515 0.493518 0.201951
1 0.415195 0.975365 0.687365
Selection criteria may take any of the following forms:
set1:
Int64Index([0], dtype='int64', name='Run')
set2:
MultiIndex([(0, 1),
(1, 2)],
names=['Run', 'Trigger'])
set3:
MultiIndex([(0, 0, 1),
(1, 0, 1),
(2, 1, 0)],
names=['Run', 'Trigger', 'Cluster'])
Application of these selection lists using a hypothetical select method would result in:
>>> print(df.select(set1))
a b c
Run Trigger Cluster
0 0 0 0.789090 0.776966 0.764152
1 0.196648 0.635954 0.479195
1 0 0.007268 0.675339 0.966958
1 0.055030 0.794982 0.660357
2 0 0.987798 0.907868 0.583545
1 0.114886 0.839434 0.070730
>>> print(df.select(set2))
a b c
Run Trigger Cluster
0 1 0 0.007268 0.675339 0.966958
1 0.055030 0.794982 0.660357
1 2 0 0.155784 0.243656 0.830895
1 0.734176 0.997579 0.226272
>>> print(df.select(set3))
a b c
Run Trigger Cluster
0 0 1 0.196648 0.635954 0.479195
1 0 1 0.377423 0.934224 0.404226
2 1 0 0.912423 0.239199 0.406443
pandas can join these kinds of mixed-level indices easily, so it seems like this should be a straightforward operation, but I can't figure out the right calls. loc works for set3 because the indices are the same depth, but I need a general solution.
A:
One way to achieve this using pure pandas is the following:
df.align(setN.to_series(), axis=0, join='inner')[0]
That is, convert the 'other' index to a Series and select the parts of each that would be kept during an inner join operation.
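A quick sketch of using it with the data from the question (this just applies the one-liner above; set2 is rebuilt here explicitly, and the behaviour of mixed-depth joins may vary slightly between pandas versions):
import numpy as np
import pandas
index = pandas.MultiIndex.from_product([[0, 1, 2], [0, 1, 2], [0, 1]], names=['Run', 'Trigger', 'Cluster'])
df = pandas.DataFrame(np.random.rand(len(index), 3), index=index, columns=['a', 'b', 'c'])
set2 = pandas.MultiIndex.from_tuples([(0, 1), (1, 2)], names=['Run', 'Trigger'])
# align on the row index only; [0] keeps the left (DataFrame) side of the inner join
subset = df.align(set2.to_series(), axis=0, join='inner')[0]
print(subset)
The same call should work unchanged for set1 and set3, since to_series() and the inner join only care about which index entries overlap.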
| {
"pile_set_name": "StackExchange"
} |
Q:
Tomcat 7 fails to start due to absence of jmxremote.access file while JMX authentication is disabled
My Tomcat 7 instance (hosted at amazon-eu, Java 1.7.0_51) fails to start with the following exception:
SEVERE: Catalina.start:org.apache.catalina.LifecycleException: Failed to start component [StandardServer[8005]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
at org.apache.catalina.startup.Catalina.start(Catalina.java:684)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:322)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:456)
Caused by: java.lang.IllegalArgumentException: jmxremote.access (No such file or directory)
at javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:372)
at org.apache.catalina.mbeans.JmxRemoteLifecycleListener.createServer(JmxRemoteLifecycleListener.java:304)
at org.apache.catalina.mbeans.JmxRemoteLifecycleListener.lifecycleEvent(JmxRemoteLifecycleListener.java:258)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:90)
at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:402)
at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:347)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:725)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
... 7 more
Caused by: java.io.FileNotFoundException: jmxremote.access (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:137)
at java.io.FileInputStream.<init>(FileInputStream.java:96)
at com.sun.jmx.remote.security.MBeanServerFileAccessController.propertiesFromFile(MBeanServerFileAccessController.java:294)
at com.sun.jmx.remote.security.MBeanServerFileAccessController.<init>(MBeanServerFileAccessController.java:133)
at javax.management.remote.rmi.RMIConnectorServer.start(RMIConnectorServer.java:370)
... 15 more
JMX is enabled, because I have the following line in /usr/share/tomcat7/conf/tomcat7.conf:
$CATALINA_OPTS=-Dcom.sun.management.jmxremote -Djava.rmi.server.hostname=ec2-<ip>.eu-west-1.compute.amazonaws.com -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false
and the following one in /usr/share/tomcat7/conf/server.xml:
<Listener className="org.apache.catalina.mbeans.JmxRemoteLifecycleListener"
rmiRegistryPortPlatform="9998" rmiServerPortPlatform="9998"/>
If I comment both these lines, everything is fine.
The question is: why is jmxremote.access required when "authenticate" is set to false?
A:
You're absolutely right: if you set com.sun.management.jmxremote.authenticate=false you shouldn't need the jmxremote.access file. I believe the problem is that the JMX parameters aren't getting picked up by Tomcat. As far as I know, tomcat7.conf isn't a standard config file for Tomcat (check this), so try adding them instead to /usr/share/tomcat7/bin/catalina.sh, like this:
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote -Djava.rmi.server.hostname=ec2-<ip>.eu-west-1.compute.amazonaws.com -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
For more info on how to configure Tomcat, check this; you'll also find some documentation in the header of catalina.sh.
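A related sketch (an assumption on my part, not something from the original answer): the stock catalina.sh also sources an optional bin/setenv.sh if that file exists, so an equivalent way to add the flags without editing catalina.sh itself would be:
# /usr/share/tomcat7/bin/setenv.sh (create it if it doesn't exist)
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote -Djava.rmi.server.hostname=ec2-<ip>.eu-west-1.compute.amazonaws.com -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
The path above assumes the same layout as in the question.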
| {
"pile_set_name": "StackExchange"
} |
Q:
Finding the cause of "Unknown provider" errors
I'm getting the following error:
Error: [$injector:unpr] Unknown provider: nProvider <- n
I know this is being caused by the minification process and I understand why. However is there an easy way to determine which file is actually causing the issue?
A:
Angular 1.3.x has an ng-strict-di directive that is placed on the same element as the ng-app directive. This directive causes your app to throw an error whenever dependencies have not been explicitly annotated. While it still doesn't give you the line number of the offending code, it does give you the function with its parameters (i.e. function($scope, myServiceName)), which is hopefully unique enough that you can find it pretty quickly in a good code editor.
A good overview of the directive: ng-strict-di.
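A minimal sketch of enabling it (the module name myApp is just a placeholder):
<html ng-app="myApp" ng-strict-di>
With this in place, any function that relies on implicit annotation fails immediately with an explicit error, even before minification, which makes the offending code much easier to track down.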
A:
I understand the question and I have an answer, it's only slightly convoluted.
The way I found the problem was to rename all identifiers to make them ALL unique, then you get something useful to look for in your compiled javascript which will hopefully point you towards the culprit.
download my modified version of uglify (pull request pending...)
brew install node if you don't have node installed.
./bin/uglifyjs --unique_ids original.min.js >new.min.js
Now replace your compiled js with new.min.js and load your app to reproduce the problem
now you should get a dependency injection error like n4536
If your editor is awesome with super long lines you can just load new.min.js, look for n4536 and hopefully that'll help you identify the culprit.
If not, this will print some context around the problem:
egrep -o '.{199}n4536.{99}' new.min.js
A:
Angular's injector has 3 ways to resolve dependencies for you:
1. Inferring dependencies from function argument names. This is the form used in most of Angular's examples, e.g.
app.controller('MyController', function($scope, MyService) { ... });
In this case injector casts function as string, parses argument names and looks for services/factories/anything-else matching that name.
2. Inline annotations. You might also encounter this syntax:
app.controller('MyController', ['$scope', 'MyService', function($scope, MyService) { ... }]);
In this case you make it much easier for the injector, since you explicitly state names of dependencies you require. The names are enclosed in quotes and js minifiers do not modify strings in code.
3. Inline annotations as a property. If you define your controllers as functions, you might set the annotations in the special property $inject:
function MyController($scope, MyService) {...}
MyController.$inject = ['$scope', 'MyService'];
In this case we also explicitly state dependencies.
My guess is you're using solution no. 1. Once the minifier changes the names of your implicitly defined dependencies, the injector no longer knows what your function's dependencies are. To overcome this you should use the 2nd or 3rd way of annotating dependencies.
| {
"pile_set_name": "StackExchange"
} |
Q:
The body of the email can't be seen when using sp_send_dbmail in sql server 2005
I created a procedure to send automatic emails. The email reaches the address and everything seems to be working fine, but the body cannot be seen. I'm using SQL Server 2005 and MS Exchange Server 2007. The part of the procedure that writes the body is as follows.
declare @bodymsg as varchar(1000)
set @bodymsg = 'The application '
set @bodymsg = @bodymsg + @appnum
set @bodymsg = @bodymsg + ' have been auto assign to you by the call center auto assign program.'
set @bodymsg = @bodymsg + CHAR(13)
set @bodymsg = @bodymsg + 'The borrower information is as follow:'
set @bodymsg = @bodymsg + CHAR(13)
set @bodymsg = @bodymsg + 'Name: '
set @bodymsg = @bodymsg + @borrower
set @bodymsg = @bodymsg + CHAR(13)
set @bodymsg = @bodymsg + 'Email: '
set @bodymsg = @bodymsg + @borremail
set @bodymsg = @bodymsg + CHAR(13)
set @bodymsg = @bodymsg + 'Phone: '
set @bodymsg = @bodymsg + @borrhome
set @bodymsg = @bodymsg + CHAR(13)
set @bodymsg = @bodymsg + 'Cellphone: '
set @bodymsg = @bodymsg + @borrcell
set @bodymsg = @bodymsg + CHAR(13)
set @bodymsg = @bodymsg + CHAR(13)
set @bodymsg = @bodymsg + 'Please contact the borrower ASAP.'
execute [msdb].[dbo].[sp_send_dbmail]
@profile_name = 'CallCenter',
@recipients = @email,
@subject = @subjectmsg,
@body = @bodymsg,
@body_format = 'TEXT'
A:
Just before you execute the sp_send_dbmail PRINT the @bodymsg parameter out so that you know the data has been built correctly.
e.g.
PRINT @bodymsg
execute [msdb].[dbo].[sp_send_dbmail]
@profile_name = 'CallCenter',
@recipients = @email,
@subject = @subjectmsg,
@body = @bodymsg,
@body_format = 'TEXT'
As you are concatenating a number of parameters into it, any one of them being NULL could be setting @bodymsg to NULL.
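One way to guard against that (the 'N/A' fallback text here is just an illustration, not from the original procedure) is to wrap each variable in ISNULL as you concatenate, e.g.
set @bodymsg = @bodymsg + ISNULL(@borremail, 'N/A')
because in T-SQL, concatenating a NULL value makes the entire string NULL under the default CONCAT_NULL_YIELDS_NULL setting.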
| {
"pile_set_name": "StackExchange"
} |