[
"stackoverflow",
"0019912137.txt"
] | Q:
Ordering linestring direction algorithm
I want to build an algorithm in python to flip linestrings (arrays of coordinates) in a linestring collection which represent segments along a road, so that I can merge all coordinates into a single array where the coordinates are rising monotonic.
So my Segmentcollection looks something like this:
segmentCollection = [['1,1', '1,3', '2,3'],
                     ['4,3', '2,3'],
                     ['4,3', '7,10', '5,5']]
EDIT: So the structure is a list of lists of 2D cartesian coordinate tuples ('1,1' for example is a point at x=1 and y=1, '7,10' is a point at x=7 and y=10, and so on). The problem is to merge all these lists into one list of coordinate tuples ordered in the sense of following a road in one direction. In fact, these are segments I get from a road network routing service, but each segment is directed the way it was digitized in the database, not in the direction you have to drive. I would like to get a single polyline for the navigation route out of it.
So:
- I can assume, that all segments are in the right order
- I cannot assume that the Coordinates of each segment are in the right order
- Therefore I also cannot assume that the first coordinate of the first segment is the beginning
- And I also cannot assume that the last coordinate of the last segment is the end
- (EDIT) Even though I know where the start and end point of my navigation request are located, these do not have to be identical with one of the coordinate tuples in these lists, because they only have to be somewhere near a routing graph element.
The algorithm should iterate through every segment, flip it if necessary, and then append it to the resulting array. For the first segment, the challenge is to find the starting point (the point which is NOT connected to the next segment). All other segments are then connected by one point to the previous segment in the order (a directed graph).
I'd wonder if there isn't some kind of sorting data structure (sorting tree or anything) which does exactly that. Could you please give some ideas? After messing around a while with loops and array comparisons my brain is knocked out, and I just need a kick into the right direction in the true sense of the word.
A:
If I understand correctly, you don't even need to sort things. I just translated your English text into Python:
def joinSegments(s):
    if s[0][0] == s[1][0] or s[0][0] == s[1][-1]:
        s[0].reverse()
    c = s[0][:]
    for x in s[1:]:
        if x[-1] == c[-1]:
            x.reverse()
        c += x
    return c
It still contains duplicate points, but removing those should be straightforward.
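For reference, here is a quick run of the function on the sample data from the question; the consecutive-duplicate cleanup at the end is one straightforward way to remove the doubled joint points:

```python
def joinSegments(s):
    # Flip the first segment if its first point touches the second segment.
    if s[0][0] == s[1][0] or s[0][0] == s[1][-1]:
        s[0].reverse()
    c = s[0][:]
    for x in s[1:]:
        # Flip each following segment so it continues from the current end.
        if x[-1] == c[-1]:
            x.reverse()
        c += x
    return c

segmentCollection = [['1,1', '1,3', '2,3'],
                     ['4,3', '2,3'],
                     ['4,3', '7,10', '5,5']]
merged = joinSegments(segmentCollection)

# Drop the duplicated joint points between consecutive segments.
deduped = [p for i, p in enumerate(merged) if i == 0 or p != merged[i - 1]]
print(deduped)  # ['1,1', '1,3', '2,3', '4,3', '7,10', '5,5']
```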
|
[
"stackoverflow",
"0001654023.txt"
] | Q:
does mono run on 64-bit linux? any issues?
I need to run/develop a Mono-based application on a new dedicated server, and the ISP I usually use only offers 64-bit Linux (of which I'll take Ubuntu).
Are there any problems running Mono on this configuration?
A:
I have been using Mono 2.4 for a while on 64-bit openSUSE 11.1, with no particular problems.
An instance of that was a client/server architecture on top of an SVN server (with hook-scripts called by the server, sending messages over the network). So it used multithreading, network TCP/IP sessions, a bit of cryptography, basic GUI configuration windows, serializing, a little bit of reflection.
The only issue was a difference on how the windows forms were behaving in comparison to their .NET equivalent, but this is nothing to do with 64-bit/32-bit.
I did encounter a few problems when accessing an external C++ dynamic library, had to review the marshalling of a few pointers with a sloppy implementation (my mistake).
Are there specific libraries you are using?
|
[
"security.stackexchange",
"0000079884.txt"
] | Q:
Is it possible to perform one-way OTR MITM?
Here's something that is bugging me recently: suppose that me and my friend establish an OTR session and - as a result of that - DH key exchange is performed. My friend verifies my key, but I cannot verify his fingerprint. Despite that, we have a secure channel over which he can send me one bit of information - whether my key was valid or not. Can I trust the OTR session if he successfully verified my key and sent me this confirmation or is there still a risk of a man-in-the-middle attack?
A:
I ended up asking on the OTR's developer mailing list and here's the response I got:
On Wed, Apr 08, 2015 at 02:51:14PM +0200, Jacek Wielemborek wrote:
Hello,
Here's something that keeps bugging me for a moment. It's a hypothetical
situation, so please refrain from question like "how did you send the FP
to your friend?".
My friend receives my OTR fingerprint over a secure channel (e.g.
meeting in person), but he doesn't give the fingerprint to me and
destroys the channel,
Me and my friend establish a perfectly secure channel that has two
limitations: my friend can only send messages to me (not the other way)
and only one bit of information can be transferred.
We agree that over this channel, he will tell me during a future OTR
session whether my fingerprint matched what he received from me during
step 1,
We go back home, establish an OTR session over the internet, which
isn't secure yet. My friend verifies my fingerprint based on what we got
during step 1 and tells me whether it matched over channel from step 2,
If the fingerprint he sees on the screen matches the thing he got from
step 1, can I assume that there can be no man in the middle? In other
words, is it possible to perform a one-way OTR MITM where my friend
actually sees my real fingerprint, but when he responds, I can't see
his, but one from the MITM?
Hopefully I explained this clearly. Cheers,
Jacek Wielemborek
Jacek,
Way back in OTRv1, this was actually possible. However, the MITM
wouldn't be able to read, write, or modify the messages; the issue
was that your buddy would see your key as being the partner in the
conversation, while you would see the MITM as being your partner in
the conversation. (The messages would actually be going to your
true buddy, though.)
We upgraded the key exchange protocol in 2005 to address this problem,
and I do not believe a MITM could pull this off with OTRv2 or OTRv3
(unless one of the endpoints were compromised, of course).
[Even better is to exchange a shared secret in your step 1 instead,
and then use the SMP to mutually authenticate your long-term keys.
But this requires a secure channel in the secrecy sense, whereas
your version only requires a secure channel in the authenticity
sense; the latter is sometimes easier to pull off in practice.]
--
Ian Goldberg
Associate Professor and University Research Chair
Cheriton School of Computer Science, University of Waterloo
Visiting Professor, ESAT/COSIC, KU Leuven
|
[
"stackoverflow",
"0024870044.txt"
] | Q:
How to manually submit a post request to a server?
I am looking for a way to manually submit a Post request to a server, without using the website's UI. I can see the request headers and the post parameters in Firebug when I perform the action manually (clicking the UI's "submit" button). I am hoping there is a way to reverse engineer some Javascript using these headers and parameters so that we can automate this process.
Reason: My company recently purchased some process automation software that enables us to write automation bots that access our business partner's portal site and automatically adjust our digital marketing bids. For one of our partner sites, front-end manipulation doesn't appear to work, because the Post request is submitted via AJAX.
The software does allow us to execute custom javascript within the environment, so I am trying to construct some Javascript using the headers and request parameters.
Is there a standard template into which I can plug these parameters to execute Javascript that will send the Post request to the server?
Thank you
UPDATE:
Thank you all for your help! I've made some progress but am still having difficulty implementing the solution within the software.
The following request works when I run the code in Firebug in Firefox:
$.ajax({
    type: "POST",
    url: "http://acp.example.com/campaigns/122828",
    data: "data-string"
});
However, the software we're using might be a little out of date and I'm not sure it recognizes the AJAX syntax.
Is there a way to effectively write the same statement above, but in plain JavaScript rather than jQuery's AJAX helper? Then I think it would work.
A:
You can use AJAX to post data to a server without any direct UI interaction. I will break down a simple jQuery example below:
$.ajax({
    type: "POST",
    url: url,
    data: data,
    success: success
});
$.ajax is a method offered by the jQuery framework to make AJAX requests simple and cross-browser compatible. As you can see, I have passed in an object containing various values:
type - This is the first key I have specified, in this instance you'll want this to be of the value POST as this determines the HTTP Request Method.
url - This specifies the server end point, for example: post/data/here.php would post the data to that url so that it can be picked up and handled correctly.
data - This key expects a JSON object, string or array of data to send in the POST request.
success - This key expects a function, it is called on the server's response to the request, with any relevant data passed through.
More documentation is available at: http://api.jquery.com/jquery.ajax/
|
[
"stackoverflow",
"0007141879.txt"
] | Q:
How can I set up my htaccess to do this?
I am trying to set up my htaccess file to perform these redirections:
http://www.mysite.com/about should link to http://www.mysite.com/content/pages/about.php
http://www.mysite.com/login should link to http://www.mysite.com/content/pages/login.php
http://www.mysite.com/prices should link to http://www.mysite.com/content/pages/prices.php
Thanks
A:
To only redirect the URLs you have provided, you can use this kind of rule (you can add other pages to the list if you need):
Options +FollowSymLinks -MultiViews
RewriteEngine On
RewriteBase /
RewriteRule ^(about|login|prices)$ /content/pages/$1.php [L]
To redirect ALL non-existing pages to /content/pages/PAGE_NAME.php, you can use this rule:
Options +FollowSymLinks -MultiViews
RewriteEngine On
RewriteBase /
# do not do anything for already existing files (like images/css/js etc)
RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule .+ - [L]
# redirect everything else
RewriteCond %{REQUEST_URI} !^/content/pages/
RewriteRule ^(.+)$ /content/pages/$1.php [L]
NOTES:
You need to place these rules into .htaccess in website root folder. If placed elsewhere some small tweaking may be required.
|
[
"math.stackexchange",
"0000261780.txt"
] | Q:
Rational Gaussian-type integral with sin
Here is a tough integral. Does anyone have any clever ideas? I have tried all sorts of things, but made no real headway.
$\displaystyle \int_{0}^{\infty}\frac{e^{-x^{2}}\sin^{2}(x)}{x^{2}}dx$
Would residues be a consideration with this one?
Thanks all.
A:
The answer is
$$\int_0^\infty dx\,\frac{e^{-x^2}\sin^2 x}{x^2}=\frac{\pi}{2}\,{\rm erf}(1)-\frac{\sqrt{\pi}}{2e}(e-1).$$
Consider
$$I(a)=\int_0^\infty dx\,\frac{e^{-ax^2}\sin^2 x}{x^2}$$
for which
$$-I'(a)=\int_0^\infty dx\,e^{-ax^2}\sin^2 x.$$
This last integral can be done by expanding sine in terms of exponentials and completing the square in each term. The result is
$$-I'(a)=\frac{\sqrt{\pi}}{4\sqrt{a}}\,e^{-1/a}\left(e^{1/a}-1\right)$$
which can be anti-differentiated (I used Mathematica) in terms of the error function. As $a\to\infty$, $I(a)\to 0$, setting the constant of integration. Plugging in $a=1$ gives your answer.
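As a numerical sanity check of the closed form (a sketch using only the Python standard library; the cutoff of 10 and the step size are arbitrary choices, more than adequate given the Gaussian decay):

```python
import math

def integrand(x):
    # e^{-x^2} sin^2(x) / x^2; the limit as x -> 0 is 1
    return math.exp(-x * x) * math.sin(x) ** 2 / (x * x)

# Composite midpoint rule on [0, 10]; midpoints avoid the x = 0 endpoint.
h = 1e-3
numeric = h * sum(integrand((k + 0.5) * h) for k in range(int(10 / h)))

closed = (math.pi / 2) * math.erf(1) - (math.sqrt(math.pi) / (2 * math.e)) * (math.e - 1)
print(abs(numeric - closed) < 1e-4)  # True
```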
|
[
"math.stackexchange",
"0000692935.txt"
] | Q:
Regarding the order of elements in a factor group
If I have understood it correctly, a factor group consists of all cosets of a subgroup $H$ of $G$. Since it is a requirement that $H$ is normal, left and right cosets are equivalent. I am asked to find the order of the element $5 + <4>$ in the factor group $\mathbb{Z}_{12}/<4>$. After some googling, I found the standard procedure to solve such tasks, and concluded with the following:
Since $<4> = \{0, 4, 8\}$ in $\mathbb{Z}_{12}$, and $5 +_{12} 5 +_{12} 5 +_{12} 5 = 8 \in <4>$ (while no smaller multiple of 5 lands in $<4>$), we have that the order of $5 + <4>$ in $\mathbb{Z}_{12}/<4>$ is 4.
Note that at this point, I am shamefully reciting something I memorized while changing the numbers so that it'll fit the task. I have little to no understanding as to why this works. I do know that a factor group is a group of all cosets of the group $H$, in this case, $<4> \le \space \mathbb{Z}_{12}$, so if I want to explain my answer further, is my current intuitive guess good enough?:
We wish to find $$|<5 + \{0, 4, 8\}>|$$ in $\mathbb{Z}_{12}/<4>$. We have that $$<5 +\{0, 4, 8\}> \space=\{\{5,9,1\},\{10,2,6\},\{3,7,11\},\{8,0,4\}\}$$
Which is of order 4.
A:
Your intuitive guess is pretty spot-on, and is much better than what Google told you to do.
Another way to tell that you have listed all of the cosets in the factor group is by noticing that the last element in the set (the one you got by adding the generator to itself 4 times) has 0 in it. That means it's the identity coset. Once you get the identity coset, you can stop writing down elements.
If you're having trouble picturing these cosets, you can also imagine the factor group construction as a type of "gluing". In the factor group, two elements are "glued" together if their difference is in the subgroup $H$. Then operations in the factor group are just operations in the original group, keeping the gluing in mind.
Now here's yet another way to tell that $5+<4>$ has order $4$: if you add it to itself $4$ times you get $8$ mod $12$. But $8$ is glued to $0$, since their difference is in $<4>$.
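This can be checked by brute force (a small sketch, representing each coset as a frozenset):

```python
H = frozenset({0, 4, 8})  # the subgroup <4> in Z_12

def coset(g):
    # the coset g + <4> in Z_12
    return frozenset((g + h) % 12 for h in H)

# Order of 5 + <4>: smallest n >= 1 such that n*5 lands in the identity coset.
order = next(n for n in range(1, 13) if coset(5 * n % 12) == H)
print(order)  # 4
```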
|
[
"stackoverflow",
"0054982457.txt"
] | Q:
Trying to check if the x, y parameter in this function has been the same for quite some time, but doesn't seem to work
So... I'm trying to check if the x, y parameter in this function below has been the same for quite some time, and if it has, the reward variable should go down... I don't know if my issue is due to the mouseX, mouseY values being numpy arrays, but...
Code:
def xystoreandcheck(x, y, reward):
    global mouseX
    np.append(x, mouseX)
    global mouseY
    np.append(y, mouseY)
    if len(mouseX) > 4:
        if mouseX[-1] == mouseX[-2] or mouseX[-3] == mouseX[-1]:
            reward += -10.00
            print("Actor reward is now " + str(reward) + " due to agent failing to move mouse pointer in X coords.")
    if len(mouseY) > 4:
        if mouseY[-1] == mouseY[-2] or mouseY[-3] == mouseY[-1]:
            reward += -10.00
            print("Actor reward is now " + str(reward) + " due to agent failing to move mouse pointer in Y coords.")
    return reward
A:
I see two things that could cause your problems:
np.append() is not an in-place operation, so you should assign its return value. Also, the first parameter should be the array and the second the value you want to append: mouseX = np.append(mouseX, x)
Do you want to check whether the last three values are all the same? Then the or in the condition should be an and.
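Both fixes can be sketched with plain Python lists (the function name and the toy trajectory below are made up for illustration; with numpy arrays you would reassign, e.g. mouseX = np.append(mouseX, x), instead of calling list.append):

```python
mouseX, mouseY = [], []

def xy_store_and_check(x, y, reward):
    # list.append mutates in place; np.append would require reassignment
    mouseX.append(x)
    mouseY.append(y)
    # penalise only if the last three x (or y) values are all identical
    if len(mouseX) > 4 and mouseX[-1] == mouseX[-2] and mouseX[-2] == mouseX[-3]:
        reward -= 10.0
    if len(mouseY) > 4 and mouseY[-1] == mouseY[-2] and mouseY[-2] == mouseY[-3]:
        reward -= 10.0
    return reward

reward = 0.0
for x, y in [(3, 3), (3, 3), (3, 3), (3, 7), (3, 7)]:
    reward = xy_store_and_check(x, y, reward)
print(reward)  # -10.0  (x stayed at 3 for the last three samples; y did not)
```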
|
[
"superuser",
"0001191896.txt"
] | Q:
Why doesn't `apt-get -f install` happen automatically?
Occasionally with Ubuntu, if my packages get in a tangle, I need to run apt-get -f install to fix it.
If this is a routine fix, why doesn't it happen automatically?
Is there any reason I wouldn't want to run it?
A:
Option -f (or its equivalent long version, --fix-broken) makes apt-get attempt to fix broken dependencies. If you ask why it's not enabled by default, I'd say it's a good thing to know when your packages have issues, and then correct them.
A:
Your question seems to indicate that, so far, you have encountered only minor problems in your package handling. An example of such a simple problem occurs when you install google-chrome, where the installation fails because of the lack of the libappindicator1 package, and a simple invocation of apt-get -f install will download the missing package, and then resume and complete the installation of google-chrome.
Alas, not all situations are so easy. Sometimes you run into truly complex problems, where you need to downgrade some packages so that you can upgrade some other package. Under these conditions, you will likely have several courses open to you, and you will most likely want/need to be able to choose among different possibilities. Even the laying out of the different courses available to you is not standard, and depends upon the tool used. You mention apt-get -f install which is a rather simple instrument indeed (but better than its predecessor, deborphan, for which I rarely find a use nowadays).
In fact, in these situations I prefer the much more skillful aptitude, of which the Debian Admin Handbook says (page 285):
6.4.1.3. Better Solver Algorithms
To conclude this section, let's note that aptitude has more elaborate algorithms compared to apt-get when it comes to resolving difficult situations. When a set of actions is requested and when these combined actions would lead to an incoherent system, aptitude evaluates several possible scenarios and presents them in order of decreasing relevance. However, these algorithms are not failproof. Fortunately there is always the possibility to manually select the actions to perform. When the currently selected actions lead to contradictions, the upper part of the screen indicates a number of “broken” packages (and you can directly navigate to those packages by pressing b). It is then possible to manually build a solution for the problems found. In particular, you can get access to the different available versions by simply selecting the package with Enter. If the selection of one of these versions solves the problem, you should not hesitate to use the function. When the number of broken packages gets down to zero, you can safely go to the summary screen of pending actions for a last check before you apply them.
So you see that neither the instrument used to solve the broken configuration, nor the course of action available to you are as simple as you seem to imply. Thus it is better to leave every user to exercise his free will, by choosing an instrument (deborphan/apt-get/aptitude/synaptic/the CLI/...) and a choice of packages whenever this choice is not unambiguous.
|
[
"stackoverflow",
"0037652243.txt"
] | Q:
Record syntax and sum types
I have this question about sum types in Haskell.
I'd like to create a sum type which is comprised of two or more other types, and each of the types may contain multiple fields. A trivial example would be like this:
data T3 = T1 { a :: Int, b :: Float} | T2 { x :: Char } deriving (Show)
In my understanding, T1 and T2 are data constructors which use record syntax. It seems that the definition of T3 will grow as the number of fields in T1 or T2 increases. My question is that how to practically handle these sum type constructors if the number of fields are large? Or, is it a good idea to mix sum type with record syntax?
A:
I don't quite understand what concerns you have, but to answer the question in the last line: no, it is rather not a good idea to mix sum types with record syntax. Records in general remain a bit of a weak spot of the Haskell language; they don't handle scoping very well at all. It's usually fine as long as you just have some separate types with different record labels, but as soon as sum types or name clashes come in it gets rather nasty.
In particular, Haskell permits you to use a record field accessor of the T1 constructor for any value of type T3 – print $ a (T2 'x') will compile without warnings, but give a rather hard to foresee error at runtime.
In your example, it fortunately looks like you can easily avoid that kind of trouble:
data T3 = T3_1 T1 | T3_2 T2
    deriving (Show)

data T1 = T1 { a :: Int
             , b :: Float }
    deriving (Show)

data T2 = T2 { x :: Char }
    deriving (Show)
Now, any deconstruction you could write will be properly typechecked to make sense.
And such a structure of meaningful, small specialised sub-types† is generally better to handle than a single monolithic type, especially if you have many functions that really deal only with part of the data structure.
The flip side is that it gets quadratically tedious to unwrap the layers of constructors, but that's fortunately a solved problem now: lens libraries allow you to compose accessor/modifiers very neatly.
Speaking of solved problems: Nikita Volkov has come up with a really nice concept for entirely replacing the problem-ridden record syntax.
†Um... actually these aren't subtypes in any proper sense of the word, but you get what I mean.
|
[
"stackoverflow",
"0057556449.txt"
] | Q:
How to bypass an error dialog when calling a C++ executable from MATLAB?
I need to run a C++ executable from a for loop in MATLAB. I have written the following piece of code for the purpose,
EqNumbers = [17 18 20 21 22 23];
for i = 1:length(EqNumbers)
    EqNumber = EqNumbers(i);
    WriteRunE_File(EqNumber);
    filename = ['RunE_1.tcl'];
    system(['OpenSees.exe<', filename]);
end
It works fine most of the time; however, sometimes debug errors (like the one shown below) appear. The dialog prompts me for action; if I press the "Abort" button, the program continues with the next iteration. I want to make this process automated: pressing the button manually every time is not possible for me, because there are more than 1000 iterations in the program.
I tried using try-catch end as follows, but it did not serve the purpose.
EqNumbers = [17 18 20 21 22 23];
for i = 1:length(EqNumbers)
    try
        % Code to be executed goes here.
        EqNumber = EqNumbers(i);
        WriteRunE_File(EqNumber);
        filename = ['RunE_1.tcl'];
        system(['OpenSees.exe<', filename]);
    catch
        disp('An error occurred in Equke');
        disp('Execution will continue.');
    end
end
I am searching for a way to bypass the error message or automatically press the "Abort" button. So that the program will move to the next iteration automatically.
Note:
I don't have access to the C++ source code (all I have is an executable), hence I cannot update the value of citaR. That's why I am looking for a workaround in MATLAB.
A:
MATLAB is not popping up this dialog. Your system is.
Someone created a program that uses an uninitialised variable and has undefined behaviour. They built it in debug mode. This combination results in an assertion. You cannot just turn that off.
Even if you could, you are aborting the program. That doesn't mean "ignore the problem": it means "abort the program". It's not completing its work. It's crashed. Every single time.
The executable is faulty. Period.
The author of the program should give you a release version: ideally, a non-buggy one.
Or, since the program is open source and can be found right here, you could try building a fresh version, or debugging it and contributing a fix.
|
[
"stackoverflow",
"0027814915.txt"
] | Q:
How to add App Groups iOS 8 Extension?
I am having a strange issue: in the project I am working on, we have to introduce a Share extension. The problem I have now is that App Groups cannot be added.
When I activate it, it shows that I have an issue at https://www.dropbox.com/s/sp7tqbv9x6q175i/Screenshot%202015-01-07%2009.56.34.png?dl=0 . If I use a different bundle ID for the main app target it works, but I must use this bundle ID.
And if I try to add App Groups https://www.dropbox.com/s/7pi1n4j8xajngvm/Screenshot%202015-01-07%2010.03.44.png?dl=0 the same error appears, but one row up. I tried to change these settings from the provisioning portal under App IDs; there it works and adds them without a problem.
Thank you.
A:
OK, so I fixed this issue by going to the Provisioning Portal, Identifiers -> App IDs, selecting my ID, editing it, enabling App Groups, setting the app group and saving. Then go to the provisioning profiles, regenerate all the inactive provisioning profiles and download them. After that it worked.
|
[
"stackoverflow",
"0000411422.txt"
] | Q:
Pointer to void in C++?
I'm reading some code in the Ogre3D implementation and I can't understand what a void * type variable means. What does a pointer to void mean in C++?
A:
A pointer to void, void* can point to any object:
int a = 5;
void *p = &a;
double b = 3.14;
p = &b;
You can't dereference, increment or decrement that pointer, because you don't know what type you point to. The idea is that void* can be used for functions like memcpy that just copy memory blocks around, and don't care about the type that they copy.
A:
It's just a generic pointer, used to pass data when you don't know the type. You have to cast it to the correct type in order to use it.
A:
It's a raw pointer to a spot in memory. It doesn't allow any pointer arithmetic like char * or int *.
Here's some examples of usage
http://theory.uwinnipeg.ca/programming/node87.html
|
[
"stackoverflow",
"0014539867.txt"
] | Q:
How to display a progress indicator in pure C/C++ (cout/printf)?
I'm writing a console program in C++ to download a large file. I already know the file size, and I start a worker thread to do the download. I want to show a progress indicator to make it look cooler.
How to display different strings at different times, but at the same position, in cout or printf?
A:
With a fixed width of your output, use something like the following:
float progress = 0.0;
while (progress < 1.0) {
    int barWidth = 70;

    std::cout << "[";
    int pos = barWidth * progress;
    for (int i = 0; i < barWidth; ++i) {
        if (i < pos) std::cout << "=";
        else if (i == pos) std::cout << ">";
        else std::cout << " ";
    }
    std::cout << "] " << int(progress * 100.0) << " %\r";
    std::cout.flush();

    progress += 0.16; // for demonstration only
}
std::cout << std::endl;
http://ideone.com/Yg8NKj
[> ] 0 %
[===========> ] 15 %
[======================> ] 31 %
[=================================> ] 47 %
[============================================> ] 63 %
[========================================================> ] 80 %
[===================================================================> ] 96 %
Note that the output above is shown on separate lines, but in a terminal emulator (I think also in the Windows command line) it will all be printed on the same line.
At the very end, don't forget to print a newline before printing more stuff.
If you want to remove the bar at the end, you have to overwrite it with spaces, to print something shorter like for example "Done.".
Also, the same can of course be done using printf in C; adapting the code above should be straightforward.
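For illustration, the same carriage-return trick works in any language; here is a minimal Python rendition (a sketch; the frame-rendering helper and its name are my own):

```python
import sys

def bar(progress, width=40):
    # Render one frame; the leading '\r' returns the cursor to the line start.
    filled = int(width * progress)
    return '\r[{}{}] {:3d}%'.format('=' * filled, ' ' * (width - filled),
                                    int(progress * 100))

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    sys.stdout.write(bar(p))
    sys.stdout.flush()
sys.stdout.write('\n')
```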
A:
You can use a "carriage return" (\r) without a line-feed (\n), and hope your console does the right thing.
A:
For a C solution with an adjustable progress bar width, you can use the following:
#define PBSTR "||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||"
#define PBWIDTH 60
void printProgress(double percentage) {
    int val = (int) (percentage * 100);
    int lpad = (int) (percentage * PBWIDTH);
    int rpad = PBWIDTH - lpad;
    printf("\r%3d%% [%.*s%*s]", val, lpad, PBSTR, rpad, "");
    fflush(stdout);
}
It will output something like this:
75% [|||||||||||||||||||||||||||||||||||||||||| ]
|
[
"stackoverflow",
"0057306863.txt"
] | Q:
Get text after each nth occurrence from string and number them
Querying Redshift through Aginity
I have a small table with just one field in it, and I want to get the value after each occurrence of the XX value, up to the next space, plus a column numbering each occurrence.
MYTABLE:
MYFIELD
The quick XX brown fox XX jumps over the XX lazy dog
Get text XX after each XX nth XX occurrence XX from string
Desired Output:
MYFIELD OCC FIELDOUTPUT
The quick XX brown fox XX jumps over the XX lazy dog 1 brown
The quick XX brown fox XX jumps over the XX lazy dog 2 jumps
The quick XX brown fox XX jumps over the XX lazy dog 3 lazy
Get text XX after each XX nth XX occurrence XX from string 1 after
Get text XX after each XX nth XX occurrence XX from string 2 nth
Get text XX after each XX nth XX occurrence XX from string 3 occurrence
Get text XX after each XX nth XX occurrence XX from string 4 from
SQL Fiddle: http://sqlfiddle.com/#!15/991c8d
A:
You could split string with ORDINALITY:
WITH cte AS (
    SELECT *
    FROM MyTABLE, regexp_split_to_table(MYFIELD, E'\\s+') WITH ORDINALITY s(c, rn)
), cte2 AS (
    SELECT myfield, c, LEAD(c) OVER (PARTITION BY MYFIELD ORDER BY rn) AS FieldOutput, rn
    FROM cte
)
SELECT MYFIELD, Fieldoutput,
       ROW_NUMBER() OVER (PARTITION BY MYFIELD ORDER BY rn) AS occ
FROM cte2
WHERE c = 'XX'
ORDER BY MYFIELD, rn;
db<>fiddle demo
A:
WITH dummy_values AS (
SELECT 1 UNION ALL
SELECT 1 UNION ALL
SELECT 1
)
, seq AS (
SELECT (ROW_NUMBER() OVER ())::INT occ
FROM dummy_values d1, dummy_values d2, dummy_values d3
)
SELECT
"MYFIELD"
, occ
, REGEXP_REPLACE(REGEXP_SUBSTR("MYFIELD", 'XX \\S+', 1, occ), 'XX ', '') fieldoutput
FROM mytable
JOIN seq ON occ <= REGEXP_COUNT("MYFIELD", 'XX ')
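Outside the database, the same extraction is a one-liner with a regular expression; a Python sketch of the intended output, useful for cross-checking the SQL:

```python
import re

def words_after_xx(s):
    # Number each 'XX ' occurrence and capture the word that follows it.
    return [(i + 1, m.group(1))
            for i, m in enumerate(re.finditer(r'XX (\S+)', s))]

print(words_after_xx("The quick XX brown fox XX jumps over the XX lazy dog"))
# [(1, 'brown'), (2, 'jumps'), (3, 'lazy')]
```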
|
[
"english.meta.stackexchange",
"0000002991.txt"
] | Q:
Why was this answer deleted?
I posted this answer in good faith, but it's been deleted.
If you can't see it because you don't have sufficient rep to see deleted answers, it said the apostrophe in the Oakland Athletics logo relates to the fact that A is a modern colloquial adjective/noun meaning of excellent quality/an excellent example.
The OP assumed A's in the logo specifically and only represented an "ungrammatical" abbreviation of Athletics - my answer said it was a plural collective descriptor for the team (who are all A's in the sense of excellent baseball players).
Some people might disagree that this influenced the logo designer, but why should my answer be deleted? Whether intended or not, that connotation will be present for some people (not just me, I'm sure).
I'll admit my link confirming the meaning of A was based on searching for "He's fucking A, man!", but since when did ELU get that prissy? That particular search string was an obvious one to choose if I wanted the modern slang definition; why should I have to delete the expletive before posting?
I suppose my question isn't really Why was this answer deleted? It's more Does anyone agree with me that this answer should not have been deleted? So upvote this question if you think it shouldn't.
A:
Simchona said:
Your answer was flagged as "Not an Answer", to
which I responded by deleting. Your answer
didn't really say whether or not the apostrophe
was misused.
Here's the answer in question:
It means each and every one of the team is an A - as in "He's fucking A, man!"
Andrew Grimm commented on the answer:
I wasn't the original person who downvoted and
flagged for deletion. However, this is not a serious
answer - it's a joke, and ought to be a comment.
That this non-answer had profanity in it may not
have helped.
Also, the answer was very short, and your question here includes more explanation that I think would have been useful in the original answer:
... the
apostrophe in Oakland Athletics logo relates to
the fact that A is a modern colloquial adjective/ noun meaning of excellent quality/an excellent example.
The OP assumed A's in the logo specifically and only represented an "ungrammatical" abbreviation of Athletics - my answer said it was a plural collective descriptor for the team (who are
all A's in the sense of excellent baseball players). ...
|
[
"stackoverflow",
"0052662861.txt"
] | Q:
Pass List by Value instead of Reference to Task state
I am trying to create a task to print count of List:
List<int> test = new List<int>{1};
Task t = new Task((o) =>
{
    List<int> a = (List<int>)o;
    Console.WriteLine(a.Count);
}, test);
t.Start();
t.Wait();
The above code prints the number 1, as expected:
1
Then I clear List<int> test before the task starts:
List<int> test = new List<int>{1};
Task t = new Task((o) =>
{
    List<int> a = (List<int>)o;
    Console.WriteLine(a.Count);
}, test);
test.Clear();
t.Start();
t.Wait();
But it prints the number 0 instead:
0
It should print the number 1 like above. I think the problem is that the List is passed as a reference instead of a value. How can I fix that?
A:
Create a copy of the list using
Task t = new Task(action, new List<int>(test));
or
Task t = new Task(action, test.ToList());
This creates separate list instances that are not shared between each task.
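The root cause is reference semantics: the task receives the same list object, so a Clear() before Start() is visible inside the task. The same behaviour can be sketched in Python (used here purely for illustration; the C# fix above is the actual answer):

```python
test = [1]
alias = test           # same list object: both names share one reference
snapshot = list(test)  # shallow copy: an independent top-level list

test.clear()           # mutate through the original name

print(len(alias))      # 0 -- the alias sees the mutation
print(len(snapshot))   # 1 -- the copy keeps the old contents
```

new List<int>(test) and test.ToList() play the role of the shallow copy here: the task gets its own list, so a later Clear() on the original no longer affects it.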
|
[
"rpg.stackexchange",
"0000174612.txt"
] | Q:
If a 20th level caster fails to learn a spell, can they never try again?
On page 238 of the Core Rulebook, under "Learn a Spell", the rules state:
Failure You fail to learn the spell but can try again after you gain a level. The materials aren’t expended.
Since character level is capped at 20th, does this mean that a 20th level caster who fails to Learn a Spell can never try to learn that spell again?
A:
They would be unable to try again.
As noted learn a spell states:
Critical Success You expend half the materials and learn the spell.
Success You expend the materials and learn the spell.
Failure You fail to learn the spell but can try again after you gain a level. The materials aren’t expended.
Critical Failure As failure, plus you expend half the materials.
This means that a level 20 sorcerer would be unable to try again, since there aren't any rules for going beyond level 20[1], and thus they would be unable to gain the level needed to try again.
Magical Shorthand would fix this.
Magical Shorthand states:
Learning spells comes easily to you. If you’re an expert in a tradition’s associated skill, you take 10 minutes per spell level to learn a spell of that tradition, rather than 1 hour per spell level. If you fail to learn the spell, you can try again after 1 week or after you gain a level, whichever comes first. If you’re a master in the tradition’s associated skill, learning a spell takes 5 minutes per spell level, and if you’re legendary, it takes 1 minute per spell level. You can use downtime to learn and inscribe new spells. This works as if you were using Earn Income with the tradition’s associated skill, but instead of gaining money, you choose a spell available to you to learn and gain a discount on learning it, learning it for free if your earned income equals or exceeds its cost.
This means that a level 20 sorcerer with Magical Shorthand could continue to attempt to learn a spell every week until they succeeded.
[1]: There aren't any rules for going beyond 20 as far as I'm aware, though I could be wrong.
|
[
"judaism.stackexchange",
"0000038180.txt"
] | Q:
What's wrong with making coffee on Shabbat?
If, on Shabbat, someone pours water from a kli shlishi (which is acceptable according to all opinions I'm aware of) over coffee beans lying on a filter, is there any problem with drinking the resulting coffee water? It doesn't seem like it would be bishul, borer, m'rakeid or nolad, so I'm not sure why it would be forbidden.
I'm not looking for psak halacha, just to understand the concept.
A:
Shemirath Shabbath, by R' YY Neuwirth (3:58), states very clearly that pouring water over tea leaves resting in a filter that is suspended over a cup, so that water passes through the filter into the cup, is not a problem of Borer. Using water cooled by transferring it to a third vessel (Keli Shelishi) is given there as an acceptable method for brewing fresh tea on Shabbath.
"Tea bags may be used to make tea on Shabbath, but only by putting them into water ... which is already in a keli shelishi..."
"...Boiling water may be poured into a strainer containing tea leaves, upon condition that the tea leaves were boiled up before Shabbath. This does not involve the prohibition against selection, since a) the strainer separates the water from the tea leaves immediately upon its being poured in and b) the water which comes out is the same water which has just been poured in, and it was separate and drinkable even before the whole process took place."
I don't think it is any stretch to apply this to coffee as well.
|
[
"stackoverflow",
"0032501594.txt"
] | Q:
Using a WCF application, "An error occurred while sending the request" VB
I have written a very basic WCF application that I am hosting on an IIS server. Using the test client (from Visual Studio), the function returns an integer, which is intended.
However, my client application (a Universal Windows 10 app) is forcing me to use async methods.
The below code is what I'm using to call the method, although it chokes at the Await with the error shown in the title (System.ServiceModel.CommunicationException).
Dim theService As New MyService.Service1Client()
Dim a = Await theService.giveNumberAsync()
Dim dialog = New MessageDialog(a.ToString)
dialog.ShowAsync()
Ideally, I would like it to return the value, and I am stumped as to how to get any further with this problem.
A:
Have you given your app the capabilities
Internet (Client)
Internet (Client & Server)
Private Networks (Client & Server)
in its manifest?
|
[
"stackoverflow",
"0017963411.txt"
] | Q:
Complex SQL query with multiple tables and relations
In this query, I have to list pairs of players, with their playerID and playerName, who play for the exact same teams. If a player plays for 3 teams, the other has to play for the exact same 3 teams. No less, no more. If two players currently do not play for any team, they should also be included. The query should return (playerID1, playerName1, playerID2, playerName2) with no repetition, such that if player 1's info comes before player 2's, there should not be another tuple with player 2's info coming before player 1's.
For example, if player A plays for the Yankees and Red Sox, and player B plays for the Yankees, Red Sox, and Dodgers, I should not get them. They both have to play for the Yankees and Red Sox and no one else. Right now this query finds an answer if players play for any same team.
Tables:
player(playerID: integer, playerName: string)
team(teamID: integer, teamName: string, sport: string)
plays(playerID: integer, teamID: integer)
Example data:
PLAYER
playerID playerName
1 Rondo
2 Allen
3 Pierce
4 Garnett
5 Perkins
TEAM
teamID teamName sport
1 Celtics Basketball
2 Lakers Basketball
3 Patriots Football
4 Red Sox Baseball
5 Bulls Basketball
PLAYS
playerID TeamID
1 1
1 2
1 3
2 1
2 3
3 1
3 3
So I should get this as answer-
2, Allen, 3, Pierce
4, Garnett, 5, Perkins
2, Allen, 3, Pierce is an answer because both play exclusively for the CELTICS and PATRIOTS.
4, Garnett, 5, Perkins is an answer because both players play for no teams, which should be in the output.
Right now the Query I have is
SELECT p1.PLAYERID,
f1.PLAYERNAME,
p2.PLAYERID,
f2.PLAYERNAME
FROM PLAYER f1,
PLAYER f2,
PLAYS p1
FULL OUTER JOIN PLAYS p2
ON p1.PLAYERID < p2.PLAYERID
AND p1.TEAMID = p2.TEAMID
GROUP BY p1.PLAYERID,
f1.PLAYERID,
p2.PLAYERID,
f2.PLAYERID
HAVING Count(p1.PLAYERID) = Count(*)
AND Count(p2.PLAYERID) = Count(*)
AND p1.PLAYERID = f1.PLAYERID
AND p2.PLAYERID = f2.PLAYERID;
I am not 100% sure, but I think this finds players who play for at least one common team, whereas I want to find players who play for exactly the same set of teams, as explained above.
I am stuck on how to approach it after this. Any hints on how to approach this problem? Thanks for your time.
A:
I believe this query will do what you want:
SELECT array_agg(players), player_teams
FROM (
SELECT DISTINCT t1.t1player AS players, t1.player_teams
FROM (
SELECT
p.playerid AS t1id,
concat(p.playerid,':', p.playername, ' ') AS t1player,
array_agg(pl.teamid ORDER BY pl.teamid) AS player_teams
FROM player p
LEFT JOIN plays pl ON p.playerid = pl.playerid
GROUP BY p.playerid, p.playername
) t1
INNER JOIN (
SELECT
p.playerid AS t2id,
array_agg(pl.teamid ORDER BY pl.teamid) AS player_teams
FROM player p
LEFT JOIN plays pl ON p.playerid = pl.playerid
GROUP BY p.playerid, p.playername
) t2 ON t1.player_teams=t2.player_teams AND t1.t1id <> t2.t2id
) innerQuery
GROUP BY player_teams
Result:
PLAYERS PLAYER_TEAMS
2:Allen,3:Pierce 1,3
4:Garnett,5:Perkins
It uses array_agg over the teamid for each player in plays to match players with the exact same team configuration. I included a column with the teams as an example, but it can be removed without affecting the results, as long as it isn't removed from the group by clause.
SQL Fiddle example. Tested with PostgreSQL 9.2.4.
EDIT: Fixed an error that duplicated rows.
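For comparison, the core idea behind the query above, grouping players by their exact set of teams, can be sketched outside SQL (illustrative Python using the example data from the question; the data literals below are transcribed by hand):

```python
from collections import defaultdict

# Example data from the question: playerid -> set of teamids (empty = no team).
plays = {1: {1, 2, 3}, 2: {1, 3}, 3: {1, 3}, 4: set(), 5: set()}
names = {1: "Rondo", 2: "Allen", 3: "Pierce", 4: "Garnett", 5: "Perkins"}

# Group players by their exact team set (frozenset makes the set hashable,
# playing the role of the ordered array_agg in the SQL version).
groups = defaultdict(list)
for pid, teams in plays.items():
    groups[frozenset(teams)].append(pid)

# Emit each unordered pair within a group once (id1 < id2 avoids repetition).
pairs = []
for members in groups.values():
    members.sort()
    for i in range(len(members)):
        for j in range(i + 1, len(members)):
            a, b = members[i], members[j]
            pairs.append((a, names[a], b, names[b]))

print(pairs)  # [(2, 'Allen', 3, 'Pierce'), (4, 'Garnett', 5, 'Perkins')]
```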
|
[
"stackoverflow",
"0046425062.txt"
] | Q:
Should I prefer MonadUnliftIO or MonadMask for bracketting like functions?
I'm currently building a new API, and one of the functions it currently provides is:
inSpan :: Tracer -> Text -> IO a -> IO a
I'm looking to move that Tracer into a monad, giving me a signature more like
inSpan :: MonadTracer m => Text -> m a -> m a
The implementation of inSpan uses bracket, which means I have two main options:
class MonadUnliftIO m => MonadTracer m
or
class MonadMask m => MonadTracer m
But which should I prefer? Note that I'm in control of all the types I've mentioned, which makes me slightly lean towards MonadMask as it doesn't enforce IO at the bottom (that is, we could perhaps have a pure MonadTracer instance).
Is there anything else I should consider?
A:
Let's lay out the options first (repeating some of your question in the process):
MonadMask from the exceptions library. This can work on a wide range of monads and transformers, and does not require that the base monad be IO.
MonadUnliftIO from the unliftio-core (or unliftio) library. This library only works for monads with IO at their base which are, in some sense, isomorphic to ReaderT env IO.
MonadBaseControl from the monad-control library. This library will require IO at the base, but will allow non-ReaderT.
Now the tradeoffs. MonadUnliftIO is the newest addition to the fray and has the least developed library support. This means that, in addition to the limitations on which monads can be instances, many good instances just haven't been written yet.
The important question is: why does MonadUnliftIO make this seemingly arbitrary requirement around ReaderT-like things? This is to prevent issues with lost monadic state. For example, the semantics of bracket_ (put 1) (put 2) (put 3) are not really clear, and therefore MonadUnliftIO disallows a StateT instance.
MonadBaseControl relaxes the ReaderT restriction and has wider library support. It's also considered more complicated internally than the other two, but for your usages that shouldn't really matter. And it allows you to make mistakes with the monadic state as mentioned above. If you're careful in your usage, this won't matter.
MonadMask allows totally pure transformer stacks. I think there's a good argument to be had around the usefulness of modeling asynchronous exceptions in a pure stack, but I understand this kind of approach is something people want to do sometimes. In exchange for getting more instances, you still have the limitations around monadic state, plus the inability to lift some IO control actions, like timeout or forkIO.
My recommendation:
If you want to match the way most people are doing things today, it's probably best to choose MonadMask; it's the most widely adopted solution.
If you want that goal, but you also need to do a timeout or withMVar or something, use MonadBaseControl.
And if you know there's a specific set of monads you need compatibility with, and want compile time guarantees about the correctness of your code vis-a-vis monadic state, use MonadUnliftIO.
|
[
"stackoverflow",
"0010787941.txt"
] | Q:
Hide a div when another div is shown
I have 4 divs, two of them are shown on click (link), and hidden the same way. When I click the link for the other 2 divs, the first 2 should be hidden again and the other way around. Right now all 4 divs would be shown if the 2 links were clicked.
Easy: click link > show div; click second link > show second div while hiding the first div.
The 2 links:
<a class="show_hideAbout show_hideAboutArr" href="#" >About</a>
<a class="show_hideContact show_hideContactArr" href="#" >Contact</a>
First 2 divs:
<div class="slidingDivAbout">Some Content</div>
<div class="slidingDivAboutArr">
<img src="img/dropdownarrow.png" width="24" height="12" alt="">
</div>
Other 2 divs:
<div class="slidingDivContact">Some Content</div>
<div class="slidingDivContactArr">
<img src="img/dropdownarrow.png" width="24" height="12" alt="">
</div>
And my script:
$(document).ready(function() {
$(".slidingDivAbout").hide();
$(".show_hideAbout").show();
$('.show_hideAbout').click(function() {
$(".slidingDivAbout").slideToggle(350);
});
$(".slidingDivAboutArr").hide();
$(".show_hideAboutArr").show();
$('.show_hideAboutArr').click(function() {
$(".slidingDivAboutArr").fadeToggle("fast", "linear");
});
$(".slidingDivContact").hide();
$(".show_hideContact").show();
$('.show_hideContact').click(function() {
$(".slidingDivContact").slideToggle(350);
});
$(".slidingDivContactArr").hide();
$(".show_hideContactArr").show();
$('.show_hideContactArr').click(function() {
$(".slidingDivContactArr").fadeToggle("fast", "linear");
});
});
A:
Without changing your markup you can use this:
$(document).ready(function() {
$(".slidingDivAbout, .slidingDivAboutArr, .slidingDivContact, .slidingDivContactArr").hide(0);
$('.show_hideAbout').click(function() {
$(".slidingDivContact").slideUp(300, function() {
$(".slidingDivContactArr").fadeOut(300, function() {
$(".slidingDivAbout").slideToggle(350, "linear", function() {
$(".slidingDivAboutArr").fadeToggle(350);
});
});
});
});
$('.show_hideContact').click(function() {
$(".slidingDivAbout").slideUp(300, function() {
$(".slidingDivAboutArr").fadeOut(300, function() {
$(".slidingDivContact").slideToggle(350, function() {
$(".slidingDivContactArr").fadeToggle(350, "linear");
});
});
});
});
});
Working Sample
|
[
"stackoverflow",
"0026210439.txt"
] | Q:
How do I check whether I have encoded in utf-8 successfully
Given a string
u ='abc'
which syntax is the right one to encode into utf8?
u.encode('utf-8')
or
u.encode('utf8')
And how do I know that I have already encoded in utf-8?
A:
First of all you need to make a distinction if you're talking about Python 2 or Python 3 because unicode handling is one of the biggest differences between the two versions.
Python 2
unicode type contains text characters
str contains sequences of 8-bit bytes, sometimes representing text in some unspecified encoding
s.decode(encoding) takes a sequence of bytes and builds a text string out of it, once given the encoding used by the bytes. It goes from str to unicode; for example "Citt\xe0".decode("iso8859-1") will give you the text "Città" (Italian for city), and the same will happen for "Citt\xc3\xa0".decode("utf-8"). The encoding may be omitted, in which case the meaning is "use the default encoding".
u.encode(encoding) takes a text string and builds the byte sequence representing it in the given encoding, thus reversing the processing of decode. It goes from unicode to str. As above the encoding can be omitted.
Part of the confusion when handling unicode with Python is that the language tries to be a bit too smart and does things automatically.
For example you can call encode also on a str object, and the meaning is "encode, using the specified encoding (or the default encoding if none is given), the text that comes from decoding these bytes with the default encoding".
Similarly you can also call decode on a unicode object, meaning "decode, using the specified encoding, the bytes that come from encoding this text with the default encoding".
For example if I write
u"Citt\u00e0".decode("utf-8")
Python gives as error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe0' in
position 3: ordinal not in range(128)
NOTE: the error is about encoding that failed, while I asked for decoding. The reason is that I asked to decode text (nonsense, because that is already "decoded"... it's text) and Python decided to first encode it using the "ascii" encoding, and that failed. IMO it would have been much better to simply not have decode on unicode objects and not have encode on str objects: the error message would have been clearer.
More confusion is that in Python 2 str is used for unencoded bytes, but it's also used everywhere for text and for example string literals are str objects.
Python 3
To solve some of the issues Python 3 made a few key changes
str is for text and contains unicode characters, string literals are unicode text
unicode type doesn't exist any more
bytes type is used for 8-bit bytes sequences that may represent text in some unspecified encoding
For example in Python 3
'Città'.encode('iso8859-1') → b'Citt\xe0'
'Città'.encode('utf-8') → b'Citt\xc3\xa0'
also you cannot call decode on text strings and you cannot call encode on byte sequences.
Failures
Sometimes encoding text into bytes may fail, because the specified encoding cannot handle all of unicode. For example iso8859-1 cannot handle Chinese. These errors can be processed in a few ways like raising an exception (default), or replacing characters that cannot be encoded with something else.
The encoding utf-8, however, is able to encode any unicode character, and thus encoding to utf-8 never fails. So it doesn't make sense to ask how to know whether encoding text into utf-8 was done correctly, because it always succeeds (for utf-8).
Also decoding may fail, because the sequence of bytes may make no sense in the specified encoding. For example the sequence of bytes 0x43 0x69 0x74 0x74 0xE0 cannot be interpreted as utf-8 because the byte 0xE0 cannot appear without a proper prefix.
There are encodings like iso8859-1, however, where decoding cannot fail, because every byte 0..255 has a meaning as a character. Most "local encodings" are of this type... they map all 256 possible 8-bit values to some character, but cover only a tiny fraction of the unicode characters.
Decoding using iso8859-1 will never raise an error (any byte sequence is valid), but of course it can give you nonsense text if the bytes were written using another encoding.
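The points above can be verified directly in Python 3, including the original question: 'utf-8' and 'utf8' are aliases for the same codec, so both spellings are fine:

```python
# 'utf-8' and 'utf8' name the same codec; Python normalizes codec aliases.
assert "abc".encode("utf-8") == "abc".encode("utf8")

# Encoding to utf-8 always succeeds, since UTF-8 covers all of Unicode.
assert "Città".encode("utf-8") == b"Citt\xc3\xa0"

# A limited codec like iso8859-1 can fail on characters outside its range.
try:
    "你好".encode("iso8859-1")
    encode_failed = False
except UnicodeEncodeError:
    encode_failed = True
assert encode_failed

# Decoding fails when the bytes make no sense in the chosen encoding:
# 0xE0 is a UTF-8 lead byte that requires continuation bytes.
try:
    b"Citt\xe0".decode("utf-8")
    decode_failed = False
except UnicodeDecodeError:
    decode_failed = True
assert decode_failed

# ...but iso8859-1 decoding never fails: every byte maps to a character.
assert b"Citt\xe0".decode("iso8859-1") == "Città"
```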
|
[
"stackoverflow",
"0020781774.txt"
] | Q:
If I start today, what hardware need to develop for iOS
I read some questions about it, but some are old, and with the recent iOS 7 this can be a little different. I usually develop for Android, and I have the opportunity to port my apps to iOS, but I don't have a Mac or iPad/iPhone etc.
OK, I was thinking of buying a used Mac mini and a new iPad mini Retina, but I don't know how a used Mac mini would affect my development. I only need a Mac to upload and sign the apps; I develop everything using a framework on Windows.
Can I buy an old Mac mini? Is it compatible? I don't have anybody who can help me with this. Thanks a lot.
Sorry for my English.
A:
All this information is available on Apple's developer site http://developer.apple.com/
But in a nutshell, you need a Mac running an Intel processor and OS X 10.9 or later. Any Mac mini a few years old should be able to do this.
|
[
"softwareengineering.stackexchange",
"0000050038.txt"
] | Q:
How to 'reproduce copyright' in an app?
Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and
the following disclaimer in the
documentation and/or other materials
provided with the distribution.
The above is quoted from a BSD licensed COPYING file. How can I "reproduce the above copyright" when I distribute my app in binary form? Should I put the COPYING file content in some Credit/About screen of my app or what? Thanks!
A:
See for example how Apple does it.
On iOS you can reach a Copyright menu inside the settings menu. It's a rather long entry, listing all copyrights, including BSD licenses.
|
[
"stackoverflow",
"0012147040.txt"
] | Q:
Division in script and floating-point
I would like to do the following operation in my script:
1 - ((m - 20) / 34)
I would like to assign the result of this operation to another variable. I want my script use floating point math. For example, for m = 34:
results = 1 - ((34 - 20) / 34) == 0.588
A:
You could use the bc calculator. It will do arbitrary-precision math using decimals (not binary floating point) if you increase scale from its default of 0:
$ m=34
$ bc <<< "scale = 10; 1 - (($m - 20) / 34)"
.5882352942
The -l option will load the standard math library and default the scale to 20:
$ bc -l <<< "1 - (($m - 20) / 34)"
.58823529411764705883
You can then use printf to format the output, if you so choose:
printf "%.3f\n" "$(bc -l ...)"
A:
Bash does not do floating point math. You can use awk or bc to handle this. Here is an awk example:
$ m=34; awk -v m=$m 'BEGIN { print 1 - ((m - 20) / 34) }'
0.588235
To assign the output to a variable:
var=$(awk -v m=$m 'BEGIN { print 1 - ((m - 20) / 34) }')
A:
Teach bash e.g. integer division with floating point results:
#!/bin/bash
div () # Arguments: dividend and divisor
{
if [ $2 -eq 0 ]; then echo division by 0; exit; fi
local p=12 # precision
local c=${c:-0} # precision counter
local d=. # decimal separator
local r=$(($1/$2)); echo -n $r # result of division
local m=$(($r*$2))
[ $c -eq 0 ] && [ $m -ne $1 ] && echo -n $d
[ $1 -eq $m ] || [ $c -eq $p ] && echo && return
local e=$(($1-$m))
c=$(($c+1))
div $(($e*10)) $2
}
result=$(div 1080 633) # write to variable
echo $result
result=$(div 7 34)
echo $result
result=$(div 8 32)
echo $result
result=$(div 246891510 2)
echo $result
result=$(div 5000000 177)
echo $result
Output:
1.706161137440
0.205882352941
0.25
123445755
28248.587570621468
|
[
"sharepoint.stackexchange",
"0000211361.txt"
] | Q:
Zip files not highlighted issue in Enterprise Search site
I have enabled enterprise search in my SharePoint 2013 farm. The search service crawls a publishing site collection that is one of my Web Applications. A few users have uploaded some zip files into the publishing site.
I have observed that, for zip/compressed files, the searched keyword is not mentioned or highlighted. In order to avoid this situation, can zip files be excluded from search results? Can I hide the zip files from crawling?
A:
You can prevent the files from being crawled. They won't show up in search results at all. Just create a crawl rule that Excludes the path to the zip files https://yoursite/*.zip
|
[
"stackoverflow",
"0049338639.txt"
] | Q:
Notify vue that item internal has been changed
Say I have an instance of a class which can be modified internally by a method call (some props could be added, some removed, etc.).
How can I notify Vue about these changes (how do I tell Vue to “reload this item to be reactive”)?
Thanks.
PS: there is no access to Vue from the class (suppose it is an external library).
Here is an example https://jsfiddle.net/h34a7s0n/50/
And a possible solution:
// Let's say structure of 'item' is changed.
// We have to give some kick to Vue to reinitialize observer
// And yes, we need to know 'item' internals :(
// First remove old observer.
delete this.item._data.__ob__;
// Next define the new one
Vue.util.defineReactive(this.item.__ob__.value, '_data', this.item._data);
// And finally notify about changes, like $set does
this.item.__ob__.dep.notify();
Yes, it is dirty. But it works: https://jsfiddle.net/h34a7s0n/89/
Is there any clean way to solve?
A:
The problem is the v2 property of _data; v2, specifically, is not reactive.
Official docs - Change Detection Caveats:
Due to the limitations of modern JavaScript (and the abandonment of Object.observe), Vue cannot detect property addition or deletion.
To work around it, you can either declare it in the constructor (this option works best when you know what properties you'll have):
constructor(v1) {
this._data = {
v1,
v2: null // <========= added this
};
}
class C {
constructor(v1) {
this._data = {
v1,
v2: null // <========= added this
};
}
addV2(v2) {
this._data['v2'] = v2;
alert( 'Item is fully loaded' );
console.log(this._data);
}
get value1() {
return this._data.v1;
}
get value2() {
if ('v2' in this._data) {
return this._data.v2;
}
return;
}
}
new Vue({
el: '#app',
template: `
<div>
<div>
{{item.value1}}, {{item.value2}}
<button @click="fullLoad">+</button>
</div>
</div>`,
data: {
item: new C(0)
},
methods: {
fullLoad() {
this.item.addV2(2 * Math.random());
// How to notify vue about changes here?
}
}
})
<script src="https://unpkg.com/vue"></script>
<div id="app"></div>
Or change it using Vue.set(). (This option works best for the general case.)
addV2(v2) {
Vue.set(this._data, 'v2', v2); // <========= changed here
alert( 'Item is fully loaded' );
console.log(this._data);
}
class C {
constructor(v1) {
this._data = {
v1
};
}
addV2(v2) {
Vue.set(this._data, 'v2', v2); // <========= changed here
alert( 'Item is fully loaded' );
console.log(this._data);
}
get value1() {
return this._data.v1;
}
get value2() {
if ('v2' in this._data) {
return this._data.v2;
}
return;
}
}
new Vue({
el: '#app',
template: `
<div>
<div>
{{item.value1}}, {{item.value2}}
<button @click="fullLoad">+</button>
</div>
</div>`,
data: {
item: new C(0)
},
methods: {
fullLoad() {
this.item.addV2(2 * Math.random());
// How to notify vue about changes here?
}
}
})
<script src="https://unpkg.com/vue"></script>
<div id="app"></div>
No access to the third-party class code
If you have no access to C class internals, you can re-add the item data property, as below:
methods: {
fullLoad() {
let i = this.item; // save for reinsertion
this.item = null; // remove
Vue.set(this, 'item', i); // reinsert as fully reactive
this.item.addV2(2 * Math.random());
}
}
Runnable demo:
class C {
constructor(v1) {
this._data = {
v1
};
}
addV2(v2) {
this._data['v2'] = v2;
alert( 'Item is fully loaded' );
console.log(this._data);
}
get value1() {
return this._data.v1;
}
get value2() {
if ('v2' in this._data) {
return this._data.v2;
}
return;
}
}
new Vue({
el: '#app',
template: `
<div>
<div>
{{item.value1}}, {{item.value2}}
<button @click="fullLoad">+</button>
</div>
</div>`,
data: {
item: new C(0)
},
methods: {
fullLoad() {
let i = this.item; // save for reinsertion
this.item = null; // remove
Vue.set(this, 'item', i); // reinsert as fully reactive
this.item.addV2(2 * Math.random());
}
}
})
<script src="https://unpkg.com/vue"></script>
<div id="app"></div>
|
[
"stackoverflow",
"0012999665.txt"
] | Q:
how to get/trace asp.net outgoing response text
my server seems to be sometimes returning wrong html to webclients
im using asp.net 4 with VS 2012. debugging on IIS Express.
in order to debug this issue, id like to trace the html that asp.net is sending
in the Global_asax_PreRequestHandlerExecute i can access the response code and status, but cant seem to find the body html
i tried to read the OutputStream like this:
Dim ms = New MemoryStream
CurContext.Response.OutputStream.CopyTo(ms)
Dim sr = New StreamReader(ms)
Dim rtext = sr.ReadToEnd
but that throws a NotSupportedException Stream does not support reading.
any ideas?
thanks a lot
EDIT
i now tested this for sure
i have a label on the page with the following attributes
<asp:label id="l" runat="server" Font-Bold="true" Font-Size="X-Large" BackColor="Pink"/>
when displayed in the browser it shows just fine, as follows:
<span id="C1_FormView1_l" style="background-color:Pink;font-size:X-Large;font-weight:bold;">Processed</span>
but when downloaded with webclient i get
<span id="C1_FormView1_l"><b><font size="6">Processed</font></b></span>
why is the backcolor lost? and btw, why doesn't it use the more modern style attribute instead of adding b and font
if i could read the ResponseStream i would at least know WHERE it gets lost, even that i dont know now.
thank you very much
P.S. if .net 4.5 is better for this, then i might consider changing the target framework
A:
this does not answer my original question technically, but it does solve the issue i was having
the problem was that the html wasnt rendering correctly
i now remembered that aspx has adaptive rendering, so i figured the useragent used in the request might be to blame
i changed my code to:
Dim myReq As HttpWebRequest = WebRequest.Create(MailUrl)
myReq.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1"
Dim resp As HttpWebResponse = myReq.GetResponse
Dim stream = resp.GetResponseStream
Dim rdr = New StreamReader(stream)
Dim BodyText = rdr.ReadToEnd
and now the html is rendering in correct modern Html5/Css3 markup
i appreciate your help and guidance.
|
[
"stackoverflow",
"0014045801.txt"
] | Q:
Different pselect() behaviour on OSX vs Linux?
I am trying to implement a basic event loop with pselect, so I have blocked some signals, saved the signal mask and used it with pselect so that the signals will only be delivered during that call.
If a signal is sent outside of the pselect call, it is blocked until pselect runs, as it should be; however, it does not interrupt the pselect call. If a signal is sent while pselect is blocking, it will be handled AND pselect will be interrupted. This behaviour is only present on OSX; on Linux it seems to function correctly.
Here is a code example:
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <errno.h>
#include <unistd.h>
#include <signal.h>
int shouldQuit = 0;
void signalHandler(int signal)
{
printf("Handled signal %d\n", signal);
shouldQuit = 1;
}
int main(int argc, char** argv)
{
sigset_t originalSignals;
sigset_t blockedSignals;
sigemptyset(&blockedSignals);
sigaddset(&blockedSignals, SIGINT);
if(sigprocmask(SIG_BLOCK, &blockedSignals, &originalSignals) != 0)
{
perror("Failed to block signals");
return -1;
}
struct sigaction signalAction;
memset(&signalAction, 0, sizeof(struct sigaction));
signalAction.sa_mask = blockedSignals;
signalAction.sa_handler = signalHandler;
if(sigaction(SIGINT, &signalAction, NULL) == -1)
{
perror("Could not set signal handler");
return -1;
}
while(!shouldQuit)
{
fd_set set;
FD_ZERO(&set);
FD_SET(STDIN_FILENO, &set);
printf("Starting pselect\n");
int result = pselect(STDIN_FILENO + 1, &set, NULL, NULL, NULL, &originalSignals);
printf("Done pselect\n");
if(result == -1)
{
if(errno != EAGAIN && errno != EWOULDBLOCK && errno != EINTR)
{
perror("pselect failed");
}
}
else
{
printf("Start Sleeping\n");
sleep(5);
printf("Done Sleeping\n");
}
}
return 0;
}
The program waits until you input something on stdin, then sleeps for 5 seconds. To create the problem, "a" is typed to create data on stdin. Then, while the program is sleeping, an INT signal is sent with Crtl-C.
On Linux:
Starting pselect
a
Done pselect
Start Sleeping
^CDone Sleeping
Starting pselect
Handled signal 2
Done pselect
On OSX:
Starting pselect
a
Done pselect
Start Sleeping
^CDone Sleeping
Starting pselect
Handled signal 2
^CHandled signal 2
Done pselect
A:
Confirmed that it acts that way on OSX, and if you look at the source for pselect (http://www.opensource.apple.com/source/Libc/Libc-320.1.3/gen/FreeBSD/pselect.c), you'll see why.
After sigprocmask() restores the signal mask, the kernel delivers the signal to the process, and your handler gets invoked. The problem here is that the signal can be delivered before select() gets invoked, so select() won't return with an error.
There's some more discussion about the issue at http://lwn.net/Articles/176911/ - Linux used to use a similar userspace implementation that had the same problem.
If you want to make that pattern safe on all platforms, you'll have to either use something like libev or libevent and let them handle the messiness, or use sigprocmask() and select() yourself.
e.g.
sigset_t omask;
if (sigprocmask(SIG_SETMASK, &originalSignals, &omask) < 0) {
perror("sigprocmask");
break;
}
/* Must re-check the flag here with signals re-enabled */
if (shouldQuit)
break;
printf("Starting select\n");
int result = select(STDIN_FILENO + 1, &set, NULL, NULL, NULL);
int save_errno = errno;
if (sigprocmask(SIG_SETMASK, &omask, NULL) < 0) {
perror("sigprocmask");
break;
}
/* Recheck again after the signal is blocked */
if (shouldQuit)
break;
printf("Done pselect\n");
if(result == -1)
{
errno = save_errno;
if(errno != EAGAIN && errno != EWOULDBLOCK && errno != EINTR)
{
perror("pselect failed");
}
}
There are a couple of other things you should do with your code:
declare your 'shouldQuit' variable as volatile sig_atomic_t
volatile sig_atomic_t shouldQuit = 0;
always save errno before calling any other function (such as printf()), since that function may cause errno to be overwritten with another value. That's why the code above saves errno immediately after the select() call.
Really, I strongly recommend using an existing event loop handling library like libev or libevent - I do, even though I can write my own, because it is so easy to get wrong.
|
[
"stackoverflow",
"0046976952.txt"
] | Q:
Set undefined value using *ngIf and ngModelChange
I am using (ngModelChange) with several attributes, but one of those attributes can be null for some entries. For now, the only solution I have found is to duplicate the input with an *ngIf condition that checks whether the attribute is null.
<input *ngIf="!member.instrument" [(ngModel)]="member.firstname" (ngModelChange)="updateField(member.key,noinstrument,member.firstname)">
<input *ngIf="member.instrument" [(ngModel)]="member.firstname" (ngModelChange)="updateField(member.key,member.instrument.key,member.firstname)">
If I don't do this, I get the following error when ngModelChange fires:
ERROR TypeError: Cannot read property 'member.instrument.key' of
undefined
I'm sure there is a way to do this with only one input field... Maybe setting the member.instrument.key to null when it's not defined?
A:
Try something like this
(ngModelChange)="updateField(member.key,
                 member.instrument ? member.instrument.key : undefined,
                 member.firstname)"
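An alternative sketch: move the null check into the component class so the template stays flat (the instrumentKey helper below is made up for illustration, not part of the original answer):

```javascript
// Hypothetical helper on the component: resolve the instrument key,
// falling back to undefined when no instrument is set.
function instrumentKey(member) {
  return member.instrument ? member.instrument.key : undefined;
}

// The single input's binding would then read:
//   (ngModelChange)="updateField(member.key, instrumentKey(member), member.firstname)"
```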
|
[
"stackoverflow",
"0042940348.txt"
] | Q:
confusion over the phrase "extend Object.prototype or one of the other built-in prototypes"
I'm currently studying javascript prototype and inheritance, and I have encountered the following paragraphs on MDN
I'm not exactly sure what the author meant by extend Object.prototype or one of the other built-in prototypes. Could someone please clarify the concept, preferably with a code sample? Thanks
A:
The term "built-in prototype" refers to the prototype objects from which standard objects inherit. This includes the language-specified Boolean.prototype, Number.prototype, String.prototype, Symbol.prototype, Object.prototype, Array.prototype, Function.prototype, Date.prototype, and the prototype objects for the various Errors, typed arrays, data structures ((Weak-) Map, Set) and iterators.
It also encompasses other native prototype objects in the environment, for example the DOM (Node.prototype, Element.prototype, Document.prototype, …) and other Web APIs (e.g. XMLHttpRequest.prototype).
See the definition of built-in objects and the whole section about standard built-in objects in ES6.
In general, you should not mess with them. They are supplied by the environment, they are not yours - don't touch them and create your own methods on them. If you want to write modular, interoperable code, you should not depend on custom, global modifications of built-ins. See also Why is extending native objects a bad practice? for more discussion.
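To make the risk concrete, here is a sketch of what extending Object.prototype looks like and one way it bites: the added property is enumerable, so it leaks into every for...in loop (the describe method is invented for the example):

```javascript
// Extending a built-in prototype: every object now "has" this method.
Object.prototype.describe = function () { return 'I am ' + typeof this; };

const point = { x: 1 };
const inherited = point.describe();        // inherited from Object.prototype

// Side effect: the new property is enumerable, so it leaks into for...in.
const keys = [];
for (const k in point) keys.push(k);

// Clean up so the global prototype is not left modified.
delete Object.prototype.describe;
```

Code elsewhere that iterates objects with for...in will suddenly see describe, which is the interoperability hazard the answer warns about; defining the property as non-enumerable mitigates that symptom but not the name-collision risk.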
|
[
"stackoverflow",
"0011438233.txt"
] | Q:
Dialogs must be user-initiated issue and async. requests
I get a "Dialogs must be user-initiated" error.
Any clue how it could be resolved?
Thank you!
private void ExportDataToCSV()
{
string separator = GetSeparator();
LoadClientContentItemStatistics(); // async. request of MVVM Light Toolkit
foreach (var item in this._view)
{
sb.AppendLine("\"" + item.Player.PlayerGroup.Name + "\"" + separator + "\"" + item.Player.Name + "\"" + separator + "\"" + item.ClientContentItem.Name + "\"" + separator + "\"" + item.TotalTime.ToString() + "\"" + separator + "\"" + item.TotalQuantity.ToString() + "\"" + separator + Environment.NewLine);
}
SaveFileDialog sfd = new SaveFileDialog()
{
DefaultExt = "csv",
Filter = "CSV Files (*.csv)|*.csv|All files (*.*)|*.*",
FilterIndex = 1
};
if (sfd.ShowDialog() == true) // "Dialogs must be user-initiated" ERROR
{
using (Stream stream = sfd.OpenFile())
{
using (StreamWriter writer = new StreamWriter(stream))
{
writer.Write(sb.ToString());
writer.Close();
}
stream.Close();
}
}
IsExport = true;
}
A:
This error means exactly what it says: the user must initiate this dialog. So, you can only call this method from a user click event or another user-initiated event.
|
[
"spanish.stackexchange",
"0000003719.txt"
] | Q:
What is the history of the "personal a"?
What is the historical origin of the "personal a" in Spanish?
Examples of the personal a:
George sees Mary. -> Jorge ve a María.
I see the waitress. -> Veo a la mesera.
But with the exact same sentence structure, the 'a' is omitted when referencing non-persons:
George sees the dog. -> Jorge ve el perro.
I see the table. -> Veo la mesa.
A:
The personal a can be confusing to English speakers because we are accustomed to sentence structure conveying meaning.
For example, when I say
John picked up the brother
it's "obvious" that the brother is the direct object. We take it for granted in English but the reason we know this is because brother comes after the verb. Reversing it:
The brother picked up John
changes the meaning of the sentence. Specifically, who is the direct object and who is the subject.
In Spanish that same meaning is not conveyed by sentence structure. For example:
Jorge ve María -> ??
María ve Jorge. -> ??
We have no idea who saw who. While to English natives it appears we can understand the sentences, in reality the meaning is not there.
This is where the personal a comes in.
Jorge ve a María -> George sees Mary.
A Jorge ve María. -> Mary sees George.
Now we can tell who sees who. The personal a indicates who is the direct object.
The personal a is mostly used when the direct object is a person, but it can also be applied to things that can be personified, e.g. pets.
Juanita extraña a su perro.
Another example is with certain pronouns, e.g. alguien, nadie, quien, etc.
Yo no vi a nadie
¿A quién pertenece la tele?
The benefit of all this is that Spanish gives the speaker much more flexibility in constructing sentences. This is common throughout the language, e.g.:
Me quiero ir
vs
Quiero irme
Both mean "I want to leave"
A:
We have to go back to Latin, because the personal a was already present in Old Spanish:
Enbió el rey don Alfonso a Ruy Díaz mio Çid por las parias que le avían a dar los reyes de Córdova e de Sevilla cada año.
According to the RAE, it comes from the Latin ad (English "to", "toward"), an accusative preposition indicating direction (hacia, hasta, etc.), proximity (junto a, en, ...), purpose (para, ...) or comparison (ante, según, ...), and in all of those senses it can sometimes be translated as "a" in Spanish. The Latin word comes from Proto-Indo-European and is related to the English at (of Proto-Germanic origin).
As Trevor explains, this directional marking on personified direct objects (DO) avoids ambiguity in Spanish, where, unlike in other languages, the subject is not required to appear before the verb. Thus
Jorge quiere María (ambiguous)
could be interpreted as:
Jorge quiere a María (not ambiguous)
but could also be:
A Jorge quiere María (not ambiguous)
This marking also imposes a semantic restriction of specificity and definiteness, which may or may not apply to persons or to more-or-less personified objects:
Conozco a un policía (a specific person: Pepe)
Necesito un policía (no specific person: Pepe, Juan, etc.)
If instead of policía we use a proper name (Pepe), the DO is necessarily specific, and therefore takes "a".
When the DO is a common thing, such as "una pared", the "a" can certainly be omitted, because it is not unique:
Pintó la mesa
But the reason is not simply that the DO is not personal. It is also that there is no sense of direction or movement, and no ambiguity. Accordingly, the "a" is used in:
Saltó a la mesa (someone jumps onto the table)
And if it is removed:
Saltó la mesa (a poltergeist phenomenon: the table itself jumped)
|
[
"stackoverflow",
"0063382114.txt"
] | Q:
How to call another component function?
HomeComponent-component.ts
import { Router, NavigationExtras } from '@angular/router';
export class HomeComponent implements OnInit {
price;
constructor(private router: Router) { }
async onSubmit(customerData): Promise<any>{
this.price = customerData.price //price = 2500
}
//This function sends the data to another component
test(){
const navigationExtras: NavigationExtras = {state: {price: this.price }};
this.router.navigate(['price-data'], navigationExtras);//this is going to /price-data component
}
}
button.component.html
<button><span (click)="test()">Start</span></button>
Now when I click the button I want to call the HomeComponent's test() function. The two components are not in a parent-child relationship. How can I do this?
A:
Can you show the HTML where the button component is contained?
To do this, the button component should have an output property
@Output("buttonClicked") buttonClicked = new EventEmitter();
Then the parent will hook it up like this:
<app-button (buttonClicked)="onButtonClicked()"></app-button>
And in the parent code there will be this:
onButtonClicked(){
//do something in parent component
}
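Since the asker says the two components are not in a parent-child relationship, the other standard option is a shared injectable service that both components use. Its mechanics reduce to a tiny publish/subscribe object, sketched here in plain JavaScript (in a real app this would be an @Injectable service exposing an RxJS Subject; createClickBus is invented for the sketch):

```javascript
// Minimal stand-in for a shared Angular service with an RxJS Subject.
function createClickBus() {
  const listeners = [];
  return {
    // HomeComponent would call subscribe() in ngOnInit.
    subscribe(fn) { listeners.push(fn); },
    // ButtonComponent would call emit() from its (click) handler.
    emit(payload) { listeners.forEach(fn => fn(payload)); },
  };
}

const bus = createClickBus();
let received = null;
bus.subscribe(price => { received = price; });  // HomeComponent side
bus.emit(2500);                                 // ButtonComponent side
```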
|
[
"stackoverflow",
"0022624879.txt"
] | Q:
How to do knex.js migrations?
I'm still not sure how to do my migrations with knex. Here is what I have so far. It works on up, but down gives me a FK constraint error even though foreign_key_checks = 0.
exports.up = function(knex, Promise) {
return Promise.all([
knex.raw('SET foreign_key_checks = 0;'),
/* CREATE Member table */
knex.schema.createTable('Member', function (table) {
table.bigIncrements('id').primary().unsigned();
table.string('email',50);
table.string('password');
/* CREATE FKS */
table.bigInteger('ReferralId').unsigned().index();
table.bigInteger('AddressId').unsigned().index().inTable('Address').references('id');
}),
/* CREATE Address table */
knex.schema.createTable('Address', function (table) {
table.bigIncrements('id').primary().unsigned();
table.index(['city','state','zip']);
table.string('city',50).notNullable();
table.string('state',2).notNullable();
table.integer('zip',5).unsigned().notNullable();
}),
knex.raw('SET foreign_key_checks = 1;')
]);
};
exports.down = function(knex, Promise) {
return Promise.all([
knex.raw('SET foreign_key_checks = 0;'),
knex.schema.dropTable('Address'),
knex.schema.dropTable('Member'),
knex.raw('SET foreign_key_checks = 1;')
]);
};
A:
jedd.ahyoung is correct. You don't need to limit your connection pool to 1. You just need to chain your promises so they are not run in parallel.
For example:
exports.up = function(knex, Promise) {
return removeForeignKeyChecks()
.then(createMemberTable)
.then(createAddressTable)
.then(addForeignKeyChecks);
function removeForeignKeyChecks() {
return knex.raw('SET foreign_key_checks = 0;');
}
function addForeignKeyChecks() {
return knex.raw('SET foreign_key_checks = 1;');
}
function createMemberTable() {
return knex.schema.createTable('Member', function (table) {
table.bigIncrements('id').primary().unsigned();
table.string('email',50);
table.string('password');
/* CREATE FKS */
table.bigInteger('ReferralId').unsigned().index();
table.bigInteger('AddressId').unsigned().index().inTable('Address').references('id');
});
}
function createAddressTable() {
return knex.schema.createTable('Address', function (table) {
table.bigIncrements('id').primary().unsigned();
table.index(['city','state','zip']);
table.string('city',50).notNullable();
table.string('state',2).notNullable();
table.integer('zip',5).unsigned().notNullable();
});
}
};
Also I may be missing something but it looks like you won't need to remove and then reinstate the foreign key checks if you create the address table before the member table.
Here's how the final code would look:
exports.up = function(knex, Promise) {
return createAddressTable()
.then(createMemberTable);
function createMemberTable() {
return knex.schema.createTable('Member', function (table) {
table.bigIncrements('id').primary().unsigned();
table.string('email',50);
table.string('password');
/* CREATE FKS */
table.bigInteger('ReferralId').unsigned().index();
table.bigInteger('AddressId').unsigned().index().inTable('Address').references('id');
});
}
function createAddressTable() {
return knex.schema.createTable('Address', function (table) {
table.bigIncrements('id').primary().unsigned();
table.index(['city','state','zip']);
table.string('city',50).notNullable();
table.string('state',2).notNullable();
table.integer('zip',5).unsigned().notNullable();
});
}
};
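To see why the original Promise.all version misbehaves, note that Promise.all([...]) invokes every task immediately, while chaining with .then defers each task until the previous promise settles. A minimal sketch with plain promises (no knex involved):

```javascript
// Each "migration task" records when it is *started*, then returns a promise.
const calls = [];
const createA = () => { calls.push('A'); return Promise.resolve(); };
const createB = () => { calls.push('B'); return Promise.resolve(); };

// Promise.all style: both tasks are invoked up front, i.e. started in parallel.
Promise.all([createA(), createB()]);
const parallelOrder = calls.slice();   // both tasks have already started

// Chained style: createB is not invoked until createA's promise settles.
calls.length = 0;
createA().then(createB);
const chainedOrder = calls.slice();    // only 'A' has started so far
```

This start-order difference is exactly what the chained version above relies on.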
A:
Figured out that it wasn't working because of connection pooling. It would use a different connection to run each migration task, which caused the foreign key checks not to be set properly. Setting
pool:{
max:1
}
in the migration config file fixed this.
A:
I solved this problem by using a transaction
transaction.js
module.exports = function transaction(fn) {
return function _transaction(knex, Promise) {
return knex.transaction(function(trx) {
return trx
.raw('SET foreign_key_checks = 0;')
.then(function() {
return fn(trx, Promise);
})
.finally(function() {
return trx.raw('SET foreign_key_checks = 1;');
});
});
};
}
Migration file
var transaction = require('../transaction');
function up(trx, Promise) {
return trx.schema
.createTable('contract', function(table) {
table.boolean('active').notNullable();
table.integer('defaultPriority').unsigned().references('priority.id');
table.integer('defaultIssueStatus').unsigned().references('issueStatus.id');
table.integer('owner').notNullable().unsigned().references('user.id');
})
.createTable('user', function (table) {
table.increments('id').primary();
table.datetime('createdAt');
table.datetime('updatedAt');
table.string('phoneNumber').notNullable().unique();
table.string('password').notNullable();
table.string('name').notNullable().unique();
table.string('email');
table.string('status');
table.string('roles').defaultTo('user');
table.integer('contract').unsigned().references('contract.id');
});
}
function down(trx, Promise) {
return trx.schema
.dropTable('contract')
.dropTable('user');
}
exports.up = transaction(up);
exports.down = transaction(down);
|
[
"math.stackexchange",
"0000442551.txt"
] | Q:
Wimpy powerset function
Define the 'wimpy powerset function' $\mathcal{W} : \mathrm{Set} \rightarrow \mathrm{Set}$ by writing $$\mathcal{W}(B) = \{X \in \mathcal{P}(B) : |X| < |B|\}.$$
A few preliminary observations.
If $B$ is finite, then $|\mathcal{W}(B)| + 1 = |\mathcal{P}(B)|.$
If $B$ is countable (e.g. take $B=\mathbb{N}$), then $|\mathcal{W}(B)| = |B|.$
What else is known about $\mathcal{W}$? In particular:
What can we say about $\mathcal{W}(\aleph_1)$ and $\mathcal{W}(\beth_1)$?
Do there exist sets $B$ such that $|\mathcal{W}(B)| = |\mathcal{P}(B)|$?
A:
We're assuming ZFC, right?
$|\mathcal W(\omega_1)|=\beth_1$.
$\beth_1\le|\mathcal W(\beth_1)|\le\beth_2$;
if $2^{\aleph_0}=\aleph_1$, then $|\mathcal W(\beth_1)|=\beth_1$, but
if $2^{\aleph_0}=\aleph_2$ and $2^{\aleph_1}=2^{\aleph_2}=\aleph_3$, then $|\mathcal W(\beth_1)|=\beth_2$.
$|\mathcal W(\beth_{\omega})|=|\mathcal P(\beth_{\omega})|=\beth_{\omega+1}$.
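A quick justification of the first item (a sketch): every element of $\mathcal W(\omega_1)$ is a countable subset of $\omega_1$, so $2^{\aleph_0}\le|\mathcal W(\omega_1)|\le\aleph_1^{\aleph_0}$, the lower bound coming from the subsets of $\omega\subseteq\omega_1$. By Hausdorff's formula, $\aleph_1^{\aleph_0}=\aleph_1\cdot\aleph_0^{\aleph_0}=\aleph_1\cdot 2^{\aleph_0}=2^{\aleph_0}$, since $\aleph_1\le 2^{\aleph_0}$ in ZFC. Hence $|\mathcal W(\omega_1)|=2^{\aleph_0}=\beth_1$.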
|
[
"stackoverflow",
"0013065085.txt"
] | Q:
super.clone() operation not works in Derived Class
This is raised because of the technical difficulties faced in my Project.
Problem:
I need to clone an object of a class that inherits its properties from a third-party library class (whose contents we do not have access to modify).
Let me explain with example below:
Parent Class:
public class UnChangeableBaseClass {
//fields and Methods
}
Child Class:
class DerivedLocalClass extends UnChangeableBaseClass implements Cloneable {
// local fields and methods
public Object clone(){
Object clonedObj= null;
try{
clonedObj = super.clone();
}
catch(CloneNotSupportedException e){
//log exceptions
}
}
}
When I try this, the super.clone() call resolves against the UnChangeableBaseClass type, which does not override Object's clone() method. I believe all classes implicitly extend java.lang.Object, so the protected Object.clone() method should be inherited by the parent class. I therefore expected the clone() method in the derived class to override the parent/Object clone method, but at runtime the JVM looks for a clone method explicitly defined in UnChangeableBaseClass. I hope I have explained this properly without confusing you.
My questions are as follows:
How can I implement the clone method in this case, where we cannot add any method to the parent class to make super.clone() call Object's clone method?
If the above is not possible, is there any other way to clone the derived class object (considering all the limitations in the above scenario)?
Finally, what is the reason for this JVM behaviour (described above)?
A:
The correct method signature is below:
@Override
public Object clone() throws CloneNotSupportedException {
return super.clone();
}
clone() is a protected method in the Object class, so it is accessible inside your class and in any class that extends it.
Calling super.clone() invokes Object's clone() method, which internally calls internalClone on this, i.e. the current object:
internalClone((Cloneable) this);
So the clone() method inside Object will only throw CloneNotSupportedException if the instance on which it is called is not Cloneable.
There are some common misconceptions about the clone method:
clone() is protected inside the Object class, so you cannot call it from outside the class (e.g. child.clone()) unless you override it and widen its access to public.
Cloneable is a marker interface; if your class does not implement it, calling clone() throws CloneNotSupportedException.
If a class contains only primitive fields or references to immutable objects, then it is usually the case that no fields in the object returned by super.clone need to be modified.
By convention, the returned object should be obtained by calling super.clone. If a class and all of its superclasses (except Object) obey this convention, it will be the case that x.clone().getClass() == x.getClass().
So the code below works fine:
public class Child extends UnChangeableBaseClass
implements
Cloneable {
int index = 0;
public Child(int index) {
this.index = index;
}
@Override
public Object clone() throws CloneNotSupportedException {
return super.clone();
}
}
References :
Object#clone()
Cloneable
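A self-contained sketch of the convention the answer describes (UnChangeableBaseClass here is a stand-in for the real third-party class, and copyOf is an invented convenience wrapper):

```java
// Stand-in for the third-party base class we cannot modify.
class UnChangeableBaseClass {
    protected int baseField = 7;
}

// The derived class only needs to be Cloneable; Object.clone() is inherited
// through the base class even though the base never overrides it.
class DerivedLocalClass extends UnChangeableBaseClass implements Cloneable {
    String name;

    DerivedLocalClass(String name) { this.name = name; }

    @Override
    public Object clone() throws CloneNotSupportedException {
        return super.clone();   // field-by-field copy, including baseField
    }

    // Convenience wrapper so callers don't handle the checked exception.
    static DerivedLocalClass copyOf(DerivedLocalClass d) {
        try {
            return (DerivedLocalClass) d.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);   // unreachable: class is Cloneable
        }
    }
}
```

The assertion x.clone().getClass() == x.getClass() from the convention quoted above holds here because super.clone() ultimately reaches Object.clone(), which always creates an instance of the runtime class.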
|
[
"wordpress.stackexchange",
"0000138300.txt"
] | Q:
How can I trust switch_to_blog()?
When I call switch_to_blog() with a blog id, I don’t know whether that blog actually exists. The function returns always TRUE.
Test case:
switch_to_blog( PHP_INT_MAX );
$post = get_post( 1 );
restore_current_blog();
This will result in database errors which are exposed to the user. How can I prevent that?
Real world use case
I was the lead developer of Multilingual Press. When a user translates a post, she gets a screen like this:
Now the following can happen:
She saves the post successfully and continues translating the post.
Another user, a network admin, deletes the German blog while she is writing.
She hits save again and gets database errors.
I want to avoid that scenario. How can I check quickly if the target blog exists? I call switch_to_blog() very often in multiple different classes, so it has to be fast.
A:
@G.M.’s idea to cache the check has led me to the following helper function. I’ve put it into the global namespace to have it available everywhere.
The function doesn’t say anything about the blog status, just if it exists and is not marked as deleted. The database query is very fast (0.0001 seconds) and runs just one query per site id, no matter how often the function is called.
if ( ! function_exists( 'blog_exists' ) ) {
/**
* Checks if a blog exists and is not marked as deleted.
*
* @link http://wordpress.stackexchange.com/q/138300/73
* @param int $blog_id
* @param int $site_id
* @return bool
*/
function blog_exists( $blog_id, $site_id = 0 ) {
global $wpdb;
static $cache = array ();
$site_id = (int) $site_id;
if ( 0 === $site_id )
$site_id = get_current_site()->id;
if ( empty ( $cache ) or empty ( $cache[ $site_id ] ) ) {
if ( wp_is_large_network() ) // we do not test large sites.
return TRUE;
$query = "SELECT `blog_id` FROM $wpdb->blogs
WHERE site_id = $site_id AND deleted = 0";
$result = $wpdb->get_col( $query );
// Make sure the array is always filled with something.
if ( empty ( $result ) )
$cache[ $site_id ] = array ( 'do not check again' );
else
$cache[ $site_id ] = $result;
}
return in_array( $blog_id, $cache[ $site_id ] );
}
}
Usage
if ( ! blog_exists( $blog_id ) )
return new WP_Error( '410', "The blog with the id $blog_id has vanished." );
|
[
"gaming.stackexchange",
"0000027850.txt"
] | Q:
Can I still get my free games from the PSN Welcome Back promotion?
When the PlayStation Network came back online, we could download 2 full games for free.
I downloaded Infamous, then decided to wait to download the second one.
Do you know if I can still download the second game? Where in the menu of PlayStation store can I find it?
Did I have a limited time to download these games?
For information, I remember that one of these games was Little Big Planet.
A:
The Playstation Welcome Back promotion has unfortunately expired. It ended on July 5th, 2011. It was temporarily extended, as its previous end date was July 1st. However, the offer is definitely over now.
|
[
"stackoverflow",
"0051995507.txt"
] | Q:
create hyperlink on a column in excel sheet to open multilayered subfolder
I have folders and sub-folders like this (8 layers) and 500K records in one sheet:
C:\999\236\857\871
C:\999\234\567\874
C:\999\234\567\873
C:\999\234\586\396
C:\999\234\566\458
In Test worksheet Column A has data
236857871
234567874
234567873
234586396
234566458
I wanted to create a macro that adds a hyperlink to the existing data in Column A so that when I click on the data, the respective folder opens. I adapted the macro below from one that was available on Stack Overflow. It creates only one destination...it could not create a link for the respective records. Can I get help?
Sub HyperlinkNums ()
Dim WK As Workbooks
Dim sh As Worksheet
Dim i As Long
Dim lr As Long
Dim Rng As Range, Cell As Range
Set sh = Workbooks("Bigboss.xlsm").Sheets("Test")
lr = sh.Range("A" & sh.Rows.Count).End(xlUp).Row
Set Rng = sh.Range("A5:A" & lr)
sh.range("A5").Activate
For i = 7 To lr
For Each Cell In Rng
If Cell.Value > 1 Then
sh.Hyperlinks.Add Anchor:=Cell, Address:= _
"C:\999\" & Left(ActiveCell, 3) & "\" & _
Mid(ActiveCell, 4, 3) & "\" & Mid(ActiveCell, 7, 3) & "\" & _
Right(ActiveCell, 3), TextToDisplay:=Cell.Value
End If
Next Cell
Next
End Sub
A:
So, the largest issue in your code is that you are always referring to the ActiveCell. You are using a For Each...Next loop, and you should be using the rng object that you are looping.
You also have a redundant loop: For i = 7 To lr. You can get rid of this.
And I am not a big fan of using semi-reserved keywords as variables, so I slightly renamed the cell variable to cel. I think this may be what you are looking for:
Option Explicit
Sub HyperlinkNums()
Dim WK As Workbooks
Dim sh As Worksheet
Dim lr As Long
Dim Rng As Range, Cel As Range
Set sh = Workbooks("Bigboss.xlsm").Sheets("Test")
lr = sh.Range("A" & sh.Rows.Count).End(xlUp).Row
Set Rng = sh.Range("A5:A" & lr)
sh.Range("A5").Activate
For Each Cel In Rng
If Cel.Value > 1 Then
sh.Hyperlinks.Add Cel, "C:\999\" & Left(Cel.Text, 3) & "\" & _
Mid(Cel.Text, 4, 3) & "\" & Right(Cel.Text, 3), _
TextToDisplay:=Cel.Text
End If
Next Cel
End Sub
Also, I was slightly confused about the usage of Mid(ActiveCell, 7, 3), which appeared to have the same meaning as Right(ActiveCell, 3). I removed that portion.
|
[
"stackoverflow",
"0062717512.txt"
] | Q:
How to update hours in data field in mongodb
I have a set of Mongo documents; I need to convert/update the values below to midnight, like ("workedDate" : ISODate("2020-07-01T00:00:00Z")):
"workedDate" : ISODate("2020-07-01T20:03:04Z"),
"workedDate" : ISODate("2020-07-01T19:59:07Z"),
"workedDate" : ISODate("2020-06-30T14:00:00Z"),
"workedDate" : ISODate("2020-07-01T19:49:29Z")
I have tried the below query:
db.timeentrys.update(
{ },
{
$set: {
workedDate:{$dateFromParts:{
year:{$year:"$workedDate"},
month:{$month:"$workedDate"},
day:{$dayOfMonth:"$workedDate"}
}}
}
}
)
I am getting the error below:
WriteResult({
"nMatched" : 0,
"nUpserted" : 0,
"nModified" : 0,
"writeError" : {
"code" : 52,
"errmsg" : "The dollar ($) prefixed field '$dateFromParts' in 'workedDate.$dateFromParts' is not valid for storage."
}
})
A:
$dateFromParts is an aggregation expression. You can only use aggregation expressions in an update if you are using MongoDB 4.2 or newer, and you provide a pipeline array as the second argument to update instead of an object.
Edit
In this use, just wrap the update object in [] to make it an array:
db.timeentrys.update(
{ },
[{$set: {
workedDate:{$dateFromParts:{
year:{$year:"$workedDate"},
month:{$month:"$workedDate"},
day:{$dayOfMonth:"$workedDate"}
}}
}}]
)
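On MongoDB versions before 4.2, where pipeline updates are unavailable, one workaround is to truncate the date in application code and write back a plain Date. A sketch of the truncation itself (the cursor/update loop around it is omitted):

```javascript
// Truncate a Date to midnight UTC, mirroring the $dateFromParts expression.
function truncateToUtcDay(d) {
  return new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate()));
}

// Each document would then be saved back with something like:
//   db.timeentrys.updateOne({ _id: doc._id },
//     { $set: { workedDate: truncateToUtcDay(doc.workedDate) } })
```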
|
[
"stackoverflow",
"0043862055.txt"
] | Q:
How to .update() value to NULL in sequelize
I'm writing my service to update a row using Sequelize for Postgres. When I try out my query using PSequel it works fine:
UPDATE "test_table" SET "test_col"=NULL WHERE "id"= '2'
But using sequelize it throws a 500 error:
db.TestTable.update({ testCol: NULL }, { where: { id: id } })
.then((count) => {
if (count) {
return count;
}
});
My model does allowNull, which I believe is what allows null values both as the default and when set explicitly:
testCol: {
type: DataTypes.INTEGER,
allowNull: true,
defaultValue: null,
field: 'test_col'
},
Any other value but NULL works as expected. Is there a different method for setting null values?
A:
From the looks of it, I think your issue is that you are using SQL's syntax for a null value ('NULL') where you should be using JS syntax ('null').
db.TestTable.update({ testCol: null }, { where: { id: id } })
.then((count) => {
if (count) {
return count;
}
});
should work.
|
[
"stackoverflow",
"0024376176.txt"
] | Q:
PHP MYSQLI returns one row instead of many
I'm getting 1 result from the MySQL database when many results are expected.
Look below and you'll see the mysqli result object has num_rows = 42, yet I only get 1 result when I print_r the fetch_array(MYSQLI_ASSOC).
The debug response:
SELECT * from edi_packets_to_send WHERE `send` = 't' AND `network_id` = '8012'
mysqli_result Object
(
[current_field] => 0
[field_count] => 13
[lengths] =>
[num_rows] => 42
[type] => 0
)
Array
(
[id] => 413
[packet] => 02 07 00 01 ff 14 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
[priority] => 4
[send] => t
[seq] =>
[network_id] => 8012
[times] => 2014-06-23 22:52:28
[termostat_location] => 0007
[packet_type] => 14
[no_of_attemps] => 4
[network_network_id] => 61
[cms_room_id] => 157
[action_code] => 4
)
My Code:
<pre>
<?php
define('DB1_HOSTNAME','localhost');
define('DB1_USERNAME','xxxxxxx');
define('DB1_PASSWORD','xxxxxxx');
define('DB1_DATABASE','xxxxxxx');
define('DB1_PORT','3306');
/* DB connection */
$db = mysqli_connect(DB1_HOSTNAME, DB1_USERNAME, DB1_PASSWORD, DB1_DATABASE);
if (mysqli_connect_errno($db)) {throw new exception("Failed to connect to MySQL: " . mysqli_connect_error());}
$sql = "SELECT * from edi_packets_to_send WHERE `send` = 't' AND `network_id` = '8012' ";
echo "<br>".($sql)."<br><br>";
if(!$results = $db->query($sql)){throw new Exception("SQL Failed ".__file__." on line ".__line__.":\n".$sql."\n".mysqli_error($db));}
print_r($results);
if ($results->num_rows > 0){
$array = $results->fetch_array(MYSQLI_ASSOC);
print_r($array);
}
Database table:
DROP TABLE IF EXISTS `edi_packets_to_send`;
CREATE TABLE `edi_packets_to_send` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`packet` varchar(249) DEFAULT NULL,
`priority` int(11) DEFAULT NULL,
`send` varchar(15) DEFAULT NULL,
`seq` varchar(6) DEFAULT NULL,
`network_id` int(11) DEFAULT NULL,
`times` datetime DEFAULT NULL,
`termostat_location` varchar(12) DEFAULT NULL,
`packet_type` varchar(6) DEFAULT NULL,
`no_of_attemps` int(11) DEFAULT NULL,
`network_network_id` int(11) DEFAULT NULL,
`cms_room_id` int(11) DEFAULT NULL,
`action_code` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `id` (`id`),
KEY `priority` (`priority`),
KEY `cms_room_id` (`cms_room_id`),
KEY `network_id` (`network_id`),
KEY `network_network_id` (`network_network_id`),
KEY `send` (`send`),
KEY `action_code` (`action_code`)
) ENGINE=InnoDB AUTO_INCREMENT=413 DEFAULT CHARSET=utf8;
INSERT INTO `edi_packets_to_send` VALUES ('413', '02 07 00 01 ff 14 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff', '4', 't', null, '8012', '2014-06-23 22:52:28', '0007', '14', '4', '61', '157', '4');
INSERT INTO `edi_packets_to_send` VALUES ('414', '02 07 00 01 ff 1e ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff', '5', 't', null, '8012', '2014-06-23 22:10:25', '0007', '1e', '1', '61', '157', '5');
INSERT INTO `edi_packets_to_send` VALUES ('415', '02 07 00 01 ff 1f ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff', '111', 't', null, '8012', '2014-06-23 22:05:30', '0007', '1f', '1', '61', '157', '111');
INSERT INTO `edi_packets_to_send` VALUES ('416', '02 07 00 01 ff 32 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff', '18', 't', null, '8012', '2014-06-23 22:09:19', '0007', '32', '1', '61', '157', '18');
INSERT INTO `edi_packets_to_send` VALUES ('417', '02 07 00 01 ff 0a ff ff 00 ff ff ff ff ff 05 14 05 1e 3c 1e 00 01 00 3e 0a 37 ff ff', '20', 't', null, '8012', '2014-06-23 22:07:57', '0007', '0a', '1', '61', '157', '20');
INSERT INTO `edi_packets_to_send` VALUES ('418', '02 07 00 01 ff 0b 0a 52 4c 40 4d 01 01 ff ff ff ff ff ff ff ff ff ff ff ff ff ff', '21', 't', null, '8012', '2014-06-23 22:07:24', '0007', '0b', '1', '61', '157', '21');
INSERT INTO `edi_packets_to_send` VALUES ('419', '02 07 00 01 ff 32 ff 00 00 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff', '90', 't', null, '8012', '2014-06-23 22:06:11', '0007', '32', '1', '61', '157', '90');
INSERT INTO `edi_packets_to_send` VALUES ('420', '02 08 00 01 ff 14 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff', '4', 't', null, '8012', '2014-06-23 22:11:14', '0008', '14', '1', '61', '158', '4');
INSERT INTO `edi_packets_to_send` VALUES ('421', '02 08 00 01 ff 1e ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff', '5', 't', null, '8012', '2014-06-23 22:10:08', '0008', '1e', '1', '61', '158', '5');
INSERT INTO `edi_packets_to_send` VALUES ('422', '02 08 00 01 ff 1f ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff', '111', 't', null, '8012', '2014-06-23 22:05:13', '0008', '1f', '1', '61', '158', '111');
INSERT INTO `edi_packets_to_send` VALUES ('423', '02 08 00 01 ff 32 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff', '18', 't', null, '8012', '2014-06-23 22:09:03', '0008', '32', '1', '61', '158', '18');
Any idea why this particular query isn't working correctly for me ?
PS: I'm running it on localhost Windows IIS 7.5 PHP Version 5.3.28 (dev)
A:
You need to do a loop; fetch_array is designed to grab one row, return it in whatever format you requested, and advance the internal row pointer so that the next call gets the next row.
Try this instead:
if ($results->num_rows > 0){
for($rowid=0; $rowid < $results->num_rows; $rowid++) {
$array=$results->fetch_array(MYSQLI_ASSOC);
print_r($array); //print the current row array
}
}
|
[
"stackoverflow",
"0056796978.txt"
] | Q:
Nested iif() Statement is "Too Complex" to run within MS Access
I am currently updating a database that I created for work to classify transactions into a transaction type. This requires me to use an iif() statement that has become too complex to run. Before we get too far along, I want to apologize for the lengthy description, but I want to make sure I provide enough information.
To set the stage
Transactions (RefID's) can be one of the following:
3PL
4PL
Air Freight
Customs Only
One of the complexities of this task involves the fact that a Charge Code ("CC"), similar to an item number or service name, can be 3PL or 4PL depending on the circumstances of the transaction. For example, if the CC of Ocean_Freight exists on a RefID that also has a CC of PO_Management, the transaction is a 3PL transaction. However, if the CC of PO_Management exists without Ocean_Freight on the RefID, this would be a 4PL Transaction.
I have the following CC's which can be used to define a transaction:
CC Descriptions
3PL Only
Ocean_Freight
this CC will define the transaction unless there is a CC from the "3PL or 4PL Depending on Situation" section below
Drayage Management
this CC will define the transaction unless there is a CC from the "3PL or 4PL Depending on Situation" section below
Air Freight Only
Air_Freight
3PL or 4PL Depending on Situation
PO_Management
3PL when CC exists on a RefID with Ocean_Freight or Drayage Management
4PL when CC exists on a RefID without the aforementioned CC's
CROM Fee
3PL when CC exists on a RefID with Ocean_Freight or Drayage Management
4PL when CC exists on a RefID without Ocean_Freight, Drayage Management, or PO_Management
EDI
3PL when CC exists on a RefID with Ocean_Freight or Drayage Management
4PL when CC exists on a RefID without Ocean_Freight, Drayage Management, or PO_Management
Booking Management Fee
3PL when CC exists on a RefID with Ocean_Freight or Drayage Management
4PL when CC exists on a RefID without Ocean_Freight, Drayage Management, PO_Management, or EDI
Forwarding Fee
3PL when CC exists on a RefID with Ocean_Freight or Drayage Management
4PL when CC exists on a RefID without Ocean_Freight, Drayage Management, PO_Management, EDI, or Booking Management Fee
Handling Charge
3PL when CC exists on a RefID with Ocean_Freight or Drayage Management
4PL when CC exists on a RefID without Ocean_Freight, Drayage Management, PO_Management, EDI, Booking Management Fee, or Forwarding Fee
Customs Only
As a note, each of the preceding CC's can be considered what I classify as a Transaction Defining Charge Code (TDCC). In the absence of one of these CC's, and in the presence of the Customs Entry CC, the transaction is defined as a "Customs Only" transaction.
A Sample Transaction:
What I have done to this point
I previously accomplished this within Access using a nested iif() statement, but in some cases I was pulling duplicate records because I wasn't isolating each of the CC's. For example, if PO_Management and Handling Charge existed on the same transaction, both would get ascribed a value of "4PL", when in reality, I only want one to define the transaction. This is what sent me down this path of repairing the code.
The query that drives most of this is called "Step 2)" and it does a Sum(IIf(criteria here, 1, 0)) based on whether or not a CC exists on a RefID. It returns a value > 0 if a CC exists on a RefID, which allows me to reference this query to determine how I should define a RefID.
To further refine my original methodology, I made another query called "Steps." This query is where I apply the logic from the CC descriptions section above.
I have tried using a nested iif() statement and also tried using the Switch() function, but both get to the same point, "The expression you entered is too complex." I have done some research and I believe the answer is a Private Function using VBA, but I have had no luck understanding how to create the functions. Does anyone have a better way of attacking this problem? Please find a sample of my latest attempt at a switch() function which kicks out the error below:
Transaction Type:
Switch(
[Steps]![OF] > 0 And [Steps]![CC] = "Ocean Freight","3PL",
[Steps]![AF] > 0 And [Steps]![CC] = "Air_Freight","Air Freight",
[Steps]![Dray] > 0 And [Steps]![CC] = "Drayage Management","3PL",
[Steps]![PO 4PL] > 0 And [Steps]![CC] = "PO_Management","4PL",
[Steps]![PO 3PL] > 0 And [Steps]![CC] = "PO_Management","3PL",
[Steps]![CROM 4PL] > 0 And [Steps]![CC] = "CROM Fee","4PL",
[Steps]![CROM 3PL] > 0 And [Steps]![CC] = "CROM Fee","3PL",
[Steps]![EDI 4PL] > 0 And [Steps]![CC] = "EDI","4PL",
[Steps]![EDI 3PL] > 0 And [Steps]![CC] = "EDI","3PL",
[Steps]![BMF 4PL] > 0 And [Steps]![CC] = "Booking Management Fee","4PL",
[Steps]![BMF 3PL] > 0 And [Steps]![CC] = "Booking Management Fee","3PL",
[Steps]![FF 4PL] > 0 And [Steps]![CC] = "Forwarding Fee","4PL",
[Steps]![FF 3PL] > 0 And [Steps]![CC] = "Forwarding Fee","3PL",
[Steps]![Handling 4PL] > 0 And [Steps]![CC] = "Handling Charge","4PL",
[Steps]![Handling 3PL] > 0 And [Steps]![CC] = "Handling Charge","3PL"
)
What Needs to Happen?
Ultimately, I want to reference the "Steps" Query to drive a Field in my output query called "transaction type." This is, of course, where things go sideways for me because I cannot get enough nests within my iif() statement. This suggests to me that I am going about this all wrong and a far simpler solution exists.
A:
You have a relatively straightforward 1-to-1 mapping situation. An efficient and flexible way to tackle this would be to create a mapping table that encapsulates your rules:
OF  AF  Dray  [PO 4PL]  [PO 3PL]  [CROM 4PL]  CC                    RefId
1                                             "Ocean Freight"       "3PL"
    1                                         "Air_Freight"         "Air Freight"
        1                                     "Drayage Management"  "3PL"
              1                               "PO_Management"       "4PL"
                        1                     "PO_Management"       "3PL"
                                  1           "CROM Fee"            "4PL"
Add more columns to the table for the other fields you want to check.
Now a SELECT (or similar UPDATE statement) can be written that picks the RefId based on the rules in the table (warning, this is pseudocode, I don't have MS Access to test this right now):
SELECT
    t.*,
    m.RefId
FROM
Transactions t
LEFT JOIN TransactionMappings m ON
t.CC = m.CC
AND (
(t.OF > 0 AND m.OF = 1) OR
(t.AF > 0 AND m.AF = 1) OR
(t.Dray > 0 AND m.Dray = 1) OR
(t.[PO 4PL] > 0 AND m.[PO 4PL] = 1) OR
(t.[PO 3PL] > 0 AND m.[PO 3PL] = 1)
)
Advantages would be
comparatively clean code
you can modify mapping rules without having to rewrite the SQL
a JOIN is likely to be faster than a nested/complex Switch(), although this would need to be measured
making this more complex is comparatively easy (things like "add a numeric range to check against", or "make an exception in certain cases" come down to adding more columns to the mapping table and specifying more JOIN conditions), making the nested Switch() more complex in the same way is comparatively hard.
A:
The Too complex error occurs when you have too many arguments for a function. An easy fix is to split up the switch:
You can easily split up Switch(Compare1, Result1, Compare2, Result2, Compare3, Result3, Compare4, Result4) into Switch(Compare1, Result1, Compare2, Result2, True, Switch(Compare3, Result3, Compare4, Result4)). While we've actually increased overall complexity, each individual Switch() takes fewer arguments, so Access will be less likely to complain.
For your example, splitting it in two would look like:
Switch([Steps]![OF]>0 And [Steps]![CC]="Ocean Freight","3PL",
[Steps]![AF]>0 And [Steps]![CC]="Air_Freight","Air Freight",
[Steps]![Dray]>0 And [Steps]![CC]="Drayage Management","3PL",
[Steps]![PO 4PL]>0 And [Steps]![CC]="PO_Management","4PL",
[Steps]![PO 3PL]>0 And [Steps]![CC]="PO_Management","3PL",
[Steps]![CROM 4PL]>0 And [Steps]![CC]="CROM Fee","4PL",
[Steps]![CROM 3PL]>0 And [Steps]![CC]="CROM Fee","3PL",
[Steps]![EDI 4PL]>0 And [Steps]![CC]="EDI","4PL",
True, Switch(
[Steps]![EDI 3PL]>0 And [Steps]![CC]="EDI","3PL",
[Steps]![BMF 4PL]>0 And [Steps]![CC]="Booking Management Fee","4PL",
[Steps]![BMF 3PL]>0 And [Steps]![CC]="Booking Management Fee","3PL",
[Steps]![FF 4PL]>0 And [Steps]![CC]="Forwarding Fee","4PL",
[Steps]![FF 3PL]>0 And [Steps]![CC]="Forwarding Fee","3PL",
[Steps]![Handling 4PL] >0 and [Steps]![CC]="Handling Charge","4PL",
[Steps]![Handling 3PL] >0 and [Steps]![CC]="Handling Charge","3PL"))
That's still a fair number of arguments, so you might need to split it into 3 parts.
|
[
"superuser",
"0001394999.txt"
] | Q:
How do I run Java applets?
Is there a way to run a Java applet on Chrome or Firefox? I get the error message on the Java test page that Java won't run on Chrome or Firefox anymore because of the non-supported NPAPI.
I have an old set of *.class files with an .html to run it, and I just want to be able to run this applet somehow. But how?
A:
Is there a way to run a Java applet on Chrome or Firefox?
No. Applets are no longer supported in Firefox or Chrome.
Firefox no longer provides NPAPI support (technology required for Java applets)
As of September, 2018, Firefox no longer offers a version which
supports NPAPI, the technology required to run Java applets. The Java
Plugin for web browsers relies on the cross-platform plugin
architecture NPAPI, which had been supported by all major web browsers
for over a decade. The 64 bit version of Firefox has never supported
NPAPI, and Firefox version 52ESR is the last release to support the
technology. It is below the security baseline, and no longer
supported.
Source Java and Firefox Browser
Chrome no longer supports NPAPI (technology required for Java applets)
The Java Plugin for web browsers relies on the cross-platform plugin architecture NPAPI, which had been supported by all major web browsers for over a decade. Google's Chrome version 45 and above have dropped support for NPAPI, and therefore Java Plugin do not work on these browsers anymore.
Source Java and Google Chrome Browser
So how do I run Java applets?
Use the AppletViewer, from a JDK before Java SE 11.
The appletviewer command allows you to run applets outside of a web
browser.
SYNOPSIS
appletviewer [ options ] urls ...
DESCRIPTION
The appletviewer command connects to the documents or resources
designated by urls and displays each applet referenced by the
documents in its own window. Note: if the documents referred to by
urls do not reference any applets with the OBJECT, EMBED, or APPLET
tag, then appletviewer does nothing. For details on the HTML tags that
appletviewer supports, see AppletViewer Tags.
Note: The appletviewer is intended for development purposes only.
Source appletviewer - The Java Applet Viewer
Alternatively read the Oracle White Paper (pdf) Migrating from Java Applets to plugin free Java technologies, which recommends Java Web Start:
Java Web Start has been included in the Oracle JRE since 2001 and is
launched automatically when a Java application using Java Web Start
technology is downloaded for the first time. The conversion of an
applet to a Java Web Start application provides the ability to launch
and update the resulting application without relying on a web browser
See What is Java Web Start and how is it launched? for more information.
Note that both Java Applets and Java Web Start were removed completely in
Java SE 11 (release September 2018). From that version on there is no (supported) way to run Applets or Web Start applications.
A:
If you already have the files on your machine, you can try the appletviewer that (used to? still does?) ships with the JDK (Java Development Kit).
|
[
"stackoverflow",
"0017316832.txt"
] | Q:
Trouble with a Conditional Operator and IF...ELSE Statement
Trying to get a default string of text to show after all subsections are closed using the following Conditional Operator inside an onclick event. I have 3 other Conditional Operators in the event and they all work fine. This is the only one with a Logical Operator in it. Everything else after this line is never rendered.
document.getElementById('itemMain1').className=(document.getElementById('subItem1A').className=='hidden' && document.getElementById('subItem1B').className=='hidden')?'block':'hidden'
I have also tried it with an IF Statement.
if (document.getElementById('subItem1A1').className=='hidden'
&& document.getElementById('subItem1B').className=='hidden')
{
document.getElementById('itemMain1').className='block';
}
else
{
document.getElementById('itemMain1').className='hidden';
}
Would love to know what I am doing wrong here.
A:
Turns out to be really easy: it's just a slight variation on the code used to make the sub-heading text visible, as follows.
document.getElementById('itemMain1').className=(document.getElementById('subItem1A').className!='block')?'block':'hidden'
This goes in the onclick for subItem1A and again for each subItem1, changing the subItem1 reference.
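The underlying visibility rule can also be checked without a browser. Below is a plain-JavaScript sketch of the same logic, using objects with a className property as stand-ins for the DOM elements (all names here are illustrative):

```javascript
// The main text is visible only when both subsections are hidden.
function mainClassFor(subA, subB) {
  return (subA.className === 'hidden' && subB.className === 'hidden')
    ? 'block'
    : 'hidden';
}

// Stand-ins for document.getElementById('subItem1A') and 'subItem1B'.
const subItem1A = { className: 'hidden' };
const subItem1B = { className: 'hidden' };
console.log(mainClassFor(subItem1A, subItem1B)); // 'block'

subItem1A.className = 'block'; // one subsection is opened
console.log(mainClassFor(subItem1A, subItem1B)); // 'hidden'
```

Basing the decision on both subsections, as the original if/else does, avoids depending on which sub item was clicked last.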
|
[
"stackoverflow",
"0014006263.txt"
] | Q:
Sonatype nexus admin login
I'm having a problem which I can't solve.
I bought a cheap VPS with Ubuntu 12.10, then installed Tomcat 7, Maven, and Nexus, all the latest versions. This is a fresh install of everything. I started and deployed Nexus with no errors in the Catalina log and none in the Nexus log, but when I tried to log in with admin/admin123, it failed.
I'll show you any of my log files that you need; please help me with this.
EDIT: nexus is 2.2-01
EDIT2: this is a cheap server with 512 ram, running without X
My security-configuration.xml is this:
<?xml version="1.0"?>
<security-configuration>
<version>2.0.3</version>
<enabled>true</enabled><!-- was true -->
<anonymousAccessEnabled>true</anonymousAccessEnabled>
<anonymousUsername>anonymous</anonymousUsername>
<anonymousPassword>{1FH7iFzhCukHI3ISkjq+AuQZb+bOMrB70bGqF2y6fNE=}</anonymousPassword>
<realms>
<realm>XmlAuthenticatingRealm</realm>
<realm>XmlAuthorizingRealm</realm>
</realms>
<securityManager>default</securityManager>
</security-configuration>
A:
Do the following:
stop Nexus
change the enabled in the xml to false
restart Nexus
log in as admin that way, without any credentials
reset the admin password
enable the security again in the user interface
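On versions where disabling security still works, step 2 can be scripted. A sketch (it operates on a stand-in file here; for real use, stop Nexus first and point CONF at your actual sonatype-work/nexus/conf/security-configuration.xml):

```shell
CONF=security-configuration.xml
printf '<enabled>true</enabled>\n' > "$CONF"   # stand-in for the real file
# Flip the security flag from true to false.
sed -i 's|<enabled>true</enabled>|<enabled>false</enabled>|' "$CONF"
cat "$CONF"
```

Remember to flip it back (or re-enable security in the UI) once the admin password is reset.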
Oh, and by the way: I would suggest running Nexus on a VPS from the bundle with Jetty, not with the WAR file in Tomcat, so you get more performance out of it.
Update: Security can no longer be disabled in Nexus 2.7 and up. You have to insert an admin user into the xml as documented in this support page.
|
[
"cs.stackexchange",
"0000045478.txt"
] | Q:
Multi object image segmentation methods
I want to survey the state of the art in image segmentation. Is there a paper on how to segment multiple objects in an image? The segmentation papers I read normally identify only one of the many objects in the image. This object is typically in the center of the frame, or the most foreground-like compared to the other objects in the image. I'm looking for a way to identify all the different objects in an image simultaneously. Is there any recent work on this?
A:
I think that you are interested in MFC - Multiple Foreground Cosegmentation.
MFC article
Awesome material:
Articulated Motion and Deformable Objects : 5th International Conference, AMDO 2008, Port d'Andratx, Mallorca, Spain, July 9-11, 2008, Proceedings
Fair warning: these are not all out-of-the-box working solutions, but with small changes all of them can do segmentation. If the first link is what you expected, treat the links below as a starting point to work your own way.
Any "one object only in the foreground" method can seed an iterative scheme: find the first object, cut it out, fill it with a background-ish color (e.g. a gray darker than the mean luminance of the image), then cut the next.
When classifiers are used or patterns are known it is easier, but when the background is complex and you just want regions there are old-school ways: Otsu thresholding, modified K-Means clustering, region growing, edge detection, or decisions based on colour or shadows.
In the end you will have a set of components, and it is time to decide which one is the background (darker? connected to all edges of the image? bigger? Each of those heuristics can fail).
Image segmentation is based on some kind of grouping of self-similar regions, cutting them at edges, even following perspective lines if possible.
Foreground/background separation most of the time assumes that the foreground is smaller and better exposed. Perspective reconstruction is very rare, and from a single image it is virtually impossible (for street views we can detect lines and follow them to the horizon, but other scene types fail).
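As a concrete illustration of the old-school route, Otsu's threshold takes only a few lines of NumPy. This is a sketch run on a synthetic bimodal "image" rather than a real photo:

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level that maximizes between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                 # gray-level probabilities
    omega = np.cumsum(p)                  # P(class 0) per candidate threshold
    mu = np.cumsum(p * np.arange(256))    # cumulative mean
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0  # ends of the range are undefined
    return int(np.argmax(sigma_b))

# Synthetic bimodal image: dark background near 60, bright foreground near 180.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 10, 5000),
                         rng.normal(180, 10, 5000)])
img = pixels.clip(0, 255).astype(np.uint8)
print(otsu_threshold(img))  # a level between the two modes
```

Thresholding at this level splits foreground from background, and the iterative cut-recolor-cut scheme above can then be driven by the connected components of the thresholded mask.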
Below are additional links. If nothing helped write me a comment and describe more details.
Examples are connected in this paper but it works for separate objects: automatic foreground extraction
Foreground prediction
Must read:
Otsu image segmentation
Region-based
Graph based
When you expect some objects:
Blobs
Sequence of images:
Kalman filter
Background/Foreground
|
[
"stackoverflow",
"0021581590.txt"
] | Q:
Using scanf to read a file in C
I'm in my first-ever class about C, and I'm having some trouble with a program we're writing. Here's the program:
#include <stdio.h> //printf
#include <stdlib.h>
//Prototypes
void readScores(int* scores, int* actualCount);
void displayScores(int* scores, int* actualCount);
int main() {
int arrayCount = 100;
int scores[arrayCount];
int actualCount = 0;
readScores(scores, &actualCount);
displayScores(scores, &actualCount);
}
void readScores(int* scores, int* actualCount) {
for(int i = 0; i <= 10; i++) {
*actualCount = *actualCount + 1;
scanf("%d", scores);
}
}
void displayScores(int* scores, int* actualCount) {
for(int i = 1; i < *actualCount; i++) {
printf("score %d: \t%d\n", i, *scores);
}
}
My goal is to simply read scores from the scores.txt text file and add them to the scores[] array. Our goal in this assignment is to learn more about pointers, so don't bother trying to make my code more "efficient" or anything.
My problem is that when I compile the code with
gcc histogram.c -std=c99 -o histogram
and run it with
./histogram <scores.txt
my output looks like this:
score 1: 7
score 2: 7
score 3: 7
score 4: 7
score 5: 7
score 6: 7
score 7: 7
score 8: 7
score 9: 7
score 10: 7
How can I modify my code so that it reads the text file correctly? This problem has been bothering me all week and a solution would be much appreciated!
Thanks!
A:
Since scores is an array, you can use more than the first element:
void readScores(int* scores, int* actualCount) {
for(int i = 0; i <= 10; i++) {
*actualCount = *actualCount + 1;
scanf("%d", scores + i); // Read into element i of array scores
}
}
void displayScores(int* scores, int* actualCount) {
for(int i = 1; i < *actualCount; i++) {
printf("score %d: \t%d\n", i, *(scores + i)); // Print element i of array scores
}
}
|
[
"electronics.stackexchange",
"0000455830.txt"
] | Q:
Unkown DIP IC (OP277PA)
I stumbled upon a most uncommon thing: an 8-pin DIP IC with the part number OP277PA.
However, the datasheet is nowhere to be found and I've done my research.
Is it possible that it is an OPA277P, or is it its own thing?
Does anybody have a clever way to test if it is a low noise precision opamp?
A:
The OP277PA is an earlier version of the OPA277 precision opamp (Datasheet)
If you scroll down to the ordering information, you will find that this is the orderable part number for the device in a plastic DIP package.
The final A refers to the die revision.
|
[
"stackoverflow",
"0012538562.txt"
] | Q:
Removing trailing backslash in a batch file
I am writing a short batch script which attempts to take the first line of a text file and remove the trailing backslash if present, then repeat for the remaining lines of the input file. However, when I run the script, the backslash is not removed. My guess is this is a simple fix, but I have tried several troubleshooting methods with no luck. The code is posted below for reference. Thanks!
@echo on
setLocal EnableDelayedExpansion
::set firstline to firstline of test.txt
set /p firstline=<test.txt
::Remove trailing slash if present
IF !firstline:~-1!==\ SET firstline=!firstline:~0,-1!
::Output firstline (without \ to new txt file)
echo !firstline!>test2.txt
endlocal
test.txt file:
C:\Desktop\example\path\
C:\Desktop\example\path\2\
C:\Desktop\example\path\test\
A:
You are correct that it's a simple problem. In fact it is solved with the introduction of 4 characters, all the same one.
Exposing a backslash in batch is a bad idea; it's likely to confuse the interpreter into thinking that it's dealing with an actual file or path. You should surround it with double quotes. The same is true of other characters, including !%&*?/\<>,. (mostly special characters that deal with paths and files). There may be more, and some are only occasionally a problem. Even 1 and 2 can cause trouble when next to a redirection character.
Just to let you know, neither setlocal nor the delayed-expansion notation ! is necessary here, although there is no harm in it. (Except that the delayed notation makes debugging harder, as it did in this case, by showing the variable name rather than its contents when ECHO is On.)
Also, when trimming the last character of a string you don't need 0 in :~0,-1, using :~,-1 is sufficient. Though there is no harm in doing it either way.
And lastly, the echo on is unnecessary, as echo on is the default. Echo On is more for just displaying small parts of the execution of a long file. Though, once again, there is no harm in it.
@echo on
setLocal EnableDelayedExpansion
::set firstline to firstline of test.txt
set /p firstline=<test.txt
::Remove trailing slash if present
::Your error is below this
IF "!firstline:~-1!"=="\" SET firstline=!firstline:~,-1!
::Output firstline (without \ to new txt file)
echo !firstline!>test2.txt
endlocal
|
[
"stackoverflow",
"0056086830.txt"
] | Q:
How to convert "Flutter for Web" into "Flutter for Mobile"?
I found how to migrate "Flutter for mobile" to "Flutter for web".
https://github.com/flutter/flutter_web/blob/master/docs/migration_guide.md
but, I need the opposite way.
I tried just "flutter run", and of course it doesn't run well.
I don't understand what to replace where.
name: my_app
version: 1.0.0
dependencies:
## REPLACE
## Update your dependencies to use `flutter_web`
#flutter:
# sdk: flutter
flutter_web: any
dev_dependencies:
## REPLACE
## Same goes for test packages
#flutter_test:
# sdk: flutter
flutter_web_test: any
## ADD
## Add these dependencies to enable the Dart web build system
build_runner: ^1.2.2
build_web_compilers: ^1.1.0
test: ^1.3.4
## REMOVE
## For the preview, assets are handled differently. Remove or comment
## out this section. See `Assets` below for more details
# flutter:
# uses-material-design: true
# assets:
# - asset/
#
# fonts:
# - family: Plaster
# fonts:
# - asset: asset/fonts/plaster/Plaster-Regular.ttf
## ADD
## flutter_web packages are not published to pub.dartlang.org
## These overrides tell the package tools to get them from GitHub
dependency_overrides:
flutter_web:
git:
url: https://github.com/flutter/flutter_web
path: packages/flutter_web
flutter_web_ui:
git:
url: https://github.com/flutter/flutter_web
path: packages/flutter_web_ui
I hope there is a way to migrate, even if that's complicated.
What I am trying to migrate is below.
https://github.com/flutter/flutter_web/tree/master/examples/gallery
A:
Yes, I think you can do it.
The first thing you should do is update your pubspec.yaml file to something like this:
dependencies:
flutter:
sdk: flutter
dev_dependencies:
flutter_test:
sdk: flutter
flutter:
uses-material-design: true
After running flutter packages get, errors will come from everywhere, so you have to update every import in each file under the lib directory.
It will be something like this:
change import 'package:flutter_web/material.dart' to import 'package:flutter/material.dart';
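The per-file import rewrite can be done in bulk with sed. A sketch, demonstrated on a sample file (run it from the project root and back up first; the trailing slash in the pattern leaves package:flutter_web_ui imports untouched, since those map to dart:ui and need separate handling):

```shell
# Sample file just for the demo; a real project already has lib/.
mkdir -p lib
printf "import 'package:flutter_web/material.dart';\n" > lib/main.dart
# Rewrite every flutter_web import to plain flutter.
find lib -name '*.dart' -exec sed -i 's|package:flutter_web/|package:flutter/|g' {} +
cat lib/main.dart
```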
After that, you will have to re-create the project in order to get the android and ios project directories.
To do that, open a terminal, navigate to the root directory of your project, and run the command flutter create .
And finally, just run your project.
|
[
"stackoverflow",
"0003021333.txt"
] | Q:
Can I use memcpy in C++ to copy classes that have no pointers or virtual functions
Say I have a class, something like the following;
class MyClass
{
public:
MyClass();
int a,b,c;
double x,y,z;
};
#define PageSize 1000000
MyClass Array1[PageSize],Array2[PageSize];
If my class has no pointers or virtual methods, is it safe to use the following?
memcpy(Array1,Array2,PageSize*sizeof(MyClass));
The reason I ask is that I'm dealing with very large collections of paged data, as described here, where performance is critical, and memcpy offers significant performance advantages over iterative assignment. I suspect it should be OK, as the 'this' pointer is an implicit parameter rather than anything stored, but are there any other hidden nasties I should be aware of?
Edit:
As per sharptooths comments, the data does not include any handles or similar reference information.
As per Paul R's comment, I've profiled the code, and avoiding the copy constructor is about 4.5 times faster in this case. Part of the reason here is that my templated array class is somewhat more complex than the simplistic example given, and calls a placement 'new' when allocating memory for types that don't allow shallow copying. This effectively means that the default constructor is called as well as the copy constructor.
Second edit
It is perhaps worth pointing out that I fully accept that use of memcpy in this way is bad practice and should be avoided in general cases. The specific case in which it is being used is as part of a high-performance templated array class, which includes a parameter 'AllowShallowCopying' that will invoke memcpy rather than a copy constructor. This has big performance implications for operations such as removing an element near the start of an array, and for paging data in and out of secondary storage. The better theoretical solution would be to convert the class to a simple structure, but given that this would involve a lot of refactoring of a large code base, it is not something I'm keen to do.
A:
According to the Standard, if no copy constructor is provided by the programmer for a class, the compiler will synthesize a constructor which exhibits default memberwise initialization. (12.8.8) However, in 12.8.1, the Standard also says,
A class object can be copied in two
ways, by initialization (12.1, 8.5),
including for function argument
passing (5.2.2) and for function value
return (6.6.3), and by assignment
(5.17). Conceptually, these two
operations are implemented by a copy
constructor (12.1) and copy assignment
operator (13.5.3).
The operative word here is "conceptually," which, according to Lippman, gives compiler designers an 'out' from actually doing memberwise initialization in "trivial" (12.8.6) implicitly defined copy constructors.
In practice, then, compilers have to synthesize copy constructors for these classes that exhibit behavior as if they were doing memberwise initialization. But if the class exhibits "Bitwise Copy Semantics" (Lippman, p. 43) then the compiler does not have to synthesize a copy constructor (which would result in a function call, possibly inlined) and do bitwise copy instead. This claim is apparently backed up in the ARM, but I haven't looked this up yet.
Using a compiler to validate that something is Standard-compliant is always a bad idea, but compiling your code and viewing the resulting assembly seems to verify that the compiler is not doing memberwise initialization in a synthesized copy constructor, but doing a memcpy instead:
#include <cstdlib>
class MyClass
{
public:
MyClass(){};
int a,b,c;
double x,y,z;
};
int main()
{
MyClass c;
MyClass d = c;
return 0;
}
The assembly generated for MyClass d = c; is:
000000013F441048 lea rdi,[d]
000000013F44104D lea rsi,[c]
000000013F441052 mov ecx,28h
000000013F441057 rep movs byte ptr [rdi],byte ptr [rsi]
...where 28h is the sizeof(MyClass).
This was compiled under MSVC9 in Debug mode.
EDIT:
The long and the short of this post is that:
1) So long as doing a bitwise copy will exhibit the same side effects as memberwise copy would, the Standard allows trivial implicit copy constructors to do a memcpy instead of memberwise copies.
2) Some compilers actually do memcpys instead of synthesizing a trivial copy constructor which does memberwise copies.
A:
Let me give you an empirical answer: in our realtime app, we do this all the time, and it works just fine. This is the case in MSVC for Wintel and PowerPC and GCC for Linux and Mac, even for classes that have constructors.
I can't quote chapter and verse of the C++ standard for this, just experimental evidence.
A:
You could. But first ask yourself:
Why not just use the copy-constructor that is provided by your compiler to do a member-wise copy?
Are you having specific performance problems for which you need to optimise?
The current implementation contains all POD-types: what happens when somebody changes it?
|
[
"stackoverflow",
"0038265035.txt"
] | Q:
How to change tooltip background color?
I have created a tooltip in my html file like this:
<span title="" class="duedate-warning-msg"></span>
And I set the CSS property like this:
.duedate-warning-msg{
background: url("/static/img/error.png") no-repeat scroll 14px 12px white;
border-color: #f5c7c7;
padding: .5em 0.25em 0.5em 2.5em;
title:"my tooltip";
color: #D11006;
}
span:hover{
position:relative;
}
span[title]:hover:after {
content: "This statement is past due.";
padding: 4px 8px;
color: white;
font-size:16px;
font-family: "open_sansregular";
background: #0679ca;
position: absolute;
left: 0;
top: 100%;
white-space: nowrap;
z-index: 20px;
-moz-border-radius: 5px;
-webkit-border-radius: 5px;
border-radius: 5px;
-moz-box-shadow: 0px 0px 4px #222;
-webkit-box-shadow: 0px 0px 4px #222;
box-shadow: 0px 0px 4px #222;
background-image: -moz-linear-gradient(top, #eeeeee, #cccccc);
background-image: -webkit-gradient(linear,left top,left bottom,color-stop(0, #eeeeee),color-stop(1, #cccccc));
background-image: -webkit-linear-gradient(top, #eeeeee, #cccccc);
background-image: -moz-linear-gradient(top, #eeeeee, #cccccc);
background-image: -ms-linear-gradient(top, #eeeeee, #cccccc);
background-image: -o-linear-gradient(top, #eeeeee, #cccccc);
}
The above style creates the tooltip perfectly. The only problem I am facing is that I want the tooltip's background color to be blue, but it always shows gray. Why is this happening?
A:
Remove the background-image and add background, like so:
span[title]:hover:after {
content: "This statement is past due.";
padding: 4px 8px;
color: white;
font-size:16px;
font-family: "open_sansregular";
background: #0679ca;
position: absolute;
left: 0;
top: 100%;
white-space: nowrap;
z-index: 20;
-moz-border-radius: 5px;
-webkit-border-radius: 5px;
border-radius: 5px;
-moz-box-shadow: 0px 0px 4px #222;
-webkit-box-shadow: 0px 0px 4px #222;
box-shadow: 0px 0px 4px #222;
background: #00F;
}
|
[
"stackoverflow",
"0017304246.txt"
] | Q:
Fortran 4.7.2/4.8.1 error: There is no specific subroutine for the generic 'vode' at (1)
I'm trying to compile a Fortran program which my advisor gave me.
It doesn't compile with gfortran 4.7.2 on Mac OS X 10.8.4, nor with gfortran 4.8.1 on Arch Linux x64.
I've built a minimal working example which reproduces the error. Unfortunately, it's still quite big, so I've put it on GitHub: https://github.com/kabanovdmitry/vode-test
I can compile this code under Ubuntu 12.04 with gfortran 4.6.3.
I've checked press releases for GCC 4.7 and found nothing that could give me a clue.
Could you please shed some light on why gfortran doesn't want to compile this code?
Sorry, initially forgot to put the errors here:
main.f90:8.75:
call vode(istate, lambda_fcn, dummy_jac, lambda, x_tmp, x_end, tol, pm)
1
Error: There is no specific subroutine for the generic 'vode' at (1)
make: *** [all] Error 1
A:
Your problem is covered by my answer and its comments in the question referenced by george: the type, kind, and rank of the arguments must match exactly. To add something new, I suggest you try calling the specific procedure directly. The type checker will then complain about the bad actual arguments and you will see more details.
In your case
generic2.f90:81.24:
call d_vode(istate, lambda_fcn, dummy_jac, lambda, x_tmp, x_end, tol, pm)
1
Error: Interface mismatch in dummy procedure 'f' at (1): Shape mismatch in dimension 1 of argument 'y'
This is rather self-explanatory: your dummy procedures are not compatible with your interfaces. You are mixing assumed-size, constant-size, and explicit-size arrays. You must follow the interface exactly.
|
[
"stackoverflow",
"0049666087.txt"
] | Q:
Best practice for populating some data in an existing database in Rails
Problem statement:
Let's say I created a new column column_1 in a table table_1 through a Rails migration. Now I want to populate column_1 by doing some computation, but it's a one-time task.
Approaches:
Is it preferred to do it through a migration, or should I add it to seeds, populate it, and then remove it again?
Or is there any other best practice?
A:
Even though there are different approaches, it is better to do this in a Rake task or a migration, not in seeds.
Rake tasks:
Rake tasks are generally preferred for maintenance or data-migration jobs over a collection of records.
Example of rake:
lib/tasks/addData.rake
desc "TODO"
task :my_task1 => :environment do
  User.update_all(status: 'active')
end
Example of doing it in migration:
If you add status field to user:
class AddStatusToUser < ActiveRecord::Migration
  def up
    add_column :users, :status, :string
    User.update_all(status: 'active')
  end

  def down
    remove_column :users, :status
  end
end
Why Not seeds:
Why not seeds: seeds are more like a database dump; seed files are used to fill the database with data for the first time when the application is started, so that the application is kickstarted with initial data.
Examples are application settings, application configurations, admin users, etc.
|
[
"workplace.stackexchange",
"0000012239.txt"
] | Q:
Should I list salary expectations on my resume?
In this question about resumes, one hiring manager brought up that he/she wanted to see Salary Requirements. There was a brief discussion on whether this was important information or not.
Unfortunately, I worry that the range of salaries for software jobs is extremely high and that putting a number out first can put me in a weaker position.
Should I list salary expectations on my resume? What are the benefits or drawbacks of doing so?
(As for my personal context, I have a Ph.D. but I am considering transitioning into industry, possibly into software development. Some people assume that a Ph.D. commands some ridiculously high price, but that could not be farther from the truth, so in this situation listing a salary range that is in line with an entry-level position may help reorient the recruiter's expectations.)
A:
In a word - "No".
One way to look at this: In a sense, a resume is a sales brochure. The "customer" is the organization from which you seek employment. The "product" is you, or more precisely, your ability to do work the organization wants done. Putting a salary on your resume is similar to putting a price on a sales brochure. Usually only commodity products list prices on their sales brochures. Thus, unless you want potential employers to treat you like a commodity, it seems best to leave a desired salary off your resume.
My experience creating resumes for myself and occasionally helping friends and family now goes back over 30 years. I've heard and read countless pieces of advice about resumes. Never once do I remember hearing or reading anything that recommends putting salary information of any kind on your resume. What I do remember is frequent advice (including this Q&A site) that you avoid being the first to give a number in salary negotiations. If you give out a number on your resume, you have violated this "rule" and (most likely) set the maximum salary you will get from any organization who sees that resume; even if the organization had more money budgeted for the position, they have no need to offer more now that you have said what you would settle for.
If you find you are not getting interviews from your resume, you might consider an objective statement in your resume or mentioning in your cover letters the level of work you are seeking. For example, your resume might have "Objective: An entry level position in software development", or your cover letter might say something like "Now that I have completed my Ph.D., I seek an entry level position in the software development field." You might then go on to explain why your Ph.D. will help you (either through "transferable skills", or maybe by entering a domain related to your Ph.D., or both).
Note: Generally, I don't think objective statements on a resume are that useful, but this is a situation where I think one could prove beneficial.
A:
It depends.
By and large, I agree with the accepted answer. For the most part, I'd default to "no". Particularly in the case where you are new to this particular market, and I'd be doubtful that you know exactly what range to quote. For example - yes you may be entry level, but the depth and experience of graduate studies mean that in some industries, you are above the general cut of "entry level" salaries. When you really don't know the value, and are testing the opportunities, state the expectation in a way that doesn't lock you down. Saying in your cover letter that you are changing industries, and thus interested in entry level positions gives you a way to set expectations without locking yourself down to a number.
I do get tired, however, of seeing "don't be the first to set an expectation" as the defacto advice to negotiation. Someone has to start. It's not just a blanket rule - we're not playing tic tac toe (where the person who starts has a distinct advantage, and there are no secrets - the next move is entirely discernable!). In a negotiation, the person with most power is the person with the most information, and the person most able to find creative ways to get what they want at the least cost to the other party.
The best time to quote numbers is to save yourself, and the other party, some time. That's the big thrust of the answer referenced in the question - that resumes and cover letters are terse, because time for review is tight, and managers appreciate any way of narrowing down the pile. That's absolutely true - so quote a pricetag when you want the pile to be narrowed down and you're willing to accept that you might not be part of it.
That sounds awful in a case where you're looking for every and any opportunity. But it isn't if you have a great job, and you're just looking for an even dreamier position. At that point, giving the minimum you'd ever consider saves you and the company time so you can continue with your realistic career options.
I don't think that's the case here.
|
[
"stackoverflow",
"0013370058.txt"
] | Q:
Simplest, cheapest way to set up SMTP server on Linux?
In the past I have tried following this guide for setting up a mail server on Ubuntu (going with Postfix, Dovecot, and Squirrelmail) and have been unsuccessful. I seemed to have been doing everything right, but the mail was not going through.
Anyway, it's been a while, and I would like to start over from scratch. What is the simplest, cheapest (preferably free if I already have a domain name + server) way I can set up an SMTP server on Linux?
My end goal is to be able to send simple, short emails to my cell phone (from the command line) as reminders. That's all I really need.
A:
sudo apt-get install mailutils
Then set up a Gmail account and use that to send email with it. Works really easily. I've done this for seeing who's logged in to my Minecraft server so my son can jump in when his friend goes online: http://dymitruk.com/blog/2012/07/20/scripting-for-fun/
|
[
"skeptics.stackexchange",
"0000010653.txt"
] | Q:
Is "co-sleeping" (infant sleeps in bed with parents) safe?
Among some circles, co-sleeping is highly advocated for newborns/infants due to ease of breastfeeding and potential developmental bonding to the mother (for more examples, read about proposed advantages here). The immediate and obvious red flag that came to mind was rolling over one's baby. A defense I've heard is that you... just won't:
[In response to inquiry about the safety of co-sleeping] You won't squash him. You couldn't even comfortably roll onto a teddy bear in your sleep, let alone your own baby that your subconscious is ALWAYS aware of. (source)
you wont roll onto your baby. there is no way you forget they are there even when you're dead to the world. (source)
Physically and psychologically it is HIGHLY unlikely that you will roll over onto your baby when co-sleeping. It’s an evolutionary, parental instinct sorta thing (don’t remember the exact name for this but definitely learned about it in my psych classes). Essentially as a parent your conscious and unconscious self is aware that protecting your baby is a priority and you won’t smush or smother them. (source)
However, according to the Consumer Product Safety Commission, via Kid's Health:
According to the CPSC, at least 515 deaths were linked to infants and toddlers under 2 years of age sleeping in adult beds from January 1990 to December 1997:
121 of the deaths were attributed to a parent, caregiver, or sibling rolling on top of or against a baby while sleeping
more than 75% of the deaths involved infants younger than 3 months old
Here's another pro-co-sleeping advocate (emphasis mine):
At the University of Notre Dame's Mother-Baby Behavioral Sleep Laboratory, our studies of breastfeeding mothers who sleep with their 2- to 4-month-olds reveal that both mothers and their babies are extremely sensitive throughout the night to each other's shifting position in the bed.
During my many years of studying sleep-sharing, I've never heard of a single instance in which, under safe conditions, it was proven that a mother suffocated her child. Notice that I said safe conditions: Babies can and do accidentally suffocate when one or both parents doesn't know a baby is in the bed, is drunk or desensitized by drugs, or is indifferent to the baby's presence.
So, to follow in the vein of the last bit:
Is co-sleeping safe when parents are aware of the baby's presence, are not under the influence of substances, and are not indifferent to the baby's presence?1
In layman's terms: is co-sleeping safe under normal conditions?
1 Is that last criterion just a catch-all no true Scotsman variant? If the first two conditions aren't true, then the parents must have been indifferent at some level?
A:
No.
Co-sleeping is unsafe, particularly when compared to placing a child into a suitable cot / crib. Most of this risk comes from the bed and bedding not being suitable for infants, but suffocation by overlying is also a significant risk.
The only safe place for an infant to sleep is on its back in a crib/bed that meets relevant standards.
A retrospective review of death-scene and medical reports for SIDS and related deaths showed that sleeping in adult beds increases children's risk of death by at least a factor of 20:
Using cribs as the reference group and adjusting for potential confounders, the multivariate ORs showed that ... the risk of suffocation was approximately 40 times higher for infants in adult beds compared with those in cribs. The increase in risk remained high even when overlying deaths were discounted (32 times higher) or the estimate of rates of bedsharing among living infants doubled (20 times higher).
Instances of overlying (suffocation by a person sharing the bed) were rare, and the data is not as conclusive:
The diagnosis by medical examiners and coroners that overlying of an infant while sharing an adult bed was the “cause of death” remains controversial. More overlying deaths were reported by medical examiners and coroners in the 1990s (70 deaths) than in the 1980s (7 deaths). In approximately 40.3% of the cases (31 of 77), the narratives reported that a third party found the infant covered by an adult or a child, there were compression marks on the infant, or other findings suggesting the likelihood of overlying (eg, infant sleeping in twin bed with 2 adults). In both decades, overlying deaths were associated with very young infants, with an average age of 1.9 months. Only 1 overlying death occurred after 6 months of age, a report of a 10-month-old found with another child over him.
That said, this is not the major risk with co-sleeping, it's suffocation by other means - bedding, soft mattresses, a child getting wedged between the headboard/footboard/bedframe and the mattress.
Another population-based death-scene study of SIDS and related deaths found that for their selected records (all sleep-based deaths of children under two in a particular geographical area), nearly half involved bed sharing:
Deaths Occurring While Sharing a Bed or Other Sleep Surface
Nearly one half (56) of the infants (47.1%) died while sharing a sleep surface with one or more bedmates (1.4 ± .7; range: 1–4 bedmates; Table 3). For the majority, deaths while bedsharing were diagnosed as SIDS, but for 13 the diagnoses were suffocation or undetermined (23.2% of bedsharing deaths). All deaths occurred on sleep surfaces that were not designed specifically for infant sleep. In 13 cases (23.2%), the scene investigation showed evidence for entrapment of the infant, either by a bedmate or by the sleep surface. In 18 cases (33.0%), the bedsharing infant was found dead on a pillow or comforter, items specifically identified in earlier studies as bedding that increases risk for sudden death when used by infants.18,36,38 The pillows and comforters were on the shared sleep surface and the infant had been placed on top of them.
Controlling for smoking is also an important factor when assessing the risks of bedsharing, but it appears there is a definite effect when this is taken into account:
The impact of bedsharing on risk for sudden infant death remains controversial. Three case–control studies suggest that bedsharing increases risk for sudden death,24,26,45 but the risk is lessened when the high rate of maternal smoking in these studies is considered. In England, in particular, the rate of smoking among mothers whose infants died while bedsharing is so high that the risk for nonsmoking mothers cannot be calculated from the data.24 In the United States, a case–control study27 from Washington, DC showed increased risk especially when black infants bedshare. Finally, preliminary results from the Chicago Infant Mortality Study, a large, recent case–control study, strongly indicate an effect of bedsharing that is independent of smoking. 46 There are no recent published results addressing risk for infants sleeping alone outside of cribs, but data from the US Consumer Products Safety Commission suggest that the risk may be high...
With specific regard to alcohol consumption and bed sharing, a nationwide case-control study in New Zealand found that maternal alcohol consumption did not increase the risk of death while bed sharing, though it should be noted that there doesn't appear to be any conclusive research on this matter - there are studies whose conclusions are directly contradictory.
Neither maternal alcohol consumption nor the thermal resistance of the infant's clothing and bedding interacted with bed sharing to increase the risk of sudden infant death, and alcohol was not a risk factor by itself.
CONCLUSION--Infant bed sharing is associated with a significantly raised risk of the sudden infant death syndrome, particularly among infants of mothers who smoke.
A:
I think it's worth providing an alternative viewpoint to this question.
Outside of the Western, developed world co-sleeping is the standard way for parents to sleep with young children, not the exception.
This article's fonts make me want to burn my eyes out, but the final paragraph has a very interesting graphic created from NIMH data:
In China, where I live, co-sleeping is so common as to be the assumed situation. To do otherwise is strange and discussion-worthy.
This NYT article from 2007 says data is inconclusive.
Another article mentions author Margot Sunderland quoting:
“In the UK, 500 children a year die of Sids,” Sunderland writes. “In China, where it [co-sleeping] is taken for granted, Sids is so rare it does not have a name.”
She seems to have decent credentials.
|
[
"askubuntu",
"0000295813.txt"
] | Q:
Problem installing latest vim on ubuntu 12.10
I am trying to install the latest version of vim on Ubuntu 12.10. But when I use this command
sudo apt-get install vim
It gives me such an error
How to solve this?
A:
I am assuming vim works, and you would like a newer version. Therefore, my answer deliberately bypasses the method you are trying to use -- sudo apt-get -- which is fine for most things.
However, the fact you have a message "already at the latest level" is telling me you want a newer version.
I suggest you create a scratch directory under ~ after you log in.
1) mkdir vim_install
2) Then download the latest *nix version of vim -- the runtime and source files together: vim-##.tar.bz2, e.g. vim-7.3.tar.bz2 -- and unpack it in vim_install. A directory vim73 should be created.
3) Follow the instructions in vim73/README.txt. Almost immediately, you'll be told to read another README that corresponds to your architecture/OS.
4) At some point you'll build and/or distribute as root.
I've done this on several Linux systems we have in production, and downloading vim has never caused a problem for anything else I have installed, because it is built from its sources to run compatibly with your Linux system.
|
[
"stackoverflow",
"0033041573.txt"
] | Q:
Android: Create a background thread that runs periodically and does UI tasks?
OK, so I know how to do a background task, I know how to do a periodic task (using Handler.postDelayed and a Runnable), and I also know how to do a UI task from a background thread (via a Handler), but I am not able to execute a periodic background task that does some action on the UI thread.
I am trying to execute some background task every minute in which I have to make a network call. After the call is over, depending on the output I have to update the UI. I tried to do something like this
private void DoTask() {
    Thread thread = new Thread() {
        public void run() {
            Looper.prepare();
            final Handler handler = new Handler();
            handler.post(netRunnable);
            Looper.loop();
        }
    };
    thread.start();
}

Runnable netRunnable = new Runnable() {
    @Override
    public void run() {
        handler.getLooper().prepare();
        final Handler handler1 = new Handler(Looper.getMainLooper());
        if ( do background work and check result){
            handler1.post(new Runnable() {
                @Override
                public void run() {
                    //Do UI Task
                }
            });
        }
        handler.getLooper().loop();
        handler.postDelayed(netRunnable, 60000);
    }
}
I understand that there might be some fundamental flaws with my implementation, but I do not know how to do this task properly. Right now it is giving the error that only one Looper may be created per thread. I get what it is trying to say, but can anyone please suggest the right way to do this?
A:
You could use AsyncTask. It is designed for this:
http://developer.android.com/reference/android/os/AsyncTask.html
It allows you to execute a network call in the background, then when you get the result, execute an action on the UI thread
Declaration :
private class MyTask extends AsyncTask<Input, Void, Output> {
    protected Output doInBackground(Input... inputs) {
        // do something on the network
        return myOutput; // use this to transmit your result
    }

    protected void onPostExecute(Output result) {
        // do something on UI thread with the result
    }
}
If you want to repeat it, just create a runnable to launch it, and after every call, schedule the next one :
MyTask myTask;
Handler handler = new Handler();
Runnable myRunnable = new Runnable() {
    @Override
    public void run() {
        myTask = new MyTask(); // assign the field (not a new local) so it can be cancelled later
        myTask.execute(myArg);
        handler.postDelayed(myRunnable, 60000); // schedule next call
    }
};
To launch it for the first time :
handler.postDelayed(myRunnable, 60000);
Or, if you want to launch it immediately :
handler.post(myRunnable);
Do not forget to cancel the Task when your activity is destroyed :
myTask.cancel(true);
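For readers outside Android, the shape of this pattern — do the work off the main thread, hand the result back to the UI side, then schedule the next run — can be sketched language-agnostically in Python, with a queue standing in for the Handler on the main Looper (all names here are illustrative, not part of the Android API):

```python
# Sketch of the periodic background-task pattern: the worker runs off
# the main thread and posts results to a queue, which stands in for
# Android's Handler on the main (UI) Looper.
import queue
import threading

ui_queue = queue.Queue()  # the "UI thread" drains this

def background_work(arg):
    """Stand-in for doInBackground(): pretend network call."""
    return arg * 2

def run_task_periodically(arg, interval, repeats):
    """Run the task, post the result, then schedule the next run."""
    result = background_work(arg)
    ui_queue.put(result)  # like onPostExecute() handing off to the UI
    if repeats > 1:
        t = threading.Timer(interval, run_task_periodically,
                            args=(arg, interval, repeats - 1))
        t.daemon = True  # don't keep the process alive, like cancelling on destroy
        t.start()

run_task_periodically(21, interval=0.01, repeats=2)
first = ui_queue.get(timeout=1)  # the UI side receives 42
```

The key point is the same as in the Java version: the worker never touches the UI directly; it only posts results to a channel the UI side owns.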
A:
Maybe you are better off creating a separate (Intent)Service and calling it periodically with postDelayed. Create a BroadcastReceiver in your Activity and handle UI changes there.
Another hint for handling UI changes from other threads: it is not possible directly, so you need to call runOnUiThread. Here is how to use it
|
[
"gaming.stackexchange",
"0000285865.txt"
] | Q:
What does this anti-crown button do?
I've tried pressing it and it doesn't appear to do anything. Does anyone know what it does?
The update just came out today (at least for me it did).
A:
Judging from the patch notes and the Rethinking Emotes blog post, it appears to be the button for muting the emotes of the other player.
|
[
"tex.stackexchange",
"0000276364.txt"
] | Q:
How to set up latex-suite to compile asymptote in tex files?
I am using vim with latex-suite to compile to pdf.
With the following in filename.tex:
\usepackage{asymptote}
...
\begin{asy}
...
\end{asy}
...
The file filename-1.asy is produced (and the log warns that filename-1.tex could not be found), but I can't work out how to add a compiler rule to latex-suite to also compile anything in the working directory matching filename-*.asy with asymptote.
Does anyone know how to set it up to also compile asymptote files automatically with \ll ?
Update
As in Marijn's answer, Tex_CompileRule_ is what I was missing.
Here is a full MWE as requested:
test.tex:
\documentclass{article}
\usepackage{asymptote}
\begin{document}
\begin{asy}
settings.outformat = "pdf";
label("Hello world!");
\end{asy}
\end{document}
~/.vim/ftplugin/tex.vim:
let g:Tex_DefaultTargetFormat = 'pdf'
let g:Tex_MultipleCompileFormats='pdf, aux'
let g:Tex_CompileRule_asy = 'asy %:r-*.asy'
let g:Tex_FormatDependency_pdf = 'asy,pdf'
A:
You need to add a compile rule and a dependency to ~/.vim/ftplugin/tex.vim (assuming Linux). This file may not exist, in which case you should create it. Any other configuration file that latex-suite can find may also work. Add the following:
let g:Tex_CompileRule_asy = 'asy %:r-*.asy'
let g:Tex_FormatDependency_pdf = 'asy,pdf'
Optionally, you can add some rules to facilitate processing:
let g:Tex_DefaultTargetFormat = 'pdf'
let g:Tex_ViewRule_asy = 'evince -1'
let g:Tex_MultipleCompileFormats = 'pdf'
Then in vim you can compile the following MWE:
\documentclass{article}
\usepackage{asymptote}
\begin{document}
\begin{asy}
settings.outformat = "pdf";
label("Hello world!");
\end{asy}
\end{document}
Possibly you have to press \ll twice because latex-suite does not understand that Asymptote requires recompilation.
Also: please update your MWE to make it compile (replace / with \ as a start).
|
[
"stackoverflow",
"0052500422.txt"
] | Q:
Why compare using bitwise AND in golang?
I am reading a piece of code like this (taken from fsnotify):
type Op uint32
const (
Create Op = 1 << iota
Write
Remove
Rename
Chmod
)
...
func (op Op) String() string {
var buffer bytes.Buffer
if op&Create == Create {
buffer.WriteString("|CREATE")
}
if op&Remove == Remove {
buffer.WriteString("|REMOVE")
}
if op&Write == Write {
buffer.WriteString("|WRITE")
}
if op&Rename == Rename {
buffer.WriteString("|RENAME")
}
if op&Chmod == Chmod {
buffer.WriteString("|CHMOD")
}
if buffer.Len() == 0 {
return ""
}
return buffer.String()[1:]
}
My newbie question is why someone would use a bitwise AND operation like op&Remove == Remove to actually make a comparison.
Why not just compare the op and (Create|Remove|...) values?
A:
This is an example of bit masking. What they're doing is defining a series of masks (Create, Remove, Write) that are the integers 1, 2, 4, 8, 16, 32, etc. You pass in a single op value, which can encode multiple operations, and the code figures out which operations to perform based on which bits are flipped. This makes more sense if you think about these numbers in a bitwise pattern. 4 == 00000100, the value for Remove. If you pass in an op code of, say, 6, the comparison 00000110 & 00000100 == 00000100 is true, because the bit that is specific to Remove, the third least significant bit, is 1.
In a less jargony and specific way, this is basically a way to pass in multiple opcodes with one integer. The reason they're doing a bitwise AND and then comparing is because it allows them to check whether that specific bit is flipped, while ignoring the rest of the bits.
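Since the bitwise operators behave the same way in most languages, the decoding logic can be sketched outside Go as well. Here is a minimal Python illustration of the same masking idea (not the fsnotify code itself — just the technique):

```python
# Flags defined like the Go constants: 1 << iota
CREATE = 1 << 0  # 0b00001
WRITE  = 1 << 1  # 0b00010
REMOVE = 1 << 2  # 0b00100
RENAME = 1 << 3  # 0b01000
CHMOD  = 1 << 4  # 0b10000

def op_string(op):
    """Decode a combined op value into flag names, like Op.String()."""
    names = []
    for mask, name in [(CREATE, "CREATE"), (REMOVE, "REMOVE"),
                       (WRITE, "WRITE"), (RENAME, "RENAME"),
                       (CHMOD, "CHMOD")]:
        if op & mask == mask:  # bit test, not equality on the whole value
            names.append(name)
    return "|".join(names)

print(op_string(6))  # prints REMOVE|WRITE (bits 2 and 4 are set)
```

A plain equality test like `op == Remove` would fail whenever more than one flag is set; the AND-then-compare isolates the one bit of interest.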
|
[
"rpg.stackexchange",
"0000027216.txt"
] | Q:
Software tools for Savage Worlds: Showdown unit building?
I recently got into this version of Savage Worlds, and it seems helluva fun to play with friends, as sometimes we have wanted to make our very own minis wargame... However the rules on how to calculate unit points are convoluted and hard to use.
I've been surfing the web and found a "handy" Excel-based troop builder, but whenever values are inserted it won't calculate them – I have to fill in the troop cards manually. So that's not so handy.
Is there any working software for building troops on a PC?
A:
As of the time of this writing, the only tool for building Savage Worlds Showdown characters is the official Excel spreadsheet from Pinnacle.
However, Wild Card Creator, a Savage Worlds character creator program, had a Kickstarter stretch goal for adding support for Savage Worlds Showdown at some point after the final release (the date listed in the stretch goal was rather optimistic). So there will be another option, but for now, the official Excel spreadsheet is it.
(Full disclosure: I am the author of Wild Card Creator)
|
[
"stackoverflow",
"0012263974.txt"
] | Q:
Class member of Interface and base class
I've got a base class called Graph and an interface called IDataTip.
I've got a number of classes that implement both of these, for example:
class TreeGraph : Graph, IDataTip
{
    // interface implementation
}
I was wondering if there was a way to declare a member of another class such that the type of the member requires classes that match both the abstract class and the interface?
For example, in the following class:
class GraphExporter
{
    public Object GraphWithDataTip { set; get; }
}
I would like to be able to replace the Object type with something that indicates that GraphWithDataTip should be inherit from Graph and implement IDataTip. Is there any way to do this? Or if not, could somebody recommend a more sensible design?
Thanks in advance!
A:
You could use generic constraints:
public class FooClass<T> where T : Graph, IDataTip
{
    public T Foo { get; set; }
}
A:
It sounds as though you want either:
a new base type (abstract class thing : Graph, IDataTip) to be used for your parameter
a generic method of the form void MyMethod<T>(T thing) where T : Graph, IDataTip
Alternatively, you could cast the parameter within your method and throw an exception if it's not suitable, but this would be a runtime-only check.
|
[
"stackoverflow",
"0018801665.txt"
] | Q:
Will declaring a function twice generate any issues in the code?
In my project I have a header file Common.h which includes many headers. Some of the files include Common.h along with other headers that are already present in Common.h, so in the pre-processing stage many functions get prototyped twice (once from the included header and once from Gui.h). I was wondering if this would cause any issue in the long run.
Please suggest. Thanks in advance.
A:
Headers should have include guards so that they are only processed once:
#ifndef SOME_UNIQUE_STRING
#define SOME_UNIQUE_STRING
// Everything else here
#endif
By "Everything" I mean "everything", starting with your #includes if any.
SOME_UNIQUE_STRING could be the name of the module as long as it is unlikely to coincide with another define somewhere else.
If you look in your library headers, you will notice they use include guards like this.
|
[
"ell.stackexchange",
"0000130496.txt"
] | Q:
What is this large mammal with antlers called: a moose or an elk?
Is it a moose or an elk in the picture attached? It's from this Wikipedia article. I've always called this animal an elk.
Wiki says:
The moose (North America) or elk (Eurasia), Alces alces, is the
largest extant species in the deer family.
So these two words are synonyms.
On the other hand, Encyclopaedia Britannica gives the following information:
Elk (Cervus elaphus canadensis), also called wapiti, the largest and most advanced subspecies of red deer (Cervus elaphus), found in North
America and in high mountains of Central Asia.
Moose (Alces alces), the largest member of the deer family Cervidae
(order Artiodactyla).
As you see, these are two different species, the moose and the wapiti (or the American elk).
What is the animal in the picture called?
A:
The or in
The moose (North America) or elk (Eurasia)
implies that moose is used in North America, whereas elk is used in Eurasia.
In fact, the article goes on to mention:
Alces alces is called a "moose" in North American English, but an "elk" in British English; its scientific name comes from its name in Latin.
So what you call it depends on the kind of English you follow.
Here in the US, we'd call the animal in the OP a moose. To confuse matters more, in the US, we'd call the smaller Cervus canadensis an elk:
The elk, or wapiti (Cervus canadensis), is one of the largest species within the deer family, Cervidae, in the world, and one of the largest land mammals in North America and Eastern Asia. This animal should not be confused with the still larger moose (Alces alces) to which the name "elk" applies in British English and in reference to populations in Eurasia.
(Wikipedia)
|
[
"stackoverflow",
"0028150858.txt"
] | Q:
T-SQL Replace Multiple Values with Wildcards
I want to replace characters , and /. I can do this with:
DECLARE @OMG VARCHAR(200)
SET @OMG = 'ABC,DE/F'
SELECT REPLACE(REPLACE(@OMG,'/','|') ,',','|')
The second query, however, does not work. Is there just a typo, or can the task not be achieved with this code? I wanted to use a wildcard for the set of characters that should be replaced.
SELECT REPLACE(@OMG,[,/],'|')
It returns:
Msg 207, Level 16, State 1, Line 7
Invalid column name ',/'.
A:
You can define all the characters to be replaced, and their replacements, inside a table and use it.
CREATE TABLE #ReplaceStrings (symb VARCHAR(5), replace_char VARCHAR(5))

INSERT INTO #ReplaceStrings (symb, replace_char)
VALUES ('/', '|'), (',', '|')

DECLARE @OMG VARCHAR(200)
SET @OMG = 'ABC,DE/F'

SELECT @OMG = REPLACE(@OMG, symb, replace_char)
FROM #ReplaceStrings

SELECT @OMG
Here in a single replace you can replace all the unwanted characters.
Update: to replace data from a table
CREATE TABLE ReplaceStrings (symb VARCHAR(5), replace_char VARCHAR(5))
CREATE TABLE #table (String VARCHAR(500))

INSERT INTO #table VALUES ('ABC,DE/F'), ('AB,C,DE/F/')
INSERT INTO ReplaceStrings VALUES ('/', '|'), (',', '|')
Scalar Function
CREATE FUNCTION replacechar(@Ip_String VARCHAR(500))
RETURNS VARCHAR(500)
BEGIN
    SELECT @Ip_String = REPLACE(@Ip_String, symb, replace_char)
    FROM ReplaceStrings
    RETURN @Ip_String
END
Execute the function
SELECT String, dbo.replacechar(String) Replaced_String FROM #table
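The same table-driven idea — iterate over a mapping of characters to replacements and apply each in turn — is easy to sketch in Python for comparison (the names here are illustrative, not part of the T-SQL answer):

```python
# Mapping mirrors the rows of the #ReplaceStrings table:
# character to replace -> replacement character
replacements = {"/": "|", ",": "|"}

def replace_chars(s, table):
    """Apply every (old, new) pair in the table to the string."""
    for old, new in table.items():
        s = s.replace(old, new)
    return s

print(replace_chars("ABC,DE/F", replacements))  # prints ABC|DE|F
```

The design choice is the same in both languages: keeping the replacement pairs in data rather than nesting one REPLACE call per character keeps the code unchanged when new characters are added.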
|
[
"stackoverflow",
"0006189481.txt"
] | Q:
Is there a way to send low res binary images to mobile phones using the USSD protocol?
Is there a way to send low res binary images to mobile phones using the USSD protocol? I know that OTA bitmap allows us to send similar images through SMS; however, I could not find anything on USSD that details this.
A:
No. USSD does not have any defined methods to specify content type, nor does it really support segmentation and reassembly of even small binary data sequences. It might be possible, given operator cooperation, to do something that worked, but it would be a bad hack and probably not very widely distributable.
|
[
"stackoverflow",
"0052120924.txt"
] | Q:
Vim: need a function to switch to an active buffer by name
Say I have a terminal buffer open on some window with needed buffer dimensions etc.
I'd like to switch to the window where it is opened with a hotkey.
I can make it with some big 'Denite' plugin:
function! FocusBufOrDo(arg, cmd)
  if buflisted(bufname(a:arg))
    " exec 'buffer ' . a:arg
    exec 'Denite buffer -default-action=switch -mode=normal -immediately-1 -input=' . a:arg
  elseif !empty(a:cmd)
    " echo 'No such buffer'
    exec a:cmd
  endif
endfunc
nnoremap <Leader>c :call FocusBufOrDo('/usr/bin/bash','term')<CR>
nnoremap <Leader>gi : call FocusBufOrDo('gist:','tabe \| Gist bf39XXXXXXXXXXXXXXXXX5')<CR>
Now I want a dedicated function to do the switch.
Tselectbuffer or tlib plugins have that functionality but I am not able to rip it out. Would be very grateful if you could do it for me =)
A:
" Run through the list of buffers,
" match buffer's filename with the argument,
" switch to the 1st window, if found.
function! GotoWindowByFileName(name)
  for b in getbufinfo()
    " require at least one window showing the buffer,
    " otherwise b.windows is empty and b.windows[0] errors out
    if b.name =~ a:name && !empty(b.windows)
      call win_gotoid(b.windows[0])
      return
    endif
  endfor
endfunction
|
[
"math.stackexchange",
"0000846360.txt"
] | Q:
Valuation associated to a non-zero prime ideal of the ring of integers
I have a question from Frohlich & Taylor's book 'Algebraic Number Theory', p.64. I will keep the notation used there.
Let $K$ be a number field, $\mathcal o$ its ring of integers. Let $\mathfrak p$ be a non-zero prime ideal of $\mathcal o$ and $v=v_\mathfrak p$ the valuation of $K$ associated to $\mathfrak p$. Suppose that $\rho$ is a field automorphism of $K$ such that $v(x^\rho)=v(x)$ for all $x\in K$.
Why is this equivalent to the condition $\mathfrak p^\rho=\mathfrak p$?
Would it help to show that $(x\mathfrak o)^\rho=x^\rho\mathfrak o$ for all $x\in K$? - although I am not sure if this is even true.
Thanking you in advance.
A:
$v(x)$ is the number $n$ such that $x \in \mathfrak{p}^n-\mathfrak{p}^{n+1}$; in particular, $v(x)\geq 1$ iff $x \in \mathfrak{p}$.
So we see that if $v(x^{\rho})=v(x)$ then
$$x^{\rho} \in \mathfrak{p} \Leftrightarrow x \in \mathfrak{p}$$
which means that $\mathfrak{p}^{\rho}=\mathfrak{p}$.
Conversely if $\mathfrak{p}^{\rho}=\mathfrak{p}$ then it is easy to see
$$x^{\rho} \in \mathfrak{p}^n-\mathfrak{p}^{n+1}\Leftrightarrow x \in \mathfrak{p}^n-\mathfrak{p}^{n+1}$$
and so $v(x^{\rho})=v(x)$.
|
[
"vi.stackexchange",
"0000027016.txt"
] | Q:
Installing vim-plug
My vimrc:
set rnu
set nu
set autoindent
set noerrorbells
set colorcolumn=80
set tabstop=2
set bg=light
colorscheme elflord
set guifont=Courier:h14
set spelllang=en,en_us
set spell
iwr -useb https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim |`
ni $HOME/vimfiles/autoload/plug.vim -Force
see https://github.com/junegunn/vim-plug#Installation
which gives:
Error detected while processing C:\Users\maria\vi
mfiles\vimrc:
line 12:
E492: Not an editor command: iwr -useb
https://raw.githubusercontent.com/junegunn/vim-
plug/master/plug.vim |`
line 13:
E492: Not an editor command: ni $HOME/vimfi
les/autoload/plug.vim -Force
Here is my autoload directory:
Edit:
What do I do now?
btw I renamed vim-plugin-master(1) in autoload to vim-plugin-master
A:
As mentioned in the installation instructions of vim-plug, you should run that specific command in a PowerShell window.
iwr -useb https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim |`
ni $HOME/vimfiles/autoload/plug.vim -Force
If you're not very familiar with PowerShell in general, note that you can also download the plug.vim file directly with your browser and create the directory structure (the vimfiles directory under your home directory, then the autoload inside it) and store the file at the appropriate location.
The PowerShell command is there to help you get the correct file into the correct location with the correct name with a single copy & paste, but you don't have to use it if you find that a different way works better for you.
btw I renamed vim-plugin-master(1) in autoload to vim-plugin-master
This doesn't really work, since Vim depends on that file being named exactly plug.vim, since it will only load *.vim files from that directory, and the plug part of the name is also important since it will auto-load it when functions with the plug#... prefix are used, such as call plug#begin() which is what you need to add to your vimrc to activate vim-plug.
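For completeness, once plug.vim is loading correctly, a vimrc typically activates vim-plug with a block like the following (the plugin listed is only a placeholder example, not something from the question):

```vim
call plug#begin('~/vimfiles/plugged')
" Plugins are declared between plug#begin() and plug#end(), for example:
Plug 'junegunn/goyo.vim'
call plug#end()
```

After restarting Vim (or re-sourcing the vimrc), running :PlugInstall downloads the declared plugins.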
|
[
"pt.stackoverflow",
"0000148241.txt"
] | Q:
@media screen not working correctly
I'm still studying CSS and HTML, so I may even be making some trivial mistake; anyway, I haven't managed to solve it on my own. The problem is the following: I defined a minimum width in @media screen for it to change the logo image.
Code with the logo at its original size:
.logo {
width: 56px;
height: 56px;
float: left;
background: url('../img/logo-mobile.png') center center/56px no-repeat;
font-size: 0;
}
With @media screen:
@media screen and (min-height: 480px){
.logo {
width: 214px;
background: url('../img/logo.png') center center/214px no-repeat;
font-size: 0;
}
.btn {
font-size: 2em;
}
}
The @media does change the logo, but with a width smaller than the 480px minimum it should go back to the original one, and that doesn't happen.
Where am I going wrong?
A:
You didn't set the width, you set the height; notice:
min-height: 480px
replace it with:
min-width: 480px
and it will work perfectly.
@media screen and (min-width: 480px){
.logo {
width: 214px;
background: url('../img/logo.png') center center/214px no-repeat;
font-size: 0;
}}
|
[
"stackoverflow",
"0027527121.txt"
] | Q:
Javascript setInterval() not working on function call
I need to refresh a div every 3 seconds, so I tried the setInterval method as follows using JavaScript:
<button onclick="myFunction()">Try it</button>
<script>
function hello(){
alert("vannallo");
}
function myFunction() {
setInterval(hello(), 3000);
}
</script>
This is not working, but when I tried it like the following, it works:
function myFunction() {
setInterval(function hello(){
alert("vannallo");
}, 3000);
}
I need setInterval to work continuously from my first button click. How can I achieve it? Please help, guys!!
A:
You aren't passing your function to setInterval but what it returns (that is undefined).
Change
setInterval(hello(), 3000);
to
setInterval(hello, 3000);
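To see the difference without any timers, this small sketch (separate from the question's code, using a return value instead of alert) just inspects what each expression evaluates to:

```javascript
function hello() {
  return "vannallo";
}

// hello() invokes the function immediately, so setInterval would receive
// its return value; hello is a reference that setInterval can call later.
const resultOfCall = hello(); // the string "vannallo"
const reference = hello;      // the function itself

console.log(typeof resultOfCall); // "string"
console.log(typeof reference);    // "function"
```

In the question's code, hello returns undefined after the alert, so setInterval(hello(), 3000) ends up as setInterval(undefined, 3000), which never calls hello again.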
|
[
"stackoverflow",
"0038562177.txt"
] | Q:
JMeter does not record any client server activity
I am trying to record any client-server activity in JMeter as a start for learning. I am doing this from a company network and hence have to go through proxy. Below are the steps I took for recording.
I read in the JMeter documentation that for recording through a proxy I have to invoke JMeter from the command line, giving the proper proxy. I used the command below to invoke JMeter:
jmeter -H "proxy address taken from Internet settings" -P "8080" -u "Domain*UserName*" -a "password"
With this JMeter invokes fine and then I start the normal process of creating a thread group and a recording controller.
Now I am changing the internet setting in IE to local host and mentioning the port as given in JMeter (8080). I am keeping the proxy used as it is because without proxy no site will open except for the internal company sites.
Also I have imported the certificate which JMeter uses for the browsers.
A:
This is now resolved.
I added the JAVA_HOME variable in the user environment variables, with the value pointing to the JRE installed on my machine: C:\Program Files\Java\jdk1.8.0_60\jre
Also I had to select Java under HTTP Sampler Setting in HTTP(S) Test Script Recorder. (Though I don't get the purpose of this setting).
HTTP Sampler setting changed to Java
After doing above changes script is recording fine under company proxy.
|
[
"stackoverflow",
"0025881710.txt"
] | Q:
packing unsigned char bytes into an unsigned int in c
I have an assignment in which I have to pack the bytes from 4 unsigned char into an unsigned int.
the code goes as following:
#include <stdio.h>
int main (){
//Given this
unsigned char a = 202;
unsigned char b = 254;
unsigned char c = 186;
unsigned char d = 190;
//Did this myself
unsigned int u = a;
u <<=8;
u |= b;
u <<=8;
u |= c;
u <<=8;
u |= d;
}
I know that:
u <<=8;
Shifts the bits in u to the left 8. But I am confused as to what the lines like u |= b;do?
Simply, I am trying to better understand what the code I came up works into packing the bytes from 4 unsigned char into an unsigned int. I came up with this solution in a brute type of way. I was just trying to pack bytes in different ways, and this way worked. But I am not really sure why.
Thank you in advance.
A:
a which is 202 in binary would be 11001010
b which is 254 in binary would be 11111110
c which is 186 in binary would be 10111010
d which is 190 in binary would be 10111110
unsigned int u = a;
u <<= 8; // now u would be 11001010 00000000
u |= b; // now u would be 11001010 11111110
u <<= 8; // now u would be 11001010 11111110 00000000
u |= c; // now u would be 11001010 11111110 10111010
u <<= 8; // now u would be 11001010 11111110 10111010 00000000
u |= d; // now u would be 11001010 11111110 10111010 10111110
// This is how a b c d
// are packed into one integer u.
|
[
"tex.stackexchange",
"0000310293.txt"
] | Q:
How does \if actually get used?
I apologize if this is too newbie or a duplicate, but I've banged my head against it long enough, and searched without success. :(
I am trying to use an \if to test for an empty string without success and have traced it to my understanding of the \if itself. I have added sample code showing how I'm misusing \if in a really simple way:
\def\tmpOne{hello}
\tmpOne %shows that \tmpOne produces hello
\if{\tmpOne}{hello}
goodbye %This is never reached.
\fi
Closer to my actual application is:
\newcount\tmpInd
\def\funcOne{%
\advance\tmpInd by 1
\if{\testEmpty{\readHistory{\value\tmpInd}}}{<NOT EMPTY>}
\funcOne
\fi
}
%where \testEmpty returns either <EMPTY> or <NOT EMPTY>
%and \readHistory is my own previously defined function
%(They work as expected outside of this context)
Any help for my obviously weak understanding would be appreciated. :)
Edit:
Due to information in one of the comments, I have added some extra details of my particular use case...
The definition for \testEmpty is:
\newcommand{\testEmpty}[1]{\setbox0=\hbox{#1}\ifdim\wd0=0pt<EMPTY>\else<NOT EMPTY>\fi}
The definition for \readHistory is:
\newcommand{\readHistory}[1]{\getdata[#1]\myChangeHistory}
The definition for \getData is:
\def\getdata[#1]#2{\csname data:\string#2:#1\endcsname}
The data is stored with \storeData:
\def\storedata#1#2{\tmpnum=0 \edef\tmp{\string#1}\storedataA#2\end\expandafter\def\csname data:\tmp:0\endcsname{\tmpcnt}}
\def\storedataA#1{\advance\tmpnum by1
\ifx\end#1\else
\expandafter\def\csname data:\tmp:\the\tmpnum\endcsname{#1}%
\expandafter\storedataA\fi
}
Note: I have patched these together by trawling the internet, and only partly understand them (but they all do what I want).
A:
Your code is doomed to failure. The conditional \if tests character code equality of the next two unexpandable tokens it finds after it, performing expansion until unexpandable tokens remain. So
\if{\tmpOne}{hello} goodbye \fi
sees {, which is unexpandable and then expands \tmpOne; after this expansion the input stream is
\if{hello}{hello} goodbye \fi
and, since { and h aren't the same unexpandable token as far as \if is concerned (their character codes differ), the conditional returns false and TeX ignores everything up to the first matching \else or \fi. No \else is found, so what remains is
\fi
which has empty expansion; next TeX goes on.
Without knowing more about \readHistory it's impossible to suggest alternative strategies.
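As an aside (a minimal sketch unrelated to \readHistory): when the goal is to compare the full replacement texts of two macros, the usual primitive is \ifx, which compares definitions rather than single character tokens:

```latex
\def\tmpOne{hello}
\def\tmpTwo{hello}
\ifx\tmpOne\tmpTwo
  goodbye % reached: both macros have the replacement text `hello'
\else
  not equal
\fi
```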
|
[
"stackoverflow",
"0040145302.txt"
] | Q:
R: sum by count over multiple columns
This is related to a question that I've looked at (How to summarize by group?); however, it seems that my data is a little different, which makes things weird.
I have a data.frame DF like so:
X Y1 Y2 Y3 Y4
3 A A B A
2 B B A A
1 B A A A
I want to make a sort of weighted sum of each unique factor in Y by its numeric value in X, such that the output is:
Y Y1 Y2 Y3 Y4
A 3 4 3 6
B 3 2 3 0
I had tried using a for loop to iterate over the indices of the columns, but I wasn't able to pass the number of the Y's correctly, and it didn't seem like the R way of doing this efficiently, for many more columns and rows.
It looks like, according to the linked question, this is the right approach; however, when I try to extend it to do the same across all the columns, via group_by and summarise_each, I get errors as the Y's are factors. Should I be using 'apply' instead? The logic of this seems straightforward but I've been stumped in its implementation.
aggregate(X~Y1,DF,sum)
A:
I don't think this is straightforward, and will require melting and reshaping. Here's an attempt in data.table:
setDT(df)
dcast(melt(df, id.vars="X", value.name="Y")[,.(X=sum(X)), by=.(variable,Y)], Y ~ variable)
#Using 'X' as value column. Use 'value.var' to override
# Y Y1 Y2 Y3 Y4
#1: A 3 4 3 6
#2: B 3 2 3 NA
Or maybe even just use xtabs if you want to avoid most of the data.table code:
xtabs(X ~ Y + variable, melt(df, id.vars="X", value.name="Y"))
Or a variation using only base R:
xtabs(X ~ ., cbind(df[1], stack(lapply(df[-1],as.character))) )
|
[
"stackoverflow",
"0045211965.txt"
] | Q:
Cassandra for datawarehouse
Is Cassandra a good alternative to Hadoop as a data warehouse where data is append-only and all updates in source databases should not overwrite the existing rows in the data warehouse but get appended? Is Cassandra really meant to act as a data warehouse, or just as a database to store the results of batch / stream queries?
A:
Cassandra can be used both as a data warehouse(raw data storage) and as a database (for final data storage). It depends more on the cases you want to do with the data.
You even may need to have both Hadoop and Cassandra for different purposes.
Assume, you need to gather and process data from multiple mobile devices and provide some complex aggregation report to the user.
So at first, you need to save data as fast as possible (as new portions appear very often) so you use Cassandra here. As Cassandra is limited in aggregation features, you load data into HDFS and do some processing via HQL scripts (assume, you're not very good at coding but great in complicated SQLs). And then you move the report results from HDFS to Cassandra in a dedicated reports table partitioned by user id.
So when the user wants to have some aggregation report about his activity in the last month, the application takes the id of active user and returns the aggregated result from Cassandra (as it is simple key-value search).
So for your question, yes, it could be an alternative, but the selection strategy depends on the data types and your application business cases.
You can read more information about usage of Cassandra
here
|
[
"stackoverflow",
"0030648832.txt"
] | Q:
How should I register a new user(custom) in django rest framework?
Whenever I create a new User, the corresponding BaseUser is not getting created. The signal is working correctly, as I have checked without the attributes department and user_type. How do I go about registering the BaseUser?
# models.py
from django.db import models
from django.conf import settings
from django.contrib.auth.models import User
class BaseUser(models.Model):
# TODO: Add a class with static method to return the list of departments(choices)
# TODO: (REMINDER) To add placeholder that username is rollnumber
USER_TYPES = (
(0, 'Student'),
(1, 'Professor'),
(2, 'Guest'),
)
user = models.OneToOneField(User, primary_key=True,
related_name='user')
department = models.CharField(max_length=3, default='CSE')
user_type = models.IntegerField(choices=USER_TYPES, default=0)
REQUIRED_FIELDS = [
'department',
'email',
'first_name',
'user_type'
]
def get_user_type(self):
return self.USER_TYPES[self.user_type][1]
def __unicode__(self):
return str(
dict((
self.user.user_name,
self.get_user_type(),
))
)
# serializers.py
from django.contrib.auth.models import User
from rest_framework import serializers
from authentication.models import BaseUser
class UserSerializer(serializers.ModelSerializer):
class Meta:
model = User
fields = (
'id',
'username',
'first_name',
'email'
)
class BaseUserSerializer(serializers.ModelSerializer):
user = UserSerializer()
user_type = serializers.SerializerMethodField()
class Meta:
model = BaseUser
fields = (
'user',
'user_type',
'department',
)
def get_user_type(self, obj):
return obj.get_user_type()
# views.py
from django.shortcuts import render
from django.contrib.auth.models import User
from rest_framework import viewsets
from rest_framework.decorators import detail_route, list_route
from authentication.models import BaseUser
from authentication.serializers import BaseUserSerializer, UserSerializer
class UserViewSet(viewsets.ModelViewSet):
queryset = User.objects.all()
serializer_class = UserSerializer
EDIT: I tried to follow this approach, but I got an error that User doesn't have those attributes (i.e. 'Some fields are empty!').
# signals.py
# I have another signal in signals.py and it is working fine.
@receiver(post_save, sender=User)
def create_user_handler(sender, instance, created, **kwargs):
if created:
attrs_needed = ['department', 'user_type']
if all(hasattr(instance, attr) for attr in attrs_needed):
base_user = BaseUser(
user=instance,
department=instance.department,
user_type=instance.user_type,
)
base_user.save()
else:
print "Some fields are empty!"
# views
class UserViewSet(viewsets.ModelViewSet):
queryset = User.objects.all()
serializer_class = UserSerializer
@list_route(methods=['post'])
def create_baseuser(self, request):
print request.DATA['username']
user = User.objects.create_user(
username=request.DATA['username'],
password=request.DATA['password'],
email=request.DATA['email'],
first_name=request.DATA['first_name']
)
user.department = request.DATA['department']
user.user_type = request.DATA['user_type']
user.save()
return Response(data='{"detail":"created"}')
A:
If all users are going to have these types I'd just suggest extending with a custom user model for less serializer related headaches (especially if you want to be able to just POST/PUT to /user/ to change these fields and don't want to go through the exercise of extending your serializers) https://docs.djangoproject.com/en/1.8/topics/auth/customizing/#specifying-a-custom-user-model
But if it's not working on your script either... maybe your signal isn't set up right? I don't see any post_save signal in your original code
|
[
"superuser",
"0001163552.txt"
] | Q:
How to do a clean install of windows 10 using cmd?
My HD is starting to get corrupted, and some system files are gone. But Windows still works.
The problem is that now the usual method to reset Windows is not possible for me.
My taskbar, start menu, explorer and the new windows configurations menu are gone. So far I can start the task manager and from there I can access the execute function to open cmd and the old control panel.
I just bought an SSD to replace my broken HD but I want to do a clean install without losing my Windows license.
Can anyone help me?
A:
I have to make an assumption here. That assumption is that you have a free upgrade license. If this is not the case please comment and I can update the answer accordingly.
Since the Windows 10 Anniversary Update, your free license is associated with your Microsoft account. This is assuming you have linked your computer with your MS account. If you have, you will simply need to reinstall, then head to Settings > Update & Security > Activation, where you’ll see a “Troubleshoot” option if activation failed. Click that option and sign in with the Microsoft account you associated your license with. You’ll be able to tell Windows that you “changed hardware on this device recently” and select your PC from a list of devices associated with your Microsoft account.
I have personally used this process and it does work.
Please note that I referenced and used some information from this article on howtogeek.com
If you have not associated your PC with your MS account, then hopefully your system is okay enough to run start ms-settings from the command prompt to open the Settings window (omit start if you are using the Run dialog).
Happy installation.
|
[
"stackoverflow",
"0045932604.txt"
] | Q:
Error: Gradle DSL method not found: 'google()'
I'm trying to add an external library to my existing project. I created a libs folder and added my MaterialDrawer library in the root directory. Here is my settings.gradle file:
include ':app'
include 'libs:MaterialDrawer'
But the Gradle sync fails and I get the following error:
Error:Gradle DSL method not found: 'google()'
I couldn't find any solution in SO regarding my problem. Anyone would be kind enough to help?
Here is the build.gradle (Project):
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.3.1'
classpath 'com.google.gms:google-services:3.0.0'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
jcenter()
maven { url "https://maven.google.com" }
}
}
Here is the build.gradle (app):
android {
compileSdkVersion 25
buildToolsVersion '25.0.0'
defaultConfig {
applicationId "com.myapp"
minSdkVersion 15
targetSdkVersion 25
versionCode 38
versionName "2.1.8"
generatedDensities = []
}
dexOptions {
javaMaxHeapSize "4g"
}
aaptOptions {
additionalParameters "--no-version-vectors"
}
buildTypes {
release {
shrinkResources true
minifyEnabled true
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
tasks.whenTaskAdded { task ->
if (task.name.equals("lint")) {
task.enabled = false
}
}
repositories {
mavenCentral()
maven { url "https://jitpack.io"}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile project(":libs:MaterialDrawer")
}
apply plugin: 'com.google.gms.google-services'
A:
Method google() was added in Gradle 4.0.
You should use maven { url 'https://maven.google.com' } on older versions.
Also remove the repositories section from your app build.gradle (you may merge it with the root build.gradle).
Read how to add a library to build.gradle here:
compile('com.mikepenz:materialdrawer:5.9.5@aar') {
transitive = true
}
|
[
"stackoverflow",
"0042254362.txt"
] | Q:
InfluxDB installation issue on Windows
In reference to
How to install InfluxDB in Windows
I've followed all the steps up to:
go get -u -f ./...
But I am facing the issue below:
# github.com/influxdata/influxdb/services/precreator
services\precreator\service.go:32: undefined: zap.NullEncoder
services\precreator\service.go:32: cannot use zap.New(zap.NullEncoder()) (type *zap.Logger) as type zap.Logger in field value
services\precreator\service.go:40: cannot use log.With(zap.String("service", "shard-precreation")) (type *zap.Logger) as type zap.Logger in assignment
# github.com/influxdata/influxdb/services/admin
services\admin\service.go:36: undefined: zap.NullEncoder
services\admin\service.go:36: cannot use zap.New(zap.NullEncoder()) (type *zap.Logger) as type zap.Logger in field value
services\admin\service.go:85: cannot use log.With(zap.String("service", "admin")) (type *zap.Logger) as type zap.Logger in assignment
# github.com/influxdata/influxdb/influxql
influxql\query_executor.go:184: undefined: zap.NullEncoder
influxql\query_executor.go:184: cannot use zap.New(zap.NullEncoder()) (type *zap.Logger) as type zap.Logger in field value
influxql\query_executor.go:219: cannot use log.With(zap.String("service", "query")) (type *zap.Logger) as type zap.Logger in assignment
influxql\task_manager.go:45: undefined: zap.NullEncoder
influxql\task_manager.go:45: cannot use zap.New(zap.NullEncoder()) (type *zap.Logger) as type zap.Logger in field value
How can I fix this?
A:
Solution
Refer to https://github.com/influxdata/influxdb/issues/8016
Go version - 1.7.5
Git version - 2.11.1
Hg version - 3.7.1
cd c:\go
mkdir projects
set "GOPATH=C:\Go\projects"
cd %gopath%
git config --global http.proxy http://user:enc_pw@IP:port
set https_proxy=https://user:enc_pw@IP:port
go get github.com/sparrc/gdm
go get github.com/influxdata/influxdb
cd src\github.com\influxdata\influxdb
go get -v -u -f ./...
C:\Go\projects\bin\gdm restore
go install ./...
go build ./...
This should help to effectively solve all the problems, and you may continue with the original link.
|
[
"stackoverflow",
"0004853542.txt"
] | Q:
Boost/std bind how to solve such errors? (binding function from one class to another class)
So I am trying to create a simple graph connecting one class to another...
#include "IGraphElement.h"
#include <boost/bind.hpp>
class simpleRendererGraphElement : public IGraphElementBase, public simpleRendererLibAPI
{
public:
IGraphElement<ExtendedCharPtr>* charGenerator;
// we owerrite init
void Init(IGraphElement<ExtendedCharPtr>* CharGenerator, int sleepTime)
{
charGenerator = CharGenerator;
charGenerator->Add(boost::bind(&simpleRendererGraphElement::renderCastedData, this, std::placeholders::_1)); // line (**)
SetSleepTime(sleepTime);
}
void renderCastedData(ExtendedCharPtr data) // our event system receives functions declared like void FuncCharPtr(char*, int) ;
{
DWCTU();
renderChar(data.data);
}
};
But it gives me C3083 and C2039 on line (**)... I use VS 2008, so I cannot use std::bind... how do I solve this issue?
BTW #include "IGraphElement.h" looks like
#include "IGraphElementBase.h"
// parts of c++0x std
#include <boost/bind.hpp>
#include <boost/function.hpp>
#ifndef _IGraphElement_h_
#define _IGraphElement_h_
using namespace std ;
template <typename DataType >
class IGraphElement : public IGraphElementBase{
typedef boost::function<void(DataType)> Function;
typedef std::vector<Function> FunctionSequence;
typedef typename FunctionSequence::iterator FunctionIterator;
private:
DataType dataElement;
FunctionSequence funcs;
public:
void InitGet(DataType DataElement)
{
dataElement = DataElement;
}
// Function for adding subscribers functions
// use something like std::bind(&currentClassName::FunctionToAdd, this, std::placeholders::_1) to add function to vector
void Add(Function f)
{
funcs.push_back(f);
}
};
#endif // _IGraphElement_h_
A:
If you are using boost::bind you need to use boost::placeholders. You can't mix-and-match.
(Visual C++ 2008 doesn't even include std::placeholders, though; the TR1 service pack includes std::tr1::placeholders, but the C++0x std::placeholders wasn't introduced until Visual C++ 2010.)
|
[
"tex.stackexchange",
"0000478194.txt"
] | Q:
Using \StrSubstitute in a macro definition
I would like to create a macro that typesets the argument using \emph and also creates a label with that name. My names contain underscores so I tried to use \StrSubstitute from the xstring package:
\documentclass{article}
\usepackage{xstring}
\newcommand{\tactic}[1]{
\label{\StrSubstitute{#1}{\_}{}}
\emph{#1}
}
\begin{document}
\tactic{EXISTS_TAC}
\end{document}
but I get
! Use of \@xs@StrSubstitute@@ doesn't match its definition.
\kernel@ifnextchar ...d@d =#1\def \reserved@a {#2}
\def \reserved@b {#3}\futu... l.12
\tactic{EXISTS_TAC}
How can I fix this?
A:
You actually have the reverse problem: an input such as \emph{EXISTS_TAC} will raise an error, whereas \label{EXISTS_TAC} is completely safe.
\documentclass{article}
\usepackage{xstring}
\newcommand{\tactic}[1]{%
\emph{\noexpandarg\StrSubstitute{#1}{_}{\_}}\label{#1}%
}
\begin{document}
\tactic{EXISTS_TAC}
It was on page~\pageref{EXISTS_TAC}
\end{document}
|
[
"stackoverflow",
"0022546662.txt"
] | Q:
how to get one row data value into two row data value using SQL server?
Could you help me?
select emp_id [Emp ID], start_time timing, last_time timing
from data
where (emp_id=5500 and date_id='3/18/2014')
Actual Output is,
Emp ID timing timing
5500 03/18/2014 18:30:08 03/18/2014 19:23:09
I need the following output
Emp ID timing
5500 03/18/2014 18:30:08
5500 03/18/2014 19:23:09
How do I get this output? Please help me, I don't know how to get this data. Thanks in advance!!!
A:
You could just union two selects:
select emp_id [Emp ID], start_time timing
from data
where (emp_id=5500 and date_id='3/18/2014')
union all
select emp_id [Emp ID], last_time timing
from data
where (emp_id=5500 and date_id='3/18/2014')
|
[
"math.stackexchange",
"0003156481.txt"
] | Q:
After multiplying a positive definite matrix several times to 'a vector A', still less than 90 degree between the 'vector A' and the 'mapped vector'?
My question
Would $\theta$ still be less than 90 degrees in $v^T M^k v = \|v\| \cdot \|M^k v\| \cdot \cos\theta$, if the matrix $M$ is positive definite?
Background Information
Let's suppose that $v$ (the original vector) is non-zero and $M$ is a positive definite matrix. I multiply $M$ some $k$ times into the vector $v$ (giving the mapped vector $M^k v$) and then take the inner product between those two vectors. The mathematical equation is as follows:
$$v^T M^k v = \|v\| \cdot \|M^k v\| \cdot \cos\theta$$
$v^T M v > 0$: This is a sure thing because $M$ is a positive definite matrix. As such, the angle $\theta$ between $v$ (the original vector) and $Mv$ (the mapped vector) is less than 90 degrees, since the inner product should be positive.
I am curious to know if the $\theta$ in the equation below is also between 0 and 90 degrees.
$$v^T M^k v = \|v\| \cdot \|M^k v\| \cdot \cos\theta$$
I got motivated to ask this question from the Towards Data Science article 'What is a Positive Definite Matrix?'. The quote is as follows:
Wouldn’t it be nice in an abstract sense… if you could multiply some matrices multiple times and they won’t change the sign of the vectors? If you multiply positive numbers to other positive numbers, it doesn’t change its sign. I think it’s a neat property for a matrix to have.
A:
If $M$ is a positive definite matrix, then so is $M^k$, for all integers $k$ (note that negative powers of $M$ also exist since $|M| \ne 0$).
This is easy to see. A matrix $A$ is positive definite if and only if all its eigenvalues are positive [additionally, $A$ may also be required to be symmetric, in the more common definition]. If $\lambda_1, \ldots, \lambda_n$ are all the eigenvalues of a matrix $M$, then eigenvalues of $M^k$ are exactly $\lambda_1^k, \ldots, \lambda_n^k$ (this also holds for negative $k$ if $M$ is invertible, which is true if every $\lambda_i$ is non-zero). Thus, if $M$ is positive definite, each $\lambda_i > 0$, which implies $\lambda_i^k > 0$ as well, and therefore $M^k$ is also positive definite [additionally, if $M$ is symmetric, so is $M^k$].
Thus, $v^T M^k v > 0$ for all $v \ne 0$, which proves (as shown in the question itself) that the angle between $v$ and $M^k v$ is acute.
We can also see this geometrically. Now I will assume that $M$ is indeed a (real) symmetric $n \times n$ positive definite matrix. Being real and symmetric guarantees that it is diagonalisable, or equivalently (what is important for us), that it has a set of $n$ eigenvectors that form a basis for $\mathbb R^n$. Indeed, there is an orthonormal basis of $\mathbb R^n$ whose elements are eigenvectors of $M$. Let this orthonormal basis be $B = \{x_1, \ldots, x_n\}$, and let $\lambda_i$ be the eigenvalue corresponding to the eigenvector $x_i$, $i = 1, \ldots, n$. Thus, $M x_i = \lambda_i x_i$.
Thus, given any vector $v \in \mathbb R^n$, we can decompose it along the basis vectors as, say, $$v = \alpha_1 x_1 + \cdots + \alpha_n x_n.$$
Now, if $M$ is applied to $v$, each component in the above representation gets scaled by the corresponding eigenvalue (because each component is an eigenvector). That is,
\begin{align*}
Mv &= M(\alpha_1 x_1 + \cdots + \alpha_n x_n)\\
&= \alpha_1 (M x_1) + \cdots + \alpha_n (M x_n)\\
&= \alpha_1 \lambda_1 x_1 + \cdots + \alpha_n \lambda_n x_n\\
&= \lambda_1 (\alpha_1 x_1) + \cdots + \lambda_n (\alpha_n x_n).
\end{align*}
Since each $\lambda_i$ is positive, all the components get scaled in their current direction (up, down, or not at all, according to the eigenvalue being greater than, less than, or equal to $1$). This makes it obvious that the direction of none of the components is reversed (and therefore the direction of the whole vector is also not reversed). Furthermore, since no eigenvalue is zero, no "projection" occurs. Thus, $Mv$ cannot be orthogonal to $v$. Indeed, the angle between $Mv$ and $v$ will be acute.
Consider the orthonormal basis $B$ consisting of eigenvectors of $M$. This orthonormal system defines its own $2^n$ orthants in $\mathbb R^n$ (not the standard orthants). The vector $v$ lies in one of these, or possibly between some of these (if its components along some of the eigenvectors are zero). If it lies between some orthants, then we may simply consider the lower dimensional subspace spanned by its non-zero components and all the arguments below will hold in this subspace.
Consider the example shown below in $\mathbb R^2$. The light blue vector is $v$ and the shaded region is the orthant containing it in the coordinate system defined by the red axes defined by the two eigenvectors of $M$. The components of $v$ along the axes are also shown in light blue.
Each time $M$ is applied (to the result of the previous application), each component gets scaled by the corresponding eigenvalue. The diagram shows $Mv$ and $M^2 v$ and their respective components (the darker blue vectors). Thus, higher and higher powers of $M$ applied to $v$ produce vectors in which the components of $v$ along the eigenvectors corresponding to the highest eigenvalues have been scaled abnormally high (assuming these eigenvalues are greater than $1$). On the other hand, the components corresponding to eigenvalues less than $1$, if any, will get scaled down, closer and closer to zero (there are none in the example shown).
However, since the scaling never reverses any component, all the vectors from the successive applications remain in the same orthant as the original vector. Any two vectors strictly inside one orthant have an acute angle between them.
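A small numerical sketch of this in pure Python (the particular matrix [[2, 1], [1, 2]] and starting vector are my own choices, not from the text): repeated application of a symmetric positive definite matrix never flips the sign of any eigen-component, so every iterate stays in $v$'s orthant and keeps a positive inner product with $v$.

```python
import math

# Example of my own: M = [[2, 1], [1, 2]] is symmetric positive definite,
# with orthonormal eigenvectors (1,1)/sqrt(2) (eigenvalue 3) and
# (1,-1)/sqrt(2) (eigenvalue 1).

def mat_vec(v):
    return (2 * v[0] + v[1], v[0] + 2 * v[1])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def eigen_components(v):
    e1 = (1 / math.sqrt(2), 1 / math.sqrt(2))
    e2 = (1 / math.sqrt(2), -1 / math.sqrt(2))
    return (dot(v, e1), dot(v, e2))

v = (0.3, -1.7)  # arbitrary starting vector
orthant = tuple(math.copysign(1, c) for c in eigen_components(v))

w = v
for _ in range(5):
    w = mat_vec(w)
    # the angle between v and M^k v stays acute ...
    assert dot(v, w) > 0
    # ... and every eigen-component keeps its sign: same orthant
    assert tuple(math.copysign(1, c) for c in eigen_components(w)) == orthant

print("all iterates stayed in the original orthant")
```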
|
[
"stackoverflow",
"0061691129.txt"
] | Q:
Send data to Spring Batch Item Reader (or Tasklet)
I have the following requirement:
An endpoint http://localhost:8080/myapp/jobExecution/myJobName/execute which receives a CSV and uses univocity to apply some validations and generate a list of some POJO.
Send that list to a Spring Batch Job for some processing.
Multiple users could do this.
I want to know if I can achieve this with Spring Batch.
I was thinking of using a queue: put the data in it and execute a job that pulls objects from that queue. But if another person hits the endpoint while another job is executing, how can I be sure Spring Batch knows which item belongs to a certain execution?
Thanks in advance.
A:
You can use a queue, or you can store the list of values generated by the validation step in the job execution context. Each job execution gets its own execution context, so data stored there is not shared between concurrent executions.
Below is a snippet to store the list to a job context and read the list using an ItemReader.
The snippet below implements StepExecutionListener in a tasklet step to put the list that was constructed:
@Override
public ExitStatus afterStep(StepExecution stepExecution) {
    // tenantNames is a List<String> which was constructed as an output of an evaluation logic
    stepExecution.getJobExecution().getExecutionContext().put("listOfTenants", tenantNames);
    return ExitStatus.COMPLETED;
}
Now "listOfTenants" is read as part of a step that has a reader (synchronized so only one thread reads at a time), a processor and a writer. You can also store it in a queue and fetch it in a reader. Snippet for reference:
public class ReaderStep implements ItemReader<String>, StepExecutionListener {

    private List<String> tenantNames;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        try {
            tenantNames = (List<String>) stepExecution.getJobExecution().getExecutionContext()
                    .get("listOfTenants");
            logger.debug("Successfully fetched the tenant list from the context");
        } catch (Exception e) {
            // Exception block
        }
    }

    @Override
    public synchronized String read() throws Exception {
        String tenantName = null;
        if (tenantNames.size() > 0) {
            tenantName = tenantNames.get(0);
            tenantNames.remove(0);
            return tenantName;
        }
        logger.info("Completed reading all tenant names");
        return null;
    }

    // Rest of the overridden methods of this class..
}
|
[
"stackoverflow",
"0012060224.txt"
] | Q:
Unable to resolve memory leak
I am unable to resolve a memory leak in my little program. Some of the code was originally created in Java, so I was "converting" it to C++ (some of those things might seem strange, so if you have a better solution, please let me know - I'm quite new to OOP in C++).
My intention is to create a random heightmap generator.
There are 2 memory leaks (found with Visual Leak Detector):
The first one gets triggered here:
-> Mountain* mount = new Mountain(size, Utils::powerOf2Log2(size) - 6, 0.5f, seed);
ChannelClass* height = mount->toChannel();
Because of this in the "Mountain" class constructor:
channel = new ChannelClass(size, size);
I was trying to use a shutdown method like so:
mount->ShutDown();
delete mount;
mount = 0;
With Shutdown() defined as such:
if (channel) {
    channel->ShutDown();
    delete channel;
    channel = 0;
}
The ShutDown() method of "ChannelClass" deletes a float array. My initial thought was that maybe "ChannelClass* height = mount->toChannel()" is causing problems there.
If you need more code please let me know! Thanks in advance for any one willing to help!
A:
OK, so without more code this is going to be pretty general. These are guidelines (not rules) with the most preferred first.
First, a quick note on C++11: if you don't have it, either replace std::unique_ptr below with std::auto_ptr (although it's deprecated for a reason, so careful with that), or use boost::scoped_ptr instead.
1. Don't use new
If you need to create a (single) mountain and don't need to keep it alive outside the scope where it's declared, just use it as a regular variable with automatic scope:
void automatic_scope(int size, double seed)
{
    Mountain hill(size, Utils::powerOf2Log2(size) - 6, 0.5f, seed);
    // ... mountainous operations happen here ...
} // hill is destroyed here - is that ok for you?
Similarly, if a mountain owns a single ChannelClass, which ought to live exactly as long as the mountain which owns it, just do:
class Mountain
{
    ChannelClass channel;
public:
    Mountain(int size, int powerthing, double something, double seed)
        : channel(size, size) // initialize other members here
    {
        // any more initialization
    }
    ChannelClass& toChannel() { return channel; }
};
Now the ChannelClass will live exactly as long as the Mountain, everything is destroyed automatically, and no explicit shutdown is needed.
2. Don't use new[]
Similarly, if you need several mountains with only limited scope, just use
void automatic_scope_vector(int size, double seed)
{
    std::vector<Mountain> hills;
    hills.push_back(Mountain(size, Utils::powerOf2Log2(size) - 6, 0.5f, seed));
    // ... mountainous operations happen here ...
} // hills are all destroyed here
3. OK, use new after all
Obviously there are valid reasons for using new: one is mentioned already (you need to keep your mountains around longer than the block where you create them).
The other is if you need runtime polymorphism, for example if you have multiple subclasses of Mountain or ChannelClass, but you want to deal in the base classes.
We can illustrate both with a polymorphic factory function:
class Molehill: public Mountain { ... };
class Volcano: public Mountain { ... };
std::unique_ptr<Mountain> make_mountain(int size, double seed, bool is_molehill)
{
    std::unique_ptr<Mountain> result;
    if (is_molehill)
        result.reset(new Molehill(size, size/2, 0.01f, seed));
    else
        result.reset(new Volcano(size, size*2, 0.5f, seed));
    return result;
}

void automatic_scope_polymorphic(int size, double seed, bool is_molehill)
{
    std::unique_ptr<Mountain> hill = make_mountain(size, seed, is_molehill);
    // ... polymorphic mountainous operations happen here ...
} // hill is destroyed here unless we gave the unique_ptr to someone else
Similarly, if the mountain's ChannelClass needs to be created dynamically, store that in a unique_ptr.
It may also sometimes be helpful where you'd otherwise need to copy objects to pass them around, copying is very expensive, and you can't rely on (or don't yet have) RVO or move semantics. This one's an optimisation though, so don't worry about it unless profiling shows it's a problem.
Philosophy
These C++ idioms are all based on deterministic destruction, and the goal is to avoid writing explicit cleanup code at all.
Delegating memory management to containers (like std::vector) and smart pointers (like std::unique_ptr) avoids the memory leaks that Java tackles with garbage collection. However, it generalises powerfully to RAII where similar automatically-scoped guard objects can automate management of all resources, not just memory. For example, std::lock_guard makes sure mutex locks are correctly released even if a function has multiple return paths, may throw exceptions, etc.
If you do need to write explicit cleanup code: don't write custom shut-down methods you have to call, just put it in the destructor. If possible, push this into low-level guard objects too.
|
[
"stackoverflow",
"0041924226.txt"
] | Q:
Code Climate - Too Complex Error
I'm using Code Climate on one of my projects, and I'm getting an error because my code is "too complex". I'm not sure how to make the code it's calling out less complex. Here it is:
Method:
def apply_json
  {
    total_ticket_count: payment_details.tickets.count,
    subtotal: payment_details.subtotal.to_f,
    discount: payment_details.discount.to_f,
    fees: payment_details.fees_total.to_f,
    total: payment_details.total_in_dollars.to_f,
    coupon: {
      amount: payment_details.coupon.amount,
      type: payment_details.coupon.coupon_type,
      name: payment_details.coupon.name,
      valid: payment_details.coupon.valid_coupon?,
    }
  }
end
It's just JSON that I tucked away in a model. Everything on my branch is great except for this. I'm not sure what to do. Any ideas on how I can make it less complex?
A:
I wouldn't care too much if Code Climate thinks something is too complex but it is actually easy to understand. Code Climate should help you write better, easier-to-read code, but it doesn't provide hard rules.
If you really want to change something, you might want to move the generation of the coupon sub hash to the Coupon model, because it only depends on values provided by the coupon association:
def apply_json
  {
    total_ticket_count: payment_details.tickets.count,
    subtotal: payment_details.subtotal.to_f,
    discount: payment_details.discount.to_f,
    fees: payment_details.fees_total.to_f,
    total: payment_details.total_in_dollars.to_f,
    coupon: payment_details.coupon.as_json
  }
end

# in coupon.rb
def as_json
  {
    amount: amount,
    type: coupon_type,
    name: name,
    valid: valid_coupon?
  }
end
A similar refactoring could be done with payment_details, but it is not clear where this attribute comes from and whether it is an associated model.
A:
Please just ignore the complexity warnings.
They are misguided.
These warnings are based on fake science.
Cyclomatic complexity has been proposed in 1976 in an academic journal and has alas been adopted by tool builders because it is easy to implement.
But that original research is flawed.
The original paper proposes a simple algorithm to compute complexity for Fortran code but does not give any evidence that the computed number actually correlates to readability and understandability of code. Nada, niente, zero, zilch.
Here is their abstract
This paper describes a graph-theoretic complexity measure and
illustrates how it can be used to manage and control program
complexity. The paper first explains how the graph-theory concepts
apply and gives an intuitive explanation of the graph concepts in
programming terms. The control graphs of several actual Fortran
programs are then presented to illustrate the correlation between
intuitive complexity and the graph-theoretic complexity. Several
properties of the graph-theoretic complexity are then proved which
show, for example, that complexity is independent of physical size
(adding or subtracting functional statements leaves complexity
unchanged) and complexity depends only on the decision structure of a
program.
The issue of using non structured control flow is also
discussed. A characterization of non-structured control graphs is given
and a method of measuring the "structuredness" of a program is
developed. The relationship between structure and reducibility is
illustrated with several examples.
The last section of this paper
deals with a testing methodology used in conjunction with the
complexity measure; a testing strategy is defined that dictates that a
program can either admit of a certain minimal testing level or the
program can be structurally reduced
Source http://www.literateprogramming.com/mccabe.pdf
As you can see, only anecdotal evidence is given "to illustrate the correlation between intuitive complexity and the graph-theoretic complexity", and the only proof is that code can be rewritten to have a lower complexity number as defined by this metric. Which is a pretty nonsensical proof for a complexity metric, and very common for the quality of research from that time. This paper would not be publishable by today's standards.
The authors of the paper did no user research, and their algorithm is not grounded in any actual evidence. No research since has been able to prove a link between cyclomatic complexity and code comprehension. Not to mention that this complexity metric was proposed for Fortran rather than modern high-level languages.
The best way to ensure code comprehension is code review. Just simply ask another person to read your code and fix whatever they don't understand.
So just turn these warning off.
|
[
"physics.stackexchange",
"0000176303.txt"
] | Q:
General Relativity visualization software
As I approach the study of GR, I was wondering whether there is software that allows quick visualization of custom metrics, curvature, and particle motion, even in the limited context of 2D space.
Playing with equations is fun, but it would be more fun if I could play with various parameters and see the outcome.
Obviously free would be better, but I am open to commercial programs.
A:
I've been looking at this Java archive
General Relativity (GR) Package written by Wolfgang Christian, Mario
Belloni, and Anne Cox
It includes a lot of simple programs about Newtonian mechanics, special relativity and general relativity, including the aforementioned GROrbits.
It doesn't permit custom metrics - you are limited to Schwarzschild (regular and rain co-ordinates) and Kerr black holes.
A:
For particle/light motion in 2D space, my nomination would be GROrbits
It's free and requires a JVM to run, there is also a web start version for the brave ;)
Sorry but I've never found anything aimed at visualizing metrics or curvature (apart from plotting programs of course).
|
[
"math.stackexchange",
"0002941824.txt"
] | Q:
Hausdorff measure on non separable spaces
In his book Geometry of Sets and Measures in Euclidean Spaces, Pertti Mattila defines the Hausdorff measures via the Carathéodory's construction (chap.4). My doubt is that the Carathéodory's construction is done on a general metric space, while the definition of Hausdorff measure is given starting from a separable space. I personally don't see where separability comes into play. Is it necessary for some reason or we could just keep reasoning on a general metric space?
A:
In your estimates of $s$-dimensional Hausdorff measure of a set $E$, you have to do this: For any $\epsilon > 0$, choose a countable cover $\{U_k\}$ of $E$ with $\text{diam}\; U_k < \epsilon$ for all $k$. Then your estimate is
$$
\sum_k \left(\text{diam}\; U_k\right)^s
\tag{$*$}
$$
You take the infimum over all covers. Then the limit as $\epsilon \to 0$. This is the Hausdorff measure $\mathcal H^s(E)$.
Now suppose $E$ is non-separable. Then for small enough $\epsilon > 0$ there is no countable cover by sets of diameter ${}\lt \epsilon$. So the infimum of the numbers ($*$) is taken over the empty set, which gives $+\infty$, and hence the limit as $\epsilon \to 0$ is also $+\infty$.
Result. For any non-separable set $E$ and for any $s \in [0,+\infty)$, we have Hausdorff measure $\mathcal H^s(E) = +\infty$. And therefore the Hausdorff dimension of $E$ is $+\infty$.
So, if you like, you can define Hausdorff measures for non-separable sets. But it turns out to be uninteresting.
|
[
"math.stackexchange",
"0001375202.txt"
] | Q:
Proving the Fibonacci sum $\sum_{n=1}^{\infty}\left(\frac{F_{n+2}}{F_{n+1}}-\frac{F_{n+3}}{F_{n+2}}\right) = \frac{1}{\phi^2}$ and its friends
In this article, (eq.92) has,
$$\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{F_{n+1}F_{n+2}} = \frac{1}{\phi^2}\tag1$$
and I wondered if this could be generalized to the tribonacci numbers. It seems it can be. Given the Fibonacci, tribonacci, tetranacci (in general, the Fibonacci k-step numbers) starting with $n=1$,
$$F_n = 1,1,2,3,5,8\dots$$
$$T_n = 1, 1, 2, 4, 7, 13, 24,\dots$$
$$U_n = 1, 1, 2, 4, 8, 15, 29, \dots$$
and their limiting ratios, $x_k$, the root $x_k \to 2$ of,
$$(2-x)x^k = 1$$
with Fibonacci constant $x_2$, tribonacci constant $x_3$, etc, it can be empirically observed that,
$$\sum_{n=1}^{\infty}\left(\frac{F_{n+2}}{F_{n+1}}-\frac{F_{n+3}}{F_{n+2}}\right) = \frac{1}{x_2^2}\tag2$$
$$\sum_{n=1}^{\infty}\left(\frac{T_{n+2}}{T_{n+1}}-\frac{T_{n+3}}{T_{n+2}}\right) = \frac{1}{x_3^3}$$
$$\sum_{n=1}^{\infty}\left(\frac{U_{n+2}}{U_{n+1}}-\frac{U_{n+3}}{U_{n+2}}\right) = \frac{1}{x_4^4}$$
and so on. Q: How do we rigorously prove the observation indeed holds for all integer $k\geq2$?
Edit:
To address a comment that disappeared, to transform $(1)$ to $(2)$, we use a special case of Catalan's identity,
$$F_{n+2}^2-F_{n+1}F_{n+3} = (-1)^{n+1}$$
so,
$$\begin{aligned}
\frac{(-1)^{n+1}}{F_{n+1}F_{n+2}}
&= \frac{F_{n+2}^2-F_{n+1}F_{n+3}}{F_{n+1}F_{n+2}}\\
&= \frac{F_{n+2}}{F_{n+1}} - \frac{F_{n+3}}{F_{n+2}}
\end{aligned}$$
hence the alternating series $(1)$ is equal to $(2)$.
A:
Let's start off with the case of the Fibonacci sequence. We have
$$\sum_{n = 1}^k \left( \frac{F_{n+2}}{F_{n+1}} -\frac{F_{n+3}}{F_{n+2}} \right) = \left(\frac{F_3}{F_2} - \frac{F_4}{F_3}\right) + \left(\frac{F_4}{F_3} - \frac{F_5}{F_4}\right) + \cdots + \left(\frac{F_{k+2}}{F_{k+1}}- \frac{F_{k+3}}{F_{k+2}}\right)\\
= \frac{F_3}{F_2} - \frac{F_{k+3}}{F_{k+2}} = 2 - \frac{F_{k+3}}{F_{k+2}}.$$
Taking the limit as $k$ approaches infinity, we get
$$2 - \lim_{k\to\infty} \frac{F_{k+3}}{F_{k+2}} = 2 - x = \frac{1}{x^2}$$
If you want more rigor, you can easily transform this into an induction argument.
Since $(2 - x)(x^2) = 1 \implies 2 - x = \frac{1}{x^2}$, the result follows. The proof generalizes really easily.
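The same telescoping works for every $k$: the partial sum collapses to $2$ minus the last ratio, and the limit $2 - x_k$ equals $1/x_k^k$ by the defining equation $(2 - x_k)x_k^k = 1$. A quick numerical sanity check (my own sketch, not part of the original answer), finding $x_k$ by bisection:

```python
def kstep(k, n):
    """First n terms of the Fibonacci k-step sequence 1, 1, 2, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(sum(seq[-k:]))
    return seq

def limit_ratio(k):
    """Root near 2 of (2 - x) * x**k = 1, found by bisection on [1.5, 2]."""
    lo, hi = 1.5, 2.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if (2 - mid) * mid**k > 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for k in (2, 3, 4):
    s = kstep(k, 60)
    # partial sum of the telescoping series; 0-indexed list, so F_m = s[m-1]
    partial = sum(s[n + 1] / s[n] - s[n + 2] / s[n + 1] for n in range(1, 57))
    x = limit_ratio(k)
    print(k, partial, 1 / x**k)  # the last two columns agree closely
```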
|
[
"pt.stackoverflow",
"0000191000.txt"
] | Q:
Is there a way to space two pieces of text by centimeters within a single Excel cell?
As in the example figure below, I need to lay out two pieces of text in a single cell so that, from one cell to the next (in the same column), the right-hand texts are perfectly aligned (see column D).
In the upper part of the figure: an example of the result I get, with the page numbers misaligned (column D).
In the lower part: the same excerpt, but with the formulas applied.
Cell D2 holds the distance, in number of space characters, at which the second text should be aligned. Ideally this would be in "centimeters" or something similar, not spaces.
I know there are countless simple ways to do this, such as using two columns or a font whose characters all have the same width in pixels, but in my case I need to solve it this way: in a single cell, with any font one might want to apply.
This procedure must be added to a complex spreadsheet that dynamically builds a report linked to several other spreadsheets and features. The text handling is dynamic, that is, if I check or uncheck a topic, the whole report is adjusted automatically, the topic numbers change, and the pages on which they appear may change as well. For example, a topic "4. COSTS" that is on page 34 may, depending on what is done, become topic "6. COSTS" and appear on page 59; and there are countless items, with topics and subtopics. Preferably, the size of the cells and their columns should not be changed.
I tried a few ways to get this result and did some research, but without success.
If there is a way to do it, even a complex one, it should still be better and faster than studying and restructuring this spreadsheet.
Thanks in advance for any contributions or comments.
A:
The solution I present is not exactly what I wanted, but it solved the problem in a simple and direct way.
I added a TextBox (ActiveX control) at the end of, and on top of, each row of the first page that may contain text referencing a page number (with the same spacing between the controls, measured in number of rows in my case).
In the properties I made them transparent, aligned them all, and changed the font to the one active in Excel (since I believe there is no way to do this automatically, i.e. to detect the font of the linked cell). After that, I grouped everything so the whole block can easily be nudged sideways when needed.
In the LinkedCell property I pointed to the cell containing the text of the page number to be displayed.
With this, the rows that display a page number have their TextBoxes updated dynamically, while the others remain "empty" and show nothing when there is no page number.
See the example below:
In the first row I highlighted the TextBox that each row has at its end, and likewise in the last two rows, to show that they are all there; the other rows appear as they should for printing (the controls are there too). Because they are transparent and have no text, the last two rows actually appear "blank", as they should; I highlighted them here just to illustrate.
That's it!
|
[
"stackoverflow",
"0003851545.txt"
] | Q:
Java: Thread safety in class with synchronized methods
I read that the following class is not thread safe, since threads could read inconsistent data: there is a chance for a thread to read a scaled version of real and an unscaled version of imaginary. But I did not understand how.
I was under the impression that if a thread acquires the lock and is in the scale() method, no other thread can be in the getReal() or getImaginary() methods at the same time, so other threads cannot read 'half scaled' complex numbers. Is that not correct?
class Complex
{
    double real;
    double imaginary;

    synchronized void scale(double scaleFactor)
    {
        real = real * scaleFactor;
        imaginary = imaginary * scaleFactor;
    }

    synchronized double getReal()
    {
        return real;
    }

    synchronized double getImaginary()
    {
        return imaginary;
    }
}
A:
Consider the following scenario:
Thread A calls getReal()
Thread B calls scale()
Thread A calls getImaginary()
This way Thread A can indeed get inconsistent real and imaginary values.
The solution would be either to
create a common synchronized getter method to return both values at once, or
make the class immutable, as Vivien suggested.
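The interleaving can be made concrete without real threads. Below is a Python mock-up of mine (not the original Java) that mirrors the class's per-method locking: each call is individually atomic, yet the pair of values Thread A ends up with never existed together.

```python
import threading

class Complex:
    """Mirror of the Java class: every method locks, but only for its own duration."""
    def __init__(self, real, imaginary):
        self._lock = threading.Lock()
        self.real = real
        self.imaginary = imaginary

    def scale(self, factor):
        with self._lock:
            self.real *= factor
            self.imaginary *= factor

    def get_real(self):
        with self._lock:
            return self.real

    def get_imaginary(self):
        with self._lock:
            return self.imaginary

c = Complex(1.0, 2.0)
r = c.get_real()       # Thread A: getReal()
c.scale(10.0)          # Thread B: scale() runs between A's two reads
i = c.get_imaginary()  # Thread A: getImaginary()

# A observes (1.0, 20.0): neither the original (1.0, 2.0) nor the scaled (10.0, 20.0)
print(r, i)
```

A single synchronized getter returning both values (or an immutable class) closes exactly this window, because the two reads then happen under one lock acquisition.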
A:
Not really a direct answer, but in your case the best option is to make your class immutable: each instance of Complex can't be changed after initialisation.
In this case, your scale method creates and returns a new Complex object with the new values.
Note this is how all the JVM Number types work.
|
[
"serverfault",
"0000011250.txt"
] | Q:
Running Python scripts in linux
I'm trying to run Python scripts with a shebang on Ubuntu. When I create a python script
#! /usr/bin/env python
import sys
... and run it I get a shell error:
root@host:/home/user# ./test.py
: No such file or directory
How can I make it work?
Solution: Remove '\r's from line endings with dos2unix.
A:
I assume the script is executable? Also, check for carriage returns -- maybe windows got its dirty little hands on it? You can check this with 'cat -vE test.py' and look for '\r'.
A:
You probably have windows line endings on your file. Please try running dos2unix on it.
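If dos2unix isn't installed, the same fix can be done with a few lines of Python. This is a self-contained sketch: it first creates a test.py with Windows (CRLF) endings, as in the question, then rewrites it in place. (The shell error happens because the kernel looks for an interpreter literally named "python" followed by a carriage return.)

```python
# Create a script with Windows (CRLF) line endings, then fix it in place.
path = "test.py"
with open(path, "wb") as f:
    f.write(b"#! /usr/bin/env python\r\nimport sys\r\n")

# Rewrite with Unix (LF) endings, removing every stray carriage return.
with open(path, "rb") as f:
    data = f.read()
with open(path, "wb") as f:
    f.write(data.replace(b"\r\n", b"\n").replace(b"\r", b"\n"))

with open(path, "rb") as f:
    print(f.read())  # b'#! /usr/bin/env python\nimport sys\n' -- no \r left
```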
|
[
"stackoverflow",
"0023049757.txt"
] | Q:
Connect to an MVC application from a winforms application
I have written a C# MVC4 internet application and have a question in relation to calling some of the ActionResult methods.
How can I call any of the ActionResult methods from a different application other than the MVC application?
What I am wanting to do is create a Winforms application, connect to the MVC application and then call some of the ActionResult methods.
Is this possible? How should I do this? What resources should I research into?
Thanks in advance
A:
It's not ideal to use MVC 4 as a restful host because it is designed to be rendered to HTML.
You will instead want to use Web API. It's designed to be consumed by clients.
You can abstract the logic from the MVC project to a shared project and re-use the functions for the Web API.
Here is a great article about writing a client to interact with Web API: http://www.asp.net/web-api/overview/web-api-clients/calling-a-web-api-from-a-net-client
|