Q:
rvest: follow different links with same tag
I'm doing a little project in R that involves scraping some football data from a website. Here's the link to one of the years of data:
http://www.sports-reference.com/cfb/years/2007-schedule.html.
As you can see, there is a "Date" column with the dates hyperlinked; each hyperlink takes you to the stats from that particular game, which is the data I would like to scrape. Unfortunately, a lot of games take place on the same dates, which means their hyperlinks are the same. So if I scrape the hyperlinks from the table (which I have done) and then do something like:
url = 'http://www.sports-reference.com/cfb/years/2007-schedule.html'
links = character vector with scraped date links
for (i in 1:length(links)) {
stats = html_session(url) %>%
follow_link(links[i]) %>%
html_nodes('whateverthisnodeis') %>%
html_table()
}
it will scrape from the first link corresponding to each date. For example, there were 11 games that took place on Aug 30, 2007, but if I put that date in the follow_link function, it grabs data from the first game (Boise St. Weber St.) every time. Is there any way I can specify that I want it to move down the table?
I have already found a workaround by finding out the formula for the urls to which the date hyperlinks take you, but it's a pretty convoluted process, so I thought I'd see if anyone knew how to do it this way.
A:
This is a complete example:
library(rvest)
library(dplyr)
library(pbapply)
# Get the main page
URL <- 'http://www.sports-reference.com/cfb/years/2007-schedule.html'
pg <- html(URL)
# Get the dates links
links <- html_attr(html_nodes(pg, xpath="//table/tbody/tr/td[3]/a"), "href")
# I'm only limiting to 10 since I really don't care about football
# enough to waste the bandwidth.
#
# You can just remove the [1:10] for your needs
# pblapply gives you a much-needed progress bar for free
scoring_games <- pblapply(links[1:10], function(x) {
game_pg <- html(sprintf("http://www.sports-reference.com%s", x))
scoring <- html_table(html_nodes(game_pg, xpath="//table[@id='passing']"), header=TRUE)[[1]]
colnames(scoring) <- scoring[1,]
filter(scoring[-1,], !Player %in% c("", "Player"))
})
# you can bind_rows them all together but you should
# probably add a column for the game then
bind_rows(scoring_games)
## Source: local data frame [27 x 11]
##
## Player School Cmp Att Pct Yds Y/A AY/A TD Int Rate
## (chr) (chr) (chr) (chr) (chr) (chr) (chr) (chr) (chr) (chr) (chr)
## 1 Taylor Tharp Boise State 14 19 73.7 184 9.7 10.7 1 0 172.4
## 2 Nick Lomax Boise State 1 5 20.0 5 1.0 1.0 0 0 28.4
## 3 Ricky Cookman Boise State 1 2 50.0 9 4.5 -18.0 0 1 -12.2
## 4 Ben Mauk Cincinnati 18 27 66.7 244 9.0 8.9 2 1 159.6
## 5 Tony Pike Cincinnati 6 9 66.7 57 6.3 8.6 1 0 156.5
## 6 Julian Edelman Kent State 17 26 65.4 161 6.2 3.5 1 2 114.7
## 7 Bret Meyer Iowa State 14 23 60.9 148 6.4 3.4 1 2 111.9
## 8 Matt Flynn Louisiana State 12 19 63.2 128 6.7 8.8 2 0 154.5
## 9 Ryan Perrilloux Louisiana State 2 3 66.7 21 7.0 13.7 1 0 235.5
## 10 Michael Henig Mississippi State 11 28 39.3 120 4.3 -5.4 0 6 32.4
## .. ... ... ... ... ... ... ... ... ... ... ...
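If you do want to keep track of which game each row came from, a minimal sketch of the idea mentioned in the comment above (assuming the scoring_games list and links vector from the code above, and a reasonably recent dplyr that supports the .id argument):
# name each element after the link it was scraped from
names(scoring_games) <- links[1:10]
# .id adds a "game" column holding those names
all_games <- bind_rows(scoring_games, .id = "game")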
| {
"pile_set_name": "StackExchange"
} |
Q:
Auto height for an HTML div with auto scrolling - div to take remaining height
I have a very simple but confusing problem. I have a few divs: the first div is fixed to the top right with height 100%, and it contains two more divs, div2 and div3, both of which should scroll.
The upper div (div2) varies in height from 100px to 200px, after which it should scroll, and the div with id div3 should take the remaining height and should scroll if its content grows.
I can get div2 working, but div3 does not take the remaining height.
my code is
<div style="width:200px;height:100%;position:fixed;top:0px;right:0px;background:red;overflow:auto;">
<div style="width:100%;min-height:100px;max-height:200px;background:blue;overflow:auto;float:left;">
ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>
</div>
<div style="width:100%;height:300px;background:yellow;float:left;overflow:auto;">
ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/><br/>
</div>
I need something that is browser friendly.
Any suggestions would be very helpful.
Here is a demo: http://www.reurl.in/f84acc961
https://jsfiddle.net/fy727tLL/
A:
Here is the updated fiddle:
Fiddle
<div style="width:200px;height:100%;position:fixed;top:0px;right:0px;background:red;overflow:auto;">
<div style="width:100%;height:20%;background:blue;overflow:auto;float:left;">
ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>
</div>
<div style="width:100%;height:80%;background:yellow;float:left;overflow:auto;">
ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>ggdhhd
<br/>
<br/>
</div>
</div>
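If you specifically need the second panel to take whatever height is left (rather than a fixed 20%/80% split), a minimal flexbox sketch of the same layout (my own example, not part of the original answer) would be:
<div style="width:200px;height:100%;position:fixed;top:0;right:0;background:red;display:flex;flex-direction:column;">
<!-- grows with its content up to 200px, then scrolls -->
<div style="min-height:100px;max-height:200px;background:blue;overflow:auto;flex:0 1 auto;">
ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>
</div>
<!-- takes the remaining height; min-height:0 lets it shrink so overflow:auto kicks in -->
<div style="background:yellow;overflow:auto;flex:1 1 auto;min-height:0;">
ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>ggdhhd<br/>
</div>
</div>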
| {
"pile_set_name": "StackExchange"
} |
Q:
Multiple CSS collapse/expand in same page
I have CSS collapse/expand code referred from https://codepen.io/peternguyen/pen/hICga/ . The problem with my code is that multiple instances won't work on the same page. I intend to use it on a WordPress page.
Have a look at my code. Thanks.
input {
display: none;
visibility: hidden;
}
label {
font-weight: bold;
font-size: 15px;
display: block;
color: #666;
text-decoration: underline;
}
label:hover {
color: #000;
}
#expand {
height: 0px;
overflow: hidden;
transition: height 0.5s;
color: #000;
}
section {
padding: 0 20px;
}
#toggle:checked ~ #expand {
height: auto;
}
<input id="toggle" type="checkbox">
<label for="toggle">Hidden Kitten</label>
<div id="expand">
<section>
<p>mew</p>
</section>
</div>
<input id="toggle" type="checkbox">
<label for="toggle">Hidden Kitten</label>
<div id="expand">
<section>
<p>mew</p>
</section>
</div>
A:
I made all id-specific parts independent (each checkbox gets its own id, and the styling hooks use classes) and changed ~ to +.
Working codepen
Code:
@import url(https://fonts.googleapis.com/css?family=Open+Sans:400,700);
body {
font-family: "Open Sans", Arial;
background: #CCC;
}
main {
background: #EEE;
width: 600px;
margin: 20px auto;
padding: 10px 0;
box-shadow: 0 3px 5px rgba(0, 0, 0, 0.3);
}
h2 {
text-align: center;
}
p {
font-size: 13px;
}
input {
display: none;
visibility: hidden;
}
label {
display: block;
padding: 0.5em;
text-align: center;
border-bottom: 1px solid #CCC;
color: #666;
}
label:hover {
color: #000;
}
label::before {
font-family: Consolas, monaco, monospace;
font-weight: bold;
font-size: 15px;
content: "+";
vertical-align: text-top;
display: inline-block;
width: 20px;
height: 20px;
margin-right: 3px;
background: radial-gradient(ellipse at center, #CCC 50%, transparent 50%);
}
.expand {
height: 0px;
overflow: hidden;
transition: height 0.5s;
background: url(http://placekitten.com/g/600/300);
color: #FFF;
}
section {
padding: 0 20px;
}
.toggle:checked+label+.expand {
height: 250px;
}
.toggle:checked+label::before {
content: "-";
}
<main>
<h2>CSS Expand/Collapse Section</h2>
<input id="toggle" type="checkbox" checked class="toggle">
<label for="toggle">Hidden Kitten</label>
<div class="expand">
<section>
<p>mew</p>
</section>
</div>
<section>
<h3>Other content</h3>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas porta non turpis faucibus lobortis. Curabitur non eros rutrum, gravida felis non, luctus velit. Ut commodo congue velit feugiat lobortis. Etiam nec dolor quis nulla bibendum blandit
vitae nec enim. Maecenas id dignissim erat. Aenean ac mi nec ante venenatis interdum quis vel lacus.
</p>
<p>Aliquam ligula est, aliquet et semper vitae, elementum eget dolor. In ut dui id leo tristique iaculis eget a dui. Vestibulum cursus, dolor sit amet lacinia feugiat, turpis odio auctor nisi, quis pretium dui elit at est. Pellentesque lacus risus, vulputate
sed gravida eleifend, accumsan ac ante. Donec accumsan, augue eu congue condimentum, erat magna luctus diam, adipiscing bibendum sem sem non elit.</p>
</section>
<input id="toggle2" type="checkbox" checked class="toggle">
<label for="toggle2">Hidden Kitten 2</label>
<section class="expand">
test
</section>
</main>
| {
"pile_set_name": "StackExchange"
} |
Q:
Uniqueness of eigenvector representation in a complete set of compatible observables
Sakurai states that if we have a complete, maximal set of compatible observables, say $A,B,C,\ldots$, then an eigenvector represented by $|a,b,c,\ldots\rangle$, where $a,b,c,\ldots$ are the respective eigenvalues, is unique. Why is that so? Why can't there be two eigenvectors with the same eigenvalues for each observable? Does maximality of the set have some role to play in it?
A:
Assume that you have a maximal set $A,B,C,\ldots$ and two states $\phi_1$ and $\phi_2$ with the same set of eigenvalues in that set. Then construct the operator $Z = |\phi_1\rangle\langle\phi_1|$. Convince yourself that it would distinguish between $\phi_1$ and $\phi_2$, and that it would commute with all of $A,B,C,\ldots$ --- i.e. your original set was not maximal.
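A quick sketch of why (my own elaboration, not from the original answer): if $A\phi_1 = a\phi_1$ with $A$ Hermitian, then for any state $\psi$,
$$AZ|\psi\rangle = A|\phi_1\rangle\langle\phi_1|\psi\rangle = a|\phi_1\rangle\langle\phi_1|\psi\rangle = |\phi_1\rangle\langle A\phi_1|\psi\rangle = |\phi_1\rangle\langle\phi_1|A|\psi\rangle = ZA|\psi\rangle,$$
so $[A,Z]=0$, and likewise for $B,C,\ldots$ Yet $Z$ distinguishes the two states, since $\langle\phi_1|Z|\phi_1\rangle = 1$ while $\langle\phi_2|Z|\phi_2\rangle = |\langle\phi_1|\phi_2\rangle|^2 < 1$ for normalized, non-proportional states.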
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I connect an iOS app to Google Cloud SQL?
I had been building my database using Cloud Firestore because this was the easiest to implement. However, the querying capabilities of Firestore are insufficient for what I want to build, mostly due to the fact it can't handle querying inequalities on multiple fields. I need a SQL database.
I have an instance of Google Cloud SQL set up. The integration is far harder than Firebase where you just need to add a Cocoapods Pod. From my research it looks like I need to set up a Cloud SQL proxy, although if there is a simpler way of connecting it, I'd be glad to hear about it.
Essentially, I need a way for a client on the iOS to read and write to a SQL database. Cloud SQL seemed like the best, most scalable option (though I'd be open to hearing about alternatives that are easy to implement).
A:
You probably don't want to configure your application to rely on connecting directly to an SQL database. Firestore is a highly scalable database that can handle thousands of connections - MySQL and Postgres do not scale as cleanly.
Instead, you should consider constructing a simple front end service that can be used to query the database and return formatted results. There are a variety of benefits to structuring this way, including being able to further optimize or distribute your queries. Google AppEngine and Google Cloud Functions can both be used to stand up such a service quickly, and both provide easy connection options to Cloud SQL.
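As a rough sketch of that approach (my own example, not from the original answer; the instance name, credentials, table and query are all placeholders), a Python HTTP Cloud Function can connect to Cloud SQL over the built-in unix socket and return JSON that the iOS app fetches over HTTPS:
# main.py for a hypothetical HTTP-triggered Cloud Function
import json
import pymysql  # declared in requirements.txt

def query_games(request):
    # /cloudsql/<INSTANCE_CONNECTION_NAME> is the socket Cloud Functions exposes
    conn = pymysql.connect(
        unix_socket='/cloudsql/my-project:us-central1:my-instance',  # placeholder
        user='app_user', password='***', db='app_db')
    try:
        with conn.cursor() as cur:
            # the kind of multi-field inequality query Firestore can't express
            cur.execute("SELECT id, name FROM games WHERE score > %s AND year < %s",
                        (request.args.get('score', 0), request.args.get('year', 2020)))
            rows = cur.fetchall()
    finally:
        conn.close()
    return (json.dumps(rows), 200, {'Content-Type': 'application/json'})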
| {
"pile_set_name": "StackExchange"
} |
Q:
Move all files from multiple subfolders into the parent folder
I would normally open the parent folder and search for * in order to select all of the files in the subfolders, but in this instance I have over 1,000,000 files to sort through, so Explorer often crashes when trying to copy that many files through the GUI. I am not sure how much more effective this will be through the command prompt or a batch file, but it is worth a try, I suppose.
What I need to do is make it so that
|parent
| |123
| | 123abc.png
| |456
| | 456def.png
| |789
| | 789ghi.png
becomes
|parent
| 123abc.png
| 456def.png
| 789ghi.png
Yes, my actual file structure has the first 3 characters of the file name given to the folder name, if that can help at all in sorting these.
A:
Use FOR /R at the command prompt:
[FOR /R] walks down the folder tree starting at [drive:]path, and executes the DO statement against each matching file.
First create a staging folder outside of the parent folder you're moving files from. This will avoid possible circular references.
In your case the command would look something like this:
FOR /R "C:\Source Folder" %i IN (*.png) DO MOVE "%i" "C:\Staging Folder"
If you want to put this into a batch file, change %i to %%i.
Note the double-quotes are important, don't miss any of them out. They ensure any filenames containing spaces are dealt with correctly.
Once the move is complete, you can rename/move the staging folder as required.
TIP: If you have hard drive space to burn and time on hand, you may want to play it safe and copy the files rather than moving them, just in case something goes wrong. Just change MOVE to COPY in the above command.
A:
This is some sample code:
:loop
for /d %%D in (%1\*) do (move "%%D\*" %1\ && rmdir "%%D")
SHIFT
set PARAMS=%1
if not %PARAMS%!==! goto loop
With this version you drag the folder from which you wish to remove the subfolders onto the batch file, and it will move all files from the subfolders into the parent folder. I use it for downloaded archive files that randomly may or may not have a subfolder. Mind you, it was made with a single subfolder in mind, as was specific to my case.
'SHIFT' moves to the next argument, for when you drag many folders onto the script at once.
| {
"pile_set_name": "StackExchange"
} |
Q:
React: To put simple logic in Container or Presentational component?
I have a container component which passes an array of objects down to a presentational component to output.
In the presentational component, I need to display the count of the objects that meet certain criteria. Is it best practice to perform the count in the container component and pass it down to the presentational component, or is it OK to do this count in the presentational component?
ie:
export class ResultsPage extends React.Component {
constructor(props){
super(props);
}
countSexyObjects(){
const matching = this.props.allObjects.filter((obj)=>{
return obj.sexy === true;
});
return matching.length
}
render(){
return (
<PresentationalComponent allObjects={this.props.allObjects}
numberOfSexyObjects={this.countSexyObjects()} />
);
}
}
let PresentationalComponent = (props) => {
return (
<div>
There are {props.numberOfSexyObjects} sexy objects
</div>
);
};
OR
export class ResultsPage extends React.Component {
constructor(props){
super(props);
}
render(){
return (
<PresentationalComponent allObjects={this.props.allObjects} />
);
}
}
let PresentationalComponent = (props) => {
const countSexyObjects = () => {
const matching = this.props.allObjects.filter((obj)=>{
return obj.sexy === true;
});
return matching.length
};
return (
<div>
There are {countSexyObjects()} sexy objects
</div>
);
};
A:
Ideally, state is considered something to minimize in React. I understand that React is built upon the concept of state, but less state is preferred, which means you should try to structure the code mostly with functions that are pure in nature.
IMHO your first example is more correct. ResultsPage is your container component (smart component) while the other one is dumb. A dumb component doesn't manage state and just takes care of how the UI looks; you can put all the HTML and Bootstrap logic in there.
The reason this pattern is good is that, let's say, you now want to fetch the matching criteria from an XHR call. Your code in the second case would be
export class ResultsPage extends React.Component {
constructor(props){
super(props);
}
getSexyMatcher() {
/* make ajax call here */
return results;
}
render(){
return (
<PresentationalComponent allObjects={this.props.allObjects} sexyMatcher={getSexyMatcher()}/>
);
}
}
let PresentationalComponent = (props) => {
const countSexyObjects = () => {
const matching = this.props.allObjects.filter((obj)=>{
return obj.sexy.match(props.sexyMatcher)
// return obj.sexy === true;
});
return matching.length
};
return (
<div>
There are {countSexyObjects()} sexy objects
</div>
);
};
Notice how you had to change two components for the same business logic? Much worse, what if someone else used that PresentationalComponent elsewhere in the codebase?
In the first case things are much simpler: you just have to add the ajax function in the smart component and pass the results down to the UI component.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to install supervisor in a docker container?
I need to use supervisord in a docker container.
I want to keep the size of the container as small as possible.
Supervisord can be installed either using apt-get or python-pip.
Which method is recommended? and what should be thinking process while making these kind of decisions?
P.S Need supervisor because of legacy code. Can't do without it.
Supervisord version is not important.
A:
It mostly depends on the version you want to install (if that is relevant to you). apt-get's version is usually behind pip's version.
Also, apt's version is tested and compatible with the other system dependencies. Installing with pip could cause some conflicts with other already-installed dependencies (most likely if your base OS is old).
If your goal is to keep the image size small, make sure you install supervisor without leaving any cache behind (i.e. delete the apt indices and the /var/cache directory) or unwanted files (i.e. remove unneeded packages, use apt's install --no-install-recommends, use pip's install --no-cache-dir), all in a single Dockerfile RUN statement.
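For illustration, a minimal Dockerfile sketch of both options, each in a single RUN layer (the base images are just examples):
# Option 1: apt, skipping recommends and removing the package indices afterwards
FROM debian:buster-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends supervisor \
 && rm -rf /var/lib/apt/lists/*

# Option 2: pip, without keeping a download cache
FROM python:3-slim
RUN pip install --no-cache-dir supervisor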
| {
"pile_set_name": "StackExchange"
} |
Q:
Any way to break if statement in PHP?
Is there any command in PHP to stop executing the current or parent if statement, the same as break or break(1) for a switch/loop? For example
$arr=array('a','b');
foreach($arr as $val)
{
break;
echo "test";
}
echo "finish";
In the above code PHP will not execute echo "test"; and will go straight to echo "finish";.
I need the same thing for an if:
$a="test";
if("test"==$a)
{
break;
echo "yes"; // I don't want this line or lines after to be executed, without using another if
}
echo "finish";
I want to break out of the if statement above and stop executing echo "yes"; or any such code that no longer needs to be executed. There may or may not be an additional condition. Is there a way to do this?
Update: Just 2 years after posting this question, I grew up, I learnt how code can be written in small chunks, why nested if's can be a code smell and how to avoid such problems in the first place by writing manageable, small functions.
A:
Don't worry about the other users' comments, I can understand you: SOMETIMES, when developing, these "fancy" things are required. If we can break out of an if, a lot of nested ifs won't be necessary, making the code much cleaner and more readable.
This sample code illustrates certain situations where a breakable if can be much more suitable than a lot of ugly nested ifs... if you haven't faced such a situation, that does not mean it doesn't exist.
Ugly code
if(process_x()) {
/* do a lot of other things */
if(process_y()) {
/* do a lot of other things */
if(process_z()) {
/* do a lot of other things */
/* SUCCESS */
}
else {
clean_all_processes();
}
}
else {
clean_all_processes();
}
}
else {
clean_all_processes();
}
Good looking code
do {
if( !process_x() )
{ clean_all_processes(); break; }
/* do a lot of other things */
if( !process_y() )
{ clean_all_processes(); break; }
/* do a lot of other things */
if( !process_z() )
{ clean_all_processes(); break; }
/* do a lot of other things */
/* SUCCESS */
} while (0);
As @NiematojakTomasz says, using goto is an alternative; the bad thing about it is that you always need to define the label (jump target).
A:
Encapsulate your code in a function. You can stop executing a function with return at any time.
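For example (a minimal sketch of that idea, using the variables from the question):
function handle($a) {
    if ("test" == $a) {
        return;      // leaves the function; nothing below runs
    }
    echo "yes";      // only reached when the condition is false
}

handle("test");
echo "finish";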
A:
The proper way to do this:
try{
if( !process_x() ){
throw new Exception('process_x failed');
}
/* do a lot of other things */
if( !process_y() ){
throw new Exception('process_y failed');
}
/* do a lot of other things */
if( !process_z() ){
throw new Exception('process_z failed');
}
/* do a lot of other things */
/* SUCCESS */
}catch(Exception $ex){
clean_all_processes();
}
After reading some of the comments, I realized that exception handling doesn't always make sense for normal flow control. For normal control flow it is better to use "if else":
try{
if( process_x() && process_y() && process_z() ) {
// all processes successful
// do something
} else {
//one of the processes failed
clean_all_processes();
}
}catch(Exception $ex){
// one of the processes raised an exception
clean_all_processes();
}
You can also save the process return values in variables and then check in the failure/exception blocks which process has failed.
| {
"pile_set_name": "StackExchange"
} |
Q:
Increasing the Accuracy of NMaximize
I have the following graph:
For theoretical reasons, I expect the global maximum of the graph to be in the range $[0, 1)$. The graph seems to suggest that is the case. When I run
NMaximize[{f[α, χ, 1], 0 <= α <= 2 π,
0 <= χ <= π}, {{α, 1, 2}, {χ, 1, 2}},
WorkingPrecision -> 15, PrecisionGoal -> 5],
I get
-0.0138260893013031
as my global maximum, which is significantly different from zero.
How can I increase the working accuracy/power/precision of the NMaximize command to (hopefully) obtain the desired result -- $0$ in this case.
A:
Families of functions follow Tolstoy's dictum: "All happy families are alike. All unhappy families are unhappy in their own way."
Functions from happy families cheerfully yield their optima without the user having to look up method options in the docs. When this does not happen, you suspect your function is from an unhappy family and wonder what the particular problem with this function is. What method or strategy would be appropriate to apply to this problem? It's hard to say if you do not know what the function is, which is the situation the rest of us here are in, since the OP does not post the function.
Nonetheless, here are some possibilities...
There is no error.
If you look at the graph, the maximum looks like it should be around
-0.0138260893013031
which is what the OP reports was the answer. It in no way suggests to me that the answer should lie in the interval $[0,1)$ mentioned by the OP. Further, the plot looks like a function from a fairly happy family. Having a few local maxima might fool FindMaximum[], but I doubt NMaximize failed. Most likely, the OP made a mistake.
But there's no way to check this, since the OP did not share the function.
If you can plot the function, use the plot.
It's a fairly simple looking plot, and just by looking one might come up with up to four likely starting points near local maxima. It may be clear that some peaks are not the global maximum and can be ignored. One can easily choose a point {α0, χ0} near each local maximum and feed it to FindMaximum:
FindMaximum[{f[α, χ, 1], 0 <= α <= 2 π, 0 <= χ <= π}, {{α, α0}, {χ, χ0}}]
From the results you get, it's not hard to choose the maximum.
This is a brute-force approach. For a good (i.e. accurate) plot that has too many maxima to read by eye, one can use Cases[] to get the computed points from the plot. The points with the greatest z-coordinates can be used as starting points. This is shown in another Q&A that I cannot find.
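As a rough sketch of that idea (my own code, untested against the OP's unknown function f): extract the sampled surface points from the plot and seed FindMaximum with the highest ones.
plot = Plot3D[f[α, χ, 1], {α, 0, 2 Pi}, {χ, 0, Pi}];
pts = First@Cases[plot, GraphicsComplex[p_, ___] :> p, Infinity];
starts = MaximalBy[pts, Last, 4]; (* the 4 highest sampled points *)
fits = FindMaximum[{f[α, χ, 1], 0 <= α <= 2 Pi, 0 <= χ <= Pi},
    {{α, #[[1]]}, {χ, #[[2]]}}] & /@ starts;
First@MaximalBy[fits, First]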
Try to avoid the brute-force approach
The brute-force approach is not pretty. There are reasons the function's family is unhappy, and if it were to submit to analysis one might find them out. A thorough analysis of the function needs to be complemented by delving into the docs and getting a thorough understanding of the suboptions of all the optimization methods available in Mathematica. While this can take a lot of time, compared to just getting the answer and moving on, the understanding one obtains can be quite beautiful, with time spent enjoyably in the pursuit of knowledge.
Whether you think I'm speaking facetiously depends on your point of view. Personally, I'm more interested in mathematics than whether -0.0138260893013031 is correct or not. Others to whom answers have significance for their own projects are often more interested in their projects. They might want to take a few minutes to do the brute-force method, instead of spending hours finding a way that takes only a second or two. (The choice also depends on whether what you learn will be, or might be, useful in the future.)
| {
"pile_set_name": "StackExchange"
} |
Q:
How to put button and some text in one row, and make text vertical-align: middle;?
I can get them onto one line (if there is any better solution, please tell me), but I cannot vertically align the text in the middle.
#info{
vertical-align: middle;
}
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" rel="stylesheet"/>
<div class="container">
<div class="row">
<div class="col-md-3 col-md-offset-3">
<div id="save">
<div class="pull-left">
<button type="button" class="btn btn-default">
save
</button>
</div>
<div class="pull-right" id="info">info</div>
<div class="clearfix"></div>
</div>
</div>
</div>
</div>
A:
The problem you are having aligning the button and #info is that Bootstrap is styling the button, but your div#info is not styled equally.
Just keep in mind to add similar styles to #info so it gets a similar position. In this case all you need is:
#info {
display:inline-block;
padding: 6px 12px;
line-height:1.42857143;
}
You can see it working here.
| {
"pile_set_name": "StackExchange"
} |
Q:
Cannot create confusion matrix in lyx
I'm trying to create a confusion matrix using LyX. I found this code to create a confusion matrix, so inside my document I pressed Ctrl+L where I wanted the matrix and pasted the code, but I get this error: "LaTeX error: Can be used only in preamble." The line with the error is the first one: \begin{document}.
What should I paste to obtain only a confusion matrix? I suppose that what I pasted was the full code of a LaTeX page.
A:
Assuming the version with the rotated text. In Document -> Settings -> LaTeX preamble, add
\usepackage{array}
\usepackage{graphicx}
\usepackage{multirow}
\newcommand\MyBox[2]{
\fbox{\lower0.75cm
\vbox to 1.7cm{\vfil
\hbox to 1.7cm{\hfil\parbox{1.4cm}{#1\\#2}\hfil}
\vfil}%
}%
}
In the ERT:
\noindent
\renewcommand\arraystretch{1.5}
\setlength\tabcolsep{0pt}
\begin{tabular}{c >{\bfseries}r @{\hspace{0.7em}}c @{\hspace{0.4em}}c @{\hspace{0.7em}}l}
\multirow{10}{*}{\rotatebox{90}{\parbox{1.1cm}{\bfseries\centering actual\\ value}}} &
& \multicolumn{2}{c}{\bfseries Prediction outcome} & \\
& & \bfseries p & \bfseries n & \bfseries total \\
& p$'$ & \MyBox{True}{Positive} & \MyBox{False}{Negative} & P$'$ \\[2.4em]
& n$'$ & \MyBox{False}{Positive} & \MyBox{True}{Negative} & N$'$ \\
& total & P & N &
\end{tabular}
Note that when you paste code into ERTs, use Ctrl + Shift + V (Edit -> Paste special), otherwise line breaks in the code aren't preserved.
In general for such examples, things that are between \documentclass and \begin{document} go in the preamble. The part between (but not including) \begin{document} and \end{document} go in ERTs.
| {
"pile_set_name": "StackExchange"
} |
Q:
Get data from Quote_items table in Magento 2
I am new to Magento 2, so I do not understand much about selecting data from a table on localhost, or which files I need to create.
I want to display data from the quote_item table in a .phtml file. What do I need to do in the Block and XML files?
If anyone can help me step by step, it would be very useful to me.
A:
You can get data from the quote_item table using the Magento\Quote\Model\ResourceModel\Quote\Item\Collection class like this:
$objectManager = \Magento\Framework\App\ObjectManager::getInstance();
$quoteItemCollection = $objectManager->create('\Magento\Quote\Model\ResourceModel\Quote\Item\Collection');
foreach ($quoteItemCollection as $quoteItem)
{
echo $quoteItem->getName();
echo $quoteItem->getProductId();
......
}
I do not recommend using the object manager; instead, inject this collection class into the block class behind this .phtml file and use it there.
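A minimal sketch of that dependency-injection approach (the Vendor\Module namespace and class name are hypothetical; injecting the auto-generated CollectionFactory is the usual pattern):
<?php
namespace Vendor\Module\Block; // hypothetical namespace

use Magento\Framework\View\Element\Template;
use Magento\Quote\Model\ResourceModel\Quote\Item\CollectionFactory;

class QuoteItems extends Template
{
    private $quoteItemCollectionFactory;

    public function __construct(
        Template\Context $context,
        CollectionFactory $quoteItemCollectionFactory,
        array $data = []
    ) {
        $this->quoteItemCollectionFactory = $quoteItemCollectionFactory;
        parent::__construct($context, $data);
    }

    public function getQuoteItems()
    {
        return $this->quoteItemCollectionFactory->create();
    }
}
In the .phtml template you can then iterate over $block->getQuoteItems() exactly as in the foreach loop above.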
| {
"pile_set_name": "StackExchange"
} |
Q:
The multiemail validation method is not working, if we call the prototype.js on the page?
I have created an "add email" method (jQuery) to validate multiple emails in a recipient text box. It works fine when prototype.js is not declared on the page. To get rid of the $ conflict I also incorporated the jQuery.noConflict() method as a measure. The other field validations work in this scenario, except the recipient email validation field. As per my findings, "jQuery.validator.methods.email.call(this, value, element)" on line 50 of the page is not working, and hence the method is not firing. I need to call prototype.js as well. Please see the following code for a clearer understanding. Thanks in advance.
Please see the code below:
Multi Email Validation
var JQ = jQuery.noConflict();
JQ(document).ready(function() {
// Handler for .ready() called.
JQ("#email-form").validate({
rules : {
email : {
required : true,
email : true
},
recipientEmail : {
multiemail: true,
required : true
// email : true
}
},
messages: {
email: {
required: "Please enter your email address.",
email: "Please enter a valid email address"
},
recipientEmail: {
multiemail: "One or more of your recipient email addresses needs correction.",
required: "Please enter the recipient's email address."
//email: "Please enter a valid email address"
}
}
});
});
JQ.validator.addMethod("multiemail", function(value, element) {
if (this.optional(element)) // return true on optional element
return true;
// var emails = value.split( new RegExp( "\s*,\s*", "gi" ) );
var emails = value.split( new RegExp( "\s*,\s*", "gi" ) );
valid = true;
maxEmaillength = emails.length;
for(var i in emails)
{
value = emails[i];
valid = valid && jQuery.validator.methods.email.call(this, value, element);
// Maximum email length validation
if(maxEmaillength > 5)
{
JQ('label.error:first').html("Please enter only 5 mail IDs at a time");
JQ('label.error:first').css(display, block);
setTimeout(alert("Please enter only 5 mail IDs at a time"), 5);
}
}
return valid;
}, 'One or more email addresses are invalid');
</head>
<body>
<form action="" method="get" name="email-form" id="email-form">
<label for="email">email</label>
<input type="text" name="email" id="email" style="width:200px" />
<br />
<label for="recipientEmail">Recipient Email</label>
<input type="text" name="recipientEmail" id="recipientEmail" style="width:500px" /><br />
<input type="submit" name="Submit" id="Submit" value="Submit" />
</form>
</body>
</html>
A:
I have just changed the approach a little bit, as jQuery.validator.methods.email.call(this, value, element) was not working in the previous custom method. Although I could not find the exact reason why it was not working with prototype.js, or the exact solution to that problem, the following code snippet works as desired. Just replace the previous jQuery custom email method with the following one.
function validateEmail(field) {
var regex=/\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b/i;
return (regex.test(field)) ? true : false;
}
JQ.validator.addMethod("multiemail", function(value, element)
{
var result = value.split(",");
for(var i = 0;i < result.length;i++)
if(!validateEmail(result[i]) || result.length > 5)
return false;
return true;
},'One or more email addresses are invalid');
| {
"pile_set_name": "StackExchange"
} |
Q:
Genexus 15 SD - Problem updating an offline database on Android
We have an Android application (available in the Play Store) which was built with Genexus Evolution 3 (Evo3). We have now successfully migrated that application to Genexus 15 (GX15).
The problem is that when the application is updated from the version built with Evo3 to the version built with GX15, the database apparently is not rebuilt correctly. The update itself completes correctly, but when trying to open the application it crashes because it tries to query a non-existent table (the typical "No such table" error). NOTE: if we clear the application's data, it does work, but we do not want that behavior for our end users.
It is worth noting:
In the middle of the migration from Evo3 to GX15, new tables were created both in Evo3 and in GX15.
We have tried Rebuild, Create Offline Database, creating new tables, among other things.
Another important point is that the database does have some tables created, but apparently the tables created between the Play Store version (made with Evo3) and the new version (made with GX15) are not created correctly.
Is there any way to force the database to be rebuilt in this new version (in theory it should rebuild all the tables automatically, but it does not)? We also verified that the "OnCreate" of the DatabaseHelper (of the FlexibleClient) does execute, but apparently it does not create the new tables during the application update.
We would appreciate any available help.
A:
The creation of the offline DB on the device happens when the application's DB structure has changed.
In particular, the Flexible Client checks that the hash of the previous DB structure is different from the new one and that they are the same application.
From what you describe, all of this holds in your case, since there were changes to the DB (between Ev3 and v15) and the application is the same one you are updating in the Play Store.
In any case, one of the following may be happening:
It is likely that one of the two applications did not generate the MD5 correctly, or that they are not being recognized as the same app; the MD5 of the previous version is stored in the app's internal preferences, which depend on the name of the main object.
If you have both the previous and the current version available, it would be good to compare the MainApplication.java files of each, located at:
{main}\src\main\java{package}\ MainApplication.java
Both of these files should have an MD5 generated in the setReorMD5Hash() method, and the two should be different.
In addition, the app identifier must match; that is, the setName() and setAppEntry() methods must receive the same parameter.
If all of the above is correct, you can also turn on Log Level=Debug in the properties of the app's main object and see what happens at startup, when it should perform the create database.
Note: you can force the create offline database with the resetOfflineDatabase() method, but that is not the idea; this creation should happen automatically on an app update.
I look forward to your comments. Regards.
| {
"pile_set_name": "StackExchange"
} |
Q:
SSRS: IF Else inside Switch not working
Here I'm trying to do a SWITCH with an IIF (if/else) inside it. I don't know where I went wrong, but this doesn't work.
For the first SWITCH condition, 'WEEKLY', WW2 and WW1 are week numbers but of text/string datatype, so they are cast to integers and then used in the IIF operation.
For the second SWITCH condition, 'MONTHLY', WW2 and WW1 are month names, so the objective is to get the month number from the month name. After that, WW1 and WW2 are used in the IIF operation.
=SWITCH(Parameters!date_range_type.Value = "WEEKLY", IIF((cInt(Parameters!WW2.Value) - cInt(Parameters!WW1.Value)) > 10, Parameters!WW1.Value,IIF(cInt(Parameters!WW2.Value) < 11,1,cInt(Parameters!WW2.Value) - 10)),
Parameters!date_range_type.Value = "MONTHLY", IIF((cInt(MONTH(datepart("YYYY",today())& "-" & Parameters!WW2.Value & "-01")) - cInt(MONTH(datepart("YYYY",today())& "-" & Parameters!WW1.Value & "-01"))) > 10, MONTH(datepart("YYYY",today())& "-" & Parameters!WW1.Value & "-01"),IIF(cInt(MONTH(datepart("YYYY",today())& "-" & Parameters!WW2.Value & "-01")) < 11,1,cInt(MONTH(datepart("YYYY",today())& "-" & Parameters!WW2.Value & "-01")) - 10))
)
If I run the report for 'WEEKLY' only, meaning it doesn't need the SWITCH and the other IIF condition, it works fine. The same goes when running 'MONTHLY' only.
EDIT
This is what it looks like if I use IIF only:
IIF(Parameters!date_range_type.Value = "MONTHLY",( IIF((cInt(MONTH(datepart("YYYY",today())& "-" & Parameters!WW2.Value & "-01")) - cInt(MONTH(datepart("YYYY",today())& "-" & Parameters!WW1.Value & "-01"))) > 10,MONTH(datepart("YYYY",today())& "-" & Parameters!WW1.Value & "-01"),IIF(cInt(MONTH(datepart("YYYY",today())& "-" & Parameters!WW2.Value & "-01")) < 11,1,cInt(MONTH(datepart("YYYY",today())& "-" & Parameters!WW2.Value & "-01")) - 10)) ),
( IIF(Parameters!date_range_type.Value = "WEEKLY",( IIF((cInt(Parameters!WW2.Value) - cInt(Parameters!WW1.Value)) > 10, Parameters!WW1.Value,
IIF(cInt(Parameters!WW2.Value) < 11,1,cInt(Parameters!WW2.Value) - 10)) ), "0") ) )
A:
I ended up using a SQL function for the job. This is the same logic as the SSRS code above.
CREATE OR REPLACE FUNCTION ww3_param (ww1 IN NUMBER, ww2 IN NUMBER)
RETURN VARCHAR2
IS
ret_val VARCHAR2(10);
BEGIN
IF (ww2 - ww1) > 10 THEN
ret_val := to_char(ww1);
ELSE
IF ww2 < 10 THEN
ret_val := '1';
ELSE
ret_val := to_char(ww2 - 10);
END IF;
END IF;
RETURN ret_val;
END;
EDIT
This is the query for the dataset:
select case when :date_range_type = 'MONTHLY' then ww3_param(to_char(to_date(:WW1,'MON'),'MM'),to_char(to_date(:WW2,'MON'),'MM'))
when :date_range_type = 'WEEKLY' then ww3_param(to_number(:WW1),to_number(:WW2)) end as ww3_param_value
from dual
| {
"pile_set_name": "StackExchange"
} |
Q:
Where do I report an iproute2 bug?
I have a command in shell script:
ip neigh flush dev eth0 192.168.1.21
which cause messages (syslog) to record the following error:
Feb 2 15:53:03 rpiautomation kernel: [1324706.360319] netlink: 12 bytes leftover after parsing attributes in process `ip'.
My Raspberry runs Jessie, updated and upgraded.
The version of ip is:
ip utility, iproute2-ss140804
I assume a bug exists, and wonder where I can report it.
A:
Iproute is specific to Linux-based systems and I believe developed in tandem with the kernel. In any case, it's handled on a kernel mailing list, netdev, although whether or not they welcome bug reports there I don't know (see also here).
You could instead start downstream with Debian; I think starting with Raspbian would be a waste of time unless it turns out to be non-reproducible on Debian.
| {
"pile_set_name": "StackExchange"
} |
Q:
Casting combobox objects back to their correct type
I have a combobox of objects (two types; ProductGroup and Family). I would like to use a command to find out what type of object the selected item is.
I went out on a limb and tried
if (cbFamily.getSelectedItem() instanceof ProductGroup) {
JOptionPane.showMessageDialog(mainWindow, "You have selected a ProductGroup")
}
I had no luck
Note: I am new to Java so I may need to ask for further clarification on some answers
A:
Your code should work fine. The problem will be somewhere else. Use a debugger, or write
System.out.println(cbFamily.getSelectedItem().getClass()); before your "if" to determine what class is returned from your combobox.
| {
"pile_set_name": "StackExchange"
} |
Q:
Show submenu in another block
I have a Drupal 7 CMS with a navigation built with nice_menus (http://drupal.org/project/nice_menus).
My main menu has several links and each link has several sub links.
These sublinks should be shown to the user in a block.
Any advice on how I can achieve this?
A:
The Menu Block module should handle this - http://drupal.org/project/menu_block
Go to your blocks page, click add new menu block and fill in the form and then use it like any other block.
To show just sub menu items of a main item make a menu block of the menu and set the starting level to the level you want to show. To show JUST these items use the maximum depth field.
| {
"pile_set_name": "StackExchange"
} |
Q:
Find path in an undirected graph BFS - Java
I'm making a program to deal with an undirected graph with unweighted edges and since I'm a learner I'm having some issues.
I have to make a method (in the same class as the main) which receives the graph, an initial vertex and an end vertex. Then I have to find whether there is a path from vertex1 to vertex2 and store the intermediate vertices in a queue to then print it (it doesn't have to be the shortest; of course it's better if that's possible, but I don't really need it).
Let's say I have:
Graph
And I wanna get the only ONE path from
I have implemeted a bfs method, which is the following and is used for other methods I have also, but I don't know how to start with this method I need.
My bfs method:
public static Queue<DecoratedInmate> bfs (Graph gr, Vertex<DecoratedInmate> v){
Queue<Vertex<DecoratedInmate>> vertices = new LinkedList<Vertex<DecoratedInmate>>(); //temporal queue
Queue<DecoratedInmate> traversal = new LinkedList<DecoratedInmate>(); //traversal queue
Vertex<DecoratedInmate> u; //vertex taken from queue
Vertex<DecoratedInmate> z; //opposite vertex of u
Edge e; //edge between vertices
Iterator<Edge<DecoratedInmate>> it; //to store incident edges
v.getElement().setVisited(true); //set received vertex to visited
vertices.offer(v); //add origin vertex to queue
while (!vertices.isEmpty()) { //if queue isn't empty
u = vertices.remove(); //take vertex from queue
traversal.offer(u.getElement()); //add element to list
it = gr.incidentEdges(u); //get incident edges of u
while (it.hasNext()) { //check if there are incident edges
e = it.next(); //assign the edge
z = gr.opposite(u, e); //assign opposite vertex of u
if (!z.getElement().getVisited()) { //check if the opposite is not visited
z.getElement().setVisited(true); //set to visited
vertices.offer(z); //add to queue
}
}
}
return traversal;
}
Thanks in advance
A:
My understanding of your problem is that you are trying to find a path from one node to another, and not necessarily the order in which nodes are visited. So here is an implementation. When running BFS, store each vertex's parent, i.e.:
public static void Bfs(Vertex source) {
vertex = GraphifyGUI.getNode();
reset();
q = new LinkedList<>(); // FIFO
source.wasVisited = true; // marked as visited
q.add(source); // put into queue
source.parent = source; // set parent
conn = new ArrayList<>();
while (!q.isEmpty()) { // source
Vertex current = q.poll(); // remove first
conn.add(current.getId());
Iterator<Vertex> currentList = current.vList().iterator();
while (currentList.hasNext()) {
Vertex next = currentList.next();
if (next.wasVisited == false) {
next.wasVisited = true;
q.add(next);
next.parent = current;
GG.printlnConsole(next.getName() + " has type of " + next.getType());
}
}
}
GG.printlnConsole("Order is " + conn);
}
And then the method to get the shortest path will look like this:
public void shortestPath(int v, int e) {
if (e == v) {
GG.printlnConsole(v + "-->" + v);
return;
}
for (int i = e; i >= 0; i = vertex.get(i).getParent().getId()) {
if (i == v) {
break;
}
if (vertex.get(i).getParent().getId() != -1) {
set.put(vertex.get(i).getParent().getId(), i);
}
}
}
Explanation of shortestPath above:
if the source is the same as the destination, then that by itself is the shortest path;
otherwise walk back from the destination through each node's parent:
for(i = destination; i >= 0; i = parent of i){
    if(i == source) we are done;
    if(parent of i is a node) add it to the path;
}
"pile_set_name": "StackExchange"
} |
Q:
How to show the ActiveX Yellow bar?
I'm trying to set up a webpage that downloads the OCX and installs it with the user's permission when the user right-clicks the yellow bar.
Note: it's a business app and I know... IE, but 95% of company customers use it and it's easy for us to move from Windows > OCX first and then to full WebService
What I did was create a cab file with:
- eds.cab (signed with an SSL certificate)
|--- EDS.ocx
|--- setup.inf
the setup.inf has this code:
[version]
signature="$CHICAGO$"
[Add.Code]
EDS.ocx=EDS.ocx
[EDS.ocx]
file-win32-x86=thiscab
clsid={8EC68701-329D-4567-BCB5-9EE4BA43D358}
FileVersion=3,5,0,150
RegisterServer=yes
and then the webpage contains the tag like this:
<object
id="ActiveX"
classid="CLSID:8EC68701-329D-4567-BCB5-9EE4BA43D358"
width="14"
height="14"
codebase="http://localhost/EDS.Webservice/EDS.cab#version=3,5,0,150">
<param name="tabName" value="Stop:http://localhost/EDS.Webservice/" />
</object>
and then I navigate to http://localhost/EDS.Webservice/
The issue is that I do not get that yellow bar, just the ACL asking me to accept it.
Does anyone know what I could have been missing?
It only shows the ACL message on Windows 7, never the yellow bar first like the Flash plugin does... :-(
added
What we are after:
Thank you.
Added
Internet Explorer Settings are as Default, both Security on Advanced Tab as well Trust Domains
A:
Where are you serving your page from? If it's from localhost/inside the local network, it will have a different security policy applied - even with all settings as default. Try publishing it to an external server and see what happens (or failing that, change the settings for "trusted" site to be the same as "internet")
You should also check what's happening with regards to signing of the component - does your object have a certificate that's trusted by your domain/pc setup?
| {
"pile_set_name": "StackExchange"
} |
Q:
Get MAC address of device
I'm writing a Windows Phone 8.1 Application that discovers nearby Bluetooth Low Energy devices.
foreach (DeviceInformation device in devices)
{
BluetoothLEDevice bleDevice = await BluetoothLEDevice.FromIdAsync(device.Id);
}
Everything works fine, but the bleDevice.BluetoothAddress property contains a ulong type, while I need a string type, formatted like a Mac Address.
Example:
bleDevice.BluetoothAddress: 254682828386071 (ulong)
Desired Mac Address: D1:B4:EC:14:29:A8 (string) (that's an example of how I need it, not the actual Mac Address of the device)
Is there a way to convert the ulong to a MAC address? Or is there another way to directly discover the MAC address without conversions? I know there's a tool named In The Hand - 32feet that could help me, but as of now Windows Phone 8.1 is not supported.
A:
There are numerous topics you can find through Google and here on StackOverflow. Anyway, here's one way to do it:
ulong input = 254682828386071;
var tempMac = input.ToString("X");
//tempMac is now 'E7A1F7842F17'
var regex = "(.{2})(.{2})(.{2})(.{2})(.{2})(.{2})";
var replace = "$1:$2:$3:$4:$5:$6";
var macAddress = Regex.Replace(tempMac, regex, replace);
//macAddress is now 'E7:A1:F7:84:2F:17'
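A regex-free alternative (just a sketch doing the same formatting with LINQ; requires using System.Linq, and assumes a little-endian platform, which Windows Phone is):
ulong input = 254682828386071;
// take the 6 low-order bytes and print them most-significant first
var macAddress = string.Join(":",
    BitConverter.GetBytes(input).Take(6).Reverse().Select(b => b.ToString("X2")));
//macAddress is now 'E7:A1:F7:84:2F:17'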
| {
"pile_set_name": "StackExchange"
} |
Q:
Does inner join affect order by?
I have a function a() which gives result in a specific order.
I want to do:
select final.*,tablex.name
from a() as final
inner join tablex on (a.key=tablex.key2)
My question is, can I guarantee that the join won't affect the order of rows as a() set it?
a() is:
select ....
from....
joins...
order by x,y,z
A:
The short version:
The order of rows returned by a SQL query is not guaranteed in any way unless you use an order by
Any order you see without an order by is pure coincidence and can not be relied upon.
So how did I always get the correct order so far? when I did Select * from a()
If your function is a SQL function, then the query inside the function is executed "as is" (it's essentially "inlined") so you only run a single query that does have an order by. If it's a PL/pgSQL function and the only thing it does is a RETURN QUERY ... then you again only have a single query that is executed which does have an order by.
Assuming you do use a SQL function, then running:
select final.*,tablex.name
from a() as final
join tablex on a.key=tablex.key2
is equivalent to:
select final.*,tablex.name
from (
-- this is your query inside the function
select ...
from ...
join ...
order by x,y,z
) as final
join tablex on a.key=tablex.key2;
In this case the order by inside the derived table doesn't make sense as it might be "overruled" by an overall order by statement. In fact some databases would outright reject this query (and I sometimes wish Postgres would do so as well).
Without an order by on the overall query, the database is free to choose any order of rows that it wants.
So to get back to the initial question:
can I guarantee that the join won't affect the order of rows as a() set it?
The answer to that is a clear: NO - the order of the rows for that query is in no way guaranteed. If you need an order that you can rely on, you have to specify an order by.
I would even go so far to remove the order by from the function - what if someone runs: select * from a() order by z,y,x - I don't think Postgres will be smart enough to remove the order by inside the function.
| {
"pile_set_name": "StackExchange"
} |
Q:
reuse jqplot object to load or replot data
I am using jqPlot for charts. My problem is that I want to load different data on different click events.
But once the chart is created and loaded with data for the first time, I don't know how to load new data when another event fires. That means I want to reuse the chart object and load/replot the data when events get fired, something like...
chartObj.data = [graphData]
A:
That seems to work to replot data.
chartObj.series[0].data = [[0, 4], [1, 7], [2, 3]];
chartObj.replot();
Also, you can check this: https://groups.google.com/group/jqplot-users/browse_thread/thread/59df82899617242b/77fe0972f88aef6d%3Fq%3D%2522Groups.%2BCom%2522%2377fe0972f88aef6d&ei=iGwTS6eaOpW8Qpmqic0O&sa=t&ct=res&cd=71&source=groups&usg=AFQjCNHotAa6Z5CIi_-BGTHr_k766ZXXLQ?hl=en, hope it helps.
A:
Though this is an old question: as the accepted answer didn't work for me and I couldn't find a solution in the jqPlot docs either, I came to this solution.
var series = [[1,2],[2,3]];
chartObj.replot({data:series});
Src: Taking a look at the replot function.
function (am) {
var an = am || {};
var ap = an.data || null;
var al = (an.clear === false) ? false : true;
var ao = an.resetAxes || false;
delete an.data;
delete an.clear;
delete an.resetAxes;
this.target.trigger("jqplotPreReplot");
if (al) {
this.destroy()
}
if (ap || !L.isEmptyObject(an)) {
this.reInitialize(ap, an)
} else {
this.quickInit()
} if (ao) {
this.resetAxesScale(ao, an.axes)
}
this.draw();
this.target.trigger("jqplotPostReplot")
}
The line
if (ap || !L.isEmptyObject(an)) {
this.reInitialize(ap, an)
}
shows us that it needs a truthy value for ap in order to pass it as the first parameter to the internal reInitialize function, where ap is defined as var ap = an.data || null;
It's as simple as that, but unfortunately it's not documented anywhere I could find.
Note that if you want to redraw some things defined in your jqPlot options, like legend labels, you can just pass any option to the replot function. Just remember the actual series to replot has to be named "data"
var options = {
series : [{
label: 'Replotted Series',
linePattern: 'dashed'
}],
//^^^ The options for the plot
data : [[1,2],[2,3]]
//^^^ The actual series which should get reploted
}
chartObj.replot (options)
| {
"pile_set_name": "StackExchange"
} |
Q:
What's the meaning of "drapish"?
What's the meaning of "drapish"?
I saw this word in the poem "The Buddha" by Jack Kerouac.
However, I can't find the meaning on the internet.
I used to sit under trees and meditate
on the diamond bright silence of darkness
and the bright look of diamonds in space
and space that was stiff with lights
and diamonds shot through, and silence
And when a dog barked I took it for soundwaves
and cars passing too, and once I heard
a jet-plane which I thought was a mosquito
in my heart, and once I saw salmon walls
of pink and roses, moving and ululating
with the drapish
Once I forgave dogs, and pitied men, sat
in the rain countin’ Juju beads, raindrops
are ecstasy, ecstasy is raindrops – birds
sleep when the trees are giving out light
in the night, rabbits sleep too, and dogs
I had a path that I followed thru piney woods
and a phosphorescent white hound-dog named Bob
who led me the way when the clouds covered
the stars, and then communicated to me
the sleepings of a loving dog enamoured
of God
A:
"Drapish" is a word that Kerouac used to try to describe something he imagined while he was meditating.
I've found two types of usages for "drapish" when I searched. The first is a surname and the second describes clothing that "drapes." In this context "drapes" means "hangs or rests limply" or "falls or hangs in loose folds". A related word (for me) is "drapey", for example
The phrase "moving and ululating" makes me think that "drapish" is trying to describe a feeling of something hanging loosely and maybe moving back and forth, but there's no way to say for certain. Kerouac probably purposefully chose to create a word because there wasn't one in common usage that meant exactly what he wanted to express.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to match specific pattern from a string
I have written a program that checks whether a file is present or not and checks the number of columns.
create or replace procedure chkcsvfile
(P_UTLDIR VARCHAR2,
P_FILENAME VARCHAR2,
P_tabnam VARCHAR2
)
is
P_fieldel varchar2(2):= ',';
V1 VARCHAR2(32767) ;
P_errlen number :=0;
lv_a number;
lv_b number;
lv_c number;
lv_d number;
lv_check_file_exist boolean;
v_file utl_file.file_type;
cursor c1 is
select count(*) from user_tables where TABLE_NAME =P_tabnam;
cursor c2 is
select count(*) from user_tab_columns where TABLE_NAME =P_tabnam;
begin
open c1;
fetch c1 into lv_c;
if lv_c = 0 then
dbms_output.put_line('table name is invalid : ' || P_tabnam);
end if;
--'test wheather file is available or not'
dbms_output.put_line ('test wheather file is available or not');
utl_file.fgetattr (P_UTLDIR,P_FILENAME, lv_check_file_exist, lv_a, lv_b );
if lv_check_file_exist then
dbms_output.put_line('file ' ||P_FILENAME ||' exists');
v_file := utl_file.fopen(P_UTLDIR,P_FILENAME, 'R');
UTL_FILE.get_line (v_file ,V1,32767);
DBMS_OUTPUT.put_line ('V1 :' || V1);
if (REGEXP_like (V1, ',',1))
then
P_errlen := P_errlen +1 ;
dbms_output.put_line ('errrooooooooooooooooooooooooooooooooooorr');
dbms_output.put_line (P_errlen );
end if;
end if;
if not lv_check_file_exist then
dbms_output.put_line('file ' || P_FILENAME ||' does not exist');
end if;
if lv_check_file_exist is null then
dbms_output.put_line('file check null');
end if;
if lv_check_file_exist is not null then
dbms_output.put_line('file check not null');
end if;
dbms_output.put_line('lv_a-->'||lv_a);
dbms_output.put_line('lv_b-->'||lv_b);
open c2;
fetch c2 into lv_d;
dbms_output.put_line ('No of columns in a table : ' || lv_d );
end;
/
Now my problem is that I have to match "," in a string and I want the count of them. I have written the program but it does not give me the correct count.
The data in the string is given in the format below:
7839,KING ,PRESIDENT,0000,17-nov-1981, 005000.00 ,000000.00,10,
please help me
thanks in advance
A:
Since you're using 11g, you can use the regexp_count function.
select regexp_count('7839,KING ,PRESIDENT,0000,17-nov-1981, 005000.00 ,000000.00,10,',',')
from dual
| {
"pile_set_name": "StackExchange"
} |
Q:
mysql indexing makes group by slow
Please refer to the table structure below.
CREATE TABLE `oarc` (
`ID` bigint(20) NOT NULL AUTO_INCREMENT,
`zID` int(11) NOT NULL,
`cID` int(11) NOT NULL,
`bID` int(11) NOT NULL,
`rtype` char(1) COLLATE utf8_unicode_ci NOT NULL,
`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=1821039 ;
Other than the PRIMARY KEY, I have not set any index on this, and when I run the following query
select COUNT(oarc.ID) as total
from `oarc` where`oarc`.`rtype` = 'v'
group
by `oarc`.`zID`
I am getting the result in less than 1 second. But if I add an index to zID it takes more than 5 seconds.
Please see the EXPLAIN result below:
id | select_type | table | type | possible_keys | key | key_len | ref | row | Extra
--------------------------------------------------------------------------------------------------------
1 | SIMPLE | oarc | index | NULL | zone_ID | 4 | NULL | 1909387 | Using where
Currently the table has more than 1821039 records in it and it will increase on an hourly basis. What do I need to do in order to reduce the query execution time? I am expecting something only at the table and query level, nothing in my.cnf or on the server side, because I cannot do anything there.
Thanks in advance.
A:
Is this better?
CREATE TABLE `oarc` (
`ID` bigint(20) NOT NULL AUTO_INCREMENT,
`zID` int(11) NOT NULL,
`cID` int(11) NOT NULL,
`bID` int(11) NOT NULL,
`rtype` char(1) COLLATE utf8_unicode_ci NOT NULL,
`created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`ID`),
KEY(rtype,zid)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=1821039 ;
explain
select COUNT(oarc.ID) as total
from `oarc` where`oarc`.`rtype` = 'v'
group
by `oarc`.`zID`
+----+-------------+-------+------+---------------+-------+---------+-------+------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+-------+---------+-------+------+--------------------------+
| 1 | SIMPLE | oarc | ref | rtype | rtype | 3 | const | 1 | Using where; Using index |
+----+-------------+-------+------+---------------+-------+---------+-------+------+--------------------------+
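If you would rather not recreate the table, the same composite index can be added to the existing table; this is just a sketch of the equivalent ALTER statement:
ALTER TABLE `oarc` ADD KEY `rtype_zid` (rtype, zID);
The reason it helps is that rtype comes first, so the WHERE clause can seek straight to the 'v' rows, while zID in the same index lets the GROUP BY walk the values in index order; the query is then answered from the index alone, which is what "Using index" in the EXPLAIN output indicates.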
| {
"pile_set_name": "StackExchange"
} |
Q:
Did Yoda ever say "Powerful the dark side is"?
For years I lived with the impression that I have heard that quote somewhere in the movies but I can't Google it so I am starting to think I am confused. Is there an instance where Yoda (or maybe another character) says that?
A:
The phrase you thought in scripts occur does not.
I looked for the phrase "dark side is" in all the scripts that can be downloaded from https://starwarssuperfans.wordpress.com/real-world-news/star-wars-scripts/ .
The words do occur a couple of times, but not in combination with "powerful". The only references are the ones already mentioned in comments.
| {
"pile_set_name": "StackExchange"
} |
Q:
Finding reference to DbFunction within expression tree and replacing with a different function
I would like to have some somewhat complex logic kept in a single lambda expression, which can be compiled and therefore used in Linq-To-Objects, or used as an expression to run against a database in Linq-To-Entities.
It involves date calculations, and I have hitherto been using something like this (hugely simplified):
public static Expression<Func<IParticipant, DataRequiredOption>> GetDataRequiredExpression()
{
DateTime twentyEightPrior = DateTime.Now.AddDays(-28);
return p=> (p.DateTimeBirth > twentyEightPrior)
?DataRequiredOption.Lots
    :DataRequiredOption.NotMuchYet;
}
And then having a method on a class
public DataRequiredOption RecalculateDataRequired()
{
return GetDataRequiredExpression().Compile()(this);
}
There is some overhead in compiling the expression tree. Of course I cannot simply use
public static Expression<Func<IParticipant, DataRequiredOption>> GetDataRequiredExpression(DateTime? dt28Prior=null)
{
return p=> DbFunctions.DiffDays(p.DateTimeBirth, DateTime.Now) > 28
?DataRequiredOption.Lots
    :DataRequiredOption.NotMuchYet;
}
Because this will only run at the database (it will throw an error at execution of the Compile() method).
I am not very familiar with modifying expressions (or the ExpressionVisitor class). Is it possible, and if so how would I find the DbFunctions.DiffDays function within the expression tree and replace it with a different delegate? Thanks for your expertise.
Edit
A brilliant response from svick was used - with a slight modification, because DiffDays and date subtraction have their arguments switched to produce a positive number in both cases:
static ParticipantBaseModel()
{
DataRequiredExpression = p =>
((p.OutcomeAt28Days >= OutcomeAt28DaysOption.DischargedBefore28Days && !p.DischargeDateTime.HasValue)
|| (DeathOrLastContactRequiredIf.Contains(p.OutcomeAt28Days) && (p.DeathOrLastContactDateTime == null || (KnownDeadOutcomes.Contains(p.OutcomeAt28Days) && p.CauseOfDeath == CauseOfDeathOption.Missing))))
? DataRequiredOption.DetailsMissing
: (p.TrialArm != RandomisationArm.Control && !p.VaccinesAdministered.Any(v => DataContextInitialiser.BcgVaccineIds.Contains(v.VaccineId)))
? DataRequiredOption.BcgDataRequired
: (p.OutcomeAt28Days == OutcomeAt28DaysOption.Missing)
? DbFunctions.DiffDays(p.DateTimeBirth, DateTime.Now) < 28
? DataRequiredOption.AwaitingOutcomeOr28
: DataRequiredOption.OutcomeRequired
: DataRequiredOption.Complete;
var visitor = new ReplaceMethodCallVisitor(
typeof(DbFunctions).GetMethod("DiffDays", BindingFlags.Static | BindingFlags.Public, null, new Type[]{ typeof(DateTime?), typeof(DateTime?)},null),
args =>
Expression.Property(Expression.Subtract(args[1], args[0]), "Days"));
DataRequiredFunc = ((Expression<Func<IParticipant, DataRequiredOption>>)visitor.Visit(DataRequiredExpression)).Compile();
}
A:
Replacing a call to a static method to something else using ExpressionVisitor is relatively simple: override VisitMethodCall(), in it check if it's the method that you're looking for and if it, replace it:
class ReplaceMethodCallVisitor : ExpressionVisitor
{
readonly MethodInfo methodToReplace;
readonly Func<IReadOnlyList<Expression>, Expression> replacementFunction;
public ReplaceMethodCallVisitor(
MethodInfo methodToReplace,
Func<IReadOnlyList<Expression>, Expression> replacementFunction)
{
this.methodToReplace = methodToReplace;
this.replacementFunction = replacementFunction;
}
protected override Expression VisitMethodCall(MethodCallExpression node)
{
if (node.Method == methodToReplace)
return replacementFunction(node.Arguments);
return base.VisitMethodCall(node);
}
}
The problem is that this won't work well for you, because DbFunctions.DiffDays() works with nullable values. This means both its parameters and its result are nullable and the replacementFunction would have to deal with all that:
var visitor = new ReplaceMethodCallVisitor(
diffDaysMethod,
args => Expression.Convert(
Expression.Property(
Expression.Property(Expression.Subtract(args[0], args[1]), "Value"),
"Days"),
typeof(int?)));
var replacedExpression = visitor.Visit(GetDataRequiredExpression());
To make it work better, you could improve the visitor to take care of the nullability for you by stripping it from the method arguments and then readding it to the result, if necessary:
protected override Expression VisitMethodCall(MethodCallExpression node)
{
if (node.Method == methodToReplace)
{
var replacement = replacementFunction(
node.Arguments.Select(StripNullable).ToList());
if (replacement.Type != node.Type)
return Expression.Convert(replacement, node.Type);
}
return base.VisitMethodCall(node);
}
private static Expression StripNullable(Expression e)
{
var unaryExpression = e as UnaryExpression;
if (unaryExpression != null && e.NodeType == ExpressionType.Convert
&& unaryExpression.Operand.Type == Nullable.GetUnderlyingType(e.Type))
{
return unaryExpression.Operand;
}
return e;
}
Using this, the replacement function becomes much more reasonable:
var visitor = new ReplaceMethodCallVisitor(
diffDaysMethod,
args => Expression.Property(Expression.Subtract(args[0], args[1]), "Days"));
| {
"pile_set_name": "StackExchange"
} |
Q:
What is the best way to combine (merge) 2 JSONObjects?
What is the best way to combine (merge) two JSONObjects?
JSONObject o1 = {
"one": "1",
"two": "2",
"three": "3"
}
JSONObject o2 = {
"four": "4",
"five": "5",
"six": "6"
}
And result of combining o1 and o2 must be
JSONObject result = {
"one": "1",
"two": "2",
"three": "3",
"four": "4",
"five": "5",
"six": "6"
}
A:
I have your same problem: I can't find the putAll method (and it isn't listed in the official reference page).
So, I don't know if this is the best solution, but surely it works quite well:
//I assume that your two JSONObjects are o1 and o2
JSONObject mergedObj = new JSONObject();
Iterator i1 = o1.keys();
Iterator i2 = o2.keys();
String tmp_key;
while(i1.hasNext()) {
tmp_key = (String) i1.next();
mergedObj.put(tmp_key, o1.get(tmp_key));
}
while(i2.hasNext()) {
tmp_key = (String) i2.next();
mergedObj.put(tmp_key, o2.get(tmp_key));
}
Now, the merged JSONObject is stored in mergedObj
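If you need this in more than one place, the same loop logic can be wrapped in a small helper. This is only a sketch, assuming org.json's JSONObject (or Android's, which behaves the same here) and java.util.Iterator; keys present in both objects end up with the value from the second one:
private static JSONObject merge(JSONObject a, JSONObject b) throws JSONException {
    JSONObject merged = new JSONObject();
    for (JSONObject source : new JSONObject[] { a, b }) {
        // copy every key/value pair; later objects overwrite earlier ones
        Iterator<?> keys = source.keys();
        while (keys.hasNext()) {
            String key = (String) keys.next();
            merged.put(key, source.get(key));
        }
    }
    return merged;
}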
A:
The JSON objects can be merged into a new JSON object like this.
JSONObject jObj = new JSONObject();
jObj.put("one", "1");
jObj.put("two", "2");
JSONObject jObj2 = new JSONObject();
jObj2.put("three", "3");
jObj2.put("four", "4");
JSONParser p = new JSONParser();
net.minidev.json.JSONObject o1 = (net.minidev.json.JSONObject) p
.parse(jObj.toString());
net.minidev.json.JSONObject o2 = (net.minidev.json.JSONObject) p
.parse(jObj2.toString());
o1.merge(o2);
Log.print(o1.toJSONString());
Now o1 will be the merged JSON object.
You will get output like this:
{"three":"3","two":"2","four":"4","one":"1"}
Please refer to this link and download the json-smart library: http://code.google.com/p/json-smart/wiki/MergeSample
Hope it helps.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why pick DPP over Aperture?
My standard workflow is: take pictures in RAW -> import into Aperture -> append metadata info -> go through shots and filter out the bad ones -> do basic adjustments in Aperture -> use Nik plugins or Photoshop if extra editing is required
Would my image quality improve by using Digital Photo Professional seeing that it's Canon's software to process it's RAW files? Are there any hidden advantages that I don't see in using DPP?
A:
Are there any hidden advantages that I don't see in using DPP?
It depends on whether or not you believe the 'Canon marketing pitch.' :-) The pitch is essentially that because Canon makes the software and the hardware, their RAW processing is better than the competitors will be. Having done side-by-side comparisons I can say this... In straight apples-to-apples RAW comparison (e.g. all settings the same) DPP RAW processing produces an initial output that is slightly more contrasty by default. Is the output so much better than Aperture that it warrants a change? Are there hidden advantages to DPP? Well, I have two answers:
Answer 1: For me, the answer was 'no it isn't, and no there aren't.' I could slightly tweak my default processing settings in Aperture and get images that were indistinguishable from those made in DPP (unless, perhaps, you're a 'pixel peeper'). The difference was not worth changing up my workflow (and in the 'workflow' department, Aperture has DPP beat in spades).
Answer 2: Since DPP is free, and you already own Aperture, it wouldn't take more than installing DPP and doing some side-by-sides. I didn't see enough of a difference to warrant the change, but maybe you will? Just a thought if you're still on the fence after my first answer. :-)
In general my experience with Aperture is that it has better asset management, more features, and a better workflow through the product... But I to have to acknowledge that my feelings of a 'more intuitive' workflow may simply be due to my familiarity with Aperture, and relative lack thereof with DPP.
In many ways to me DPP felt like a 'lite' product, and Aperture was the 'full' version (as much as that comparison can be made for 2 different pieces of software from 2 different companies). The bottom line for me was that DPP wasn't a bad product at all, but it didn't offer enough of a difference or improvement to warrant shaking up my whole workflow over. But again I will say that if you still find yourself on the fence after my 'thumbnail opinion/review,' the good news is that it's free (except for the hour or two it will take to install it and play with it a bit) in order to test DPP out and see for yourself if it is better enough for you to be worth making a change...
A:
The reason camera makers bundle software with the camera is to cover those that are not using tools like Aperture or Photoshop. This isn't to suggest DPP is bad, not at all, but it's basically there to help the basic consumer do what they need to do after the image is off the camera. Canon, and other camera makers, are not expending the same effort into software development as Adobe or Apple will, so I seriously doubt that DPP would provide gain or have some hidden thing that Canon wouldn't share with these companies. After all, if you want to sell your gear to pros, you'd best be prepared to play nice with the software that the pros will demand.
A:
Canon will almost certainly always have a version of Digital Photo Professional to include with their cameras. Apple may not always support Aperture.
(Yeah, I know. Hindsight's always 20/20)
Additionally, it has been well over five years since the question was asked and most of the other answers written. Digital Professional 4 is a far different application than version 2 that was available in 2011.
DPP now offers even more things that none of the third party raw converters do.
The Digital Lens Optimizer is far superior at lens correction with the lenses for which Canon has produced highly detailed profiles - so much so that even the effects of diffraction can be minimized.
The fine adjustments for color temperature/white balance and the sliders in the HSL tool allow far more precise control of color than Adobe Camera Raw.
DPP continues to apply the in-camera settings individually to each raw when first opened, rather that a rigid default profile or the same single batch profile to all of the files imported at the same time. If you change camera settings while shooting, those changes are reflected when the images taken before and after the changes are opened.
The biggest thing DPP still lacks is the ability to tag images as
you import or as you edit them.
| {
"pile_set_name": "StackExchange"
} |
Q:
'Node' does not name a type Linked List
I'm trying to implement a singly linked list. I'm getting 'Node' does not name a type at line 8 and line 15. Also at lines 33 and 34 I'm getting first and last was not declared in this scope. This is probably a result of Node not being recognized.
#include <iostream>
template <typename T>
class Linked_List {
public:
Linked_List();
void print_list();
Node* find_kth(int k); // Line 8
void insert_front(T d);
void insert_back(T d);
void delete_front();
void delete_back();
private:
int length;
Node* first, last; // Line 15
class Node {
public:
Node(Node* n = NULL, T d = T{});
private:
Node* next;
int data;
};
};
template <typename T>
typename Linked_List<T>::Node(Node* n, T d) {
next = n;
data = d;
}
template <typename T>
Linked_List<T>::Linked_List() {
first = NULL; // Line 33
last = NULL; // Line 34
length = 0;
}
template <typename T>
void Linked_List<T>::print_list() {
Node* temp = first;
if (length > 0) {
std::cout << first->data << std::endl;
}
for (int i = 1; i < length; ++i) {
temp = temp->next;
std::cout << temp->data << std::endl;
}
}
template <typename T>
void Linked_List<T>::insert_front(T d) {
Node* new_node = new Node{first, d};
Node* temp = new_node;
first = temp;
}
int main() {
Linked_List<int> l_l;
//l_l.insert_front(10);
// l_l.print_list();
return 0;
}
A:
The last two errors are the result of the first two and will disappear once you solve them. The first two errors are because you reference the symbol Node before declaring it.
Also, the way you are implementing the constructor of Node is incorrect.
class Linked_List {
class Node { // <-- move it here
public:
Node(Node* n = NULL, T d = T{});
private:
Node* next;
int data;
};
public:
Linked_List();
void print_list();
Node* find_kth(int k);
void insert_front(T d);
void insert_back(T d);
void delete_front();
void delete_back();
private:
int length;
    Node *first, *last; // note: each pointer declaration needs its own asterisk
};
The constructor of Node that you are trying to implement outside of the class's body should be like this:
template <typename T>
Linked_List<T>::Node::Node(Node* n, T d) {
next = n;
data = d;
}
As you see here Linked_List<T>::Node is the name of the class, then the second Node is the constructor's name, and the inner Node is the parameter's type but in the latter we don't need to say again Linked_List<T>::Node because the scope of the name was already determined.
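As a side note, the same constructor is usually written with a member initializer list, which initializes the members directly instead of default-constructing them and then assigning; a sketch of that variant:
template <typename T>
Linked_List<T>::Node::Node(Node* n, T d) : next(n), data(d) {}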
| {
"pile_set_name": "StackExchange"
} |
Q:
Delete records with a column value from an array in Ruby on Rails
I have a column called school_id in a table called records.
Now I have an array of school_ids and I want to delete all the records that has the following ids that belong to the array.
The question is, is there a way to do it with just one line of command? Like:
Record.delete_all(:school_id => [1, 2, 3, 4])?
Right now I'm doing a looping here and as much as possible I'm trying to simplify that part. TIA
A:
You can use your way. I hope you are using Rails 3.x
Record.delete_all(:school_id => [1, 2, 3, 4])
It generates SQL
DELETE FROM "records" WHERE "records"."school_id" IN (1, 2, 3, 4)
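If you are on a newer Rails version (4+), the same delete is more commonly written through a where scope, which produces an equivalent statement:
Record.where(school_id: [1, 2, 3, 4]).delete_all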
| {
"pile_set_name": "StackExchange"
} |
Q:
Can you vote to kick yourself in CS:GO?
When a vote is called to kick you, can you vote positive? (aka "yes" to kicking yourself)
A:
While in game open your developer console by pressing the ~ key on your keyboard. This is located on the top left hand side of your keyboard under the escape button. If this does not open the developer console you will need to enable it in the settings menu.
Type in status and press enter
Copy the 2 numbers next to your name.
Type callvote kick and then paste those numbers in. Then you will see the votekick box appear letting others know a votekick has started by you to kick you.
It should look like this:
status
### ## "NAME"
callvote kick ### ##
What's much more important is the above process works on a BOT. We've all had that moment where the bot survives with an AWP while everyone else is broke with 1150$. The bot can be kicked following the above process.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why is there an error “invalid data input” when I try to test a successfully deployed ML model?
I followed a sample notebook to create a IBM cloud machine learning model using scikit-learn. The tutorial can be found here.
Every cell runs correctly and the model is deployed successfully, but when I click into the model and try to make a prediction, an error “Invalid input data” shows up. Why does this issue occur and how should I solve this?
wml_credentials = {
"username": "****",
"password": "****",
"instance_id": "****",
"url": "https://ibm-watson-ml.mybluemix.net”
}
When creating the API client, I tried changing the url from "https://ibm-watson-ml.mybluemix.net" to "https://us-south.ml.cloud.ibm.com".
I also tried adding the access key like:
wml_credentials = {
"access_key”: "****",
"username": "****",
"password": "****",
"instance_id": "****",
"url": "https://ibm-watson-ml.mybluemix.net”
}
Nothing helped.
A:
Instead of passing individual values for f0, f1, f2, ..., you can pass the input as JSON as shown below:
{
"fields" : [ "f0", "f1", "f2", "f3", "f4", "f5", "f6", "f7", "f8", "f9", "f10", "f11", "f12", "f13", "f14", "f15", "f16", "f17", "f18", "f19", "f20", "f21", "f22", "f23", "f24", "f25", "f26", "f27", "f28", "f29", "f30", "f31", "f32", "f33", "f34", "f35", "f36", "f37", "f38", "f39", "f40", "f41", "f42", "f43", "f44", "f45", "f46", "f47", "f48", "f49", "f50", "f51", "f52", "f53", "f54", "f55", "f56", "f57", "f58", "f59", "f60", "f61", "f62", "f63" ],
"values" : [ [ 0.0, 5.0, 12.0, 13.0, 16.0, 16.0, 2.0, 0.0, 0.0, 11.0, 16.0, 15.0, 8.0, 4.0, 0.0, 0.0, 0.0, 8.0, 14.0, 11.0, 1.0, 0.0, 0.0, 0.0, 0.0, 8.0, 16.0, 16.0, 14.0, 0.0, 0.0, 0.0, 0.0, 1.0, 6.0, 6.0, 16.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.0, 16.0, 3.0, 0.0, 0.0, 0.0, 1.0, 5.0, 15.0, 13.0, 0.0, 0.0, 0.0, 0.0, 4.0, 15.0, 16.0, 2.0, 0.0, 0.0, 0.0 ] ]
}
Attached is a screenshot showing how to pass JSON input under the deployment.
I just verified the notebook and you don't have to change the ML URL.
| {
"pile_set_name": "StackExchange"
} |
Q:
Changing from string to Int in Datatable with .Select statement cannot convert string to int
I have a DataTable that I clone and I end up adding in a lot of model backed properties.
One thing I noticed when I was doing sandbox/playground work with dotnetfiddle.net was that I was not using my important ID column which is of data type int.
https://dotnetfiddle.net/dRrlVu
Now that I have added it in
I'm changing from
foreach (string id in distinctRows)
to
foreach (Int32 id in distinctRows)
I really do NOT understand this line of code at all
var rows = dt.Select("ID = '" + id + "'");
Of course now that it went from a string to an int that is going to be a problem, but how do I even fix it.
public int id { get; set; }
Code that is in the dotnetfiddle https://dotnetfiddle.net/dRrlVu
DataTable dt = new DataTable();
dt.Columns.Add("ID", typeof (int));
dt.Columns.Add("Name", typeof (string));
dt.Columns.Add("Result", typeof (string));
dt.Columns.Add("other", typeof (string));
dt.Rows.Add(1, "John", "1,2,3,4,5", "b");
dt.Rows.Add(2, "Mary ", "5,6,7,8", "d");
dt.Rows.Add(3, "John", "6,7,8,9", "a");
DataTable dtRsult = dt.Clone();
var distinctRows = dt.DefaultView.ToTable(true, "ID").Rows.OfType<DataRow>().Select(k => k[0] + "").ToArray();
//foreach (string id in distinctRows)
foreach (Int32 id in distinctRows)
{
var rows = dt.Select("ID = '" + id + "'");
//var rows = dt.Select("ID = '" + id + "'");
string value = "";
string other = "";
foreach (DataRow row in rows)
{
value += row["Result"] + ",";
other += row["other"];
}
value = value.Trim(',');
dtRsult.Rows.Add(id, value, other);
value = "";
other = "";
}
var output = dtRsult;
foreach (DataRow dr in dtRsult.Rows)
{
Console.WriteLine(dr[0] + " --- " + dr[1] + "---- " + dr[2]);
}
A:
This piece of your code
....Select(k => k[0] + "").....
Takes the first column of the enumerated row (the ID field, of type integer) and uses the + operator with an empty string. The + operator, when applied to an integer and a string, turns everything into a string.
The resulting string is then added to the enumeration to be returned and turned into an array.
Of course you can't use a foreach with an Int32 to traverse your collection of strings
If you remove that operator and the empty string your code will work
var distinctRows = dt.DefaultView
.ToTable(true, "ID")
.Rows.OfType<DataRow>()
.Select(k => k[0])
.ToArray();
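Note that once the IDs stay integers, the quotes in the row filter are no longer needed either. A rough sketch of the adjusted code, casting in the projection so distinctRows becomes an int[]:
var distinctRows = dt.DefaultView.ToTable(true, "ID").Rows.OfType<DataRow>()
    .Select(k => (int)k[0]).ToArray();
foreach (int id in distinctRows)
{
    // numeric comparison, no quoting required
    var rows = dt.Select("ID = " + id);
    // ... rest of the loop unchanged
}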
| {
"pile_set_name": "StackExchange"
} |
Q:
An exclamation "By George"
Here's a sentence:
By George, I'll see this case through to a finish!
These are the words of a detective (written in 1912)
I am translating a story and there is that phrase; I can't be sure what emotion that man showed (put) when he was saying it. Astonishment or approval?
Please, can someone explain it to me, maybe rephrase it so I can catch the real meaning in a simple way.
Thank you in advance
A:
By George! is an exclamation of determination; it is antiquated now, but you can see/hear its usage in movies from the 30s and 40s where elderly people sometimes say it. It's always used in the "I'll do it, I'll get it done, I will." sense. By golly has similar antiquated usage. Without doing any research, I suspect its etymology has some relation to the Kings George of England.
Edit: Etymology. "By (God and Saint) George" is an old English oath invoked immediately before charging into battle as late as World War I. A version of the oath can be found in Shakespeare's Henry VI (part I), written circa 1589 and set in 1431. https://en.wiktionary.org/wiki/by_George
| {
"pile_set_name": "StackExchange"
} |
Q:
xsl:apply-templates: match not being respected
I'm having a problem with xsl:apply-templates. I am attempting to apply a template to one particular tag, but I am seeing text from other tags. A simple xml file:
<?xml version="1.0"?>
<!-- execute with xsltproc foo.xsl foo.xml -->
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform" >
<xsl:output method="text" />
<xsl:template match="/foo">
<xsl:for-each select="bar">
<xsl:value-of select="grill"/>
<xsl:apply-templates match="baz"/>
</xsl:for-each>
</xsl:template>
<xsl:template match="foo">[<xsl:value-of select="." />|http://example.com/<xsl:value-of select="." />]</xsl:template>
</xsl:stylesheet>
The input is:
<?xml version="1.0"?>
<foo>
<bar>
<baz>a <foo>b</foo> c</baz>
<grill>grill</grill>
</bar>
</foo>
The output is:
grill
a [b|http://example.com/b] c
grill
I was expecting the output to be
grill
a [b|http://example.com/b] c
(I don't care about the spacing problems for now)
I can get around the problem with wrapping the xsl:apply-templates with a xsl:for-each:
<xsl:for-each select="grill">
<xsl:apply-templates match="grill"/>
</xsl:for-each>
But I really don't like this solution. Is there a better way?
A:
The attribute match isn't allowed on the xsl:apply-templates element. Change match to select in the xsl:apply-templates and try it again.
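Applied to the stylesheet above, the offending line simply becomes:
<xsl:apply-templates select="baz"/>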
| {
"pile_set_name": "StackExchange"
} |
Q:
Django - Render CheckboxSelectMultiple() widget individually in template (manually)
I have those two models:
models.py
class App(models.Model):
app_name = models.SlugField(max_length=50)
options_loaded = models.ManyToManyField(Option)
created_by = models.ForeignKey(User)
def __unicode__(self):
return self.name
class Option(models.Model):
option_name = models.SlugField(max_length=50)
condition = models.BooleanField('Enable condition')
option = models.BooleanField('Enable option1')
created_by = models.ForeignKey(User)
def __unicode__(self):
return self.option_name
I would like to render a form that would look like this, where checkboxes are from different models (first column from the M2M field with CheckboxSelectMultiple() widget), and Option_name could be <a href="/link/">Option_name</a>
Is it possible?
A:
This is my simple solution: render CheckboxSelectMultiple() manually in template
<table>
<thead>
<tr>
<td> </td>
<td>V</td>
<td>S</td>
</tr>
</thead>
{% for pk, choice in form.options.field.widget.choices %}
<tr>
<td><a href="/link/{{ choice }}">{{ choice }}</a></td>
<td>
<label for="id_options_{{ forloop.counter0 }}">
<input {% for m2moption in model.m2moptions.all %}{% if m2moption.pk == pk %}checked="checked"{% endif %}{% endfor %} type="checkbox" id="id_options_{{ forloop.counter0 }}" value="{{ pk }}" name="options" />
</label>
</td>
</tr>
{% endfor %}
</table>
A:
http://dev.yaconiello.com/playground/example/one/
Firstly, I'd restructure your models like so. The way you are currently set up, the option/app checkbox relationship would behave poorly. Each Option would only be able to have a single boolean checked value that it shared with ALL App objects.
models
from django.db import models
from django.utils.translation import ugettext as _
class Option(models.Model):
condition = models.CharField(
verbose_name = _(u'Condition Text'),
max_length = 255,
)
option = models.CharField(
verbose_name = _(u'Option Text'),
max_length = 255,
)
def __unicode__(self):
return self.condition
class App(models.Model):
title = models.CharField(
verbose_name = _(u'App Name'),
max_length = 255
)
slug = models.SlugField(
max_length = 50,
unique = True
)
activated = models.BooleanField(
verbose_name = _(u'Activated'),
default = False,
)
options = models.ManyToManyField(
Option,
through="AppOption"
)
def __unicode__(self):
return self.title
class AppOption(models.Model):
app = models.ForeignKey(
App,
verbose_name = _(u'App'),
)
option = models.ForeignKey(
Option,
verbose_name = _(u'Option'),
)
condition_activated = models.BooleanField(
verbose_name = _(u'Condition Activated'),
default = False,
)
option_activated = models.BooleanField(
verbose_name = _(u'Option Activated'),
default = False,
)
class Meta:
unique_together = (("app", "option"),)
def __unicode__(self):
return "%s %s (%s | %s | %s)" % (self.app, self.option, self.app.activated, self.option_activated, self.condition_activated)
Secondly, you should use model formsets and model forms with custom logic inside...
forms
from django.forms.models import modelformset_factory
from django import forms
class AppOptionForm(forms.ModelForm):
class Meta:
model = AppOption
fields = ("app", "option", "condition_activated", "option_activated")
AppOptionFormSet = modelformset_factory(AppOption, form=AppOptionForm)
class AppForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(AppForm, self).__init__(*args, **kwargs)
if self.instance:
self.appoptions_prefix = "appoptions-%s"%self.instance.pk
self.appoptions_formset = AppOptionFormSet(prefix=self.appoptions_prefix,
queryset=AppOption.objects.filter(app=self.instance).order_by('option'))
class Meta:
model = App
fields = ("id", "activated",)
AppFormSet = modelformset_factory(App, form=AppForm)
Ok so what just happened is we created a modelform for AppOption and then turned it into a modelformset.
THEN, we created a modelform for App that has an overridden init method that instantiates an AppOption formset for the App model form's instance.
Lastly, we created a modelformset using the App modelform.
this is a view that saves all of the apps and appoptions
def one(request):
if request.method == 'POST':
formset = AppFormSet(request.POST, prefix="apps") # do some magic to ALSO apply POST to inner formsets
if formset.is_valid(): # do some magic to ALSO validate inner formsets
for form in formset.forms:
# saved App Instances
form.save()
for innerform in form.appoptions_formset:
# saved AppOption instances
innerform.save()
else:
formset = AppFormSet(prefix="apps")
options = Option.objects.all()
return render(
request,
"playground/example/one.html",
{
'formset' : formset,
'options' : options,
}
)
template
this is a test
<style>
thead td {
width: 50px;
height: 100px;
}
.vertical {
-webkit-transform: rotate(-90deg);
-moz-transform: rotate(-90deg);
-ms-transform: rotate(-90deg);
-o-transform: rotate(-90deg);
filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=3);
}
</style>
<form>
<table>
<thead>
<tr>
<td> </td>
<td><p class="vertical">Activate App</p></td>
{% for option in options %}
<td><p class="vertical">{{ option.condition }}</p></td>
<td><p class="vertical">{{ option.option }}</p></td>
{% endfor %}
</tr>
</thead>
{% for form in formset.forms %}
{% if form.instance.pk %}
<tr>
<td align="center">{{ form.instance.title }}{{ form.id.as_hidden }}</td>
<td align="center">{{ form.activated }}</td>
{% for optionform in form.appoptions_formset.forms %}
{% if optionform.instance.pk %}
<td align="center">
{{ optionform.app.as_hidden }}
{{ optionform.app.as_hidden }}
{{ optionform.condition_activated }}
</td>
<td align="center">{{ optionform.option_activated }}</td>
{% endif %}
{% endfor %}
</tr>
{% endif %}
{% endfor %}
</table>
</form>
A:
For those who came here looking to render CheckBoxMultipleSelect manually but in the standard way (the way Django does, using HTML lists), the following is what I came up with (@below-the-radar's solution helped me achieve it)
<ul id="id_{{field.name}}">
{% for pk, choice in field.field.widget.choices %}
<li>
<label for="id_{{field.name}}_{{ forloop.counter0 }}">
<input id="id_{{field.name}}_{{ forloop.counter0 }}" name="{{field.name}}" type="checkbox" value="{{pk}}" />
{{ choice }}
</label>
</li>
{% endfor %}
</ul>
| {
"pile_set_name": "StackExchange"
} |
Q:
How to check if entry is file or folder using Python's standard library zipfile?
I have a zip file and I need to check whether an entry is a folder or a file without extracting it. I could check whether the file_size property from infolist() is 0, but that is the same for a file with 0 size, so it is not useful.
I looked at the ZIP specification but that didn't help much either.
How to check if entry is file or folder using Python's standard library zipfile?
A:
How about checking if the filename ends with /?
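A small sketch of that check (the archive name is just a placeholder). On Python 3.6+ there is also ZipInfo.is_dir(), which performs exactly this trailing-slash test for you:
import zipfile

with zipfile.ZipFile("archive.zip") as zf:
    for info in zf.infolist():
        # info.is_dir() (Python 3.6+) is equivalent to the endswith check below
        if info.filename.endswith("/"):
            print(info.filename + " is a directory")
        else:
            print(info.filename + " is a file")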
| {
"pile_set_name": "StackExchange"
} |
Q:
If str is a "string slice", why don't std::slice blanket implementations work on str?
In the Rust book and docs, str is being referred to as a slice (in the book they say a slice into a String).
So, I would expect str to behave the same as any other slice: I should be able to for example use blanket implementations from std::slice.
However, this does not seem to be the case:
While this works as expected (playground):
fn main() {
let vec = vec![1, 2, 3, 4];
let int_slice = &vec[..];
for chunk in int_slice.chunks(2) {
println!("{:?}", chunk);
}
}
This fails to compile: (playground)
fn main() {
let s = "Hello world";
for chunk in s.chunks(3) {
println!("{}", chunk);
}
}
With the following error message:
error[E0599]: no method named `chunks` found for type `&str` in the current scope
--> src/main.rs:3:20
|
3 | for chunk in s.chunks(3) {
| ^^^^^^
Does this mean str is not a regular slice?
If it's not: what is the characteristic of str that makes it impossible for it to be a slice?
On a side-note: If the above is an "int slice", shouldn't str be described as a "char slice"?
A:
The documentation of str starts with
The str type, also called a 'string slice',
The quotes are important here. A str is a 'string slice', which is not the same as a simple 'slice'. They share the name because they are very similar, but are not related to each other otherwise.
You can however get a regular slice out of a 'str' using as_bytes. There is also a mutable version, which is unsafe because you could use it to break an important invariant of &str over &[u8]: UTF-8 validity.
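For the chunking example from the question, that means something like the following works. This is only a sketch: chunking bytes can split a multi-byte UTF-8 character, so from_utf8 is only guaranteed to succeed for ASCII input like this one.
fn main() {
    let s = "Hello world";
    for chunk in s.as_bytes().chunks(3) {
        // safe to unwrap here because the input is plain ASCII
        println!("{}", std::str::from_utf8(chunk).unwrap());
    }
}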
| {
"pile_set_name": "StackExchange"
} |
Q:
How to understand certain protein names
I am looking for a reference to help me understand what is meant by acronyms such as : H3K9me1, H3K9me2, and H3K9Ac.
I know that these are all histone proteins, but is there a general nomenclature algorithm for all cellular proteins? Are H3K9me1 and H3K9me2 just different proteins that methylate the same region? What is meant by H3K? Etc.
Thanks!
A:
For H3K9me1:
H3: name of the histone
K9: amino acid and position (K = lysine, position 9) of modification
me1: modification (me = methylation, ac = Acetylation), and number of modifications
So H3K9me1 means that the 9th residue (lysine) of histone H3 was monomethylated. Similarly H3K9Ac means that same residue was acetylated.
Wikipedia gives a decent explanation of the nomenclature of histone modifications, and there is also a more detailed table in Nature Structural & Molecular Biology.
| {
"pile_set_name": "StackExchange"
} |
Q:
Restrict a specific letter/number in the first input in the edittext
Currently I have an EditText where the user is required to input his/her mobile number.
What I need to do is restrict "0" and only allow "9" as the first character in the EditText.
For example:
when the user tries to input "0" as the first character in the EditText, it shouldn't be allowed.
Input that should be accepted:
9351312351, 9313151231.
Other numbers like
0931231551, 02312512312 shouldn't be allowed.
My current idea is something like this in the textwatcher:
@Override
public void afterTextChanged(Editable editable) {
if (!mMobileNumberNoEditText.getText().toString().substring(0,0).equals("9")) {
mMobileNumberNoEditText.setText("");
}
}
Currently it causes an error the moment I input any number.
The logcat shows nothing; it just freezes and stops working.
A:
You can do it like below by checking whether the current text starts with "0":
@Override
public void afterTextChanged(Editable editable) {
    // read the field's current contents from the Editable passed in
    String number = editable.toString();
    if (number.startsWith("0")) {
        // reject the invalid leading "0" by clearing the field
        mMobileNumberNoEditText.setText("");
    }
}
| {
"pile_set_name": "StackExchange"
} |
Q:
When refreshing a submitted content on Google Chrome, confirmation prompt doesn't get focused
Here is the scenario: whenever I submit a form on Google Chrome, then try to refresh the page, I get prompted by the browser to confirm the re-submission, due to data loss et cetera, as usual. The annoying part is that the confirmation prompt doesn't get the focus (it used to do it).
This is a common task I believe many programmers do:
CTRL+S on code editor;
ALT+TAB to Google Chrome and
CTRL+R (or F5) to refresh the page.
RETURN or SPACE to confirm the re-submission.
The problem is at the 4th step: since the confirmation prompt doesn't get the focus, I have to click the "OK" button with the mouse.
It's not a big deal, but it really becomes a pain when you have to test a form submission multiple times.
My Chrome version is: 27.0.1453.94 m
Thanks in advance.
A:
I noticed the same behavior in Chrome 27.0.1453.94 m, it's very annoying when developing webpages. The problem persists in version 27.0.1453.110 m.
I've tried 29.0.1529.3 canary, it's refresh behavior is back to normal in that version.
Today i noticed that this behaviour is fixed :-)
Current version: 28.0.1500.71 m
| {
"pile_set_name": "StackExchange"
} |
Q:
Can I use k-means with a distance matrix composed of percentages?
I have objects o1, o2,...,on and for each pair I calculate a value that measures the pair's difference. This is a percentage, so for example o1o2 differ by 56%. Now I want to cluster this data. I can see how a hierarchical clustering analysis would fit e.g. o5o6 are the closest pair, now which of the rest are closest to either o5 or o6, and so on.
My question is: can I apply a k-means analysis as well? It seems to me k-means requires (x,y) type data but I may be wrong. Perhaps k-means requires data with levels of measurement: (ratio, ratio) and hierarchical requires (nominal, nominal, ratio)? Or maybe there is a better technique?
A:
k-means is called k-means because it needs to compute means.
So yes, k-means requires coordinate data. Usually, the data should also be continuous and linear scaled for the mean to make sense. Technically you can run k-means on binary data, but the result will not be binary anymore, and may not make much sense.
(And the means must minimize your objective, otherwise it may fail to converge - so your distance function should be sum-of-squared-errors, because that is what the mean minimizes)
Furthermore, a distance matrix is useless for k-means. Because k-means only computes squared Euclidean distances point-to-mean, and not point-to-point.
| {
"pile_set_name": "StackExchange"
} |
Q:
What happens if the mass of a rigidbody doesn't match to it's volume X density?
I would like to know what effect an object has in rigid body simulations when the object's volume x density doesn't match its mass. In most cases the default density is 1.0 in Blender!
A:
It's not that the mass doesn't match density * volume, it's that the density is defined by
mass / volume.
Needless to say, the default mass of 1 gram makes most objects very much the opposite of "dense" ;)
To calculate physically accurate values for mass based on measured density of real-world materials, try using the Calculate Mass operator in 3D view > Tool Shelf (T) > Physics > Rigid Body Tools:
| {
"pile_set_name": "StackExchange"
} |
Q:
Setting array to JTextField then removing
I want to get a JTextField filled with an array A-Z. Then when a user presses e.g. P on the keyboard, that letter will be removed from the JTextField.
So far all I have is the following; I know it's nowhere close, so apologies (and I know it won't work).
tf_1 = new JTextField();
String[] alphabet = {"A", "B" //etc};
tf_1.setText(alphabet);
tf_1.addKeyListener(new KeyAdapter() {
public void keyTyped(KeyEvent e) {
// Remove letter if typed.
}
}
Inside the key listener, how can I add code to remove the typed letter from the alphabet array?
A:
Not so fine, but works
//Frame maninFr = new Frame();
JTextField tf_1 = new JTextField();
//maninFr.add(tf_1);
//maninFr.show();
String[] alphabet = {"A","B"};
tf_1.setText(Arrays.toString(alphabet));
tf_1.addKeyListener(new KeyListener() {
@Override
public void keyTyped(KeyEvent e) {}
@Override
public void keyReleased(KeyEvent e) {
String input = tf_1.getText();
char pressed = e.getKeyChar();
String newInput = input.replaceAll(Character.toString(pressed), "");
System.out.println("pressed: " + pressed);
System.out.println("newin : " + newInput);
tf_1.setText(newInput);
}
@Override
public void keyPressed(KeyEvent e) {}
});
| {
"pile_set_name": "StackExchange"
} |
Q:
MySQL - Are column names stored in each row in full length?
MySQL: I'm not finding this in the manual or in google, but do column names get stored in full in each row?
I suppose that this might change depending on compression, but leaving compression aside, suppose I have a normal InnoDB table with a column's name "really_long_name_for_a_column". Will this name (29 bytes) be stored in each row for every row in the table? Or is a shortened version recorded for each row or nothing at all?
Thanks!
A:
The answer is of course an emphatic no. It would be a very silly thing to store them in each and every - of the possibly billion - rows.
Column names are metadata and are stored only once, in the system tables.
The format of the rows and how they know where a column ends and the next starts depend on the engine of the table. It differs between InnoDB, MyISAM, Memory, etc. Check the respective documentation for more details.
Having said that, nothing prevents you from writing - and using - an engine of your own that stores the column names in each and every row. It would still be a very silly thing to do. You should also consider where the names will be kept when the table has 0 rows, eg. immediatey after creation of the table or after it has been truncated.
| {
"pile_set_name": "StackExchange"
} |
Q:
Click event produces different transition delays, despite equal values
I am creating a simple Dropdown menu in React. The initial event of the drop down works as expected. However, when retracting the dropdown there is a noticeable delay in the transition despite equal default values.
The CSS transition properties are identical so I am not sure why there is a delay. I have also set the transition delay value of both explicitly to 0, to be sure.
here is the component
class DrawerLink extends React.Component {
constructor() {
super();
this.state = { collapse: false };
this.handleLinkToggle = this.handleLinkToggle.bind(this);
}
handleLinkToggle(e) {
this.setState({ collapse: !this.state.collapse });
}
render() {
const { collapse } = this.state;
return (
<div className="DrawerLinkContainer">
<div onClick={this.handleLinkToggle}>
<div className="image-container">
<img src="../public/images/user.jpeg" />
</div>
<p>Steave Jobs</p>
</div>
<div
className={
collapse ? "sub-menu-container collapse" : "sub-menu-container"
}
>
<p>First Sub Menu</p>
<p>Second Sub Menu</p>
</div>
</div>
);
}
}
and the CSS:
.sub-menu-container {
padding-left: 40px;
max-height: 0px;
overflow: hidden;
transition: max-height 1s linear 0s;
&.collapse {
max-height: 500px;
transition: max-height 1s linear 0s;
}
p {
padding: 7px 35px 7px 15px;
}
}
A:
Because you have set max-height to 500px and your content is only about 60px, the first transition is still executing, causing what seems like a delay, i.e. increasing to a "max height" of 500px.
You can see better if you set a background colour on the sub container and set the height equal to the max height in the collapse class. Something like:
.sub-menu-container {
padding-left: 40px;
max-height: 500px;
overflow: hidden;
transition: max-height 1s linear 0s;
background: red;
&.collapse {
max-height: 0px;
}
p {
padding: 7px 35px 7px 15px;
}
}
The real solution is to use an animation library because transitions have issues (they just don't work) when elements are removed from the DOM by React.
If you need a solution to your example, you need to know the height of the sub container and use that as a max height. It's not very dynamic though.
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I extrude in the direction the vertices are facing?
It seems that Blender randomly selects when to extrude on a locked axis or projected from the viewport. How do I extrude so that it locks on the axis the vertices are facing?
A:
It's a real pity, but Blender is indeed broken like that. It does not do what it says. If you choose to Extrude and move on normals it works only if you have faces selected and with edges and vertices it does not work. This could be considered to be a bug, since the functionality does not work as intended or as it's label suggests at least. Luckily, it is not a huge inconvenience since it can be fixed very easily - you can just hit Enter without moving the vertices when extruded and then use Shrink/Flatten(alt+s) to move them in the direction of their normals:
You can also use the functionality of the transform operators to move them as you wish after they are extruded:
Edit: who could have thought?.. - it is broken here as well - you can see in my example gif that the vertex normals change when they are extruded with regular e. Well, if you use Extrude Only Vertices from the Space menu, it seems to work. This is a complete mess when you think about it... I am surprised they managed to leave it like that.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to set CSS for a class if there are 2 elements inside a block
How do I add a new class to <li class="li-child"> if there are 2 li elements inside the block with the class ul-parent?
<ul class="ul-parent">
<li class="li-child">1</li>
<li class="li-child">1</li>
</ul>
A:
Loop over all the matching elements and add the class:
document.querySelectorAll('.ul-parent .li-child').forEach(elem=>elem.classList.add('new-class'))
Update:
If the task is to mark the elements only when there are exactly 2 of them, you can use the following check.
let liElems = document.querySelectorAll('.ul-parent .li-child')
if (liElems.length === 2){
liElems.forEach(elem=>elem.classList.add('new-class'))
}
| {
"pile_set_name": "StackExchange"
} |
Q:
How to index a table with order by?
Using a while loop I'm able to return my table in the order I want, but after implementing pagination the variable I've created (counter) resets itself on each page, frustratingly. Example code:
$sql = ('SELECT id,name,logo FROM mytable ORDER BY name DESC LIMIT 25');
$query = mysqli_query($db_conx,$sql);
$counter = 0;
while ($row = $query->fetch_assoc()) {
$counter++;
echo "$counter, $row['id'], $row['name']";
echo "<br />";
}
I've tried many things and can't get this to work. Obviously my logic is flawed. The loop returns the correct results, but the $counter variable breaks on each page, resetting itself indefinitely.
What I am trying to do is get $counter to increase by 25 (representing results for each page) for each of the pages created by the pagination loop. Example code:
for ($i=1; $i<=$total_pages; $i++) {
echo "<a href='page.php?page=".$i."'> [".$i."]</a> ";
$GLOBALS["counter"]+=25;
};
Obviously this was not working, so I am stumped at what I should try next. If anyone has any ideas I would love to hear them, I have heard great things about the SO community.
A:
You seem to display only the first 25 results at any time.
You need to initialize $counter to zero if it's the first page, to 26 if it's the second page, and so on :
$counter = 0;
if(isset($_GET['counter'])){
$counter = intval($_GET['counter']);
}
You need to modify your query to fetch a different set of results for each page :
$sql = 'SELECT id,name,logo FROM mytable ORDER BY name DESC LIMIT ' . mysqli_real_escape_string($db_conx, $counter . ',25');
$query = mysqli_query($db_conx,$sql);
Then I assume you display a link to the other paginated pages, you need to pass it the value of $counter :
<a href="results.php?counter=<?php echo $counter;?>">Next</a>
| {
"pile_set_name": "StackExchange"
} |
Q:
New to Macbook Pro, need help upgrading
I was recently gifted an old Mid 2010 Macbook Pro with the following specs listed here:
http://www.everymac.com/systems/apple/macbook_pro/specs/macbook-pro-core-i7-2.66-aluminum-15-mid-2010-unibody-specs.html
I would like to do a few things to it, but I have never owned an apple computer before and I know there are some restrictions on the hardware i can purchase to upgrade it.
These are the things I want to do
1) Upgrade Ram: Currently it has 2x2GB of RAM in it. This model is rated for up to 8GB but I have been reading online and it seems some people have been able to successfully install 16GB of RAM, so i am confused. Does anyone have a definitive statement on whether or not this Macbook Pro will work with 16GB of RAM? If it does accept 16GB total which RAM should i buy for a Macbook Pro like this? If it only accepts 8GB total which RAM should i buy? And I know the clock speed must be 1066Hz for this Macbook Pro, but can i use DDR3, or DDR4 RAM? Does the DDR type matter?
2) Install Secondary Hard Drive: I am not 100% so i need someone to inform me but i believe there is space in this Macbook Pro for a second hard drive. Does the CD Drive need to be removed in order for a second Hard Drive to be put in? Or is there simply space inside it for a second hard drive but not installed by Apple? Either way I would like to install an SSD into it and install the Operating System onto that SSD. Which SSD can i look to buy that is compatible with this Pro?
3) Fresh install of Mac OSX: As it currently stands my friend who gave this laptop to me didn't erase his data. I am uncomfortable with this and want a fresh start. How do I do a fresh install of Mac OSX Mavericks (or should i wait for Yosemite?). Where can i buy a Disk/USB for it (or make a disk/USB for it) to do a clean install and how do i format a Macbook Pros hard drive and install fresh? Should this be done at an apple store?
4) Higher Screen Resolution: This is my lowest priority and I don't think it is possible to do without major invasive changes to the Macbook Pro but i want to know if it is possible. Currently the Max Resolution on the Device is 1440x900, i would like to know if it is possible to replace the LCD with one that is 1920x1080 for high resolution.
A:
1) RAM Upgrade
As far as I'm aware, the laptop cannot be upgraded beyond 8GB. This is due to restrictions by the CPU and chipset architecture in that particular model. Unless you are a real power user who does lots of video editing or likes to have a lot of applications open at a single given point in time, 8GB should be perfectly fine.
DDR4 ram is NOT supported. It is only a new technology (literally only about a month old) and presently is only compatible with 3 CPU's that were released this year and certain motherboards with a specific chipset and socket for these new CPU's. Bottom line is, your 2010 laptop will not be able to use this type of memory. DDR3 is your only option given the type and generation of hardware inside the laptop.
2) Secondary HDD
It is possible to install a second hard drive, however this will involve removing the CD drive. This will also cause trouble if you ever take the laptop to a genius bar for technical assistance on an unrelated failed component as you have tinkered with the laptop beyond what is allowed. If you're willing to take the risk and do some DIY within the laptop, this is the part that will allow you to install a secondary drive.
3) Fresh Install of OSX
Yes. Wipe it clean now with Mavericks. Updating to Yosemite when it launches will be extremely simple so there's no need to wait for it. Use this guide to help you create a bootable installer USB drive to completely wipe the drive and put a fresh install of Mavericks on the computer.
4) Higher screen resolution
Unfortunately you cannot modify the screen resolution. No such third party screen exists. Generally speaking 15" laptops (especially around the 2010 time period) never had 1920x1080 resolutions and only few laptops today have them (commonly known as high density displays, or in the apple world, retina displays). Only the 17" model had a resolution that high (1920x1200 - 16:10 aspect ratio) and that model has been discontinued for a few years now. So I'm afraid that you're stuck with the resolution you've got.
| {
"pile_set_name": "StackExchange"
} |
Q:
Using "can't" as a translation of "no capaz"
(Please forgive - and/or correct - my "Google Translate" Spanish.)
Both Duolingo and Google Translate render the phrase "¡No eres capaz!" as "You are not capable!" This is reasonable, but I would never use it in everyday language.
To me, in English "can't" covers both incapacity and lack of ability, while in Spanish "no poder" means the former but "no capaz" means the latter. Is "No you can't!" fine, or should I stick strictly to "not capable" for "no capaz"?
A:
capable is an adjective. Compare:
No eres capaz. (We address the person, pointing out that the quality they need to carry out an action is not there.)
No puedes hacerlo. (We show that the person does not have the ability to carry out an action.)
It all depends on context: saying no puedes or no eres capaz is not the same thing, but the second leans more toward denoting a quality the person lacks, while the first refers to the action.
| {
"pile_set_name": "StackExchange"
} |
Q:
Android - How to remove activity from recent apps?
I created a custom dialog in my Android app. This dialog is an activity with a Dialog theme. Now, assume that the app is showing this dialog and the user presses "Home" to go back to the Android Home screen. Later, the user presses and holds the Home button, then chooses my app from the recent apps. It will show the dialog again.
What I want is for the dialog not to show; I want to show the activity which called this dialog.
How can I do this?
A:
How to remove activity from recent apps?
I think android:excludeFromRecents="true" should do the trick. Use it in your manifest
What I want to do here is that the dialog shouldn't show.
dialog.cancel() in onPause()
A:
Also you can use flag Intent.FLAG_ACTIVITY_EXCLUDE_FROM_RECENTS:
.....
i.addFlags(Intent.FLAG_ACTIVITY_EXCLUDE_FROM_RECENTS);
startActivity(i);
The activity you start won't be in recent apps.
| {
"pile_set_name": "StackExchange"
} |
Q:
Build fails after upgrading from Vaadin 7 8
I just upgraded from Vaadin 7 to 8. We were on 7.7.7 and now we are on 8.0.3. My IDE shows no compile time errors but when I run mvn package I see the following error..
[INFO] --- vaadin-maven-plugin:8.0.3:compile (default) @ eagleportal ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 25.728 s
[INFO] Finished at: 2017-03-22T18:19:55-06:00
[INFO] Final Memory: 22M/142M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.vaadin:vaadin-maven-plugin:8.0.3:compile (default) on project eagleportal: GWT Module com.vaadin.terminal.gwt.DefaultWidgetSet not found in project sources or resources. -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
My maven config for the Vaadin compiler looks like this
<plugin>
<groupId>com.vaadin</groupId>
<artifactId>vaadin-maven-plugin</artifactId>
<version>${vaadin.plugin.version}</version>
<configuration>
<extraJvmArgs>-Xmx512M -Xss1024k</extraJvmArgs>
<!-- <runTarget>mobilemail</runTarget> -->
<!-- We are doing "inplace" but into subdir VAADIN/widgetsets. This
way compatible with Vaadin eclipse plugin. -->
<webappDirectory>${basedir}/src/main/webapp/VAADIN/widgetsets</webappDirectory>
<hostedWebapp>${basedir}/src/main/webapp/VAADIN/widgetsets</hostedWebapp>
<noServer>true</noServer>
<!-- Remove draftCompile when project is ready -->
<draftCompile>false</draftCompile>
<compileReport>true</compileReport>
<style>OBF</style>
<runTarget>http://localhost:8080/</runTarget>
<warSourceDirectory>${basedir}/src/main/webapp</warSourceDirectory>
</configuration>
<executions>
<execution>
<configuration>
<modules>
<module>com.eaglecopter.portal.EaglePortal</module>
<module>com.eaglecopter.portal.EaglePortalMobile</module>
</modules>
</configuration>
<phase>generate-resources</phase>
<goals>
<goal>clean</goal>
<goal>resources</goal>
<goal>update-theme</goal>
<goal>update-widgetset</goal>
<goal>compile-theme</goal>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
A:
Your application widgetset or some add-on that you are using is still pointing to the V6-era com.vaadin.terminal.gwt.DefaultWidgetSet. If there is a direct reference to that in your application's .gwt.xml files, switch it to com.vaadin.DefaultWidgetSet.
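In the .gwt.xml file that line would look roughly like this (a sketch of the inherits entry only):
<inherits name="com.vaadin.DefaultWidgetSet" />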
If there are some add-ons that try to bring it in, upgrade those to Vaadin 8 compatible versions. If no V8-compatible version is out yet, you'll need to either help the maintainer to upgrade or try to manage without the add-on.
| {
"pile_set_name": "StackExchange"
} |
Q:
convert first character of string to uppercase using java 8 lambdas only
I want to create a basic program that converts the first character of a string to uppercase using lambdas.
Input
singhakash
Output
Singhakash
I tried
String st = "singhakash";
//approach 1
System.out.print(st.substring(0, 1).toUpperCase());
st.substring(1).codePoints()
.forEach(e -> System.out.print((char) e));
System.out.println();
//approach 2
System.out.print(st.substring(0, 1).toUpperCase());
IntStream.range(0, st.length())
.filter(i -> i > 0)
.mapToObj(st::charAt)
.forEach(System.out::print);
But in both cases I have to print the first character separately. Is there any way I can do that without having a separate print statement?
Note: I can do this normally with a loop or any other approach, but I am looking for a lambdas-only solution.
Thanks
A:
You could do it like this:
String st = "singhakash";
IntStream.range(0, st.length())
.mapToObj(i -> i == 0 ? Character.toUpperCase(st.charAt(i)) : st.charAt(i))
.forEach(System.out::print);
A:
The simplest way to do it would be
String result = Character.toUpperCase(st.charAt(0))+st.substring(1);
If you feel like you have to optimize it, i.e. reduce the number of copying operations (instead of letting the JVM do it), you may use:
StringBuilder sb=new StringBuilder(st);
sb.setCharAt(0, Character.toUpperCase(sb.charAt(0)));
String result=sb.toString();
But if it really has to be done using the fancy new Java 8 feature, you can use
String result=IntStream.concat(
IntStream.of(st.codePointAt(0)).map(Character::toUpperCase), st.codePoints().skip(1) )
.collect(StringBuilder::new, StringBuilder::appendCodePoint, StringBuilder::append)
.toString();
This solution will even handle supplementary code points correctly, so it has an advantage over the simple solutions (though it would not be too hard to make those supplementary-code-point aware too).
If you want to print directly, you can use
IntStream.concat(
IntStream.of(st.codePointAt(0)).map(Character::toUpperCase), st.codePoints().skip(1))
.forEach(cp -> System.out.print(Character.toChars(cp)));
| {
"pile_set_name": "StackExchange"
} |
Q:
Mapper vs Reducer Computation Time and effect on network performance Hadoop
I have to generate n*(n-1)/2 candidate pairs, from a list of n candidates.
This can be done in every mapper instance or in every reducer instance.
But I observed that when this operation was done in the Reduce phase, it was way faster than when done in the Map phase. What is the reason?
Can Mappers not support heavy computation?
What is the impact of a Mapper instance doing such a computation on the network?
Thanks!
A:
The short answer is: when you use the mapper to generate the data, Hadoop has to copy that data from the mapper to the reducer, and this costs too much time.
Total size of the generated data
The total data generated is O(n^2).
Comparison of data generation by mapper vs. reducer
If you generate the n*(n-1)/2 pairs in the mapper, the intermediate data has to be copied to the reducer. This step in Hadoop is called the Shuffle phase, and the reducer will still need to write the data to HDFS. The total data read from and written to the hard disk in your case during the shuffle phase can be about 6 * sizeof(intermediate data), which is very large.
If the data is generated by the reducer instead, the O(n^2) intermediate data transfer is unnecessary, so it can have much better performance.
So your performance issue is mainly caused by data transfer, not computation. Without the extra disk access, the mapper and the reducer would have roughly the same performance.
Ways to improve the performance of the mapper data-generation strategy
If you still want to use the mapper to generate the data, tuning io.sort.factor and turning on compression of the intermediate data may help improve the performance.
| {
"pile_set_name": "StackExchange"
} |
Q:
Formatting Cucumber Feature file
I use data tables extensively in Cucumber feature files. The data tables are mostly dumps of database tables which I export in pipe-delimited format, so they are not properly aligned.
Is there any option in Cucumber-JVM that will auto-align data tables?
A:
If you're using IntelliJ, go to Code > Reformat Code.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why doesn't the jQuery datepicker execute?
<!DOCTYPE html>
<html>
<head>
<meta charset = "utf-8/">
<title>Calander</title>
<script type = "text/javascript" src = "http://code.jquery.com/jquery-1.11.3.min.js"></script>
<script>
$('#Date').datepicker();
</script>
</head>
<body>
<label for= "Date"> Date </label>
<input type = "text" id = "Date" name = "Date"/>
</body>
</html>
When I run this, nothing happens. I want the calendar to show up when clicking on the textbox. I think it may be because of the format, i.e. an incorrect src.
A:
Here is the whole code for including the jQuery UI datepicker. Note that datepicker() is part of jQuery UI, not jQuery core, so you also need to include the jQuery UI script and stylesheet, and call datepicker() after the DOM is ready.
you can also see here : https://jqueryui.com/datepicker/
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>jQuery UI Datepicker - Default functionality</title>
<link rel="stylesheet" href="http://code.jquery.com/ui/1.11.4/themes/smoothness/jquery-ui.css">
</head>
<body>
<p>Date: <input type="text" id="datepicker"></p>
<script src="http://code.jquery.com/jquery-1.10.2.js"></script>
<script src="http://code.jquery.com/ui/1.11.4/jquery-ui.js"></script>
<link rel="stylesheet" href="/resources/demos/style.css">
<script type="text/javascript">
$(function() {
$( "#datepicker" ).datepicker();
});
</script>
</body>
</html>
| {
"pile_set_name": "StackExchange"
} |
Q:
Qt Model-View update view?
I have a model which is updated dynamically, independently of the view. Which method should be called on the view to make it show the model's current data?
Example:
StationListModel *model = new StationListModel(dynamic_list);
QListView *view = new QListView;
view->setModel(model); //view set with empty model
view->show();
At some point in time the dynamic_list is populated with new entries via a socket connection (nothing to do with the view). How do I populate the view with the new data?
A:
The model must emit signals to notify views when its data changes. Choose the appropriate signals depending on how exactly the data changed:
The dataChanged signal forces the view to update specific cells, but not to create or remove cells.
The layoutAboutToBeChanged and layoutChanged signals force the view to update everything.
The signals about adding or removing rows and columns force the view to update accordingly.
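For example, here is a minimal sketch of the third approach for the StationListModel from the question, assuming it is a QAbstractListModel subclass that stores its entries in a member container (the m_stations member, the Station type and the appendStation method are assumptions, not from the original post):
void StationListModel::appendStation(const Station &station)
{
    const int row = m_stations.size();
    // Tell attached views a row is about to be inserted at position `row`.
    beginInsertRows(QModelIndex(), row, row);
    m_stations.append(station);
    // Emits rowsInserted(), so the QListView creates the new item automatically.
    endInsertRows();
}
If you only change the contents of an existing entry, emit dataChanged(index, index) for the affected model index instead of inserting rows.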
| {
"pile_set_name": "StackExchange"
} |
Q:
Get DHCP running with Kernel 4.14-rc5
I tried to solve the problem of my touchpad not working on my Lenovo 720-15IKB by installing the latest RC kernel 4.14-rc5 as described here, which really worked! The touchpad is working now. But the kernel has caused a new problem:
Networking doesn't work correctly with kernel 4.14-rc5
I don't get any IPv4 address any more in my local network. IPv6 works correctly. If IPv6 is running in your network, you could add all needed addresses by hand to the /etc/hosts file, but that is no solution ;)
I could only workaround it like this:
Instead of DHCP I used manual Wi-Fi configuration, which still didn't help at first. Then I connected a USB-LAN adapter once and noticed that I got correct internet settings via LAN. This seems to have somehow fixed some misconfiguration. I can now get correct internet settings via Wi-Fi too, and after a reboot I can reconnect via Wi-Fi only. But DHCP still doesn't work. I tested this with 3 different Wi-Fi networks in different places.
I just installed plain standard Ubuntu 17.10 with systemd and Network Manager, no modifications.
How can I get IPv4 with DHCP running with the latest kernel?
A:
Looking on google:
https://ubuntuforums.org/showthread.php?t=2372492
https://www.phoronix.com/scan.php?page=news_item&px=AppArmor-Linux-4.14
https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1724450
It seems related to AppArmor, and I read there are patches (like perhaps editing apparmor's configuration: apparmor-for-4.14.diff ).
This Ubuntu page on AppArmor gives information on how to partially disable it. The same command, aa-complain, can be used to allow either a given command or a whole profile to be bypassed. So first install the required tools (be creative if the network isn't working yet...):
apt install apparmor-utils
For dhclient and related binaries (including its communication with NetworkManager), doing this fixes the DHCP issue:
sudo aa-complain /etc/apparmor.d/sbin.dhclient
In case another unrelated command behaves differently than before and there's no easy profile for it, just using sudo aa-complain /path/to/command should allow it to work unhindered. Keep security considerations in mind.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to display status updates from followed users? (MVC)
What would be a good approach to display on a dashboard, status updates from users that are being followed (e.g. twitter) on a MVC framework such as codeigniter.
I have a table just for the status updates, where I record the ID, user_id & message.
Should I create a DB table where I record who is following whom, by recording the users' IDs when a user chooses to follow someone?
If so, how would I query the database to request status updates only from followed users?
A:
This is a typical many-to-many relationship, so you'll need a table to store this relation. The table would simply contain two user IDs, one for the follower and one for the user being followed. For example:
Followed_Id (BIGINT) | Follower_Id (BIGINT)
These columns would then both have a foreign key referencing the ID column of your user table. To fetch the feed, you would then join your status table against this relation table, selecting the statuses whose user_id appears as a Followed_Id for the current user's Follower_Id.
There are a few ORM tools for CI, as swatkins notes in his comment.
For the querying of the status updates, you basically have two options:
Polling, where your client would periodically poll the backend for new updates
Pushing, where your backend will notify your client of new updates
The second option is considered a better approach for problems like these because:
It can be implemented asynchronously
It avoids unnecessary calls to the backend in case there's no new data
| {
"pile_set_name": "StackExchange"
} |
Q:
Invoking method using reflection
I am trying to figure out how to invoke a method of a custom class. Here is the process of what I am trying to do:
1) I initialize an array of methods from the list of methods of my custom class, and an empty List of Method which will be used to hold a filtered list of these methods.
Method[] methods = MyClass.class.getDeclaredMethods();
List<Method> x = new ArrayList<Method>();
2) I then run my array of methods through a for loop and filter out whichever methods do not fill my required criteria.
for (Method m : methods){
if(...){
if(...){
x.add(m);
}
}
}
3) Finally, I need to invoke each of the methods in the finalized list. This is where I am stuck, I am not exactly sure how the invoke function works. Here is what I am trying:
for(int i=0; i < x.size(); i++){
boolean g = x.get(i).invoke();
if(...)
else(...)
}
The thing is, I know Exactly what it is I don't know, I am just having trouble finding the answers. These are the questions I need answered:
1) Which object will actually use the invoke function? Is it going to be, in my case, the particular method I want to invoke, or an instance of the class I am trying to invoke?
2) I know that the invoke function is going to require arguments, one of which is the parameter data for the method. What I am unclear about is what exactly the first argument needs to be. I am thinking that the first argument is the actual method itself, but then I run into a logical loop, because the way I have it coded has the method using the invoke function, so I am stumped.
3) In my case, the methods I wish to invoke don't actually take any parameters, so when I do happen to figure out how the invoke function works, will I need to set one of the arguments to null, or will I just omit that part of the argument list?
A:
You're using .invoke incorrectly. See this short example:
public class Test {
public static void main(String[] args) throws NoSuchMethodException, SecurityException, IllegalAccessException, IllegalArgumentException, InvocationTargetException {
X obj = new X();
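        // getMethod takes the method name plus the parameter types; passing null (or simply no extra arguments) means the method has no parameters.
        // Likewise, invoke takes the target instance plus the argument values, so null is fine here for a no-argument method.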
Method method = obj.getClass().getMethod("test", null);
method.invoke(obj, null);
}
}
class X {
public void test(){
System.out.println("method call");
}
}
Output:
method call
More information in the docs.
Invokes the underlying method represented by this Method object, on the specified object with the specified parameters.
You never specified an object or parameters. My sample uses no parameters, so I can pass null for them. But either way, you have to provide an instance as the first argument (unless the method is static).
| {
"pile_set_name": "StackExchange"
} |
Q:
Mapping object with AutoMapper
I have my classes that looks like this :
public class Student{
public string Name { get; set; }
public string Id { get; set; }
public List<Course> Courses { get; set; }
public string Address { get; set; }
}
public class Course{
public string Id { get; set; }
public string Description { get; set; }
public Date Hour { get; set; }
}
And i want to map the Student class to the following one using AutoMapper
public class StudentModel{
public string Id { get; set; }
public StudentProperties Properties { get; set; }
}
where StudentProperties holds the remaining properties of the Student class
public class StudentProperties{
public string Name { get; set; }
public List<Course> Courses { get; set; }
public string Address { get; set; }
}
Based on the AutoMapper documentation (https://github.com/AutoMapper/AutoMapper/wiki), we can use a custom resolver to resolve a destination member while performing the mapping.
But I don't want to add a new class for the resolver.
I'm wondering if there is a simple way to perform the mapping by just doing a simple configuration like this:
Mapper.Initialize(cfg =>
{
cfg.CreateMap<Student, StudentProperties>();
cfg.CreateMap<Student, StudentModel>();
});
A:
Here's one option that would work for you and would use AutoMapper for both StudentModel and StudentProperties:
Mapper.Initialize(cfg =>
{
cfg.CreateMap<Student, StudentProperties>();
cfg.CreateMap<Student, StudentModel>()
.ForMember(dest => dest.Properties,
opt => opt.ResolveUsing(Mapper.Map<StudentProperties>));
});
Here, we're making use of ResolveUsing, but using the Func<> version in order to avoid creating a new class. This Func<> is just Mapper.Map itself, which already knows how to map from Student to StudentProperties.
| {
"pile_set_name": "StackExchange"
} |
Q:
Algorithm for finding double subwords of given word
Subword U of given word V is double, when It's in form u=ww, for example "abab" is a double subword of "acdababx" but "cdab" is not.
I need an algorithm that checks if given subword U of word V is double. V can be preprocessed in linear time, but answer for any particular U should have constant time complexity, becouse there will be many U's for every V. U is given as an interval,for example if V = "acdababx",
interval [3..6] corresponds to subword "daba".
example input and output:
V = abbacbacca
U =
[1 4] --> No
[3 8] --> Yes
[5 8] --> No
[8 9] --> Yes
[1 10] --> No
This is not a problem from any current contest.
A:
Here is one algorithm which claims to mark the endpoints of all the double words (or, as they are commonly known in the literature, tandem repeats) in a suffix tree of the input word (which can be constructed in O(n) time). Of course, since I don't have full access to the article, I am not sure if it will satisfy the O(1) query time.
The paper is: Linear time algorithms for finding and representing all the tandem repeats in a string
Hope that helps.
| {
"pile_set_name": "StackExchange"
} |
Q:
Outlook reminders gone missing on Citrix client (works in cached mode)
Setup
3 ESXi 5.1 hosts, 4 virtual XENAPP servers (3 in production, 1 test server), 300 users (50 PC's, the rest is on clients), Exchange 2010
Problem
One Citrix user's Outlook suddenly won't show reminders anymore. Everybody elses works just fine.
So far I tried
1) In outlook: Show -> Reminders -> 0 reminders (this is the problem).
2) In outlook: Checked (and checked again) that reminders are set correctly. Files -> Settings -> Advanced -> reminders -> Show reminders -> checkboxed
3) In outlook: Changed setting to cached mode. Works, resetting back to none-cached mode. No go.
4) Logged him off, deleted his Citrix profile (this kinda fix most), logged him in again, no luck.
I hoped that reverting cached mode in step 3 would make Outlook sync everything back to the Exchange server. Do I misunderstand something here? Also, I've been reading that 'outlook /cleanreminders' and 'outlook /resetfolders' could do the trick. Will these commands also apply to Citrix clients? (And how do I do it? I'm quite new to Citrix.)
I'm not sure why, but I have a hunch that it's not so much Citrix related, but more Outlook/Exchange related. But again, I'm new to Citrix.
Any thoughts and suggestions are highly appreciated. Thank you.
-Rasmus
A:
After reading the FAQ it seems to be okay to answer your own question. It's already in the commentary above, but to close it off, here goes:
The problem was with Outlook, not with Citrix. As stated above, I deleted all notifications older than one month (the user's best guess as to when the problem occurred), and bingo: the remaining reminders showed up as they were supposed to.
I wish I could give more details on whether it was a synchronization error or something else, but this is all I've got.
| {
"pile_set_name": "StackExchange"
} |
Q:
TypeError: Timestamp subtraction
I have a script that goes and collects data. I am running into the TypeError: Timestamp subtraction must have the same timezones or no timezones error. I have looked at other postings on this error, but had trouble finding a solution for me.
How can I bypass this error? Once the data is collected, I don't manipulate it, and I don't quite understand why I cannot save this dataframe to an Excel document. Can anyone offer help?
import pandas as pd
import numpy as np
import os
import datetime
import pvlib
from pvlib.forecast import GFS, NAM
#directories and filepaths
barnwell_dir = r'D:\Saurabh\Production Forecasting\Machine Learning\Sites\Barnwell'
barnwell_training = r'8760_barnwell.xlsx'
#constants
writer = pd.ExcelWriter('test' + '_PythonExport.xlsx', engine='xlsxwriter')
time_zone = 'Etc/GMT+5'
barnwell_list = [r'8760_barnwell.xlsx', 33.2376, -81.3510]
def get_gfs_processed_data1():
start = pd.Timestamp(datetime.date.today(), tz=time_zone) #used for testing last week
end = start + pd.Timedelta(days=6)
gfs = GFS(resolution='quarter')
#get processed data for lat/long point
forecasted_data = gfs.get_processed_data(barnwell_list[1], barnwell_list[2], start, end)
forecasted_data.to_excel(writer, sheet_name='Sheet1')
get_gfs_processed_data1()
A:
When I run your sample code I get the following warning from XlsxWriter at the end of the stacktrace:
"Excel doesn't support timezones in datetimes. "
TypeError: Excel doesn't support timezones in datetimes.
Set the tzinfo in the datetime/time object to None or use the
'remove_timezone' Workbook() option
I think that is reasonably self-explanatory. To strip the timezones from the timestamps pass the remove_timezone option as recommended:
writer = pd.ExcelWriter('test' + '_PythonExport.xlsx',
engine='xlsxwriter',
options={'remove_timezone': True})
When I make this change the sample runs and produces an xlsx file. Note, the remove_timezone option requires XlsxWriter >= 0.9.5.
| {
"pile_set_name": "StackExchange"
} |
Q:
Multiple groups in mongodb aggregate pipeline
I have a collection which contains data like:
{
attribute: 'value',
date: ISODate("2016-09-20T18:51:05Z")
}
and I want to group by attribute to get a total count per attribute and at the same time group by attribute and $hour to get a count per attribute and hour.
I know I can do something like:
{
$group: {
_id: '$attribute'
count: {
$sum: 1
}
}
}
and
{
$group: {
_id: {
attribute : '$attribute',
hour: {
'$hour' : '$date'
}
},
count: {
$sum: 1
}
}
},
{
$group: {
_id: {
attribute: '$_id.attribute'
},
hours: {
'$push': {
hour: '$_id.hour',
count: '$count'
}
}
}
}
to get both results in two separate aggregations, but is there a way to get the result I want in one query?
Edit as requested:
Perhaps this is utterly wrong but ideally I would like to get a response like:
{
_id: "attribute1",
total: 20, // Total number of attribute1 documents
// And the breakdown of those 20 documents per hour
hours: [{
hour: 10,
count: 8
},
{
hour: 14,
count: 12
}]
}
However, the whole point of the question is whether this can be done (and how) in one query in MongoDB, not in two queries that would be merged in the application layer.
A:
You can first group by attribute-hour pairs and their corresponding occurrences.
Then, you project the attribute, a pair of hour-count, and a copy of that count.
Finally, you group by the attribute alone, $push the pairs, and sum the counts.
[
{
$group: {
_id: {
attribute: "$attribute",
hour: { "$hour": "$date" }
},
count: {
$sum: 1
}
}
},
{
$project: {
attribute: "$_id.attribute",
pair: { hour: "$_id.hour", count: "$count" },
count: "$count",
_id: 0
}
},
{
$group: {
_id: "$attribute",
hours: {
$push: "$pair"
},
total: {
$sum: "$count"
}
}
}
]
| {
"pile_set_name": "StackExchange"
} |
Q:
How to fix tensorflow protobuf compilation errors on OSX?
I'm trying to compile TensorFlow after checking out the repo.
I've reached a point where I'm stuck with google protobuf errors:
INFO: From Compiling tensorflow/core/kernels/histogram_op_gpu.cu.cc:
./tensorflow/core/lib/core/status.h(32): warning: attribute "warn_unused_result" does not apply here
external/protobuf_archive/src/google/protobuf/arena.h(719): error: more than one instance of overloaded function "google::protobuf::Arena::CreateMessageInternal" matches the argument list:
function template "T *google::protobuf::Arena::CreateMessageInternal<T>(google::protobuf::Arena *)"
function template "T *google::protobuf::Arena::CreateMessageInternal<T,Args...>(Args &&...)"
argument types are: (google::protobuf::Arena *)
detected during:
instantiation of "Msg *google::protobuf::Arena::CreateMaybeMessage<Msg>(google::protobuf::Arena *, google::protobuf::internal::true_type) [with Msg=tensorflow::TensorShapeProto_Dim]"
(729): here
instantiation of "T *google::protobuf::Arena::CreateMaybeMessage<T>(google::protobuf::Arena *) [with T=tensorflow::TensorShapeProto_Dim]"
external/protobuf_archive/src/google/protobuf/repeated_field.h(648): here
instantiation of "GenericType *google::protobuf::internal::GenericTypeHandler<GenericType>::New(google::protobuf::Arena *) [with GenericType=tensorflow::TensorShapeProto_Dim]"
external/protobuf_archive/src/google/protobuf/repeated_field.h(675): here
instantiation of "GenericType *google::protobuf::internal::GenericTypeHandler<GenericType>::NewFromPrototype(const GenericType *, google::protobuf::Arena *) [with GenericType=tensorflow::TensorShapeProto_Dim]"
external/protobuf_archive/src/google/protobuf/repeated_field.h(1554): here
instantiation of "TypeHandler::Type *google::protobuf::internal::RepeatedPtrFieldBase::Add<TypeHandler>(TypeHandler::Type *) [with TypeHandler=google::protobuf::RepeatedPtrField<tensorflow::TensorShapeProto_Dim>::TypeHandler]"
external/protobuf_archive/src/google/protobuf/repeated_field.h(2001): here
instantiation of "Element *google::protobuf::RepeatedPtrField<Element>::Add() [with Element=tensorflow::TensorShapeProto_Dim]"
bazel-out/local_darwin-opt/genfiles/tensorflow/core/framework/tensor_shape.pb.h(471): here
....
Has anyone bumped into this issue? Any ideas on how to tackle it?
(I'm using Python 2.7 in a virtual environment on OSX 10.11.5)
A:
Luckily someone else not only had the same issue already, but also found a fix and shared it. Thanks to Daniel Trebbien's comments on protobuf and eigen, I could compile TensorFlow with GPU support on OSX:
>>> import tensorflow as tf
>>> tf.__version__
'1.6.0-rc0'
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
2018-02-19 22:22:12.194516: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:859] OS X does not support NUMA - returning NUMA node zero
2018-02-19 22:22:12.195011: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1331] Found device 0 with properties:
name: GeForce GT 750M major: 3 minor: 0 memoryClockRate(GHz): 0.9255
pciBusID: 0000:01:00.0
totalMemory: 2.00GiB freeMemory: 12.58MiB
2018-02-19 22:22:12.195038: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1410] Adding visible gpu devices: 0
2018-02-19 22:22:14.563665: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-02-19 22:22:14.563700: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-02-19 22:22:14.563707: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-02-19 22:22:14.563798: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1021] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 65 MB memory) -> physical GPU (device: 0, name: GeForce GT 750M, pci bus id: 0000:01:00.0, compute capability: 3.0)
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GT 750M, pci bus id: 0000:01:00.0, compute capability: 3.0
2018-02-19 22:22:14.697626: I tensorflow/core/common_runtime/direct_session.cc:297] Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GT 750M, pci bus id: 0000:01:00.0, compute capability: 3.0
For reference, here are the patches proposed in the comments:
--- a/tensorflow/workspace.bzl
+++ b/tensorflow/workspace.bzl
@@ -353,11 +353,11 @@ def tf_workspace(path_prefix="", tf_repo_name=""):
tf_http_archive(
name = "protobuf_archive",
urls = [
- "https://mirror.bazel.build/github.com/google/protobuf/archive/396336eb961b75f03b25824fe86cf6490fb75e3a.tar.gz",
- "https://github.com/google/protobuf/archive/396336eb961b75f03b25824fe86cf6490fb75e3a.tar.gz",
+ "https://mirror.bazel.build/github.com/dtrebbien/protobuf/archive/50f552646ba1de79e07562b41f3999fe036b4fd0.tar.gz",
+ "https://github.com/dtrebbien/protobuf/archive/50f552646ba1de79e07562b41f3999fe036b4fd0.tar.gz",
],
- sha256 = "846d907acf472ae233ec0882ef3a2d24edbbe834b80c305e867ac65a1f2c59e3",
- strip_prefix = "protobuf-396336eb961b75f03b25824fe86cf6490fb75e3a",
+ sha256 = "eb16b33431b91fe8cee479575cee8de202f3626aaf00d9bf1783c6e62b4ffbc7",
+ strip_prefix = "protobuf-50f552646ba1de79e07562b41f3999fe036b4fd0",
)
--- a/tensorflow/workspace.bzl
+++ b/tensorflow/workspace.bzl
@@ -120,11 +120,11 @@ def tf_workspace(path_prefix="", tf_repo_name=""):
tf_http_archive(
name = "eigen_archive",
urls = [
- "https://mirror.bazel.build/bitbucket.org/eigen/eigen/get/2355b229ea4c.tar.gz",
- "https://bitbucket.org/eigen/eigen/get/2355b229ea4c.tar.gz",
+ "https://mirror.bazel.build/bitbucket.org/dtrebbien/eigen/get/374842a18727.tar.gz",
+ "https://bitbucket.org/dtrebbien/eigen/get/374842a18727.tar.gz",
],
- sha256 = "0cadb31a35b514bf2dfd6b5d38205da94ef326ec6908fc3fd7c269948467214f",
- strip_prefix = "eigen-eigen-2355b229ea4c",
+ sha256 = "fa26e9b9ff3a2692b092d154685ec88d6cb84d4e1e895006541aff8603f15c16",
+ strip_prefix = "dtrebbien-eigen-374842a18727",
build_file = str(Label("//third_party:eigen.BUILD")),
)
| {
"pile_set_name": "StackExchange"
} |
Q:
Coping files from multiple directories into one folder
I have many pictures in multiple directories and want to copy them to one folder, but my script is interrupted when a file with the same name already exists in the destination folder. I tried
for /R d:\dups %f in (*.jpg) do copy "%f" d:\pictures\.
What can I add to the code to append something to the name of the newer file before it's copied to the destination folder? My source directory is "d:\dups" and my destination folder is "d:\pictures". Thanks!
A:
example:
@echo off
for /R "d:\dups" %%f in (*.jpg) do copy "%%f" "d:\pictures\New-%%f"
| {
"pile_set_name": "StackExchange"
} |
Q:
How to check in using facebook android sdk (graph api)
I want to check in using facebook android sdk (graph api),
I am trying this
String checkinData = "{"+
"\"message\"=\"Test\"" +
"\"place\"=\"000000000\""
+ "\"coordinates\"={\"latitude\":\"000000000\", \"longitude\":\"-000000000\"}\"" + "}";
Bundle params = new Bundle();
params.putString("checkin", checkinData);
String pageData = "";
try {
pageData = facebook.request("/checkins", params, "POST");
} catch (Exception e) {
e.printStackTrace();
}
System.out.println("Data : " + pageData);
But its giving me this error
{"error":{"message":"batch parameter must be a JSON array","type":"GraphBatchException"}}
Is this the correct way to check in using the Facebook Graph API?
A:
Simple code for check-ins. Try this:
Bundle params = new Bundle();
params.putString("access_token", "YOUR ACCESS TOKEN");
params.putString("place", "203682879660695"); // YOUR PLACE ID
params.putString("message","I m here in this place");
JSONObject coordinates = new JSONObject();
coordinates.put("latitude", "YOUR LATITUDE");
coordinates.put("longitude", "YOUR LONGITUDE");
params.putString("coordinates",coordinates.toString());
params.putString("tags", "xxxx");//where xx indicates the User Id
String response = faceBook.request("me/checkins", params, "POST");
Log.d("Response",response);
| {
"pile_set_name": "StackExchange"
} |
Q:
How to derive differential of a variable
Could you please explain the following two questions relating to differential of a variable.
In the method of integration by parts using substitution, we have $u = f(x)$, $v = g(x)$, $du = f'(x)dx$, and $dv = g'(x)dx$. Is it just a definition to assign those values to the differentials $du$ and $dv$, or is it based on some logic?
How could the highlighted differentials $dt$, $du$ in the text below be derived? I tried to use the method in (1) above but it didn't work out, as I got $\sin\theta\cos\theta(1 - r^2) + r(\cos^2\theta - \sin^2\theta)$.
A:
The point is that when you do substitution, you have
$$
\int g(f(x))\,f'(x)\,dx=\int g(v)\,dv
$$
by taking $v=f(x)$. Then the formula suggest that $f'(x)\,dx=dv$. The formula for integration by parts, that comes from the derivative of a product, is
$$
\int f(x)\,g'(x)\,dx = f(x)g(x)-\int g(x)\,f'(x)\,dx.
$$
With the above convention, and taking $u=g(x)$, you get
$$
\int u\,dv=uv-\int v\,du.
$$
The second case has nothing to do with the above. The situation is that now you have a double integral. When you do substitution in a double integral, taking
$$
t=g(r,\theta),\ \ u=h(r,\theta),
$$
the change of variable is given by
$$
dt\,du=\begin{vmatrix} \frac{\partial g}{\partial r}&\frac{\partial g}{\partial \theta}\\
\frac{\partial h}{\partial r}&\frac{\partial h}{\partial \theta}\end{vmatrix}\,dr\,d\theta.
$$
For the particular choice of polar coordinates in your example, the above determinant (called the Jacobian) is $r$.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to add editor to the textarea in my custom component in joomla 2.5?
I have created a custom component in Joomla 2.5 where the fields are created in an XML file.
Here is the code.
<?xml version="1.0" encoding="utf-8"?>
<form>
<fieldset>
<field
name="id"
type="hidden"
/>
<field
name="title"
type="text"
label="Title"
description="This is the title"
size="40"
class="inputbox"
default=""
style="width:294px;height:135px;"
/>
<field
name="description"
type="textarea"
label="Description"
description="This is the description"
rows="20"
cols="10"
class="inputbox"
default=""
/>
</fieldset>
</form>
I want the description textarea to be an editor just like in the new-article form.
How can I do this?
A:
I solved my problem.
Just change the type of the description field.
<field
name="description"
type="editor"
label="Description"
description="This is the description"
filter="safehtml"
class="inputbox"
default=""
/>
| {
"pile_set_name": "StackExchange"
} |
Q:
Send multiple tags with UIButton in UITableView
I'm having trouble sending two tags to a function that shows a UIActivityViewController and shares the cell data. Before, I used to send one value at a time with a single UIButton:
cell.sharefb.tag = indexPath.row
cell.sharefb.addTarget(self, action: "showAlert:", forControlEvents:UIControlEvents.TouchUpInside)
But now I've implemented sections in my UITableView, so my array is accessed like:
Array[indexPath.section][indexPath.row]
One of them (section or row) alone is not enough; I need to send both to my function. How can I do that?
func showAlert(sender:AnyObject){
// i want to use it like :
Array[sender.sectionvalue][sender.rowvalue]
}
A:
You could subClass UIButton and add properties for the data you need eg
class MyButton : UIButton {
var row : Int?
var section : Int?
}
you can then set those properties, and in your showAlert function you can get them back and use:
func showAlert(sender:AnyObject){
let theButton = sender as! MyButton
let section = theButton.section
let row = theButton.row
}
Edit: added where to set the button properties, as requested in the comments:
In your StoryBoard, make sure that your button is of type MyButton (and not UIButton anymore).
Then, where you used to set the tag, set the properties instead of the tag. So replace this code:
cell.sharefb.tag = indexPath.row
cell.sharefb.addTarget(self, action: "showAlert:", forControlEvents:UIControlEvents.TouchUpInside)
with:
cell.sharefb.row = indexPath.row
cell.sharefb.section = indexPath.section
cell.sharefb.addTarget(self, action: "showAlert:", forControlEvents:UIControlEvents.TouchUpInside)
| {
"pile_set_name": "StackExchange"
} |
Q:
What is the right pattern of the answers for the questions of the specific kind?
Recently, I was asked at the forum by one of the participants:
"A.D., you are not Swedish?"
I answered "Yes, I'm not."
But now I think I should have answered "No, I'm not."
Which is right? The first or the second variant? Or are both wrong?
A:
You should have said, "No, I'm not."
Your initial logic is valid but the normal idiomatic response to questions like that is to answer "No, I'm not X."
However, if the questioner wants you NOT to be Swedish, and you therefore really want to affirm exactly what he is saying with a positive phrase, you should say, "Right, I'm not Swedish." or "You're right, I'm not Swedish."
| {
"pile_set_name": "StackExchange"
} |
Q:
How to bring my PostGIS map from GeoServer to Leaflet based web application?
I have created a group of shapefiles using QGIS and stored their corresponding information in a PostGIS database by importing the shapefiles into pgAdmin3 using the shapefile and DBF loader. Then I imported the whole PostGIS database into GeoServer. It works fine using the OpenLayers link in GeoServer's layer preview option, and the attribute values show up in tabular form when I click on each object.
What I need now is to create a web-based application using Leaflet or OpenLayers where the attribute values show up as pop-ups, and I want the ability to add more shapefiles/PostGIS tables dynamically.
A:
Here is a code sample to show how you could publish a WMS layer hosted on GeoServer in Leaflet:
var map = L.map('map').setView([51.505, -0.09], 8);
var forest2000 = L.tileLayer.wms("http://138.26.24.xxx:8080/geoserver/tiger/wms",{
layers: 'forest2000',
format: 'image/png',
transparent: true,
opacity: 0.7
}).addTo(map);
Change this to match the URL of your GeoServer instance. The working OpenLayers examples you mention above will help you figure out what URL and path to use.
If you want to get attribute values from feature clicks (getFeatureInfo), check out this code snippet using "BetterWMS":
https://gist.github.com/rclark/6908938
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I check for available disk space?
I need a way to check available disk space on a remote Windows server before copying files to that server. Using this method I can check to see if the primary server is full and if it is, then I'll copy the files to a secondary server.
How can I check for available disk space using C#/ASP.net 2.0?
A:
You can check it by doing the following:
Add the System.Management.dll as a reference to your project.
Use the following code to get the diskspace:
using System;
using System.Management;
public string GetFreeSpace()
{
    // Query WMI for the C: logical disk and read its FreeSpace property.
    ManagementObject disk = new ManagementObject("win32_logicaldisk.deviceid=\"c:\"");
    disk.Get();
    string freespace = disk["FreeSpace"].ToString();
    return freespace;
}
There are a myriad of ways to do it, I'd check the System.Management namespace for more ways.
Here's one such way from that page:
public void GetDiskspace()
{
ConnectionOptions options = new ConnectionOptions();
ManagementScope scope = new ManagementScope("\\\\localhost\\root\\cimv2",
options);
scope.Connect();
ObjectQuery query = new ObjectQuery("SELECT * FROM Win32_OperatingSystem");
SelectQuery query1 = new SelectQuery("Select * from Win32_LogicalDisk");
ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query);
ManagementObjectCollection queryCollection = searcher.Get();
ManagementObjectSearcher searcher1 = new ManagementObjectSearcher(scope, query1);
ManagementObjectCollection queryCollection1 = searcher1.Get();
foreach (ManagementObject m in queryCollection)
{
// Display the remote computer information
Console.WriteLine("Computer Name : {0}", m["csname"]);
Console.WriteLine("Windows Directory : {0}", m["WindowsDirectory"]);
Console.WriteLine("Operating System: {0}", m["Caption"]);
Console.WriteLine("Version: {0}", m["Version"]);
Console.WriteLine("Manufacturer : {0}", m["Manufacturer"]);
Console.WriteLine();
}
foreach (ManagementObject mo in queryCollection1)
{
// Display Logical Disks information
Console.WriteLine(" Disk Name : {0}", mo["Name"]);
Console.WriteLine(" Disk Size : {0}", mo["Size"]);
Console.WriteLine(" FreeSpace : {0}", mo["FreeSpace"]);
Console.WriteLine(" Disk DeviceID : {0}", mo["DeviceID"]);
Console.WriteLine(" Disk VolumeName : {0}", mo["VolumeName"]);
Console.WriteLine(" Disk SystemName : {0}", mo["SystemName"]);
Console.WriteLine("Disk VolumeSerialNumber : {0}", mo["VolumeSerialNumber"]);
Console.WriteLine();
}
string line;
line = Console.ReadLine();
}
A:
by using this code
static void Main()
{
try
{
DriveInfo driveInfo = new DriveInfo(@"C:");
long FreeSpace = driveInfo.AvailableFreeSpace;
}
catch (System.IO.IOException errorMesage)
{
Console.WriteLine(errorMesage);
}
}
If you are getting the error 'The device is not ready', it means your device is not ready.
If you try this code on a CD drive without a CD in it, you will get the same error :)
A:
This seems to be an option from the System.IO:
DriveInfo c = new DriveInfo("C");
long cAvailableSpace = c.AvailableFreeSpace;
| {
"pile_set_name": "StackExchange"
} |
Q:
Replacing accented characters with plain ascii ones
I need to turn a list of last names into alphanumeric usernames, however unfortunately some of them contain non-ascii characters:
Hernández
Quermançós
Migueláñez
Now, one way would be to just use a regex to remove any non-alphanumeric characters, such as a.replace(/[^a-z0-9]/gi,''). However, a more intuitive solution (at least for the user) would be to replace accented characters with their "plain" equivalent, e.g. turn á into a, ç into c, etc. Is there an easy way to do this in JavaScript?
A:
The correct terminology for such accents is diacritics. After Googling this term, I found this function, which is part of backbone.paginator. It has a very complete collection of diacritics and replaces them with their most intuitive ASCII character. I found this to be the most complete JavaScript solution available today.
The full function for future reference:
function removeDiacritics (str) {
var defaultDiacriticsRemovalMap = [
{'base':'A', 'letters':/[\u0041\u24B6\uFF21\u00C0\u00C1\u00C2\u1EA6\u1EA4\u1EAA\u1EA8\u00C3\u0100\u0102\u1EB0\u1EAE\u1EB4\u1EB2\u0226\u01E0\u00C4\u01DE\u1EA2\u00C5\u01FA\u01CD\u0200\u0202\u1EA0\u1EAC\u1EB6\u1E00\u0104\u023A\u2C6F]/g},
{'base':'AA','letters':/[\uA732]/g},
{'base':'AE','letters':/[\u00C6\u01FC\u01E2]/g},
{'base':'AO','letters':/[\uA734]/g},
{'base':'AU','letters':/[\uA736]/g},
{'base':'AV','letters':/[\uA738\uA73A]/g},
{'base':'AY','letters':/[\uA73C]/g},
{'base':'B', 'letters':/[\u0042\u24B7\uFF22\u1E02\u1E04\u1E06\u0243\u0182\u0181]/g},
{'base':'C', 'letters':/[\u0043\u24B8\uFF23\u0106\u0108\u010A\u010C\u00C7\u1E08\u0187\u023B\uA73E]/g},
{'base':'D', 'letters':/[\u0044\u24B9\uFF24\u1E0A\u010E\u1E0C\u1E10\u1E12\u1E0E\u0110\u018B\u018A\u0189\uA779]/g},
{'base':'DZ','letters':/[\u01F1\u01C4]/g},
{'base':'Dz','letters':/[\u01F2\u01C5]/g},
{'base':'E', 'letters':/[\u0045\u24BA\uFF25\u00C8\u00C9\u00CA\u1EC0\u1EBE\u1EC4\u1EC2\u1EBC\u0112\u1E14\u1E16\u0114\u0116\u00CB\u1EBA\u011A\u0204\u0206\u1EB8\u1EC6\u0228\u1E1C\u0118\u1E18\u1E1A\u0190\u018E]/g},
{'base':'F', 'letters':/[\u0046\u24BB\uFF26\u1E1E\u0191\uA77B]/g},
{'base':'G', 'letters':/[\u0047\u24BC\uFF27\u01F4\u011C\u1E20\u011E\u0120\u01E6\u0122\u01E4\u0193\uA7A0\uA77D\uA77E]/g},
{'base':'H', 'letters':/[\u0048\u24BD\uFF28\u0124\u1E22\u1E26\u021E\u1E24\u1E28\u1E2A\u0126\u2C67\u2C75\uA78D]/g},
{'base':'I', 'letters':/[\u0049\u24BE\uFF29\u00CC\u00CD\u00CE\u0128\u012A\u012C\u0130\u00CF\u1E2E\u1EC8\u01CF\u0208\u020A\u1ECA\u012E\u1E2C\u0197]/g},
{'base':'J', 'letters':/[\u004A\u24BF\uFF2A\u0134\u0248]/g},
{'base':'K', 'letters':/[\u004B\u24C0\uFF2B\u1E30\u01E8\u1E32\u0136\u1E34\u0198\u2C69\uA740\uA742\uA744\uA7A2]/g},
{'base':'L', 'letters':/[\u004C\u24C1\uFF2C\u013F\u0139\u013D\u1E36\u1E38\u013B\u1E3C\u1E3A\u0141\u023D\u2C62\u2C60\uA748\uA746\uA780]/g},
{'base':'LJ','letters':/[\u01C7]/g},
{'base':'Lj','letters':/[\u01C8]/g},
{'base':'M', 'letters':/[\u004D\u24C2\uFF2D\u1E3E\u1E40\u1E42\u2C6E\u019C]/g},
{'base':'N', 'letters':/[\u004E\u24C3\uFF2E\u01F8\u0143\u00D1\u1E44\u0147\u1E46\u0145\u1E4A\u1E48\u0220\u019D\uA790\uA7A4]/g},
{'base':'NJ','letters':/[\u01CA]/g},
{'base':'Nj','letters':/[\u01CB]/g},
{'base':'O', 'letters':/[\u004F\u24C4\uFF2F\u00D2\u00D3\u00D4\u1ED2\u1ED0\u1ED6\u1ED4\u00D5\u1E4C\u022C\u1E4E\u014C\u1E50\u1E52\u014E\u022E\u0230\u00D6\u022A\u1ECE\u0150\u01D1\u020C\u020E\u01A0\u1EDC\u1EDA\u1EE0\u1EDE\u1EE2\u1ECC\u1ED8\u01EA\u01EC\u00D8\u01FE\u0186\u019F\uA74A\uA74C]/g},
{'base':'OI','letters':/[\u01A2]/g},
{'base':'OO','letters':/[\uA74E]/g},
{'base':'OU','letters':/[\u0222]/g},
{'base':'P', 'letters':/[\u0050\u24C5\uFF30\u1E54\u1E56\u01A4\u2C63\uA750\uA752\uA754]/g},
{'base':'Q', 'letters':/[\u0051\u24C6\uFF31\uA756\uA758\u024A]/g},
{'base':'R', 'letters':/[\u0052\u24C7\uFF32\u0154\u1E58\u0158\u0210\u0212\u1E5A\u1E5C\u0156\u1E5E\u024C\u2C64\uA75A\uA7A6\uA782]/g},
{'base':'S', 'letters':/[\u0053\u24C8\uFF33\u1E9E\u015A\u1E64\u015C\u1E60\u0160\u1E66\u1E62\u1E68\u0218\u015E\u2C7E\uA7A8\uA784]/g},
{'base':'T', 'letters':/[\u0054\u24C9\uFF34\u1E6A\u0164\u1E6C\u021A\u0162\u1E70\u1E6E\u0166\u01AC\u01AE\u023E\uA786]/g},
{'base':'TZ','letters':/[\uA728]/g},
{'base':'U', 'letters':/[\u0055\u24CA\uFF35\u00D9\u00DA\u00DB\u0168\u1E78\u016A\u1E7A\u016C\u00DC\u01DB\u01D7\u01D5\u01D9\u1EE6\u016E\u0170\u01D3\u0214\u0216\u01AF\u1EEA\u1EE8\u1EEE\u1EEC\u1EF0\u1EE4\u1E72\u0172\u1E76\u1E74\u0244]/g},
{'base':'V', 'letters':/[\u0056\u24CB\uFF36\u1E7C\u1E7E\u01B2\uA75E\u0245]/g},
{'base':'VY','letters':/[\uA760]/g},
{'base':'W', 'letters':/[\u0057\u24CC\uFF37\u1E80\u1E82\u0174\u1E86\u1E84\u1E88\u2C72]/g},
{'base':'X', 'letters':/[\u0058\u24CD\uFF38\u1E8A\u1E8C]/g},
{'base':'Y', 'letters':/[\u0059\u24CE\uFF39\u1EF2\u00DD\u0176\u1EF8\u0232\u1E8E\u0178\u1EF6\u1EF4\u01B3\u024E\u1EFE]/g},
{'base':'Z', 'letters':/[\u005A\u24CF\uFF3A\u0179\u1E90\u017B\u017D\u1E92\u1E94\u01B5\u0224\u2C7F\u2C6B\uA762]/g},
{'base':'a', 'letters':/[\u0061\u24D0\uFF41\u1E9A\u00E0\u00E1\u00E2\u1EA7\u1EA5\u1EAB\u1EA9\u00E3\u0101\u0103\u1EB1\u1EAF\u1EB5\u1EB3\u0227\u01E1\u00E4\u01DF\u1EA3\u00E5\u01FB\u01CE\u0201\u0203\u1EA1\u1EAD\u1EB7\u1E01\u0105\u2C65\u0250]/g},
{'base':'aa','letters':/[\uA733]/g},
{'base':'ae','letters':/[\u00E6\u01FD\u01E3]/g},
{'base':'ao','letters':/[\uA735]/g},
{'base':'au','letters':/[\uA737]/g},
{'base':'av','letters':/[\uA739\uA73B]/g},
{'base':'ay','letters':/[\uA73D]/g},
{'base':'b', 'letters':/[\u0062\u24D1\uFF42\u1E03\u1E05\u1E07\u0180\u0183\u0253]/g},
{'base':'c', 'letters':/[\u0063\u24D2\uFF43\u0107\u0109\u010B\u010D\u00E7\u1E09\u0188\u023C\uA73F\u2184]/g},
{'base':'d', 'letters':/[\u0064\u24D3\uFF44\u1E0B\u010F\u1E0D\u1E11\u1E13\u1E0F\u0111\u018C\u0256\u0257\uA77A]/g},
{'base':'dz','letters':/[\u01F3\u01C6]/g},
{'base':'e', 'letters':/[\u0065\u24D4\uFF45\u00E8\u00E9\u00EA\u1EC1\u1EBF\u1EC5\u1EC3\u1EBD\u0113\u1E15\u1E17\u0115\u0117\u00EB\u1EBB\u011B\u0205\u0207\u1EB9\u1EC7\u0229\u1E1D\u0119\u1E19\u1E1B\u0247\u025B\u01DD]/g},
{'base':'f', 'letters':/[\u0066\u24D5\uFF46\u1E1F\u0192\uA77C]/g},
{'base':'g', 'letters':/[\u0067\u24D6\uFF47\u01F5\u011D\u1E21\u011F\u0121\u01E7\u0123\u01E5\u0260\uA7A1\u1D79\uA77F]/g},
{'base':'h', 'letters':/[\u0068\u24D7\uFF48\u0125\u1E23\u1E27\u021F\u1E25\u1E29\u1E2B\u1E96\u0127\u2C68\u2C76\u0265]/g},
{'base':'hv','letters':/[\u0195]/g},
{'base':'i', 'letters':/[\u0069\u24D8\uFF49\u00EC\u00ED\u00EE\u0129\u012B\u012D\u00EF\u1E2F\u1EC9\u01D0\u0209\u020B\u1ECB\u012F\u1E2D\u0268\u0131]/g},
{'base':'j', 'letters':/[\u006A\u24D9\uFF4A\u0135\u01F0\u0249]/g},
{'base':'k', 'letters':/[\u006B\u24DA\uFF4B\u1E31\u01E9\u1E33\u0137\u1E35\u0199\u2C6A\uA741\uA743\uA745\uA7A3]/g},
{'base':'l', 'letters':/[\u006C\u24DB\uFF4C\u0140\u013A\u013E\u1E37\u1E39\u013C\u1E3D\u1E3B\u017F\u0142\u019A\u026B\u2C61\uA749\uA781\uA747]/g},
{'base':'lj','letters':/[\u01C9]/g},
{'base':'m', 'letters':/[\u006D\u24DC\uFF4D\u1E3F\u1E41\u1E43\u0271\u026F]/g},
{'base':'n', 'letters':/[\u006E\u24DD\uFF4E\u01F9\u0144\u00F1\u1E45\u0148\u1E47\u0146\u1E4B\u1E49\u019E\u0272\u0149\uA791\uA7A5]/g},
{'base':'nj','letters':/[\u01CC]/g},
{'base':'o', 'letters':/[\u006F\u24DE\uFF4F\u00F2\u00F3\u00F4\u1ED3\u1ED1\u1ED7\u1ED5\u00F5\u1E4D\u022D\u1E4F\u014D\u1E51\u1E53\u014F\u022F\u0231\u00F6\u022B\u1ECF\u0151\u01D2\u020D\u020F\u01A1\u1EDD\u1EDB\u1EE1\u1EDF\u1EE3\u1ECD\u1ED9\u01EB\u01ED\u00F8\u01FF\u0254\uA74B\uA74D\u0275]/g},
{'base':'oi','letters':/[\u01A3]/g},
{'base':'ou','letters':/[\u0223]/g},
{'base':'oo','letters':/[\uA74F]/g},
{'base':'p','letters':/[\u0070\u24DF\uFF50\u1E55\u1E57\u01A5\u1D7D\uA751\uA753\uA755]/g},
{'base':'q','letters':/[\u0071\u24E0\uFF51\u024B\uA757\uA759]/g},
{'base':'r','letters':/[\u0072\u24E1\uFF52\u0155\u1E59\u0159\u0211\u0213\u1E5B\u1E5D\u0157\u1E5F\u024D\u027D\uA75B\uA7A7\uA783]/g},
{'base':'s','letters':/[\u0073\u24E2\uFF53\u00DF\u015B\u1E65\u015D\u1E61\u0161\u1E67\u1E63\u1E69\u0219\u015F\u023F\uA7A9\uA785\u1E9B]/g},
{'base':'t','letters':/[\u0074\u24E3\uFF54\u1E6B\u1E97\u0165\u1E6D\u021B\u0163\u1E71\u1E6F\u0167\u01AD\u0288\u2C66\uA787]/g},
{'base':'tz','letters':/[\uA729]/g},
{'base':'u','letters':/[\u0075\u24E4\uFF55\u00F9\u00FA\u00FB\u0169\u1E79\u016B\u1E7B\u016D\u00FC\u01DC\u01D8\u01D6\u01DA\u1EE7\u016F\u0171\u01D4\u0215\u0217\u01B0\u1EEB\u1EE9\u1EEF\u1EED\u1EF1\u1EE5\u1E73\u0173\u1E77\u1E75\u0289]/g},
{'base':'v','letters':/[\u0076\u24E5\uFF56\u1E7D\u1E7F\u028B\uA75F\u028C]/g},
{'base':'vy','letters':/[\uA761]/g},
{'base':'w','letters':/[\u0077\u24E6\uFF57\u1E81\u1E83\u0175\u1E87\u1E85\u1E98\u1E89\u2C73]/g},
{'base':'x','letters':/[\u0078\u24E7\uFF58\u1E8B\u1E8D]/g},
{'base':'y','letters':/[\u0079\u24E8\uFF59\u1EF3\u00FD\u0177\u1EF9\u0233\u1E8F\u00FF\u1EF7\u1E99\u1EF5\u01B4\u024F\u1EFF]/g},
{'base':'z','letters':/[\u007A\u24E9\uFF5A\u017A\u1E91\u017C\u017E\u1E93\u1E95\u01B6\u0225\u0240\u2C6C\uA763]/g}
];
for(var i=0; i<defaultDiacriticsRemovalMap.length; i++) {
str = str.replace(defaultDiacriticsRemovalMap[i].letters, defaultDiacriticsRemovalMap[i].base);
}
return str;
}
A:
Since those characters have no mathematical relation to their 'plain' equivalents in the Unicode table, you will have to replace them manually using something like this:
function cleanUpSpecialChars(str)
{
return str
.replace(/[ÀÁÂÃÄÅ]/g,"A")
.replace(/[àáâãäå]/g,"a")
.replace(/[ÈÉÊË]/g,"E")
//.... all the rest
.replace(/[^a-z0-9]/gi,''); // final clean up
}
The case-insensitive option doesn't work on those characters, so you have to handle both the lower-case and upper-case variants of them.
A:
Say you have a dictionary like:
var dict = {"á":"a", "á":"a", "ç":"c"}
then do a function like:
a.replace(/[^\w ]/g, function(char) {
return dict[char] || char;
});
| {
"pile_set_name": "StackExchange"
} |
Q:
Getting the input box name when clicked
In my site I use 10 file upload boxes. I want to get a file upload box name when I click on the box.
That means first upload box, second upload box, third upload box etc...
So if I click on the first upload box, I want to get the name of that file upload box.
How can I get the upload button name in the ajax function?
This is my ajax code:
$(function(){
var countfile = 10;
var strfileid = '';
for(i=1;i<=countfile;i++){
var btnUpload=$('#browse'+i);
var adinfoid=$('#adinfoid').val();
new AjaxUpload(btnUpload, {
action: '<?php echo base_url()?>index.php/post/upload_editgalleryimage/'+adinfoid,
name: 'uploadfile',
onSubmit: function(file, ext){
alert(btnUpload.Name);
var photoplancnt=$('#photoplancnt').val();
var hidcountimg=$('#hidcountimg').val();
if(parseInt(hidcountimg)>=parseInt(photoplancnt)){
$("#photoerror").html('maximum '+photoplancnt +' files are allowed');
$("#photoerror").css('display','block');
return false;
}
if (! (ext && /^(jpg|png|jpeg|gif|JPG|PNG|JPEG|GIF)$/.test(ext))){
$("#photoerror").html('Only JPG, PNG, GIF, files are allowed');
$("#photoerror").css('display','block');
return false;
}
},
onComplete: function(file, response){
if(response){
$(".upload_main_div").html('');
$(".upload_main_div").html(response);
var insid = $("#hiddengalidnow").val();
calltoloadimage(insid);
/*$("#galimageicon").attr("src",response);
$("#galimageicon").attr("width",55);
$("#galimageicon").attr("height",55);*/
//$("#mainimageicon1").attr("src",response);
}else{
alert("error");
}
}
});
}
});
It alerts 'browse12' every time.
Html code:
<?php
for($i=1;$i<=10;$i++){
?>
<input type="button" id="browse<?php echo $i;?>" name ="browse<?php echo $i;?>" class="browse_media" value="Browse">
<?php
}
?>
A:
Finally
alert(this._button.name);
| {
"pile_set_name": "StackExchange"
} |
Q:
Include subfolder with .htaccess
I have a sub folder with php files.
Like this: domain.com/subfolder/file.php
It would be very useful to call the PHP files as if they were in the root folder, without ignoring the existing root files.
Is there a way to include a subfolder and all its contents through .htaccess?
A:
You can use the following rule in root/.htaccess :
RewriteEngine on
#1--If the request is not for an existent root directory--#
RewriteCond %{REQUEST_FILENAME} !-d
#2--And the request is not for an existent root file--#
RewriteCond %{REQUEST_FILENAME} !-f
#3--Then, rewrite the request to "/subfolder"--#
RewriteRule ^([^/]+)/?$ /subfolder/$1 [NC,L]
The RewriteConditions above are important to avoid rewriting your root folder and files to /subfolder.
Or try this :
RewriteEngine on
#--if /document_root/subfolder/foo is an existent dir--#
RewriteCond %{DOCUMENT_ROOT}/subfolder/$1 -d [OR]
#--OR /document_root/subfolder/foo is an existent file--#
RewriteCond %{DOCUMENT_ROOT}/subfolder/$1 -f
#--rewrite "/foo" to "/subfolder/foo--#
RewriteRule ^(.+)$ /subfolder/$1 [NC,L]
| {
"pile_set_name": "StackExchange"
} |
Q:
Codomains, products and limits
I have a proof in front of me of the theorem that if a category $\mathcal{C}$ has equalisers and all small/finite products, then it has all small/finite limits. I'm not sure how standard the proof is (it's in some notes I've taken rather than a book) but I know the result is quite standard so perhaps you'll be familiar enough with it to help me clear up a confusion.
The proof starts as follows:
Let $D: \mathcal{J} \to \mathcal{C}$ be a diagram with $\mathcal{J}$ small/finite. Form the products $P = \prod \limits_{j \in Ob(\mathcal{J})} D(j)$, and $Q = \prod \limits_{\alpha \in Mor(\mathcal{J})} D(\operatorname{cod} \alpha)$, and define $f,\,g: P \to Q$ by $\pi_\alpha f = \pi_{\operatorname{cod} \alpha}$, $\pi_\alpha g = D(\alpha) \pi_{\operatorname{dom} \alpha}$. Then let $(L,e)$ be the equaliser of $f$ and $g$. We go on to show that $L$ gives a limit cone.
My confusion is early on (I only provided the rest of the proof for context, although I'd imagine it's pretty standard). We define $Q = \prod \limits_{\alpha \in Mor(\mathcal{J})} D(\operatorname{cod} \alpha)$, the product of all the codomains of all the morphisms in $\mathcal{J}$: but then surely isn't that just identical to $P$? Surely, since $\mathcal{J}$ is a category it contains an identity morphism for every object, and that morphism has codomain precisely the object it is the identity for, so isn't $Q$ just the product of all $D(j)$ too? Or is $Q$ simply constructed that way to make it easier to work with $\alpha$? Many thanks.
A:
If there were only identities, then yes, $Q$ would be $P$; but otherwise $Q$ won't be $P$, since there will be repetitions.
For example, if $\mathcal{J}$ is the category with two distinct objects $A, B$ and one arrow $f:A \to B$ (besides the identities $\operatorname{id}_A$ and $\operatorname{id}_B$), then $P=D(A) \times D(B)$, but $Q= D(A) \times D(B) \times D(B)$ (the first two objects are $\operatorname{cod}(\operatorname{id}_A)$ and $\operatorname{cod}(\operatorname{id}_B)$; the second one is $\operatorname{cod}(f)$).
| {
"pile_set_name": "StackExchange"
} |
Q:
Update Column C with the difference of Column A & B (Timevalue)
I'm looking for an UPDATE query to update the 3rd column with the difference of the first two columns. Below is my data.
Table name - Report
Field Name - Data Type
--------------------
New - Date/Time
Opened - Date/Time
NewOpen_Time - Date/Time
NewOpen_Time is to be updated with the difference of Opened - NEW.
Both columns contain data in the format below:
New = 11/18/2015 4:42:46 AM
Opened = 11/18/2015 4:51:22 AM
and I want the NewOpen_Time column to be updated in the format below:
NewOpen_Time = 0 days, 0 hrs, 8 mins, 36 secs
Any help would be highly appreciated.
A:
Use a function like this:
Public Function FormatYearDayHourMinuteSecondDiff( _
ByVal datTimeStart As Date, _
ByVal datTimeEnd As Date, _
Optional ByVal strSeparatorDate As String = " ", _
Optional ByVal strSeparatorTime As String = ":") _
As String
' Returns count of years, days, hours, minutes and seconds of difference
' between datTimeStart and datTimeEnd converted to
' years, days, hours and minutes and seconds as a formatted string
' with an optional choice of date and/or time separator.
'
' Should return correct output for a negative time span but
' this is not fully tested.
'
' Example:
' datTimeStart: #2006-05-24 10:03:02#
' datTimeEnd : #2009-04-17 20:01:18#
' returns : 2 328 09:58:16
'
' 2007-11-06. Cactus Data ApS, CPH.
Const cintSecondsHour As Integer = 60& * 60&
Dim intYears As Integer
Dim intDays As Integer
Dim intSeconds As Integer
Dim intHours As Integer
Dim datTime As Date
Dim strDatePart As String
Dim strTimePart As String
Dim strYDHMS As String
intYears = Years(datTimeStart, datTimeEnd)
datTimeStart = DateAdd("yyyy", intYears, datTimeStart)
intDays = DateDiff("h", datTimeStart, datTimeEnd) \ 24
datTimeStart = DateAdd("d", intDays, datTimeStart)
intHours = DateDiff("h", datTimeStart, datTimeEnd)
datTimeStart = DateAdd("h", intHours, datTimeStart)
intSeconds = DateDiff("s", datTimeStart, datTimeEnd)
' Format year and day part.
strDatePart = CStr(intYears) & strSeparatorDate & CStr(intDays)
datTime = TimeSerial(intHours, 0, intSeconds Mod cintSecondsHour)
' Format hour, minute and second part.
strTimePart = Format(datTime, "hh\" & strSeparatorTime & "nn\" & strSeparatorTime & "ss")
strYDHMS = strDatePart & " " & IIf(datTime < 0, "-", "") & strTimePart
FormatYearDayHourMinuteSecondDiff = strYDHMS
End Function
Just modify strDatePart and strTimePart to fit your needs.
Then in your query:
NewOpen_Time: FormatYearDayHourMinuteSecondDiff([New],[Opened])
Edit
You can just remove the calculation of years, as I guess it will not be relevant here, or you can use this function:
' Returns the difference in full years between Date1 and Date2.
'
' Calculates correctly for:
' negative differences
' leap years
' dates of 29. February
' date/time values with embedded time values
' any date/time value of data type Date
'
' Optionally returns negative counts rounded down to provide a
' linear sequence of year counts.
' For a given Date1, if Date2 is decreased stepwise one year from
' returning a positive count to returning a negative count, one or two
' occurrences of count zero will be returned.
' If LinearSequence is False, the sequence will be:
' 3, 2, 1, 0, 0, -1, -2
' If LinearSequence is True, the sequence will be:
' 3, 2, 1, 0, -1, -2, -3
'
' If LinearSequence is False, reversing Date1 and Date2 will return
' results of same absolute Value, only the sign will change.
' This behaviour mimics that of Fix().
' If LinearSequence is True, reversing Date1 and Date2 will return
' results where the negative count is offset by -1.
' This behaviour mimics that of Int().
' DateAdd() is used for check for month end of February as it correctly
' returns Feb. 28th when adding a count of years to dates of Feb. 29th
' when the resulting year is a common year.
'
' 2015-11-24. Gustav Brock, Cactus Data ApS, CPH.
'
Public Function Years( _
    ByVal Date1 As Date, _
    ByVal Date2 As Date, _
    Optional ByVal LinearSequence As Boolean) _
    As Long

    Dim YearCount As Long
    Dim DayCount As Long

    DayCount = DateDiff("d", Date1, Date2)
    If DayCount = 0 Then
        ' The dates are equal.
    Else
        ' Find difference in calendar years.
        YearCount = DateDiff("yyyy", Date1, Date2)
        ' For positive resp. negative intervals, check if the second date
        ' falls before, on, or after the crossing date for a 1 year period
        ' while at the same time correcting for February 29. of leap years.
        If DayCount > 0 Then
            If DateDiff("d", DateAdd("yyyy", YearCount, Date1), Date2) < 0 Then
                YearCount = YearCount - 1
            End If
        Else
            If DateDiff("d", DateAdd("yyyy", -YearCount, Date2), Date1) < 0 Then
                YearCount = YearCount + 1
            End If
            ' Offset negative count of years to continuous sequence if requested.
            If LinearSequence = True Then
                YearCount = YearCount - 1
            End If
        End If
    End If

    ' Return count of years as count of full year periods.
    Years = YearCount

End Function
| {
"pile_set_name": "StackExchange"
} |
Q:
Pager below sharepoint list
I have a SharePoint site with a list, loaded from an external content type with SharePoint Designer. The list limit is 30. Below the list, a data pager is shown to navigate through the pages.
1-30 ->
On another list with the same limit and more than 30 items, the pager is not shown. Paging is possible only by using the task bar (List Tools -> List -> Navigate). How can I make the data pager appear?
A:
Actually, the pager was visible, but because the list was wider than the page, it showed up far to the right, so I had to scroll horizontally to find it!
| {
"pile_set_name": "StackExchange"
} |
Q:
Public / guest Issue Tracker
I'm currently testing GitLab CE and I want to use private repos with public issue trackers.
I tried to add a second repo with the public flag for each private repo. Is there a way to open the issue tracker for guests? Or how can I open registration for everyone?
A:
Opening the issue tracker for guests is currently (GitLab 7.2) not possible. However, you can enable users to sign up themselves by enabling signup_enabled in gitlab.yml.
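A minimal sketch of that setting in config/gitlab.yml for a source install of that era (the exact nesting can differ between versions, so compare with gitlab.yml.example for yours):
production:
  gitlab:
    ## Allow users to register themselves
    signup_enabled: true
Restart GitLab afterwards for the change to take effect.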
| {
"pile_set_name": "StackExchange"
} |
Q:
If $H$ is a Hilbert space, $T\in L(H)$, $(z_{n})\subset H$ orthonormal, and $y\in T(H)$, show under some conditions that $\sum|\langle y,z_{n}\rangle|<\infty$
Let $H$ be a Hilbert space, $T\in L(H)$ (the set of all continuous linear maps from $H$ to $H$), $(z_{n})$ an orthonormal sequence in $H$, and $(\lambda_{n})$ a sequence in $\mathbb{K}$ (that is, $\mathbb{R}$ or $\mathbb{C}$) such that $T^{*}(z_{n})=\lambda_{n}z_{n}$ (where $T^{*}$ is the adjoint map of $T$) and $\sum_{n=1}^{\infty}|\lambda_{n}|^{2}<\infty$.
Show that
$$\sum_{n=1}^{\infty}\left|\langle y,z_{n}\rangle\right|<\infty$$
for all $y\in T(H)$.
Remark: I am distrusting the hypothesis $\sum_{n=1}^{\infty}|\lambda_{n}|^{2}<\infty$. Note that if we replace $\sum_{n=1}^{\infty}|\lambda_{n}|^{2}<\infty$ for $\sum_{n=1}^{\infty}|\lambda_{n}|<\infty$ then the problem is easier to show, in fact, for $y=T(x)\in T(H)$ we would have
$$|\langle y,z_{n}\rangle|=|\langle T(x),z_{n}\rangle|=|\langle x,T^{*}(z_{n})\rangle|=|\langle x,\lambda_{n}z_{n}\rangle|\leq|\lambda_{n}|\|x\|.$$
Therefore
$$\sum_{n=1}^{\infty}|\langle y,z_{n}\rangle|\leq \|x\|\sum_{n=1}^{\infty}|\lambda_{n}|<\infty.$$
I do not see how to prove it if the hypothesis is $\sum_{n=1}^{\infty}|\lambda_{n}|^{2}<\infty$.
A:
Writing $y = T(x)$ as in your remark, we have
$$\lvert\langle y, z_n\rangle\rvert = \lvert \lambda_n\rvert\cdot \lvert \langle x,z_n\rangle\rvert,$$
and by Bessel's inequality
$$\sum_{n = 1}^{\infty} \lvert \langle x,z_n\rangle\rvert^2 \leqslant \lVert x\rVert^2.$$
And by Cauchy-Schwarz
$$\sum_{n = 1}^{\infty} \lvert \lambda_n\rvert\cdot \lvert \langle x,z_n\rangle\rvert \leqslant \sqrt{\sum_{n = 1}^{\infty} \lvert \lambda_n\rvert^2} \cdot \sqrt{\sum_{n = 1}^{\infty} \lvert \langle x, z_n\rangle\rvert^2}\,.$$
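Combining the three displays gives
$$\sum_{n = 1}^{\infty} \lvert\langle y, z_n\rangle\rvert \leqslant \lVert x\rVert \cdot \sqrt{\sum_{n = 1}^{\infty} \lvert \lambda_n\rvert^2} < \infty\,,$$
which is the claim.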
| {
"pile_set_name": "StackExchange"
} |
Q:
Vuex getters real time updating audio.currentTime state
I want to have an audio player in the Vuex store and use its data for output to a custom audio player in my Vue components.
In the Vuex store there is a state property aplayer that plays music.
store.js code:
state: {
aplayer: new Audio()
},
getters: {
getCurrentTime: state => state.aplayer.currentTime
}
In the component I want to display aplayer.currentTime for the current song. I'm trying to get it by using the getCurrentTime getter in a computed property.
components.vue code:
getCurrentTime() {
return this.$store.getters.getCurrentTime;
}
But when it is output to the template, the current time stays frozen at 0. However, when I press "pause", the current time is displayed correctly.
Please help.
A:
Vue works best when observing plain objects, of which Audio is not. Vue cannot detect when the currentTime property changes.
From the docs:
The object must be plain: native objects such as browser API objects and prototype properties are ignored. A rule of thumb is that data should just be data - it is not recommended to observe objects with their own stateful behavior.
Your best solution would be to generate your own currentTime number property and synchronize its value to audio.currentTime periodically by using setInterval while the audio is playing, and clear the interval when the audio is paused.
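A minimal sketch of that approach (the mutation and action names here are invented for illustration):
// store.js
import Vue from 'vue';
import Vuex from 'vuex';
Vue.use(Vuex);

let timer = null;

export default new Vuex.Store({
  state: {
    aplayer: new Audio(),
    currentTime: 0
  },
  mutations: {
    setCurrentTime(state, t) {
      state.currentTime = t;
    }
  },
  actions: {
    play({ state, commit }) {
      state.aplayer.play();
      // poll the Audio element only while it is playing
      timer = setInterval(() => {
        commit('setCurrentTime', state.aplayer.currentTime);
      }, 250);
    },
    pause({ state }) {
      state.aplayer.pause();
      clearInterval(timer);
    }
  }
});
The component's computed property then returns this.$store.state.currentTime (or a getter wrapping it), which is a plain number Vue can observe.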
| {
"pile_set_name": "StackExchange"
} |
Q:
Order wordpress post by custom field value?
I have currently tried half the code on the internet ( :D ) to make this work, but with no luck.
I'm not a WordPress guru so it's quite a bit hard. Basically what I want is to make a plugin that will alter all post ordering across the whole WordPress blog based on a date value in a custom field.
For example, I add a custom field to each post (meta_key=bb_history and meta_value=2011-04-03).
So where would I hook in, or what filter should I use, to get this working somehow? I guess you can use the posts_where, posts_join and posts_orderby filters to make something?
A:
You don't want a plug-in. Just edit your query_posts statement in the template in question.
query_posts($query_string . '&meta_key=YOURFIELDNAME&orderby=meta_value&order=ASC');
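If you really do want it applied blog-wide from a plugin, as described in the question, a pre_get_posts hook is the usual route instead of the posts_where/posts_orderby filters. A sketch, using the bb_history meta key from the question:
<?php
// in a small plugin file (sketch)
function bb_order_posts_by_history( $query ) {
    if ( ! is_admin() && $query->is_main_query() ) {
        $query->set( 'meta_key', 'bb_history' );
        // YYYY-MM-DD strings sort correctly as plain meta_value
        $query->set( 'orderby', 'meta_value' );
        $query->set( 'order', 'ASC' );
    }
}
add_action( 'pre_get_posts', 'bb_order_posts_by_history' );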
| {
"pile_set_name": "StackExchange"
} |
Q:
What is the name of this problem in Computer Vision?
My google skills have failed me during research for a college project. Our task is to write a program that describes an image in spatial terms, e.g. "Cat on the left of a chair", or for a simple image "Two triangles over a large circle, and a square underneath it".
Suppose that after analysing an image I have a set of objects, with names, coordinates, sizes, orientations. I want to transform this set into sentences like the above, i.e. describing relative positions, overlaying and so on.
Unfortunately, I cannot find anything on this subject. "Scene description" means 3D graphics description languages. "Image description" is about the nouns and, recently, the verbs describing an image. It seems that I have the wrong keywords.
I'd much appreciate any hints about what to look for. Links to scientific papers to peruse (if you have any at hand) would be great too.
A:
This seems to be exactly scene description as described, for example, in Timor Kadir's thesis:
Broadly speaking, the aim of scene description is to arrive at a set of descriptions of a real world scene which sufficiently capture the component parts of the scene, their positions, poses, motions and interactions. (p. 3)
| {
"pile_set_name": "StackExchange"
} |
Q:
How to use Geoquery on another project with geofire real-time database in android?
I have 2 Firebase projects, one for the user app and a second for the driver app. I am storing the latest location (lat-long) of the driver app in the Firebase Realtime Database. Now I want to execute a geoquery on the driver project's database from my user app. My question is how to execute a geoquery on another project. I want to get the driver list from the user app with the help of a geoquery (by passing the current position of the user to find nearby drivers).
FirebaseOptions options = new FirebaseOptions.Builder().setApplicationId("1:my_App_ID").setApiKey("AIza_My_API_KEY").setDatabaseUrl("https://My_URL.firebaseio.com/").build();
FirebaseApp.initializeApp(this , options);
FirebaseApp app = FirebaseApp.getInstance("xyz");
FirebaseDatabase secondaryDatabase = FirebaseDatabase.getInstance(app);
DatabaseReference data = secondaryDatabase.getInstance().getReference().child("Drivers Available");
GeoFire geoFire = new GeoFire(data);
GeoQuery geoQuery = geoFire.queryAtLocation(new GeoLocation(21.11626328, 79.051096406), 1);
geoQuery.addGeoQueryEventListener(new GeoQueryEventListener() {
@Override
public void onKeyEntered(String key, GeoLocation location) {
Log.d("test_geolocation",key);
}
@Override
public void onKeyExited(String key) {}
@Override
public void onKeyMoved(String key, GeoLocation location) {}
@Override
public void onGeoQueryReady() {
}
@Override
public void onGeoQueryError(DatabaseError error) {
Log.d("errorGeoQuery",error+"");
}
});
A:
You need to give your app instance a name xyz if you want to look it up by that name later.
So something like:
FirebaseOptions options = new FirebaseOptions.Builder().setApplicationId("1:my_App_ID").setApiKey("AIza_My_API_KEY").setDatabaseUrl("https://My_URL.firebaseio.com/").build();
FirebaseApp.initializeApp(this , options, "xyz");
FirebaseApp app = FirebaseApp.getInstance("xyz");
Also see the Firebase documentation on using multiple projects in your application.
| {
"pile_set_name": "StackExchange"
} |
Q:
SVN commit authentication failure
Note: I have read through the last 150 svn commit error questions and none of the answers solved my problem.
I'm trying to set up a simple svn server from which I can checkout and to which I can commit files.
conf/svnserv.conf
[global]
anon-access = none
auth-access = rw
password-db = passwd
#authz-db = authz
conf/passwd
[users]
myuser = password
svn checkout from another machine works fine:
svn co svn://138.25.25.42:3690 test --username myuser --password password
then I added a local file within my local repository copy
echo "content" > testfile
svn add testfile
but when I now try to commit I get an Authentication/Authorization (German: Autorisierung) error (all 3 commands below give the same result):
svn ci --username myuser --password password -m "lala"
svn ci . --username myuser --password password -m "lala"
svn ci test --username myuser --password password -m "lala"
Looking at the transmissions with Wireshark, it looks like my username and password are ignored (?):
[SERVER->CLIENT]
( success ( 2 2 ( ) ( edit-pipeline svndiff1 absent-entries commit-revprops depth log-revprops partial-replay ) ) ) (
[CLIENT->SERVER]
2 ( edit-pipeline svndiff1 absent-entries depth mergeinfo log-revprops ) 25:svn://138.25.25.42:3690 21:SVN/1.6.15 (r1038135) ( ) )
[SERVER->CLIENT]
( success ( ( ANONYMOUS ) 36:ab25c003-0e26-4e93-98a3-6c5d9cc9a979 ) ) ( ANONYMOUS ( 0: ) ) ( success ( ) ) ( success ( 36:ab25c003-0e26-4e93-98a3-6c5d9cc9a979 25:svn://138.25.25.42:3690 ( mergeinfo ) ) )
[CLIENT->SERVER]
( commit ( 4:lala ( ) false ( ( 7:svn:log 4:lala ) ) ) )
[SERVER->CLIENT]
( failure ( ( 170001 0: 62:/build/buildd/subversion-1.6.5dfsg/subversion/svnserve/serve.c 167 ) ) )
Any help getting the commits working would be greatly appreciated.
edit: strangely, an anonymous checkout works as well; so the configuration files seem to be ignored, I just cannot see why.
A:
Solved!
The problem was: I had set up the SVN repository directory inside my home directory, and the user under which svnserve ran did not have access to it, regardless of the 777 permissions on the directory itself (since it had no permission to traverse the parent directory).
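In case someone hits the same thing, the check and the two possible fixes looked roughly like this (the user name and paths below are placeholders, not my actual setup):
# can the svnserve user actually reach the repository?
sudo -u svnserveuser ls /home/myuser/svnrepo
# fix 1: allow traversal of the parent directory
chmod o+x /home/myuser
# fix 2: move the repository out of the home directory
sudo mv /home/myuser/svnrepo /srv/svn/repo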
| {
"pile_set_name": "StackExchange"
} |
Q:
Nesting directives within directives
Regarding AngularJS directives, I've run into a situation where I'm calling a directive from within another directive, and I have the following questions.
Why can't I reference scope.bindValue in my link function? Is there a way I can compute a value from scope.bindValue and set it on the scope?
Why can the subdirective bind using "@" in scope:{} but not use scope.value = attrs.value in the link function?
All of the below can be seen at http://jsfiddle.net/sdg9/AjDtt/13/
HTML:
<directive bind-value="12" value="7"></directive>
JS:
var myApp = angular.module('myApp', []);
var commonTemplate = '<div>{{name}} bind-value is: {{bindValue}} </div><div>{{name}} value is: {{value}} </div><div>{{name}} add one to bind-value is: {{addOneBindValue}} </div><div>{{name}} add one to value is: {{addOneValue}} </div><br/>';
myApp.directive('directive', function () {
return {
scope: {
bindValue: "@",
},
template: commonTemplate + '<br/><sub-directive bind-value="{{value}}" value="{{value}}"></sub-directive>',
restrict: 'E',
link: function (scope, element, attrs) {
scope.name = "Directive";
scope.value = attrs.value;
scope.addOneBindValue = parseInt(scope.bindValue) + 1;
scope.addOneValue = parseInt(scope.value) + 1;
}
};
});
myApp.directive('subDirective', function () {
return {
scope: {
bindValue: "@"
},
template: commonTemplate,
restrict: 'E',
link: function (scope, element, attrs) {
scope.name = "SubDirective";
scope.value = attrs.value;
scope.addOneBindValue = parseInt(scope.bindValue) + 1;
scope.addOneValue = parseInt(scope.value) + 1;
}
};
});
Output:
Directive bind-value is: 12
Directive value is: 7
Directive add one to bind-value is: null <--- why?
Directive add one to value is: 8
SubDirective bind-value is: 7
SubDirective value is: <--- why?
SubDirective add one to bind-value is: null
SubDirective add one to value is: null
A:
Interpolated attributes (i.e., attributes that use {{}}s) and isolate scope properties defined with '@' are not available when the link function runs. You need to use attrs.$observe() (or scope.$watch( @ property here, ...)) to get the values (asynchronously).
So, scope.bindValue is not available when you are trying to use it.
Similarly, in your subDirective, attribute value has {{}}s, so its value will also not be available when you are trying to use it. You also need to define an '@' directive property for this.
Working fiddle.
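The essential part of that approach looks roughly like this (a sketch reusing the names from your code):
myApp.directive('subDirective', function () {
    return {
        scope: {
            bindValue: '@',
            value: '@'
        },
        template: commonTemplate,
        restrict: 'E',
        link: function (scope, element, attrs) {
            scope.name = 'SubDirective';
            // $observe fires once the interpolated value is available
            // (and again whenever it changes)
            attrs.$observe('bindValue', function (val) {
                scope.addOneBindValue = parseInt(val, 10) + 1;
            });
            attrs.$observe('value', function (val) {
                scope.addOneValue = parseInt(val, 10) + 1;
            });
        }
    };
});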
The reason for the asynchronous requirement is that the items inside the {{}}s may change, and you normally want your directive to notice (and then do something -- like update the "addOne" values). '@' is normally used with isolate scopes when the attribute value contains {{}}s.
If attribute values are constants, and you're not going to use the values in a template (or templateUrl), then '@' probably shouldn't be used. In the link function, just use attrs.attrName if the value is a string, or scope.$eval(attrs.attrName) if the attribute is a number or boolean (or parseInt(attrs.attrName) if you know it is a number).
| {
"pile_set_name": "StackExchange"
} |
Q:
How I can use pow with negative number
How could I use pow with negative number.
I use this code and it is working fine with positive.
let divisor = pow(10.0, Double(5))
How could I change this line of code to use negative numbers.
Because this part returns:
let divisor = pow(10.0, Double(-8)) // 1e-08
I need to convert one value to another and there is a special formula for that.
This formula is:
(Mathematical formula: Value * 10 ^ -8)
A:
1e-08 is scientific notation that literally means 1 x 10^-8 which is exactly what you'd expect 10^(-8) to return.
So how can I do this 5000 * 10 ^ -8 = 0.00005?
5000 * pow(10.0, -8)
which is the same as:
5000 / pow(10.0, 8)
Note that:
0.00005 is the same as 5e-05
| {
"pile_set_name": "StackExchange"
} |
Q:
Google docs for Ubuntu 16.10
I am using Ubuntu 16.10. I want to install Google Docs or Microsoft Office documents for offline use, so that when I connect to the internet all documents will be updated with the server.
If it is possible, please suggest how I can do it. If it is not possible, please suggest other options that work on the same concept.
A:
This is easily possible for Google Docs. Just install the Chrome browser, or preferably the free Chromium browser from the repositories, and install the Google Docs offline plugin from the Web Store.
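For example, from a terminal (assuming the standard Ubuntu repositories):
sudo apt install chromium-browser
Then open the Chrome Web Store in Chromium and add the Google Docs Offline extension, which keeps documents available without a connection and syncs them when you are back online.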
| {
"pile_set_name": "StackExchange"
} |
Q:
Spark Streaming checkpoint to remote hdfs
I am trying to checkpoint my Spark Streaming context to HDFS to handle a failure at some point in my application. I have my HDFS set up on a separate cluster and Spark running on a separate standalone server. To do this, I am using:
ssc.checkpoint(directory: String)
This gives me org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE when I try with the directory set to "hdfs://hostname:port/pathToFolder".
How can I checkpoint to a remote hdfs path? Is it possible to add credentials to the string uri? I tried googling, but no help so far.
Thanks and appreciate any help!
A:
You can provide the credentials by using:
hdfs://username:password@hostname:port/pathToFolder
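If that does not help and the cluster runs plain (non-Kerberos) authentication, the exception usually just means the target directory is not writable by the user the driver runs as. Two common workarounds (a sketch, with a placeholder user and path):
# submit the job as a user HDFS accepts
export HADOOP_USER_NAME=hdfs

# or hand the checkpoint directory to the user Spark runs as
hdfs dfs -mkdir -p /pathToFolder
hdfs dfs -chown -R root /pathToFolder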
| {
"pile_set_name": "StackExchange"
} |
Q:
What's the encoding for the Google main page?
When Google's main page communicates with Firefox or Chrome it uses a particular type of encoding (Perl says it is utf.64). However, I can't decode it as such; is it a gzipped encoding? I need to finish an app in Perl that should be able to make sense of the Google homepage using Firefox (like a proxy).
A:
Using LiveHTTPHeaders:
http://www.google.com/
GET / HTTP/1.1
Host: www.google.com
User-Agent: ***
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.7,tr;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: UTF-8,*
Keep-Alive: 115
Connection: keep-alive
Cookie: ***
HTTP/1.1 200 OK
Date: Thu, 18 Mar 2010 15:29:03 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=UTF-8
Content-Encoding: gzip
Server: gws
Content-Length: 4440
X-XSS-Protection: 0
which shows that the data returned is gzipped and the character encoding used is UTF-8.
#!/usr/bin/perl
use strict; use warnings;
use LWP::UserAgent;
my $ua = LWP::UserAgent->new();
$ua->show_progress(1);
my $response = $ua->get('http://google.com/');
if ( $response->is_success ) {
print $response->decoded_content, "\n";
}
A:
Assuming you are employing LWP or something compatible, just use HTTP::Message::decoded_content. Both content encoding and character encoding is figured out automatically for you.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is it more efficient to use boost::asio::basic_stream_socket::async_read_some instead of boost::asio::async_read?
Is it better to use boost::asio::basic_stream_socket::async_read_some instead of boost::asio::async_read when it comes to high performance data throughput?
A:
boost::asio::async_read is a composed operation, which is well described in the documentation
This operation is implemented in terms of zero or more calls to the stream's async_read_some function, and is known as a composed operation. The program must ensure that the stream performs no other read operations (such as async_read, the stream's async_read_some function, or any other composed operations that perform reads) until this operation completes.
Any performance conclusions should be based on empirical data from your own application.
| {
"pile_set_name": "StackExchange"
} |
Q:
ASP DetailsView, conditionally hide show controls and row?
I conditionally hide/show panels in a DetailsView.
I also want to hide/show the DetailsView row/field that the panel is contained in,
because it currently displays empty rows when the panels are hidden.
ASCX:
<asp:DetailsView>
<asp:TemplateField>
<ItemTemplate>
<asp:panel runat="server" ID="pnlHideShow" OnInit="OnInit_Panel">
...
CodeBehind:
protected void OnInit_Panel(object sender, EventArgs e)
{
Panel pnl = (Panel) sender;
pnl.Visible = false;
switch (pnl.ID)
{
default:
break;
case "pnlHideShow":
pnl.Visible = (some condition);
//How to hide/show DetailsView item containing this panel?
break;
...
}
...
}
Hope I am not a candidate for "worse-than-failure" ;)
A:
Something like:
pnl.Visible = (some condition);
pnl.Parent.Visible = pnl.Visible; // you may have to go further up, e.g. pnl.Parent.Parent.Visible... try stepping through in the debugger
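In a DetailsView, a panel from an ItemTemplate normally ends up inside a DataControlFieldCell whose parent is the DetailsViewRow, so two levels up usually reaches the row itself. A sketch of the switch case (verify the exact depth in the debugger):
case "pnlHideShow":
    bool show = (some condition);
    pnl.Visible = show;
    // cell -> row; adjust the number of .Parent hops if needed
    Control row = pnl.Parent.Parent;
    if (row != null)
    {
        row.Visible = show;
    }
    break;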
| {
"pile_set_name": "StackExchange"
} |
Q:
Grunt-contrib-connect: how to launch the server in a specified browser
I'm quite new to the bower / grunt / yeoman environment. I'm trying to customize the app generated by the default yeoman Webapp generator.
Basically when I launch grunt serve the default browser will be launched opening the url served by the grunt server. I'd like to specify in which browser the webapp should be opened but I had no luck.
These are the default options of the connect task (using grunt-contrib-connect) inside my gruntfile:
connect: {
options: {
port: 9000,
open: true,
livereload: 35729,
// Change this to '0.0.0.0' to access the server from outside
hostname: 'localhost',
}
I've tried to add the field appName: 'Firefox' but I think this is not what I'm looking for. I guess appName is used to specify how to launch the default browser from the command line (e.g. with the open command), am I right?
Is it possible to specify the browser in grunt-contrib-connect or not at all? If not how should I accomplish this task? Maybe using grunt-open?
Thanks
A:
According to this commit in grunt-contrib-connect
, the open option seems to be supported from v0.6.0.
But the app generated webapp generator uses v0.5.0 by default.
So you need to upgrade it in package.json.
"grunt-contrib-connect": "~0.7.1",
Then run npm install (and you can double-check with npm list | grep grunt-contrib-connect if v0.7.1 has been installed) and add the open option in Gruntfile.js.
connect: {
...
livereload: {
options: {
open: {
appName: 'Firefox'
},
...
This works for me, so I hope this helps you too.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I assign concurrent tasks based on user selections in a SharePoint workflow?
I am trying to create a workflow on SharePoint 2007 that should go like so:
User adds a new item in the "Faults" list. When creating it, he ticks the boxes for whatever departments needs to address the fault.
The appointed representative of each selected department is assigned a task to check the new item.
The representative marks the task as complete.
All tasks are completed and the workflow is finished.
The problem: if I create a workflow in SharePoint Designer 2007 and create multiple steps checking to see "if X department was ticked then assign task to user", it will wait until the first department marks their task as complete before it assigns a task to the next department. I need all departments to be assigned the task at the same time.
Other options I have considered:
If department X was ticked, then add the representative's username to a variable named "userX". Repeat with department Y and variable "userY", and then Z with variable "userZ". Finally, assign a task to userX, userY and userZ. I had hoped it would ignore the blank variables, but instead it assigns a task to nobody and the workflow never finishes.
Having the user assign the item to users instead of departments is not possible since they can't be expected to know the appointed representatives of every department.
I can create a task through the "Create List Item" action, but the workflow is marked finished after creating the tasks, even through the tasks are not complete.
Does anyone have any ideas?
A:
If anyone is interested, here is the solution I eventually used:
First step of the workflow:
if [department checkboxes] contains "Department 1"
store "user1" in [Variable:assignedTo]
One step each for other departments:
if [department checkboxes] contains "Department X"
and [Variable:assignedTo] is empty
store "userX" in [Variable:assignedTo]
else if [department checkboxes] contains "Department X"
and [Variable:assignedTo] is not empty
store "[Variable:assignedTo]; userX" in [Variable:assignedTo]
And finally,
assign "Task" to [Variable:assignedTo]
Hope this helps somebody.
| {
"pile_set_name": "StackExchange"
} |