Q:
Creating file from php radio button variables
I want to create a text file (or at least to echo) with some values taken from a radio button list.
Looking for info, I have been able to build the radio forms like this:
<!DOCTYPE html>
<html>
<head>
<title>Config</title> <!-- Include CSS File Here-->
<link href="css/style.css" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="main">
<!---- Radio Button Starts Here ----->
<form>
<label class="heading">First value </label><br>
<input name="v1" type="radio" value="v1text1">Value 1 - Option 1<br>
<input name="v1" type="radio" value="v1text2">Value 2 - Option 1<br>
</form>
<br>
<form>
<label class="heading">Second value </label><br>
<input name="v2" type="radio" value="v2text1">Value 2 - Option 1<br>
<input name="v2" type="radio" value="v2text2">Value 2 - Option 2<br>
<input name="v2" type="radio" value="v2text3">Value 2 - Option 3
</form>
<input name="submit" type="submit" value="Submit">
</div>
</div>
</body>
</html>
And now I would like to take the values v1 and v2, so I found the following php code (my idea would be to do this with each one of the values):
<?php
if (isset($_POST['v1']))
echo $_POST['v1'];
else
?>
So I added it after the submit button code; however, after selecting values and clicking the Submit button, nothing happens.
I know nothing about PHP; the code above was pieced together from Google results.
A:
First of all, create a single form and place the <input name="submit" type="submit" value="Submit"> inside the <form> tag.
The form needs certain attributes in order to submit, namely the method and action attributes. Always use the 'post' method for better security.
Now, modify the <form> tag with this code:
<form method="post" name="form1" action="">
In the 'action' attribute, write the path of the page where you want to submit the form and display the radio button values, like this:
<form method="post" name="form1" action="results.php">
Otherwise, leave the action attribute blank, as in the first <form> tag above, if you want to submit and read the radio button values on the same page.
Now, the code for retrieving the values of radio buttons:
<?php
if(isset($_POST['submit'])){
if(isset($_POST['v1'])){
echo $_POST['v1'];
}
if(isset($_POST['v2'])){
echo $_POST['v2'];
}
}
?>
Place it just above the html tag.
Now, here is the full updated code; just copy it and replace your existing code.
<?php
if(isset($_POST['submit'])){
if(isset($_POST['v1'])){
echo $_POST['v1'];
}
if(isset($_POST['v2'])){
echo $_POST['v2'];
}
}
?>
<!DOCTYPE html>
<html>
<head>
<title>Config</title> <!-- Include CSS File Here-->
<link href="css/style.css" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="main">
<!---- Radio Button Starts Here ----->
<form method="post" name="form1" action="">
<label class="heading">First value </label><br>
<input name="v1" type="radio" value="v1text1">Value 1 - Option 1<br>
<input name="v1" type="radio" value="v1text2">Value 2 - Option 1<br>
<br/>
<label class="heading">Second value </label><br>
<input name="v2" type="radio" value="v2text1">Value 2 - Option 1<br>
<input name="v2" type="radio" value="v2text2">Value 2 - Option 2<br>
<input name="v2" type="radio" value="v2text3">Value 2 - Option 3
<input name="submit" type="submit" value="Submit">
</form>
</div>
</div>
</body>
</html>
Hope this is useful to you.
| {
"pile_set_name": "StackExchange"
} |
Q:
ClassNotFoundException in jar
I have the following pom.xml.
POM.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.datasys.prasanna</groupId>
<artifactId>hadoop-wordcount</artifactId>
<version>1.0.0</version>
<packaging>jar</packaging>
<name>hadoop-wordcount</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-core</artifactId>
<version>1.2.1</version>
</dependency>
</dependencies>
</project>
When I build the jar with mvn package, I get a jar named hadoop-wordcount-1.0.0.jar
But when I try to run it with hadoop jar hadoop-wordcount-1.0.0.jar WordCount /input /out1
It says Exception in thread "main" java.lang.ClassNotFoundException: WordCount
WordCount is the java file which has my main method. Am I missing out something in pom.xml?
A:
If your WordCount class is in a (Java) package (for example, if you copied and pasted code from this example), then you'll need to provide the fully qualified class name on the command line.
For example:
hadoop jar hadoop-wordcount-1.0.0.jar org.myorg.WordCount /input /out1
Q:
ssrs - how to use a group by in the header
My aim is to have that
I want to add a column name in my header that will be in a group. That column name will be linked to an existing group within the body of my report.
This is what I've done
I tried to put a column name inside a textbox within the header:
=(Fields!EVENEMENTS_TYPE_LIBEL.Value, "DataSetEvtsLibel")
It did not work. The only thing possible is to have an aggregate (see below):
=First(Fields!EVENEMENTS_TYPE_LIBEL.Value, "DataSetEvtsLibel")
I then created a parameter Libel and used a query to provide its available values.
It gives me the relevant field but alas it does not do the grouping.
I looked at the internet but I did not find anything relevant.
If you have any tips, they are more than welcomed.
Thanks
Update: Should I mention I'm talking about Page header
A:
You can create a Group Header in order to show the current value in the header between each instance of your group.
In the Row Groups pane, right click Details group, and add a parent group.
Select EVENEMENTS_TYPE_LIBEL in the group by drop down list, and mark the Add group header check box.
Now in the tablix delete the first row and the first column. You should get a tablix like this:
Merge the first row in one cell and use EVENEMENTS_TYPE_LIBEL field.
Add the columns in the next row, you will have to insert an additional row for column headers, so use insert inside the group:
It should produce the following tablix:
UPDATE: Adding textbox with the current group present in the page.
See the header textbox properties in the tablix and look for the textbox name:
Now in the Page Header textbox use:
=ReportItems!Textbox176.Value
It will show something like this:
Hope this is what you are looking for.
Q:
Can someone give me sample code for how to add a request handler to an HttpServer object in Dart?
I am very new to Dart programming; any help is appreciated.
void main() {
var server = new HttpServer();
server.listen('127.0.0.1', 8080);
server.
addRequestHandler(
accept(HttpRequest function) => acceptInput(request, response), handler);
}
I want to add the function below to the request handler.
server.addRequestHandler()
I would like to do this so that I can add many request handlers, including one for WebSockets.
A sample or a tutorial would be very helpful.
I want to keep each handler in a separate function just for simplicity.
void acceptInput(HttpRequest request,HttpResponse response){
print(request.connectionInfo.toString());
print(request.queryParameters.toString());
response.outputStream.write('Hello dude'.charCodes);
response.outputStream.close();
}
Note: I know my main code above is wrong; I need help correcting it so that it incorporates the acceptInput function.
A:
Actually, you're really close.
Try this:
var server = new HttpServer();
server.addRequestHandler(
(req) => req.path == '/save',
handleSave);
server.addRequestHandler(
(req) => req.path == '/delete',
handleDelete);
server.defaultRequestHandler = new StaticFileHandler(basePath).onRequest;
Where handleSave and handleDelete are just functions, like:
handleSave(HttpRequest req, HttpResponse resp) {
// ...
}
Q:
OS X terminal command to create a file named on current date
I have set up a cron task and I want to save the output to a file. The file should have the name based on the time at which the cron was executed (eg.: 20110317-113051.txt).
My actual cron command is as follows: lynx -dump http://somesite/script.php > /Volumes/dev0/textfile.txt
I want the textfile to be replaced by some sort of unique time stamp.
I've tried lynx -dump http://somesite/script.php > $(date).txt but I receive an error that the command is ambiguous.
Thanks for your help!
Sorin
A:
The date command can be given a format to determine exactly what form it generates dates in. It looks as if you want $(date +%Y%m%d-%H%M%S).txt. With this format, the output of date should be free of spaces, parentheses, etc., which might otherwise confuse the shell or the lynx command.
See http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/date.1.html for documentation of the date command and http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man3/strftime.3.html for documentation of the format string, which is the same as for the strftime function in the standard library.
A:
You need to quote the file name, since date by default will have special characters in it:
lynx -dump http://somesite/script.php > "$(date).txt"
As @Gareth says though, you should specify a format string for date, so that you get a more readable/manageable file name, e.g.
lynx -dump http://somesite/script.php > "$(date +%Y%m%d-%H%M%S).txt"
Q:
.NET: Is Type.GetHashCode guaranteed to be unique?
I have someone using Type.GetHashCode as if it were a primary key. I think this is a horrible idea but I wanted to know if there was some sort of documented special case that says no two types would have the same hash code.
A:
There are no guarantees around GetHashCode except that it will likely be randomly distributed, not unique. Documentation specifically mentions that:
The default implementation of the GetHashCode method does not
guarantee unique return values for different objects. Furthermore,
the .NET Framework does not guarantee the default implementation of
the GetHashCode method, and the value it returns will be the same
between different versions of the .NET Framework. Consequently, the
default implementation of this method must not be used as a unique
object identifier for hashing purposes. ... if two objects do not compare as equal, the GetHashCode methods for the two objects do not have to return different values.
Random distribution is encouraged to avoid hash collisions (slow Dictionaries):
For the best performance, a hash function must generate a random
distribution for all input.
It is also a very bad idea to persist the results of GetHashCode and base any decisions on that persisted value. The same object may return a different hash code on the next application execution:
The GetHashCode method for an object must consistently return the same
hash code as long as there is no modification to the object state that
determines the return value of the object's Equals method. Note that
this is true only for the current execution of an application, and
that a different hash code can be returned if the application is run
again.
The CLR itself changed the GetHashCode implementation for String between .NET 1 and .NET 2, and uses different hash algorithms for the 32- and 64-bit versions.
From Guidelines and rules for GetHashCode:
GetHashCode is designed to do only one thing: balance a hash table. Do
not use it for anything else.
You should be looking at cryptographic hashes if you want an almost-unique hash code based on the object's value.
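This caveat is not .NET-specific. As an illustration only (in Python rather than C#, since the principle is the same), a cryptographic digest of a stable key such as a fully qualified type name gives a reproducible identifier, while the runtime's built-in hash does not:

```python
import hashlib

def stable_id(type_name: str) -> str:
    """Deterministic identifier derived from a fully qualified type name."""
    return hashlib.sha256(type_name.encode("utf-8")).hexdigest()

# The digest is the same in every process, on every platform.
a = stable_id("System.String")
b = stable_id("System.String")
print(a == b)   # True
print(len(a))   # 64 (hex characters of a 256-bit digest)

# By contrast, hash() makes no such promise: Python randomizes string
# hashes per process (PYTHONHASHSEED), much as .NET reserves the right
# to change GetHashCode between framework versions.
```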
A:
It's not guaranteed to be unique.
If your assemblies are strongly named you could use the fully qualified type name as a unique key to identify a Type.
Q:
Convert RDD[(Int,Int)] to PairRDD in scala
What is the problem with this example ?
val f = sc.parallelize(Array((1,1),(1,2)))
val p = new org.apache.spark.rdd.PairRDDFunctions[Int,Int](f)
Name: Compile Error
Message: error: type mismatch;
found : org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.RDD[(Int, Int)]
required: org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.RDD[(Int, Int)]
val p = new org.apache.spark.rdd.PairRDDFunctions[Int,Int](f)
^
A:
Your code seems to work fine on Spark 2.2.0.
This is the transcript of the console commands in Spark version 2.2.0:
scala> val f = sc.parallelize(Array((1,1),(1,2)))
f: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[0] at parallelize at <console>:24
scala> val p = new org.apache.spark.rdd.PairRDDFunctions[Int,Int](f)
p: org.apache.spark.rdd.PairRDDFunctions[Int,Int] = org.apache.spark.rdd.PairRDDFunctions@6e1d939e
scala> p
res0: org.apache.spark.rdd.PairRDDFunctions[Int,Int] = org.apache.spark.rdd.PairRDDFunctions@6e1d939e
scala> f
res1: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[0] at parallelize at <console>:24
Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_131)
This seems to be a bug in an older version to me.
Q:
Is there any way to use cognitive services to detect if a string contains words vs just junk shift chars/gibberish?
I'm trying to find a way to use cognitive services to detect if a string contains a piece of coherent text or is just junk. Example:
SDF#%# ASFSDS b
vs
Hi my name is Sam.
This seems impossible to do. I had the idea of running the text through the keywords text analysis (which would give me a keyword of ASFSDS (how useful!)) and then running that keyword through the Bing Spell Check. I'm not sure what is going on in the USA, but it seems ASFSDS is English. It really is quite... erm... dumb.
I've tried running similar text through a bunch of services (like language detection) and they all seem convinced that my gibberish samples are 100% coherent English.
I'm going to quiz an MS rep about it on Friday but I was wondering if anyone has achieved something like this using Cognitive services?
A:
Rather than a binary is-word-or-not question, what you might consider instead is the probability of a word being gibberish. You can then choose a threshold that you like.
For computing word probabilities, you might try the Web Language Model API. You could look at the joint probability, as an example. For your set of words, the response looks as follows (values for the body corpus):
{
"results": [
{
"words": "sdf#%#",
"probability": -12.215
},
{
"words": "asfsds",
"probability": -12.215
},
{
"words": "b",
"probability": -3.127
},
{
"words": "hi",
"probability": -3.905
},
{
"words": "my",
"probability": -2.528
},
{
"words": "name",
"probability": -3.128
},
{
"words": "is",
"probability": -2.201
},
{
"words": "sam.",
"probability": -12.215
},
{
"words": "sam",
"probability": -4.431
}
]
}
You will notice a couple of idiosyncrasies:
Probabilities are negative. This is because they are logarithmic.
All terms are case-folded. This means that the corpus won't distinguish between, say, GOAT and goat.
The caller must perform a certain amount of normalization themselves (note the probability of sam. vs sam).
Corpora are only available for the en-us market. This could be problematic depending on your use case.
An advanced use case would be computing conditional probabilities, i.e. the probability of a word in the context of words preceding it.
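Once the per-word log probabilities come back (as in the JSON above), applying a threshold is straightforward. A minimal sketch in Python; the cutoff of -10 is an arbitrary illustration, not a recommended value:

```python
import json

# The API response shown above, reduced to the fields we need.
response = json.loads("""
{
  "results": [
    {"words": "sdf#%#", "probability": -12.215},
    {"words": "asfsds", "probability": -12.215},
    {"words": "b",      "probability": -3.127},
    {"words": "hi",     "probability": -3.905},
    {"words": "my",     "probability": -2.528},
    {"words": "name",   "probability": -3.128},
    {"words": "is",     "probability": -2.201},
    {"words": "sam.",   "probability": -12.215},
    {"words": "sam",    "probability": -4.431}
  ]
}
""")

THRESHOLD = -10.0  # log probability; tune against your own data

def likely_gibberish(results, threshold=THRESHOLD):
    """Return the words whose log probability falls below the threshold."""
    return [r["words"] for r in results if r["probability"] < threshold]

print(likely_gibberish(response["results"]))
# ['sdf#%#', 'asfsds', 'sam.']
```

Note how the unnormalized sam. lands below the cutoff while sam does not, which is exactly the normalization idiosyncrasy mentioned above.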
Q:
ASP.NET Button click not caught (button in user control which is dynamically loaded in Repeater)
I have written a user control that captures some user input and has a Save button to save it to the DB. I use a repeater to render a number of these controls on the page - imagine a list of multiple choice questions with a Save button by each question.
I am loading the user control inside the repeater's ItemDataBound event like this (code simplified):
Protected Sub rptAssignments_ItemDataBound(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.RepeaterItemEventArgs) Handles rptAssignments.ItemDataBound
Dim CurrentAssignment As Assignment = DirectCast(e.Item.DataItem, Assignment)
Dim ctl As UA = CType(LoadControl("~\Controls\UA.ascx"), UA)
ctl.AssignmentID = CurrentAssignment.AssignmentID
ctl.Assignment = CurrentAssignment.AssignmentName
ctl.EnableViewState = True
e.Item.Controls.Add(ctl)
End Sub
FYI, I need to load the control at runtime rather than specify it in the ItemTemplate because a different control could be used for each row.
In the user control, there is a linkbutton like this:
<asp:LinkButton ID="lbnUpdate" runat="server" Text="Update" OnClick="lbnUpdate_Click" />
... and a button click handler like this:
Protected Sub lbnUpdate_Click(ByVal sender As Object, ByVal e As EventArgs) Handles lbnUpdate.Click
' my code to update the DB
End Sub
The problem is that when the Save button is clicked, the page posts back, but lbnUpdate_Click is not called. The Page_Load event of the page itself is called however.
I should mention that the repeater is part of a user control, and that user control is loaded inside another user control (this is a DotNetNuke site which makes heavy use of user controls). The Save button link looks like this:
javascript:__doPostBack('dnn$ctr498$AssignmentsList$rptAssignments$ctl04$ctl00$lbnUpdate','')
A:
This problem exemplifies how WebForms outsmarts itself.
Dynamically added controls must be recreated on every postback, early in the page lifecycle, before their events can fire. So you have to reconstitute the Repeater, either by re-binding or from viewstate, to have sub-controls raise events. The price you pay is either another trip to your data source or all that redundant data stored on the client in the viewstate. Shameful!
Q:
Magento Disable Filter Field to Admin Grid
I have a custom module with a backend page. In the grid, I show the customer email as the user name. By default, Magento adds a filter to every column in the grid. Now, when I try to filter by the customer's email, I get an exception saying that my custom table doesn't have an email column, because Magento is looking for it in my custom table. How can I fix this problem, or how can I remove the filter for that column so that the admin can't filter by that field?
Thanks.
A:
Add the option
'filter' => false
to the column you want to remove the filter from in the grid view (e.g. app/code/core/Mage/Adminhtml/Block/Sales/Order/Grid.php)
$this->addColumn('email', array(
'header' => Mage::helper('module')->__('Email'),
'align' =>'left',
'index' => 'email',
'filter' => false,
));
Q:
How to get all contacts in Exchange Web Services (not just the first few hundred)
I'm using Exchange Web Service to iterate through the contacts like this:
ItemView view = new ItemView(500);
view.PropertySet = new PropertySet(BasePropertySet.IdOnly, ContactSchema.DisplayName);
FindItemsResults<Item> findResults = _service.FindItems(WellKnownFolderName.Contacts, view);
foreach (Contact item in findResults.Items)
{
[...]
}
Now this restricts the result set to the first 500 contacts - how do I get the next 500? Is some kind of paging possible? Of course I could set the limit to 1000. But what if there are 10000? Or 100000? Or even more?
A:
You can do 'paged search' as explained here.
FindItemsResults contains a MoreAvailable property that will tell you when you're done.
The basic code looks like this (note that pageSize, offset, and MoreItems must be declared and initialized first):
int pageSize = 50;
int offset = 0;
bool MoreItems = true;
while (MoreItems)
{
// Write out the page number.
Console.WriteLine("Page: " + offset / pageSize);
// Set the ItemView with the page size, offset, and result set ordering instructions.
ItemView view = new ItemView(pageSize, offset, OffsetBasePoint.Beginning);
view.OrderBy.Add(ContactSchema.DisplayName, SortDirection.Ascending);
// Send the search request and get the search results.
FindItemsResults<Item> findResults = service.FindItems(WellKnownFolderName.Contacts, view);
// Process each item.
foreach (Item myItem in findResults.Items)
{
if (myItem is Contact)
{
Console.WriteLine("Contact name: " + (myItem as Contact).DisplayName);
}
else
{
Console.WriteLine("Non-contact item found.");
}
}
// Set the flag to discontinue paging.
if (!findResults.MoreAvailable)
MoreItems = false;
// Update the offset if there are more items to page.
if (MoreItems)
offset += pageSize;
}
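The offset/page-size loop is not EWS-specific; here is the same pattern sketched in Python against an in-memory list, with find_items as a hypothetical stand-in for service.FindItems:

```python
# Hypothetical stand-in for service.FindItems: returns one page of
# results plus a flag mirroring FindItemsResults.MoreAvailable.
def find_items(items, offset, page_size):
    page = items[offset:offset + page_size]
    more = offset + page_size < len(items)
    return page, more

contacts = [f"contact{i}" for i in range(23)]

page_size, offset = 5, 0
more_items = True
collected = []
while more_items:
    page, more_items = find_items(contacts, offset, page_size)
    collected.extend(page)
    if more_items:
        offset += page_size  # advance only while more pages remain

print(len(collected))  # 23: every contact seen exactly once
```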
Q:
Python lxml - can't get to parent
I have an xml tree that I need to search:
<instance>
<hostName>hostname1</hostName>
<port enabled="true">9010</port>
<metadata>
<branch>master</branch>
</metadata>
<vipAddress>vip.address.com</vipAddress>
</instance>
<instance>
<hostName>hostname2</hostName>
<port enabled="true">9011</port>
<metadata>
<branch>sub_branch</branch>
</metadata>
<vipAddress>vip2.address.com</vipAddress>
</instance>
I am trying to search by the text in branch, then get the grandparent element and read its vipAddress and port. But with the code below, when I print the vipAddress and port it prints all of them instead of just the one I was looking for:
branch_name = 'master'
for record in tree.xpath('//branch/text()'):
if(record == branch_name):
branch = record.getparent()
target_environment = branch.xpath('//vipAddress/text()')
print(target_environment)
target_port = branch.xpath('//port/text()')
example:
If I search for master, instead of returning target_environment=vip.address.com and port=9010, it returns target_environment=[vip.address.com, vip2.address.com] and port=[9010,9011].
I am sure I am doing something simple wrong I just can't see what.
A:
I'm not the best at working with XML in Python, but I do see a few issues:
for record in tree.xpath('//branch/text()') gives you the text nodes as plain strings, which you then compare to branch_name. Strings do not have a getparent() method, so you will want to remove the text() from the xpath and compare branch_name to record.text instead.
Once record is an element, calling getparent() twice gives you the grandparent <instance> element (branch → metadata → instance). I'm sure there's a better way to do this, but it seems to work.
.xpath('//...') searches the whole document, matching anywhere. Since you just want the element that's a child of <instance>, the relative path branch.xpath('vipAddress/text()') should do. The same goes for finding the target_port.
Also, .xpath always returns a list, so even when this all works your port will look like ['9010'].
Putting it together I get something like:
branch_name = 'master'
for record in tree.xpath('//branch'):
if(record.text == branch_name):
branch = record.getparent().getparent()
target_environment = branch.xpath('vipAddress/text()')
print(target_environment)
target_port = branch.xpath('port/text()')
print(target_port)
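As an aside, if lxml is unavailable, the same traversal works with the standard library's xml.etree.ElementTree, which has no getparent(); you can build a child-to-parent map instead. A sketch under that assumption (the sibling <instance> elements are wrapped in a root element so the snippet parses as a well-formed document):

```python
import xml.etree.ElementTree as ET

# The two sibling <instance> elements from the question, wrapped in a root.
doc = """
<instances>
  <instance>
    <hostName>hostname1</hostName>
    <port enabled="true">9010</port>
    <metadata><branch>master</branch></metadata>
    <vipAddress>vip.address.com</vipAddress>
  </instance>
  <instance>
    <hostName>hostname2</hostName>
    <port enabled="true">9011</port>
    <metadata><branch>sub_branch</branch></metadata>
    <vipAddress>vip2.address.com</vipAddress>
  </instance>
</instances>
"""

root = ET.fromstring(doc)
# ElementTree elements have no getparent(), so build the map ourselves.
parent = {child: elem for elem in root.iter() for child in elem}

branch_name = "master"
for branch in root.iter("branch"):
    if branch.text == branch_name:
        instance = parent[parent[branch]]  # branch -> metadata -> instance
        print(instance.findtext("vipAddress"))  # vip.address.com
        print(instance.findtext("port"))        # 9010
```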
Q:
Will a bryant compatible 60amp breaker fit a #2 wire?
I looked up the size of #2 wire and it is just over a quarter inch in diameter. I'm concerned a 60 amp breaker won't fit #2 wire. I have an older Bryant panel. Maybe the breaker manufacturer makes a difference, or I must order a special one?
I have a long run, 100+ ft, which has forced me to use aluminum #2 wire.
A:
On the side or front of the breaker, there is a label that says what the maximum wire size is that you can use. This label is required by UL when the breaker is made. It may have fallen off however. If so, you can look on-line for a similar breaker. Bryant is now part of Cutler Hammer, their version of that breaker is called a BR series with the BR standing for Bryant. The breaker is unchanged from when Westinghouse owned the Bryant line (Westinghouse was bought by Eaton years ago, Eaton owns Cutler Hammer). So any technical specs on a new version are going to be the same.
Q:
Calc expected value of 5 random number with uniform distribution
Assume we have a random number $\sim U(0,100)$.
Then the expected value of that number is $\int_{0}^{100} \frac{x}{100}\,dx = 50$.
Now assume we have 5 random numbers $\sim U(0,100)$.
How can I calculate the expected value of the maximal number?
Thanks.
A:
You need to learn about order statistics:
https://en.wikipedia.org/wiki/Order_statistics
The maximum of five independent observations is the fifth order statistic (of that sample). In your case, it has a (scaled) beta distribution; you can find the details in the Wikipedia article above. Specifically, it is 100 times a Beta(5,1) variable, with expectation $100 \cdot \frac{5}{6}$.
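The expectation $100 \cdot \frac{5}{6} \approx 83.33$ is easy to confirm with a quick simulation:

```python
import random

random.seed(0)  # reproducible run

N = 200_000
total = 0.0
for _ in range(N):
    # Maximum of five independent draws from U(0, 100)
    total += max(random.uniform(0, 100) for _ in range(5))

estimate = total / N
print(round(estimate, 2))  # close to 100 * 5/6 = 83.33...
```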
Q:
Is "what technologies exist to solve this problem" an appropriate question?
The question is:
I'm interested in the idea of syndicating/sharing discussions across
sites. Many discussion platforms are effectively walled gardens where
the operators want to keep you on their site (so they can show you
adverts).
But it doesn't have to be like that. I'm sure some sites
would be happy to share their conversations (as open data) with other
websites and allow users of these other websites to contribute
back, with appropriate authentication. Does anyone know of any
standards or platforms that can achieve this?
I don't think it has one "correct" answer, and it might stimulate discussion, but it is technical and I can't think of a better place to ask it than on Stack Overflow. Can you?
A:
I'd say that's essentially a recommendation/shopping question, and those are indeed not appropriate. As you say there isn't "one "correct" answer, and it might stimulate discussion". Or as the close reason states:
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam.
With regards to "a better place", there wouldn't be one on the network. A site that often comes up for such questions is http://www.slant.co. I can't personally recommend it, since I haven't used it. So perhaps have a look and decide for yourself if that is an appropriate location, and how you might need to modify your question to make it fit there, if possible.
A:
This is the sort of question you want reduced a bit more before it's time to use a tool like Stack Overflow, at least most of the time. It depends a bit on the background, for instance:
I'm using lint checkers as part of my build / test suite, yet none of them manage to detect when I'm making an assignment in a conditional. Is there a lint checker for POSIX operating systems that can detect this in C99 code? I've used foo, bar, barfoo and foobar with no success.
That's a very narrow question and it's not asking for a recommendation. If a lint checker does this then it's a valid answer, else not, nothing subjective about it. It might be closed, but it's answerable and useful.
What lint checkers catch the widest sorts of problems in C code? Are there better tools than lint checkers available? Is there some kind of standard that lint checkers should meet?
This is simply way too broad and way too subjective. In the first example, we've already done the work of:
Determining what problem we're actually solving, then solving for it in our question
Narrowly scoping the answers that we receive. If an answer doesn't say 'No, there's no tool for that' or 'This tool catches exactly what you want' then it's wrong, and not just an unpopular opinion. Answers will ideally name a tool and show me how to use it.
Narrowed the breadth of the search by specifically saying what didn't work
You could conceivably get your question to the point where it would be a decent fit for a Q&A format. What you have is a rather good discussion, the fruits of which would likely be excellent building blocks of a great question.
Just keep refining your idea until it looks like something that could have a single correct answer, or at least lend to what sorts of answers would not be correct. Ideally, you're exploring an actual implementation at this time, and can name a very specific platform.
Q:
Endless nested tree of organizations in database
I have a table named as Organizations
id | organization | parent_id
-----+---------------+-----------
51 | Organ1 | 0
71 | Organ2 | 0
83 | Organ2.1 | 71
89 | Organ1.1 | 51
104 | Organ1.1.1 | 89
...
The organizations that have parent_id = 0 are the root organizations. Now I see I am wrong. parent_id defines the organization's parent organization.
It looked like a brilliant idea to me at first; this way I could have endlessly nested child organizations in a single table.
Now I am stuck on how to determine how deep an organization is.
My current attempt is for now like the following:
<?php
$queryRootOrg = $db->prepare("SELECT * FROM Organizations WHERE parent_id = 0" );
$queryRootOrg ->execute();
$menu = "<ul id='menu'>";
if($queryRootOrg ->rowCount() > 0)
{
while($rowRoot = $queryRootOrg ->fetch(PDO::FETCH_ASSOC))
{
$menu .= "<li><a href='?org=".$rowRoot['id']."'>".$rowRoot['birim']."</a>";
$queryChildren= $db->prepare("SELECT * FROM Organizations WHERE parent_id = :p1");
$queryChildren->bindValue(":p1", $rowRoot['id'], PDO::PARAM_INT);
$queryChildren->execute();
if ($queryChildren->rowCount() > 0)
{
$menu .= "<ul>";
while($rowSub = $queryChildren->fetch(PDO::FETCH_ASSOC))
{
$menu .= "<li><a href='?org=" . $rowSub['id'] . "'>" . $rowSub['birim'] . "</a></li>";
}
$menu .= "</ul>";
}
$menu .= "</li>";
}
}
$menu .= "</ul>";
?>
This code only gives me two levels; I cannot see third-level organizations in the list.
My question is: how can I determine the depth of an organization in this situation? Or is there another approach?
A:
Please refer to the link below, as it covers the main methods for traversing tree-like structures:
http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/
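For the depth question specifically, once the rows are in memory the depth of any organization can be computed by walking parent_id until it reaches 0. A sketch of the idea in Python (the PHP version is analogous), using the sample rows from the question:

```python
# id -> parent_id pairs from the Organizations table in the question
parents = {51: 0, 71: 0, 83: 71, 89: 51, 104: 89}

def depth(org_id):
    """Depth of an organization; roots (parent_id = 0) have depth 1."""
    d = 0
    while org_id != 0:
        org_id = parents[org_id]  # climb one level toward the root
        d += 1
    return d

print(depth(51))   # 1  (Organ1 is a root)
print(depth(104))  # 3  (Organ1.1.1 sits under Organ1.1 under Organ1)
```

Note that each depth() call issues no queries; with the adjacency-list model, the alternative is one query per level, which is exactly the limitation the linked article's nested-set model addresses.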
Q:
Delete answers not visible; neither is their delete vote
Since I have > 10k rep I can vote to delete questions.
I have noticed several times on page https://workplace.stackexchange.com/tools?tab=delete&daterange=last2days that some delete votes click through to answers, not questions, and over there nothing about deletion is visible.
Currently in view:
That top one links to answer https://workplace.stackexchange.com/a/120063/8036, but I see no delete votes there (as they would appear under a question), and I think I have never seen the possibility to add a delete vote on one of these answers (yes, I'm aware of the delay period of 2 days after closure):
What am I missing here?
Does voting to delete answers require more rep?
Does that top answer currently have 1 or 2 delete votes?
What is the significance of the smaller font - does that indicate an answer?
A:
You need 10k rep to delete questions but 20k to delete answers. However, there's only one "delete votes" page where you're seeing this, and it's computed for the site, not for individual users. Unfortunately that means that, yes, you'll see some things there that you can't vote on.
The larger font size is for questions and the smaller font size is for answers. The number to the left of the link is the number of votes that have already been cast. I think the number in parentheses to the right is the number of additional votes needed, but I'm not sure. (While it usually takes three votes and you'd think you could derive the one from the other, it takes more votes to delete a highly-upvoted question.)
Q:
Adding row AngularJS
Function:
angular.module('formTerm', [])
.controller('MainController', ['$scope',
function($scope) {
$scope.rows = [{
explanation: '',
example: ''
}];
$scope.counter = 5;
$scope.addRow = function() {
$scope.rows.push({
explanation: '',
example: ''
});
$scope.counter++;
}
HTML :
<span class="add btn btn-primary" ng-click="addRow()">Add</span>
<div class="template term-row" ng-repeat="rows in rows">
<div class="col-md-4">
<textarea class="form-control" rows="2" id="explanation" placeholder="Explanation" name="explanation" ng-model="formData.rows.explanation"></textarea>
</div>
<div class="col-md-4">
<textarea class="form-control" rows="2" id="example" placeholder="Example" name="example" ng-model="formData.rows.example"></textarea>
</div>
<div class="clearfix"></div>
<br />
</div>
Someone please help. The row is added, but I want each row's inputs to have a different element name such as "explanation1@explanation2@explanation3" etc.
Please have a look at the sample
A:
You bound your input models to the variables formData.rows.explanation and formData.rows.example, so simply put them in as the values.
This should do the trick:
$scope.counter = 5;
$scope.addRow = function() {
$scope.rows.push({
explanation: $scope.formData.rows.explanation,
example: $scope.formData.rows.example
});
$scope.counter++;
}
Q:
Why two strings allocated with NSString *str1 = [[NSString alloc]init]; have same address?
NSString *str1 = [[NSString alloc]init];
NSString *str2 = [[NSString alloc]init];
NSLog(@"%p\n%p",str1,str2);
result
str1:0x7fff7b559d00
str2:0x7fff7b559d00
Why str1 and str2 have same memory address?
A:
NSString is immutable, so two empty strings are identical. Therefore, Cocoa can return the same object every time you create it.
The real question, however, is how this can be done when alloc returns two different addresses for the two strings.
The answer is that init is allowed to substitute the value of self for anything it wishes, and return a completely different object. For example, NSString's init can be implemented like this:
-(id)init {
static NSString *empty = nil;
if (!empty) {
        empty = [[NSString alloc] initWithUTF8String:""];
}
return empty;
}
Note: The real code in Cocoa will almost certainly be different; this is only an illustration to show how the same address could be returned for different allocations.
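The same object-substitution trick is easy to demonstrate in Python, where `__new__` plays the role that `init` plays above. This is an analogy of my own, not how Cocoa works internally:

```python
class Symbol:
    """Return one shared instance for the empty symbol and a fresh
    object otherwise, mirroring how an initializer may substitute a
    different object for the freshly allocated one."""
    _empty = None  # cached shared instance

    def __new__(cls, text=""):
        if text == "":
            if cls._empty is None:
                cls._empty = super().__new__(cls)
                cls._empty.text = ""
            return cls._empty              # same address every time
        obj = super().__new__(cls)
        obj.text = text
        return obj

a, b = Symbol(), Symbol()
print(a is b)                       # True: both "allocations" yield one object
print(Symbol("x") is Symbol("x"))   # False: non-empty instances are distinct
```

Just as with NSString, the caller cannot tell the difference, because an empty immutable value is interchangeable with any other empty one.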
Q:
PS3 sending bad data over HDMI?
Within the last two weeks or so (I think since I last did a system update), my PS3 has on occasion been causing my television to freak out and stop accepting commands via remote control and the buttons on the face of the television itself, until I pull out and plug back in the TV's power cable. What happens is that, as the PS3 turns on, I hear a couple of beeps from the TV as if it's changing inputs, and the TV briefly shows the selected input on the screen. A couple seconds later, I'll sometimes hear another beep and see the same thing on the screen, and then I'll notice that the TV no longer responds to volume changes, nor any other command (such as input changes or even the power button). The video image displays just fine, though.
The TV is an LG 47LE5500, and thinking the HDMI cable might have been going bad, I replaced it but still had the problem. I then tried switching HDMI ports, to see if the HDMI input that I had plugged the PS3 into was bad. The problem remained, however. Since I also have a Windows-based DVR plugged into a third HDMI port on the TV and have no such problems when watching that, I've concluded that it's the PS3 that's causing this.
Has anyone experienced something like this before? It seems like my PS3, upon powering on, is sending garbage data over HDMI which is causing my TV to lock up. I took a quick look at the video settings on the PS3 but didn't see anything that looked odd that might cause my issue.
Update on 7/10/14:
My DVR is now causing the same problem with my television, so it is definitely the TV that is causing the problem. As I point out below, keeping the resolution at 720p gets around the issue, though it's not ideal.
A:
Sounds like you've done all the research already.
The only other two steps I can think of at this point:
Try another PS3.
Try to connect to another TV.
The other possibility here is that your TV is going bad for whatever reason. It could be that the PS3's HDCP handshake is throwing the TV off, or some other data that it's trying to send. Trying another known-good device would confirm this though.
In either case, I'm sure a Component connection wouldn't have an issue. Otherwise it's just a matter of replacing whichever device is going bad, there's not really much you can do yourself to fix it.
Q:
Stream Poco Zip Compression to Poco HTTPServerResponse
I would like to directly compress a directory into a Poco::HTTPServerResponse stream. However, downloading the zip file produced by the following code leads to a corrupt archive. I do know that the below compression approach does work for locally created zip files as I have successfully done that much. What am I missing or is this simply not possible? (Poco v1.6.1)
std::string directory = "/tmp/data";
response.setStatusAndReason(HTTPResponse::HTTPStatus::HTTP_OK);
response.setKeepAlive(true);
response.setContentType("application/zip");
response.set("Content-Disposition","attachment; filename=\"data.zip\"");
Poco::Zip::Compress compress(response.send(),false);
compress.addRecursive(directory,
Poco::Zip::ZipCommon::CompressionMethod::CM_STORE,
Poco::Zip::ZipCommon::CompressionLevel::CL_MAXIMUM,
false, "data");
compress.close();
A:
I use the same technique successfully, with only a slight difference:
The compression method and the compression level (CM and CL).
compress.addFile( cacheFile, Poco::DateTime(), currentFile.GetName(), Poco::Zip::ZipCommon::CM_DEFLATE, Poco::Zip::ZipCommon::CL_SUPERFAST );
A zip file normally uses the DEFLATE algorithm, so when unzipping, your file explorer/archive manager probably can't handle it.
Either that, or it's pointless to use a MAXIMUM level with a STORE method (STORE does no compression by definition).
EDIT: Just tried it. Actually, it's because CM_STORE internally uses headers (probably some kind of tar). Once your files have been added to the zip stream and you close it, Poco tries to order the headers, and resets the position of the output stream to the start to write them.
Since that cannot be done on the HTTP output stream (your bytes are already sent!), it fails.
Switching to CM_DEFLATE should fix your problem.
Q:
Non-parametric alternative to simple t-test
I have five numeric variables of two populations (each of them with 60 individuals) and for each of those five variables I want to know if there is difference in the means.
I was trying to use a simple t-test for this (the t.test R function), but let me explain my concerns to see if it's possible.
One variable of one population do not pass the Shapiro normality test.
Any of the five variables passed Levene's test at 0.05, only one at 0.01 (but it is the one containing the non-normal distribution in one variable).
Even with all that, would it be a good choice to use the t-test to evaluate the means? What could be a non-parametric alternative that suits my problem?
A:
The t-test does not assume normality of the dependent variable; it assumes normality conditional on the predictor. (See this thread: Where does the misconception that Y must be normally distributed come from?). A simple way to condition on your grouping variable is to look at a histogram of the dependent variable, splitting the data on your grouping variable.
Normality tests, like the Shapiro-Wilk test, may not be that informative. Small deviations from normality may come up as significant (see this thread: Is normality testing 'essentially useless'?). However, given your small sample size, this probably is not an issue. Nonetheless, it does not really matter (practically speaking) if normality is violated, but the extent to which it is violated.
Depending on how non-normal your data might be, you probably do not have much to worry about. The general linear model (of which the t-test is a part) is more robust to violations of the normality assumption than to other assumptions (such as independence of observations). Just how robust it is has been discussed on this website before, such as in this thread: How robust is the independent samples t-test when the distributions of the samples are non-normal?. There are many papers looking at how robust this method is to violations of normality, as well (as evidenced by this quick Scholar search: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C33&q=t-test+robust+to+nonnormality&btnG=).
You might as well run a nonparametric test and see if your conclusions differ; it costs essentially nothing to do so. The most popular alternative is the Mann-Whitney test, which is done with the stats::wilcox.test function in R (http://stat.ethz.ch/R-manual/R-devel/library/stats/html/wilcox.test.html). There are a lot of good introductions to this test on the internet—I would Google around until a description of it clicks with you. I find this video, calculating the test statistic by hand, to be very intuitive: https://www.youtube.com/watch?v=BT1FKd1Qzjw. Of course, you use a computer to calculate this, but I like knowing what's going on underneath.
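For intuition, here is the rank-sum statistic computed by hand with only the Python standard library. This is a sketch of my own: it uses the normal approximation (fine at your sample sizes of 60 per group) and applies no tie correction to the variance, unlike `scipy.stats.mannwhitneyu`, which handles that properly.

```python
import math
from itertools import chain

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation.

    Midranks handle ties in the ranking, but no tie correction is
    applied to the variance (adequate for illustration).
    """
    pooled = sorted(chain(x, y))
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2   # average of ranks i+1 .. j
        i = j
    n1, n2 = len(x), len(y)
    r1 = sum(ranks[v] for v in x)            # rank sum of sample 1
    u = r1 - n1 * (n1 + 1) / 2               # U statistic
    mu = n1 * n2 / 2                         # E[U] under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))     # two-sided normal p-value
    return u, p

a = [1.1, 2.3, 2.9, 3.8]
b = [4.0, 4.2, 5.1, 6.0]
u, p = mann_whitney_u(a, b)
print(u, p)   # U = 0.0 (complete separation), p is about 0.02
```

The U statistic simply counts how often a value from one sample precedes a value from the other, which is why the test is sensitive to a shift between the two distributions.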
A:
Let's look at one variable at a time. As I understand it you have $n_1 =60$ observations from Population 1 which is distributed $\mathsf{Norm}(\mu_1, \sigma_1)$ and
$n_2 =60$ observations from Population 2 which is distributed $\mathsf{Norm}(\mu_2, \sigma_2).$
You want to test $H_0: \mu_1 = \mu_2$ against $H_a: \mu_1 \ne \mu_2.$
You could use a 2-sample t test. Unless you have prior experience with
such data indicating that $\sigma_1 = \sigma_2,$ it is considered good
practice to use the Welch (separate-variances) t test, which does not
require $\sigma_1 = \sigma_2.$
Specifically, suppose you have the following data:
sort(x1); summary(x1); sd(x1)
[1] 78.0 78.5 80.1 80.9 87.2 88.8 89.0 90.1 90.7 92.6 92.9 93.7 94.5 97.3 98.3
[16] 98.3 98.6 100.5 100.9 101.1 101.8 101.9 103.2 103.4 104.0 104.1 104.6 104.9 105.1 105.4
[31] 105.8 107.2 107.6 108.1 108.1 108.2 108.7 109.6 109.6 112.0 112.2 112.7 114.0 114.1 114.7
[46] 114.8 116.6 117.0 118.0 118.4 118.6 119.2 123.1 124.1 124.7 125.5 127.4 127.7 136.4 138.2
Min. 1st Qu. Median Mean 3rd Qu. Max.
78.0 98.3 105.6 106.2 114.7 138.2
[1] 13.55809
.
sort(x2); summary(x2); sd(x2)
[1] 65.3 70.1 76.1 76.8 80.9 81.3 82.4 82.5 84.9 85.0 85.6 86.6 87.7 88.6 89.4
[16] 89.7 90.3 91.9 92.2 92.5 93.0 93.0 93.5 94.0 94.4 96.1 96.4 96.9 97.3 97.6
[31] 98.5 98.9 99.7 99.9 100.2 101.3 101.5 101.7 103.3 103.4 103.5 103.6 104.5 104.7 106.0
[46] 106.2 107.2 107.7 109.2 109.3 110.5 110.7 110.9 111.1 111.3 113.8 114.9 115.2 118.1 118.9
Min. 1st Qu. Median Mean 3rd Qu. Max.
65.30 89.62 98.05 97.30 106.05 118.90
[1] 11.89914
boxplot(x1, x2, notch=T, col="skyblue2", pch=19)
There are no outliers in either sample and samples seem roughly symmetrical.
The notches in the sides of the boxplots are approximate nonparametric
confidence intervals, here indicating that the population medians differ.
The Welch 2-sample t test shows a significant difference. [A pooled t test would have had df = 118; because of a slight difference in sample standard deviations, the Welch test has only about df = 116.]
t.test(x1, x2)
Welch Two Sample t-test
data: x1 and x2
t = 3.8288, df = 116.05, p-value = 0.0002092
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
4.304113 13.529220
sample estimates:
mean of x mean of y
106.2117 97.2950
Now for your specific concerns:
(1) For sample sizes of 60, you should not worry about a slight departure from
normality. If you feel nonnormality may be a problem you can look at all 120
'residuals' in this model together in one normality test. (The residuals
are $X_{1i} - \bar X_1,\ X_{2i} - \bar X_2$ for $i=1, 2, \dots, 60$.)
(2) Any difference in variances is taken care of by doing the Welch 2-sample t test.
(3) The nonparametric two-sample Wilcoxon (rank-sum) test could be used if you really feel data are far from normality. This is a test to see if one population is shifted from the other. (Some authors frame this as testing for a difference
in medians, but a paper in this month's The American Statistician objects
to that interpretation and takes a broader view of the test: Dixon et al. (2018), Vol. 72, Nr. 3, "The Wilcoxon-Mann-Whitney procedure fails as a test of medians.") For my example, this test finds a significant difference between the two
populations, without assuming either population is normal.
wilcox.test(x1, x2)
Wilcoxon rank sum test with continuity correction
data: x1 and x2
W = 2465, p-value = 0.0004871
alternative hypothesis: true location shift is not equal to 0
(4) Addendum: A Comment and a linked Q&A mention permutation tests, so we include one possible permutation test. [For an elementary discussion of permutation tests, perhaps see Eudey et al. (2010), especially Sect. 3.]
Below is R code for
a permutation test using the pooled t statistic as 'metric'. If the two groups
are the same it should not matter if we randomly scramble the 120 observations
into two groups of 60. We recognize the pooled t statistic as a reasonable
way to measure the distance between two samples, but do not assume that statistic has Student's t distribution.
The code assumes data x1 and x2 are present, does the scrambling with the function sample(gp), and (conveniently, but somewhat inefficiently) uses t.test()$stat to get the t statistics of the permuted samples. The P-value 0.0003 indicates rejection
of the null hypothesis. (Results may vary slightly from one run to the next.)
all = c(x1, x2); gp = rep(1:2, each=60)
t.obs = t.test(all ~ gp, var.eq=T)$stat
t.prm = replicate( 10^5, t.test(all ~ sample(gp), var.eq=T)$stat )
mean(abs(t.prm) > abs(t.obs))
[1] 0.00026
The figure below shows a histogram of the simulated permutation distribution. [It happens
to match the density curve (black) of Student's t distribution with 118 degrees of freedom rather well, because data were simulated as normal with nearly equal SDs.] The
P-value is the proportion of permuted t statistics outside the vertical dotted lines.
Note: My data were generated in R as follows:
set.seed(818)
x1 = round(rnorm(60, 107, 15), 1); x2 = round(rnorm(60, 100, 14), 1)
A:
One thing to keep in mind- outside of some contexts in physics, no process in nature will generate purely normally distributed data (or data with any particular nicely behaved distribution). What does this mean in practice? It means that if you possessed an omnipotent test for normality, the test would reject 100% of the time, because your data will essentially always only be, at best, approximately normal. This is why learning to ascertain the extent of approximate normality and its possible effects on inference is so important for researchers, rather than relying on tests.
Q:
Alsa mixer and GtkVolumeButton
I wrote code to get and set the ALSA mixer volume:
snd_mixer_elem_t *elem = NULL;
long alsa_min, alsa_max, alsa_vol;
int alsa_get_volume( void )
{
long val;
assert (elem);
if (snd_mixer_selem_is_playback_mono(elem)) {
snd_mixer_selem_get_playback_volume(elem, SND_MIXER_SCHN_MONO, &val);
return val;
} else {
int c, n = 0;
long sum = 0;
for (c = 0; c <= SND_MIXER_SCHN_LAST; c++) {
if (snd_mixer_selem_has_playback_channel(elem, c)) {
snd_mixer_selem_get_playback_volume(elem, SND_MIXER_SCHN_FRONT_LEFT, &val);
sum += val;
n++;
}
}
if (! n) {
return 0;
}
val = sum / n;
sum = (long)((double)(alsa_vol * (alsa_max - alsa_min)) / 100. + 0.5);
if (sum != val) {
alsa_vol = (long)(((val * 100.) / (alsa_max - alsa_min)) + 0.5);
}
return alsa_vol;
}
}
int alsa_set_volume( int percentdiff )
{
long volume;
alsa_get_volume();
alsa_vol += percentdiff;
if( alsa_vol > 100 ) alsa_vol = 100;
if( alsa_vol < 0 ) alsa_vol = 0;
volume = (long)((alsa_vol * (alsa_max - alsa_min) / 100.) + 0.5);
snd_mixer_selem_set_playback_volume_all(elem, volume + alsa_min);
snd_mixer_selem_set_playback_switch_all(elem, 1);
muted = 0;
mutecount = 0;
return alsa_vol;
}
I want the ALSA mixer volume to be changed by a GtkVolumeButton. I tried this, but when the value from the GTK button is changed up or down, the ALSA mixer always jumps to 100%:
int gtk_volume_button_get_value (GtkWidget *button)
{
return (int) (gtk_scale_button_get_value(GTK_SCALE_BUTTON(button)) * 100);
}
void gtk_volume_button_set_value (GtkWidget *button, int value)
{
gtk_scale_button_set_value(GTK_SCALE_BUTTON(button), (gdouble) value / 100);
}
void volume_value_changed_cb(GtkVolumeButton *button, gpointer user_data)
{
int vol = (int)(gtk_volume_button_get_value(volume_button) + 0.5);
alsa_set_volume(vol);
}
Please help me to write a corect code for GtkVolumeButton.
A:
Your problem has nothing to do with GtkVolume. In fact, it comes from you using two different approaches to handle volume. alsa_get_volume gives you an absolute sound level, which is an integer. One would expect alsa_set_volume to accept the same kind of value range. And that's how you use it in volume_value_changed_cb: « get the volume level of the volume control, between 0 and 100, and set it as current volume. ».
However, the implementation is completely different. It's implemented as if you wanted to tell it « add or substract x% of the current sound volume ». You get the current volume level and add that percentage, thus you're computing a relative sound level, not an absolute one. So, if your initial sound level is 50%, and you want to lower it to 45%, one would expect you'd call alsa_set_volume (45) to do it. But currently, calling alsa_set_volume (45) will set alsa_vol to 50 + 45 = 95%.
So you need to use absolute volume, not relative.
/* newvol: Desired volume level in the [0;100] range */
int alsa_set_volume (int newvol)
{
long volume;
    alsa_vol = CLAMP(newvol, 0, 100);
volume = (long)((alsa_vol * (alsa_max - alsa_min) / 100.) + alsa_min);
snd_mixer_selem_set_playback_volume_all(elem, volume);
snd_mixer_selem_set_playback_switch_all(elem, 1);
muted = 0;
mutecount = 0;
return alsa_vol;
}
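The absolute mapping described above is just clamp-then-rescale. A language-neutral sketch in Python (the function name and the 0..255 example range are mine):

```python
def percent_to_raw(percent, raw_min, raw_max):
    """Map an absolute 0..100 volume to the mixer's raw range,
    clamping out-of-range requests instead of accumulating them."""
    percent = max(0, min(100, percent))        # CLAMP(newvol, 0, 100)
    return raw_min + round(percent * (raw_max - raw_min) / 100)

# e.g. an ALSA element reporting a 0..255 raw playback range:
print(percent_to_raw(45, 0, 255))    # 115
print(percent_to_raw(150, 0, 255))   # clamped: 255
```

Because the function is a pure map from an absolute percentage, repeated calls with the same slider value are idempotent, which is exactly what the relative version got wrong.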
Q:
Recommended Language(s) for Performing Arbitrary Precision Calculations on a PC
I would be grateful if someone could point me in the direction of a programming language (and also, where I may find good tutorials on it to teach myself) that can perform arbitrary bit-precision computations on a PC. I wish to test some computational theory that I've been working on; and for instance, such involves computing logarithms of very large integers n (some comprised of 75 digits or more) and then using the calculation as part of another computation.
I am leaning towards Haskell; but I have considered Fortran.
Any substantive pros and cons regarding either of these two languages, or even a better choice, would be appreciated.
Finally, I know that arbitrary precision computation is limited by the amount of available storage on one's computer---so I would appreciate it if someone would also tell me what type of PC memory would be needed to compute logarithms accurate to say 1/4th of the bit length of a 75-digit integer?
Thank you.
A:
I have used PARI/GP for decades. It is very easy to use and very capable of arbitrary precision computations. It is GPL software which means it is free to download from source as well as a standalone Windows executable. More information is available on their website. For your intended use, it has several bit oriented operations on integers, including for example:
bitxor(x,y): bitwise "exclusive or" of two integers x and y. Negative numbers behave as if modulo big power of 2.
bittest(x,n): gives bit number n (coefficient of 2^n) of the integer x. Negative numbers behave as if modulo big power of 2.
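If you'd rather stay in a general-purpose language, Python's standard `decimal` module can also do this kind of computation. A sketch (the working precision of 90 digits is my choice):

```python
from decimal import Decimal, getcontext

# Work to 90 significant digits: comfortably more than a quarter of
# the bit length (~249 bits, i.e. ~19 decimal digits) of a 75-digit
# integer, as asked in the question.
getcontext().prec = 90

n = Decimal(10) ** 75 - 1        # a 75-digit integer (all nines)
log_n = n.ln()                   # natural log at full context precision

# Sanity check: ln(10^75 - 1) is just below 75 * ln(10)
print(log_n)
print(Decimal(75) * Decimal(10).ln())
```

As for memory: at these sizes it is a non-issue. A few hundred decimal digits occupy well under a kilobyte, so a 75-digit logarithm is far below any PC memory limit.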
Q:
Scala method where type of second parameter equals part of generic type from first parameter
I want to create a specific generic method in Scala. It takes two parameters. The first is of the type of a generic Java Interface (it's from the JPA criteria query). It currently looks like this:
def genericFind(attribute:SingularAttribute[Person, _], value:Object) {
...
}
// The Java Interface which is the type of the first parameter in my find-method:
public interface SingularAttribute<X, T> extends Attribute<X, T>, Bindable<T>
Now i want to achieve the following:
value is currently of type java.lang.Object. But I want to make it more specific. Value has to be of the same type as the placeholder "_" from the first parameter (and so represents the "T" in the Java interface).
Is that somehow possible, and how?
BTW Sorry for the stupid question title (any suggestions?)
EDIT:
added an addtional example which could make the problem more clear:
// A practical example how the Scala method could be called
// Java class:
public class Person_ {
public static volatile SingularAttribute<Person, Long> id;
}
// Calling the method from Scala:
genericFind(Person_.id, 42L)
A:
Of the top of my head (I'm still starting with Scala):
def genericFind[T](attribute:SingularAttribute[Person, T], value:T) {
...
}
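For comparison, the same constraint (the second parameter's type equals the second type parameter of the first) transliterated into Python's `typing` module; class names mirror the question, and a checker like mypy enforces it while the runtime does not:

```python
from typing import Generic, TypeVar

X = TypeVar("X")
T = TypeVar("T")

class SingularAttribute(Generic[X, T]):
    """Stand-in for the JPA interface: X is the owning entity type,
    T is the attribute's value type."""

class Person:
    pass

def generic_find(attribute: SingularAttribute[Person, T], value: T) -> None:
    """A type checker now requires `value` to match the attribute's T."""
    ...

id_attr: SingularAttribute[Person, int] = SingularAttribute()
generic_find(id_attr, 42)        # OK: T is inferred as int
# generic_find(id_attr, "42")    # a checker flags this: str is not int
```

In both languages the idea is identical: introduce a type variable on the method itself so the compiler can unify it across both parameters.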
Q:
Lorentz contraction in sci-fi?
In particular, I'd like to remember the title of a SF novel that I read a thousand years ago. Unfortunately I have forgotten almost every detail, except one that's stuck in my mind: after encountering many adventures, an interstellar spaceship returns to Earth. However, it carries its own reference frame with it, which differs from the terran frame. Scientists manage to pry open the ship's door and inside they find everything weirdly distorted (this is where the Lorentz contraction comes in).
Besides the title of that novel, I'd be interested to hear of any other SF novels or short stories where Lorentz contraction is used as a plot device!
A:
Found it!
It's Rogue Ship by A.E. van Vogt. I had always suspected that it was one of his, but could not find it among Wikipedia's bibliography, nor among the individual plot summaries. So I decided to systematically follow the "external links" for each novel that seemed like a candidate. Eventually via a series of links I chanced upon the full text of Rogue Ship on a Russian web server. I won't link to it because I'm not sure it isn't a copyright violation!
Awesome hard sci-fi. The spaceship somehow manages to exceed the speed of light, although from Earth's perspective it is moving at a speed of only a thousand miles per hour relative to it. It returns to Earth on a collision course but instead of being smashed to smithereens on impact, it burrows into ground, barrels straight through and re-emerges unscathed hundreds of miles down course!
The ship's owner, who had sent it on its way six years earlier, forces his way inside the spaceship where he finds time slowed down to a near-standstill and everything, people included, severely compressed in the direction of travel. I don't know how scientifically accurate this is (probably not at all) but it's seriously mind-bending stuff. And Lorentz-Fitzgerald contraction does get mentioned explicitly several times.
Q:
Why is $R^2$ the proportion of total variance of the data explained by the model?
I have that $R^{2} = 1 - \frac{\text{RSS}}{\sum_{i=1}^{n}(Y_{i}-\bar{Y})^{2}}$.
Also, $\text{RSS}= {\sum_{i=1}^{n}(Y_{i}-\hat{Y_{i}})^{2}}$ for the simplest linear model with only the intercept term.
I also know that $\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-\bar{Y})^{2}$ is the total variance for the intercept only model and that $\frac{\text{RSS}}{\frac{1}{n}{\sum_{i=1}^{n}(Y_{i}-\bar{Y})^{2}}}$ is approximately $\frac{\text{var. of model}}{\text{variance}}$.
However I still don't get why $R^{2}$ is the proportion of total variance of the data explained by the model.
A:
There is an error in your equations, $RSS = \sum(Y_i - \hat{Y}_i)^2$
Maybe it would help not looking at so many equations to understand.
RSS is the sum of the residual variance, basically the sum of all the variance that the model can't explain.
Therefore
$\frac{RSS}{\sum{(Y_i - \bar{Y})^2}}$ is $\frac{unexplained \ variance}{Sum \ of \ all \ variance}$
so
$1- \frac{unexplained \ variance}{Sum \ of \ all \ variance} = \frac{Sum \ of \ all \ variance - unexplained \ variance}{Sum \ of \ all \ variance} = \frac{explained \ variance}{Sum \ of \ all \ variance} $
Does this help?
A:
We have $TSS = \sum_i (Y_i - \bar{Y})^2,\ RSS = \sum_i(Y_i - \hat{Y}_i)^2,\ ESS = \sum_i(\hat{Y}_i - \bar{Y})^2$
$TSS$ - total variance, $RSS$ - residual variance, $ESS$ - regression variance
From ANOVA identity we know that
$$TSS = RSS + ESS$$
So we have $R^2 = 1 - \frac{RSS}{TSS} = \frac{ESS}{TSS}$. From last equation you can clearly see that $R^2$ states how much "variance" is explained by the regression
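A quick numeric check of the identity, using toy data of my own:

```python
# Verify R^2 = 1 - RSS/TSS = ESS/TSS on a tiny simple-regression example.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n

# least-squares slope and intercept
num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
den = sum((xi - xbar) ** 2 for xi in x)
beta = num / den
alpha = ybar - beta * xbar
yhat = [alpha + beta * xi for xi in x]

tss = sum((yi - ybar) ** 2 for yi in y)               # total
rss = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))  # residual
ess = sum((yh - ybar) ** 2 for yh in yhat)            # "explained"

assert abs(tss - (rss + ess)) < 1e-9   # ANOVA identity (holds with an intercept)
r2 = 1 - rss / tss
print(round(r2, 4), round(ess / tss, 4))   # → 0.9976 0.9976
```

Note that the identity $TSS = RSS + ESS$ relies on the model containing an intercept (so the residuals sum to zero); without one, the two expressions for $R^2$ can disagree.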
Q:
The approximation of first-ordered modified Bessel function of the second kind
After analysing the outage probability of a single relay selection system, I got to the following form:
$P = 1 + \sum_{k = 1}^{K} \binom{K}{k} (-1)^k \, 2\sqrt{\frac{k\gamma_o c_p}{\lambda_{SR}\lambda_{RD}}}\; e^{-\frac{k\gamma_o}{\lambda_{SR}}} K_1\!\left(2\sqrt{\frac{k\gamma_o c_p}{\lambda_{SR}\lambda_{RD}}}\right)$. When $\lambda_{SR}$ and $\lambda_{RD}$ go to infinity we can use the approximation $K_1(x) \sim \frac{1}{x}$ (where $K_1(x)$ is the first-order modified Bessel function of the second kind) to get an asymptotic of the above formula:
$P = 1 + \sum_{k = 1}^{K} \binom{K}{k} (-1)^k e^{-\frac{k\gamma_o}{\lambda_{SR}}}$. However, the original and the approximate forms are not close when $\lambda_{SR}$ and $\lambda_{RD}$ go to infinity. Here is the test figure:
Can somebody give me a hint as to why these forms behave like that?
Thank you very much.
Best Regards, Binh.
A:
The approximate expression is just a binomial expansion of $(1-e^{-\gamma_0/\lambda_{SR}})^K$, so it decays as $\lambda_{SR}^{-K}$. The exact expression contains other terms in the expansion of the Bessel function
$$K_1(x)=\frac{1}{x}+\frac{x}{4}\left(2\ln\frac{x}{2}+2\gamma-1\right)+O(x^3\ln x)$$
which do not cancel as nicely from the alternating signs.
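The small-$x$ behaviour is easy to check numerically from the integral representation $K_1(x)=\int_0^\infty e^{-x\cosh t}\cosh t\,dt$. A Python sketch (the step count and cutoff are my choices):

```python
import math

def k1(x, t_max=20.0, steps=100_000):
    """K_1(x) = integral_0^inf exp(-x cosh t) cosh t dt, evaluated
    with the trapezoidal rule; the integrand at t_max is already
    negligible for the small x used below."""
    h = t_max / steps
    total = 0.5 * math.exp(-x)          # t = 0 term: cosh 0 = 1
    for i in range(1, steps):
        c = math.cosh(i * h)
        total += math.exp(-x * c) * c
    return total * h

x = 0.01
print(k1(x), 1 / x)   # exact value is about 99.97 vs the leading term 100
```

The gap between the two printed numbers is exactly the $\frac{x}{4}(2\ln\frac{x}{2}+2\gamma-1)$ correction term above, which is what keeps the exact and asymptotic outage curves from overlapping at finite $\lambda$.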
Q:
Encrypting AND decrypting a string into a shorter hash (not for security)
I've been playing around with a little HTML tester, that has a cool function to share. This redirects you to the shared URL. The problem is, the URL is WAY too long! Is there a way I can shorten these variable values?
I can md5() a string, and that will create a much shorter string, but I need a way to decrypt it. This is not for security purposes, it's purely for aesthetics.
Any help appreciated, thanks in advance!
A:
The most obvious way: make a table in a database. Put in it one field for the original string, and one for the md5 hash. When you receive an md5 hash, look up the original string from the table. The problem is: what if two strings match the md5 hash?
So, it would be better for your purpose (just making a shorter reference to long URLs) to generate a random string (of a certain length) per original input and associate it with the original by inserting it into a table where the random string has a unique constraint.
create table reftable (original varchar(500), shortened varchar(20) unique);
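A minimal sketch of that design in Python with sqlite3 (the table layout follows the answer; the code length and alphabet are my choices):

```python
import sqlite3
import secrets
import string

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reftable (original TEXT, shortened TEXT UNIQUE)")

ALPHABET = string.ascii_letters + string.digits

def shorten(original, length=8):
    """Store `original` under a random short code; retry on the
    (unlikely) collision, which the UNIQUE constraint detects."""
    while True:
        code = "".join(secrets.choice(ALPHABET) for _ in range(length))
        try:
            conn.execute("INSERT INTO reftable VALUES (?, ?)",
                         (original, code))
            return code
        except sqlite3.IntegrityError:
            continue  # collision: draw a new code

def expand(code):
    row = conn.execute("SELECT original FROM reftable WHERE shortened = ?",
                       (code,)).fetchone()
    return row[0] if row else None

key = shorten("https://example.com/very?long=query#state")
assert expand(key) == "https://example.com/very?long=query#state"
```

Unlike an md5 hash, the random code carries no information about the original, so there is nothing to "decrypt": the table itself is the mapping, and 62^8 possible codes make accidental collisions rare.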
Q:
Well-Formed Tags Inside html Tags
I am building an HTML Gui builder and this involves round-tripping HTML pages from the browser to the server and back again.
On the back-end I have an xml parser which expects well-formed tags.
I kick off by writting well-formed HTML - for example:
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-15" />
<link rel="stylesheet" type="text/css" href="/some/path/to/some.css" />
</head>
The browser decides it knows best and turns this into:
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-15">
<link rel="stylesheet" type="text/css" href="/some/path/to/some.css">
</head>
The second plan was to force in separate closing tags:
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-15"></meta>
<link rel="stylesheet" type="text/css" href="/some/path/to/some.css"></link>
</head>
That doesn't work either.
The initial plan was just to snip out copies of parts of the document and cycle them back to the server with the new page. It seems my only option is to manually go through all the tags (there are more than in this example) and fix them all up before I round-trip them.
Am I missing something? How do I get the browser to make the HTML be well behaved?
A:
This is not well-formed HTML; it's XML or XHTML:
<link rel="stylesheet" type="text/css" href="/some/path/to/some.css" />
The confusion is explained here: http://www.cs.tut.fi/~jkorpela/html/empty.html
innerHTML is exactly that - HTML. You may be able to produce XML from the DOM - try here as a start: http://www.devarticles.com/c/a/JavaScript/More-on-JavaScript-and-XML/
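If you do need to round-trip, one practical option is to re-serialize the browser's HTML into well-formed XML yourself. A stdlib Python sketch (the void-element list is abbreviated, and attribute values are not entity-escaped here):

```python
from html.parser import HTMLParser

# HTML void elements (abbreviated list) that must become <tag ... />
VOID = {"meta", "link", "br", "hr", "img", "input", "base", "col", "area"}

class XmlSerializer(HTMLParser):
    """Parse browser-emitted HTML and re-emit it as well-formed XML:
    void elements are self-closed, everything else is mirrored."""

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        attr_s = "".join(f' {k}="{v or ""}"' for k, v in attrs)
        self.out.append(f"<{tag}{attr_s} />" if tag in VOID
                        else f"<{tag}{attr_s}>")

    def handle_endtag(self, tag):
        if tag not in VOID:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

s = XmlSerializer()
s.feed('<head><meta charset="utf-8"><link rel="stylesheet" href="a.css"></head>')
print("".join(s.out))
# <head><meta charset="utf-8" /><link rel="stylesheet" href="a.css" /></head>
```

The point is that the fix-up belongs on the receiving side: let the browser emit HTML, and normalize it into XML before handing it to the XML parser.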
Q:
Getting koGrid to work with Breeze and Durandal HotTowel template
I have a Durandal widget (hot towel template) containing a koGrid which I'm trying to bind to my view model.
I'm pretty new to these technologies, including async deferreds and promises, so please forgive my ignorance of such matters!
The view model gets its data from a datacontext class which simply returns the results of a Breeze entity manager query (which returns a Q promise):
var manager = new breeze.EntityManager({ dataService: dataService });
return manager.executeQuery(query)
.then(function (data) {
return data.results;
})
.fail(queryFailed);
In the constructor of my widget, I have:
var vm = function(element, settings) {
var self = this;
this.settings = settings;
this.myData = ko.observableArray([]);
this.viewAttached = viewAttached;
queryDataContext.executeQuery('Customer', 'good').then(function(ents) {
var Item = function(id, name, maincontacttelephone) {
this.ID = id;
this.Name = name;
this.MainContactTelephone = maincontacttelephone;
};
for (var i = 0; i < ents.length; i++) {
self.myData.push(new Item(ents[i].ID(), ents[i].Name(), ents[i].MainContactTelephone()));
}
self.gridOptions = { data: self.myData };
});
};
return vm;
function viewAttached(view) {
$(window).trigger('resize');
return true;
}
The data comes back in the "ents" variable, gets pushed into the observableArray myData, and that should work...however an error occurs in the koGrid file:
/***********************************************
* FILE: ..\src\bindingHandlers\ko-grid.js
***********************************************/
ko.bindingHandlers['koGrid'] = (function () {
return {
'init': function (element, valueAccessor, allBindingsAccessor, viewModel, bindingContext) {
var options = valueAccessor();
valueAccessor() is undefined, which prevents the grid from working.
Now if I change my code which executes the remote query to:
$.when(queryDataContext.executeQuery('Customer', 'good')).then(function(ents) {
(using a jQuery promises when), it works for some reason. However the ents variable is then of type 'makePromise' which I'm not sure how to resolve.
From my understanding it's a Q promise which Breeze returns anyway, and if I use
Q.when(queryDataContext.executeQuery('Customer', 'good')).then(function(ents) {
then ents contains the data, but I'm back to the koGrid undefined problem again.
Any help much appreciated!
A:
Edit: Whoops, just saw you weren't talking about Kendo Grid, my bad... But you could try this anyway; that's exactly the error I was having when trying to make KendoGrid work, so you never know! Try it, it costs nothing :)
=====
Which version of jQuery are you using? Kendo UI controls are only officially compatible with jQuery 1.7.2. So if you are using the latest branch of jQuery, 1.9, it no longer works, because it relies on some functions that have been deprecated in jQuery 1.9.x.
I had the same problem few weeks ago while using the Kendo UI Grid control, but there's a solution.
You must include the JQuery.Migrate plugin beside the JQuery standard one. JQuery.Migrate restore the deprecated functions to allow you to use stuff that won't work with the latest version of JQuery.
You can get the latest version of JQuery.Migrate here : http://blog.jquery.com/2013/02/16/jquery-migrate-1-1-1-released/
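For example (a sketch — the file names and paths below are placeholders; adjust them to the versions you actually downloaded), the Migrate plugin is included immediately after jQuery itself:

```html
<!-- jQuery first, then the Migrate plugin that restores the removed APIs -->
<script src="scripts/jquery-1.9.1.js"></script>
<script src="scripts/jquery-migrate-1.1.1.js"></script>
```

The order matters: Migrate patches the already-loaded jQuery object, so it must come after jQuery and before any plugin (such as koGrid or Kendo) that depends on the deprecated APIs.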
Hope it solves your problem :)
| {
"pile_set_name": "StackExchange"
} |
Q:
Non-blocking TCP server using OTP principles
I'm starting to learn Erlang, so I'm trying to write the "hello, world!" of concurrent programming, an IRC bot.
I've already written one using Erlang without any OTP niceties (supervisor, application, etc. behaviours). I'm looking to rewrite it using OTP principles but unfortunately I can't figure out the "right" way to do socket programming with OTP.
It seems the only reasonable way is to create another process manually and link it to the supervisor, but surely someone, somewhere, has done this before.
A:
I think this is what you're looking for:
http://www.trapexit.org/Building_a_Non-blocking_TCP_server_using_OTP_principles
It's a full tutorial about how to build a non-blocking TCP server using OTP (of course, is fully documented and explained).
A:
Great that you've begun learning Erlang/OTP!
The following resources are very useful:
The OTP Design Principles. Read this carefully, if you haven't already. Note the common misconception that OTP is object oriented (OO): it's not! Forget everything about "inheritance". It's not possible to merely build complete systems by "extending" standard modules.
The Messaging System:
These functions must be used to implement the use of system messages for a process
The Special Processes. A special process is an OTP-compliant process that can integrate well with supervisors.
This is some code I have in my project. I am an Erlang learner too, so don't trust the code too much, please.
-module(gen_tcpserver).
%% Public API
-export([start_link/2]).
%% start_link reference
-export([init/2]).
%% System internal API
-export([system_continue/3, system_terminate/4, system_code_change/4]).
-define(ACCEPT_TIMEOUT, 250).
-record(server_state, {socket=undefined,
args,
func}).
%% ListenArgs are given to gen_tcp:listen
%% AcceptFun(Socket) -> ok, blocks the TCP accept loop
start_link(ListenArgs, AcceptFun) ->
State = #server_state{args=ListenArgs,func=AcceptFun},
proc_lib:start_link(?MODULE, init, [self(), State]).
init(Parent, State) ->
{Port, Options} = State#server_state.args,
{ok, ListenSocket} = gen_tcp:listen(Port, Options),
NewState = State#server_state{socket=ListenSocket},
Debug = sys:debug_options([]),
proc_lib:init_ack(Parent, {ok, self()}),
loop(Parent, Debug, NewState).
loop(Parent, Debug, State) ->
case gen_tcp:accept(State#server_state.socket, ?ACCEPT_TIMEOUT) of
{ok, Socket} when Debug =:= [] -> ok = (State#server_state.func)(Socket);
{ok, Socket} ->
sys:handle_debug(Debug, fun print_event/3, undefined, {accepted, Socket}),
ok = (State#server_state.func)(Socket);
{error, timeout} -> ok;
{error, closed} when Debug =:= [] -> exit(normal);
{error, closed} ->
sys:handle_debug(Debug, fun print_event/3, undefined, {closed}),
exit(normal)
end,
flush(Parent, Debug, State).
flush(Parent, Debug, State) ->
receive
{system, From, Msg} ->
sys:handle_system_msg(Msg, From, Parent, ?MODULE, Debug, State)
after 0 ->
loop(Parent, Debug, State)
end.
print_event(Device, Event, _Extra) ->
io:format(Device, "*DBG* TCP event = ~p~n", [Event]).
system_continue(Parent, Debug, State) ->
loop(Parent, Debug, State).
system_terminate(Reason, _Parent, _Debug, State) ->
gen_tcp:close(State#server_state.socket),
exit(Reason).
system_code_change(State, _Module, _OldVsn, _Extra) ->
{ok, State}.
Note that this is a compliant OTP process (it can be managed by a supervisor). You should use AcceptFun to spawn (faster) a new worker child. I have not tested it thoroughly yet, though.
1> {ok, A} = gen_tcpserver:start_link({8080,[]},fun(Socket)->gen_tcp:close(Socket) end).
{ok,<0.93.0>}
2> sys:trace(A, true).
ok
*DBG* TCP event = {accepted,#Port<0.2102>}
*DBG* TCP event = {accepted,#Port<0.2103>}
3>
(After 2>'s ok I pointed my Google Chrome browser to port 8080: a great test for TCP!)
Q:
Java Servlets - how to synchronize ProcessBuilder?
I guess this isn't necessarily a servlet-related question - but I'm using the code in a servlet setting, to give a little background. I'm using the Windows cscript tool via ProcessBuilder in order to convert MS Office documents (e.g. ppt, doc, etc.) into PDF. I have a VB script that does this.
One problem I've noticed in the past is that some of the apps (PowerPoint) don't run in a windowless environment; that is, the VB script briefly pops open a PowerPoint window when it's run. PowerPoint is a single-instance application, so issues arise when you try to run this script concurrently.
I've considered Java synchronized blocks - however, my understanding is that they are more geared towards shared resources such as files and IO resources - and I don't think they would properly control access to a particular script that ProcessBuilder was executing.
Code Sample:
ProcessBuilder pb = new ProcessBuilder("cscript", "C:\\Users\\Foo User\\Documents\\office2pdf.vbs", "C:\\Users\\Foo User\\Documents\\SomePPTFile.pptx");
Process pr = pb.start();
int i = pr.waitFor() ;
I've used OpenOffice in the past, which has a good Java API - however, I would prefer to stick with MS Office, as it does a better job of converting to PDF. Any suggestions would be much appreciated.
A:
You can have a private static object in your servlet:
private static final Object processLock = new Object();
Then you can lock access to the entire process builder:
synchronized (processLock)
{
// only one servlet thread at a time in here...
ProcessBuilder pb = new ProcessBuilder("cscript", "C:\\Users\\Foo User\\Documents\\office2pdf.vbs", "C:\\Users\\Foo User\\Documents\\SomePPTFile.pptx");
Process pr = pb.start();
int i = pr.waitFor() ;
}
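If blocking every request thread indefinitely on one monitor is a concern, a variation on the synchronized block above is a fair `Semaphore` with a timeout, so a hung PowerPoint instance makes requests fail fast instead of piling up servlet threads. This is only a sketch — the class name, the 30-second timeout, and the path-rewriting stand-in for the actual `ProcessBuilder` call are all hypothetical:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Sketch: serialise access to the single-instance converter, but give up
// after a timeout instead of blocking servlet threads forever.
public class ConversionGate {
    // One permit = one conversion at a time; 'true' = first-come-first-served.
    private static final Semaphore gate = new Semaphore(1, true);

    public static String convert(String pptPath) {
        try {
            if (!gate.tryAcquire(30, TimeUnit.SECONDS)) {
                throw new IllegalStateException("Converter busy, try again later");
            }
            try {
                // The ProcessBuilder/cscript call from the answer would go here.
                return pptPath.replaceAll("\\.pptx?$", ".pdf");
            } finally {
                gate.release();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("Interrupted while waiting", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(convert("SomePPTFile.pptx"));
    }
}
```

The `finally` block guarantees the permit is released even if the conversion throws, which a plain synchronized block also gives you — the timeout is the only real addition here.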
Q:
Can't get my query to run any faster on MySQL database with 2M entries
I have this payments table, with about 2M entries
CREATE TABLE IF NOT EXISTS `payments` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`user_id` int(11) unsigned NOT NULL,
`date` datetime NOT NULL,
`valid_until` datetime NOT NULL,
PRIMARY KEY (`id`),
KEY `date_id` (`date`,`id`),
KEY `user_id` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=2113820 ;
and this users table from ion_auth plugin/library for CodeIgniter, with about 320k entries
CREATE TABLE IF NOT EXISTS `users` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`ip_address` varbinary(16) NOT NULL,
`username` varchar(100) NOT NULL,
`password` varchar(80) NOT NULL,
`salt` varchar(40) DEFAULT NULL,
`email` varchar(100) NOT NULL,
`activation_code` varchar(40) DEFAULT NULL,
`forgotten_password_code` varchar(40) DEFAULT NULL,
`forgotten_password_time` int(11) unsigned DEFAULT NULL,
`remember_code` varchar(40) DEFAULT NULL,
`created_on` int(11) unsigned NOT NULL,
`last_login` int(11) unsigned DEFAULT NULL,
`active` tinyint(1) unsigned DEFAULT NULL,
`first_name` varchar(50) DEFAULT NULL,
`last_name` varchar(50) DEFAULT NULL,
`company` varchar(100) DEFAULT NULL,
`phone` varchar(20) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `name` (`first_name`,`last_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=322435 ;
I'm trying to get both the user information and his last payment, ordering (ASC or DESC) by ID, first and last name, payment date, or payment expiration date, in order to build a table showing users with expired payments and users with valid ones.
I've managed to get the data correctly, but most of the time my queries take 1+ second for a single user, and 40+ seconds for 30 users. To be honest, I have no idea if it's possible to get the information in under 1 second. Also, my application will probably never reach this number of entries - more likely a maximum of 10k payments and 300 users.
My query, works pretty well with few entries and it's easy to change the ordering:
SELECT users.id, users.first_name, users.last_name, users.email, final.id AS payment_id, payment_date, final.valid_until AS payment_valid_until
FROM users
LEFT JOIN (
SELECT * FROM (
SELECT payments.id, payments.user_id, payments.date AS payment_date, payments.valid_until
FROM payments
ORDER BY payments.valid_until DESC
) AS p GROUP BY p.user_id
) AS final ON final.user_id = users.id
ORDER BY id ASC
LIMIT 0, 30"
Explain:
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY users ALL NULL NULL NULL NULL 322269 Using where; Using temporary; Using filesort
1 PRIMARY <derived2> ALL NULL NULL NULL NULL 50
4 DEPENDENT SUBQUERY users_deactivated unique_subquery user_id user_id 4 func 1 Using index
2 DERIVED <derived3> ALL NULL NULL NULL NULL 2072327 Using temporary; Using filesort
3 DERIVED payments ALL NULL NULL NULL NULL 2072566 Using filesort
I'm open to any suggestions and tips, since I'm new to PHP, MySQL and stuff, and don't really know if I'm doing the correct way
A:
Use an index on the payments table for user_id, and do the group by on the payments table...
alter table payments add index (user_id);
then in your query, qualify the column in the final ORDER BY (both tables have an id):
ORDER BY users.id ASC
and afterwards you can drop the index again:
alter table payments drop index user_id;
And why don't you use the payments "id" instead of "valid_until"? Is there a reason not to trust that the ids are sequential? If you don't trust the id, add an index to the valid_until field:
alter table payments add index (valid_until);
and don't forget to drop it later
alter table payments drop index valid_until;
if the query is still slow you will need to cache the results... this means you need to improve your schema, here is a suggestion:
create table last_payment
(user_id int,
payment_id int,
constraint pk_last_payment primary key (user_id),
constraint fk_last_payment_user foreign key (user_id) references users(id),
constraint fk_last_payment_payment foreign key (payment_id) references payments(id)
);
alter table payments add index (user_id);
insert into last_payment (user_id, payment_id)
(select user_id, max(id) from payments group by user_id);
#here you probably use your own query if the max (id) does not refer to the last payment...
alter table payments drop index user_id;
and now comes the magic:
delimiter |
CREATE TRIGGER payments_trigger AFTER INSERT ON payments
FOR EACH ROW BEGIN
DELETE FROM last_payment WHERE user_id = NEW.user_id;
INSERT INTO last_payment (user_id, payment_id) values (NEW.user_id, NEW.id);
END;
|
delimiter ;
and now, every time you want to know the last payment made, you only need to query the last_payment table:
select u.*, p.*
from users u inner join last_payment lp on (u.id = lp.user_id)
inner join payments p on (lp.payment_id = p.id)
order by u.id asc;
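If (and only if) the server is MySQL 8.0 or newer, the whole "latest payment per user" step can also be written with a window function instead of the nested GROUP BY trick. This is a sketch assuming the same schema as in the question:

```sql
SELECT u.id, u.first_name, u.last_name, u.email,
       p.id AS payment_id, p.date AS payment_date, p.valid_until AS payment_valid_until
FROM users u
LEFT JOIN (
    SELECT id, user_id, date, valid_until,
           ROW_NUMBER() OVER (PARTITION BY user_id
                              ORDER BY valid_until DESC) AS rn
    FROM payments
) p ON p.user_id = u.id AND p.rn = 1
ORDER BY u.id ASC
LIMIT 0, 30;
```

`ROW_NUMBER()` picks exactly one payment per user (the one with the latest valid_until), which avoids the non-deterministic `GROUP BY` over unaggregated columns that the original query relies on.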
Q:
Adjusting dose rate, Image quality, X-ray Flurocopy
When adjusting the dose rate in this procedure, for either film or a detector, the image quality would become poor due to fewer photons being transmitted from the x-ray tube.
In how many ways can one alter the dose rate on a fluoroscope, and how would it affect the image in each case?
So far all I can think of is altering the current: by doing this, fewer photons pass through the patient, which means fewer photons are converted to electrons in the image intensifier (or scintillations, in the case of digital detectors), so this would result in a lower-resolution or less sharp image.
A:
Generally, decreasing the dose rate to the detector results in increased image noise, affecting low contrast detectability (harder to see differences between objects with similar density). High contrast resolution isn't necessarily affected.
Q:
Working with large 2D arrays in python
I have to initialize a large 2D array for some calculations. I am getting "Memory error" when I run the code. The code is as given below
import numpy as np

a=np.zeros((200000,200000)) ## I get memory error in this line
for i in range (0,len(rows)):
    for j in range (0,len(rows)):
        if pq[rows[i],cols[j]]>0:
            a[rows[i],cols[j]]=1
        else:
            a[rows[i],cols[j]]=0
Here, 'rows' and 'cols' are 1D arrays of length 200000. The dimension of pq is 433 X 800.
I am using a 64 bit Windows 10 system with Intel® Core™ i7-4770S CPU @ 3.10GHz × 8 Processor with 16 Gb RAM. I am using Python 2.7.12.
Any help to overcome this issue will be appreciated. I am new to python and thank you in advance.
Can this problem be overcome using pyTables or generators? I just read about them online.
A:
First, you have not mentioned your Python architecture. If it's 32 bit, then it has a limit of 2 GB of RAM.
Second, 200000 * 200000 * 1 byte (at least, for a small int) = ~37 GB, which is more than your RAM, so you cannot allocate it in any way (and np.zeros actually defaults to 8-byte floats, which would need roughly 300 GB).
Third, your data is sparse, I mean most of your array will be zeros. In this case, instead of allocating the array, you should store the coordinates of your non-zero data (you already have these in pq) and rework your algorithm to use this representation.
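One way to sketch that third point: since the target array only ever holds 0 or 1, it is fully described by the set of coordinates where it would be 1. The tiny `rows`/`cols`/`pq` values below are hypothetical stand-ins for the question's real arrays:

```python
# Hypothetical small stand-ins for the question's rows/cols/pq arrays.
rows = [0, 1, 2]
cols = [0, 2]
pq = [[5, 0, 1],
      [0, 0, 0],
      [2, 0, 3]]

# Instead of materialising a dense 200000 x 200000 array of 0s and 1s,
# keep only the coordinates whose value would be 1.
ones = {(r, c) for r in rows for c in cols if pq[r][c] > 0}

def a(i, j):
    """Read access that behaves like the dense array a[i, j]."""
    return 1 if (i, j) in ones else 0

print(a(0, 0))  # 1  (pq[0][0] == 5 > 0)
print(a(1, 2))  # 0
```

For heavier numeric work on the same idea, `scipy.sparse` offers ready-made coordinate-based (COO/CSR) matrix types with the same memory advantage.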
Q:
SELECT a field from similar fields by MAX from another field
I have a table with three columns: Item, Quantity, and Date.
The values in the Item column may be duplicates, but the Quantity and Dates will be unique.
For example:
Item - Quantity - Date
Hammer - 3 - 1/12/15
Hammer - 7 - 5/18/15
Hammer - 6 - 8/1/15
Wrench - 8 - 2/24/15
Wrench - 3 - 6/10/15
I am trying to write a query that will only return:
Item - Quantity - Date
Hammer - 6 - 8/1/15
Wrench - 3 - 6/10/15
This is my code:
SELECT DISTINCT stock.stc_st AS Store, stock.art_st AS UPC, articles.descr AS Description, stock.quan_st AS Quantity, articles.rp AS Cost
FROM stock LEFT JOIN articles ON stock.art_st = articles.article
WHERE stock.ym_st =
(SELECT Max(stock.ym_st)
FROM stock t1
WHERE stock.art_st=t1.art_st
GROUP BY t1.art_st)
GROUP BY stock.stc_st, stock.art_st, articles.descr, stock.quan_st, articles.rp, articles.act, articles.stat
HAVING (((stock.stc_st)=[Which Store?]) AND ((articles.act)="Y") AND ((articles.stat)="Y"));
However, my code is returning all items when I only want it to return the items with the max date. If anyone could take a look at this and tell me what I am doing wrong, I would really appreciate it.
========================
Now I'm trying to use this code from the answers below and it's giving me a Syntax Error on JOIN on the Inner Join at tmaxdate.art_st. I'm sure this is something stupid like a parenthesis out of place. Could anyone more familiar with Access's SQL syntax tell me what I'm doing wrong? Thanks!
SELECT DISTINCT stock.stc_st AS Store, stock.art_st AS UPC, articles.descr AS Description, stock.quan_st AS Quantity, articles.rp AS Cost
FROM stock AS t1
INNER JOIN
(
SELECT tmaxdate.art_st, Max(tmaxdate.ym_st) AS MaxOfDate
FROM stock AS tmaxdate
GROUP BY tmaxdate.art_sc
) AS sub
ON (t1.ym_st = sub.MaxOfDate) AND (tmaxdate.art_st = sub.art_st)
LEFT JOIN articles ON stock.art_st = articles.article
GROUP BY stock.stc_st, stock.art_st, articles.descr, stock.quan_st, articles.rp, articles.act, articles.stat
HAVING (((stock.stc_st)=[Which Store?]) AND ((articles.act)="Y") AND ((articles.stat)="Y"));
A:
I couldn't figure out how that sample data is distributed among your tables. So I stored those data in a table named YourTable.
First create a GROUP BY query to show you the most recent Date for each Item:
SELECT t1.Item, Max(t1.Date) AS MaxOfDate
FROM YourTable AS t1
GROUP BY t1.Item
Then you can use that as a subquery which you join back to the main table in order to select only its rows with the matching Item/Date pairs:
SELECT t2.Item, t2.Quantity, t2.Date
FROM
YourTable AS t2
INNER JOIN
(
SELECT t1.Item, Max(t1.Date) AS MaxOfDate
FROM YourTable AS t1
GROUP BY t1.Item
) AS sub
ON (t2.Date = sub.MaxOfDate) AND (t2.Item = sub.Item);
With your sample data in Access 2010, that query returns your requested output.
Since you don't actually have a single YourTable, you will need to adapt that approach for your actual tables, but this strategy should work there, too.
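Since Access SQL is awkward to test outside Access, here is the same join-back-to-the-max pattern run against an in-memory SQLite database — a sketch only: SQLite's syntax differs slightly from Access's, and the ISO date strings are stand-ins for real date values:

```python
import sqlite3

# The join-back-to-the-max pattern from the answer, demonstrated on an
# in-memory SQLite database (schema/data mirror the question's sample).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE YourTable (Item TEXT, Quantity INTEGER, Date TEXT);
INSERT INTO YourTable VALUES
  ('Hammer', 3, '2015-01-12'),
  ('Hammer', 7, '2015-05-18'),
  ('Hammer', 6, '2015-08-01'),
  ('Wrench', 8, '2015-02-24'),
  ('Wrench', 3, '2015-06-10');
""")
rows = con.execute("""
SELECT t2.Item, t2.Quantity, t2.Date
FROM YourTable AS t2
INNER JOIN (
    SELECT t1.Item, MAX(t1.Date) AS MaxOfDate
    FROM YourTable AS t1
    GROUP BY t1.Item
) AS sub
ON t2.Date = sub.MaxOfDate AND t2.Item = sub.Item
ORDER BY t2.Item;
""").fetchall()
print(rows)  # [('Hammer', 6, '2015-08-01'), ('Wrench', 3, '2015-06-10')]
```

Storing dates as ISO `YYYY-MM-DD` strings makes `MAX()` and comparisons work lexicographically here; in Access, real Date/Time values give the same behaviour.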
Q:
Best way to count number of downloads of several files on website
The question is "Best way to count number of downloads of several files on website"
What I am trying to do:
Track and tally the number of downloads of Several files
For files which have different extensions. (foo.zip, bar.tar.gz, foo2.zip)
Avoid relying on server side code.
I've seen multiple answers for counting the file's downloads with the Apache access.log
cat /path/to/access.log | grep foo.zip | grep 200 | wc -l
from Best way to count file downloads on a website
, which is an elegant solution but requires access to the backend log file. I was hoping to use a mostly-JavaScript solution, which could also make a call to a PHP portion of code.
A:
Why not use Google Analytics?
https://developers.google.com/analytics/devguides/collection/gajs/eventTrackerGuide?hl=es
If your website already has Google Analytics enabled, you can simply add an onclick event to the links, allowing it to create a count of when the links are clicked.
For your example the links would look like:
<a onclick="var that=this;_gaq.push(['_trackEvent','Download','foo',this.href]);setTimeout(function(){location.href=that.href;},400);return false;" href="downloads/foo.zip">Download foo.zip</a>
<a onclick="var that=this;_gaq.push(['_trackEvent','Download','bar',this.href]);setTimeout(function(){location.href=that.href;},400);return false;" href="downloads/bar.tar.gz">Download bar.tar.gz</a>
<a onclick="var that=this;_gaq.push(['_trackEvent','Download','foo2',this.href]);setTimeout(function(){location.href=that.href;},400);return false;" href="downloads/foo2.zip">Download foo2.zip</a>
In Google Analytics the "download" event would be the total number of downloads for all three files, and each file will have its own event "foo, bar, foo2" which will be each file's individual download count.
If you want to do this from the server side, you still need to include some code in the client. It is more efficient to use tools that already exist than to brew your own solution using PHP and JavaScript.
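If server log access ever becomes available, the grep approach quoted in the question can be tightened so it only counts successful GETs of the exact file. A sketch — the log path and line format below are assumptions based on Apache's common log format:

```shell
# Create a tiny fake access log to demonstrate (stand-in for /path/to/access.log)
cat > access.log <<'EOF'
1.2.3.4 - - [01/Jan/2013:10:00:00 +0000] "GET /downloads/foo.zip HTTP/1.1" 200 123
1.2.3.4 - - [01/Jan/2013:10:01:00 +0000] "GET /downloads/foo.zip HTTP/1.1" 404 0
1.2.3.4 - - [01/Jan/2013:10:02:00 +0000] "GET /downloads/bar.tar.gz HTTP/1.1" 200 456
EOF

# Count only successful (status 200) downloads of foo.zip
grep -c '"GET /downloads/foo\.zip [^"]*" 200 ' access.log
```

Escaping the dot and anchoring on the quoted request line avoids over-counting (e.g. `foo.zip` matching `foo2zip`, or a `200` appearing elsewhere on the line), which the original `grep foo.zip | grep 200` pipeline is prone to.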
Q:
Showing the Progressbar: the other way round
The usual practice is to show the progress window (with a progress bar) in the UI thread and update the progress from the worker thread.
I have a lot of long operations which are started and run in the GUI thread itself (which temporarily freezes the GUI). The requirement is to show a progress bar for all existing long operations. The usual solution would be to move the long operations into threads and update the progress from there. But I am not sure about the thread safety of those long operations.
Is there a way where we show the progress window in another thread (so it doesn't freeze) and then update the progress from the main GUI thread itself?
A:
I don't know a solution for showing the ProgressBar in another thread, but a hack you can try is to let the system execute its pending actions (update the UI) from within your long-running operations. To do so, call the following function repeatedly from within them:
public static void DoEvents() {
DispatcherFrame frame = new DispatcherFrame();
Dispatcher.CurrentDispatcher.BeginInvoke(DispatcherPriority.Background, new DispatcherOperationCallback(delegate(object parameter) {
frame.Continue = false;
return null;
}), null);
Dispatcher.PushFrame(frame);
}
But take care, this is not a nice way to resolve the problem. It is better to choose an appropriate design.
Q:
ciclo de paginacion automatico de TABLA con PHP y jquery
Tengo una tabla ya paginada, y necesito un ciclo que esté mostrando los registros cada determinado tiempo por eso el setTimeout en mi caso la variable $b es la que se encarga de crear los botones que contienen el href dependiendo los registros se van creando más botones, al presionar uno de estos se pasa el valor a la variable "page".
Éste es el código que crea los botones
for($b=1; $b<=$a; $b++)
{
?><a href="paging.php?page=<?php echo $b;?>">
<?php echo $b." ";?> </a> <?php
}
This is my JavaScript code:
setTimeout(function(){
<?php
$b=1;
?>
location.href="paging.php?page=<?php echo $b?>"
<?php $B++?>
} , 2000);
</script>
I hope someone can help me.
A:
It seems there is some confusion between what is PHP and what is JavaScript in the code. You should separate them better and keep in mind that PHP values will not be available in JS (at least not after the page has loaded).
Apart from that, be careful, because both PHP and JavaScript are case sensitive, i.e. they distinguish between uppercase and lowercase ($b and $B are not the same thing).
The PHP code that generates the links is fine. The only thing you have to do is change the JavaScript code so that the page refreshes every 2 seconds. That only requires a few changes:
Check that the page number is valid
Compute the next value (in PHP)
Pass that value to JavaScript (when the page is created)
The code could be something like this:
<?php
// if there is a "page" parameter that is numeric and page+1 is still a valid page
if (isset($_GET["page"]) && is_numeric($_GET["page"]) && $_GET["page"]+1 <= $a) {
    // then the next page will be page+1
    $proxima = $_GET["page"] + 1;
} else {
    // otherwise, the next page will be the first one
    $proxima = 1;
}
?>
<script>
setTimeout(function(){
var b = <?php echo $proxima; ?>;
location.href = "paging.php?page=" + b;
} , 2000);
</script>
Q:
Counting cumulative unique occurence in a R data frame
I am working on a data set, which has two columns: id, date/time. Please find the example below,
id date_time
1 2016-10-29 18:01:03.0000000 +08:00
1 2016-10-29 19:34:17.0000000 +08:00
1 2016-10-30 14:08:03.0000000 +08:00
1 2016-10-30 15:55:12.0000000 +08:00
2 2016-10-31 11:32:12.0000000 +08:00
2 2016-10-31 14:59:56.0000000 +08:00
2 2016-11-01 12:49:44.0000000 +08:00
2 2016-11-01 13:55:16.0000000 +08:00
2 2016-11-01 19:18:22.0000000 +08:00
2 2016-11-01 20:40:48.0000000 +08:00
3 2016-11-01 21:19:50.0000000 +08:00
3 2016-11-02 14:20:15.0000000 +08:00
3 2016-11-02 18:52:27.0000000 +08:00
3 2016-11-02 19:39:32.0000000 +08:00
3 2016-11-03 08:55:41.0000000 +08:00
All I wanted to obtain is two columns such that: column 1 has cumulative occurrences for each id ordered using date and time and column 2 has cumulative dates for each id as shown in the table below,
id date_time occ date
1 2016-10-29 18:01:03.0000000 +08:00 1 1
1 2016-10-29 19:34:17.0000000 +08:00 2 1
1 2016-10-30 14:08:03.0000000 +08:00 3 2
1 2016-10-30 15:55:12.0000000 +08:00 4 2
2 2016-10-31 11:32:12.0000000 +08:00 1 1
2 2016-10-31 14:59:56.0000000 +08:00 2 1
2 2016-11-01 12:49:44.0000000 +08:00 3 2
2 2016-11-01 13:55:16.0000000 +08:00 4 2
2 2016-11-01 19:18:22.0000000 +08:00 5 2
2 2016-11-01 20:40:48.0000000 +08:00 6 2
3 2016-11-01 21:19:50.0000000 +08:00 1 1
3 2016-11-02 14:20:15.0000000 +08:00 2 2
3 2016-11-02 18:52:27.0000000 +08:00 3 2
3 2016-11-02 19:39:32.0000000 +08:00 4 2
3 2016-11-03 08:55:41.0000000 +08:00 5 3
(Note that the +08:00 offset is redundant.) To generate column 1 (occ), I have tried using ave with FUN=seq_along, by first splitting date and time, followed by ordering on id, date, and time.
Q1: Is there any way I can directly sort the date_time column ?
For column 2 (date), I have first taken a subset of the data frame; using unique values I generate the index with ave and seq_along. After that I merge the two data sets in a loop.
Q2: Is there a more efficient method to achieve the same ?
A:
It is unclear to me what format your date_time variable is in. I am assuming it is POSIXct. I have trimmed off the junk and converted it to that.
d <- read.table(text="id, date_time
1, 2016-10-29 18:01:03.0000000 +08:00
...
3, 2016-11-03 08:55:41.0000000 +08:00", header=TRUE, sep=",")
d$date_time <- as.POSIXct(substr(as.character(d$date_time), 4, 22))
At this point you can sort the data frame, including by dates, using ?order (see also: Understanding the order() function):
d <- d[order(d$id, d$date_time),]
With the data frame sorted, to count up rows within each id, you can use ?tapply. You can likewise use tapply to label unique days by composing as.character and as.Date, and as.numeric and factor. Consider:
d$occ <- unlist(with(d, tapply(id, id, FUN=function(x){ 1:length(x) })))
d$date <- unlist(with(d, tapply(date_time, id, FUN=function(x){
x = as.character(as.Date(x))
as.numeric(factor(x, levels=unique(x)))
})))
d
# id date_time occ date
# 1 1 2016-10-29 18:01:03 1 1
# 2 1 2016-10-29 19:34:17 2 1
# 3 1 2016-10-30 14:08:03 3 2
# 4 1 2016-10-30 15:55:12 4 2
# 5 2 2016-10-31 11:32:12 1 1
# 6 2 2016-10-31 14:59:56 2 1
# 7 2 2016-11-01 12:49:44 3 2
# 8 2 2016-11-01 13:55:16 4 2
# 9 2 2016-11-01 19:18:22 5 2
# 10 2 2016-11-01 20:40:48 6 2
# 11 3 2016-11-01 21:19:50 1 1
# 12 3 2016-11-02 14:20:15 2 2
# 13 3 2016-11-02 18:52:27 3 2
# 14 3 2016-11-02 19:39:32 4 2
# 15 3 2016-11-03 08:55:41 5 3
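For readers without R at hand, the same occ/date logic can be sanity-checked in a few lines of plain Python — a sketch on a hypothetical subset of the data, relying on the fact that ISO timestamp strings sort chronologically (standing in for the POSIXct ordering):

```python
from itertools import groupby

# Stand-in subset of the question's (id, date_time) records.
records = [
    (1, "2016-10-29 18:01:03"), (1, "2016-10-29 19:34:17"),
    (1, "2016-10-30 14:08:03"),
    (2, "2016-10-31 11:32:12"), (2, "2016-11-01 12:49:44"),
]
records.sort()  # sort by id, then date_time (ISO strings sort correctly)

result = []
for uid, grp in groupby(records, key=lambda r: r[0]):
    day_rank = {}  # dense rank of distinct days within this id
    for occ, (_, ts) in enumerate(grp, start=1):
        day = ts[:10]
        day_rank.setdefault(day, len(day_rank) + 1)
        result.append((uid, ts, occ, day_rank[day]))

print(result[-1])  # (2, '2016-11-01 12:49:44', 2, 2)
```

`occ` is the per-id row counter (like the tapply over ids) and `day_rank` is the dense rank of distinct dates (like the factor-levels trick in the R answer).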
Q:
Strategy for sharing OpenGL resources
I'm creating a CAD-like app (Qt-based), it will be a multiple document interface and each document will contain about 5 viewports (derived from QGLWidget). As such I need my flat shader to be shared across the entire application, and then the 3D assets (models stored as VBOs) to be shared across each document i.e. the 5 viewports.
I thought as long as I shared around the shader program and VBO GLuint addresses, all would automagically work - it doesn't. I think it's because each viewport/context has its own address space on the graphics card; if anyone knows better, please inform!
I would like to have the shader compiled on application start, but this is proving difficult, as I need a valid QGLWidget to get OpenGL into a valid state beforehand. But as I need to share the QGLWidgets (through their constructor) to have them share resources, one needs to be created and shown before the others can be instantiated. This is highly impractical, as multiple views need to be shown to the user at once.
This must be easier than I'm making out because it's hardly groundbreaking stuff, but I am really struggling - can anyone point me in the right direction?
Thanks, Cam
A:
Here's what usual CAD/MDI applications are doing:
they create a shared context that serves for, well, sharing resources.
they use wglShareLists when creating a new OpenGL rendering context for giving access to the resource ids of the shared context.
wglShareLists can be used for sharing VBOs, textures, shaders, etc, not only display lists (sharing DLs is the legacy usage, hence the function name).
I don't remember if you need to create resources with the shared context or if you can create them on any contexts.
If you're not on windows, see glXCreateContext. That should put you on track.
Edit:
I've looked at Qt, it looks like it's abstracted with member QGLContext::create.
Q:
I want to enable Hyper-V in Windows Features, but there is no Hyper-V option
I am very very confused about this.
I have a Sony Vaio i7 laptop. (There is a Hyper-V setting in the BIOS, so it is easy for me to enable Hyper-V.)
I want to develop Windows Phone 8. I have read this guide from Microsoft
but even if I enable Hyper-V in the BIOS, the Windows Features dialog does not show the Hyper-V option like in the MSDN guide.
(At first Hyper-V was there in Windows Features, but I don't know why it has gone missing lately.)
Any help will be appreciated.
Thank you friends.
(I attach an image)
A:
You will need the Pro version to get the Hyper-V feature.
You can read here for more help as well:
http://technet.microsoft.com/en-us/library/hh857623.aspx
A:
If someone is having this problem of missing tickbox on Windows 7 Pro, it can be solved by installing
Remote Server Administration Tools for Windows 7 with Service Pack 1 (SP1).
A:
The only reason your Hyper-V is not available is that you don't have the Windows 8 Pro version. So get that first.
Q:
Are there any optional areas?
So I just arrived at what I assume is the last area of the game, but I know Souls games always have an optional area or two, like Ash Lake in Dark Souls 1. I've come across at least two so far (one of which was hidden). Before I proceed to the final boss I want to make sure I visit all the areas in the game, since I don't want to miss out on any content.
Are there any optional areas in the game? If so, how do you get to them?
A:
There are 4 optional areas:
Smouldering Lake - get here by climbing down the broken rope bridge in the Catacombs of Carthus.
Consumed King's Garden - after the Dancer of the Boreal Valley boss, head left before getting to Lothric Castle proper.
Untended Graves - behind an illusory wall after the boss fight in Consumed King's Garden.
Archdragon Peak - use the Path of the Dragon gesture in Irithyll Dungeon at the petrified dragon acolyte.
Q:
Add double tap to UICollectionView; require single tap to fail
With a similar problem to this question, I am trying to add a double tap gesture recognizer to my UICollectionView instance.
I need to prevent the default single tap from calling the UICollectionViewDelegate method collectionView:didSelectItemAtIndexPath:.
In order to achieve this I implement the code straight from Apple's Collection View Programming Guide (Listing 4-2):
UITapGestureRecognizer* tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTapGesture:)];
NSArray* recognizers = [self.collectionView gestureRecognizers];
// Make the default gesture recognizer wait until the custom one fails.
for (UIGestureRecognizer* aRecognizer in recognizers) {
if ([aRecognizer isKindOfClass:[UITapGestureRecognizer class]])
[aRecognizer requireGestureRecognizerToFail:tapGesture];
}
// Now add the gesture recognizer to the collection view.
tapGesture.numberOfTapsRequired = 2;
[self.collectionView addGestureRecognizer:tapGesture];
This code does not work as expected: tapGesture fires on a double tap but the default single tap is not prevented and the delegate's didSelect... method is still called.
Stepping through in the debugger reveals that the if condition, [aRecognizer isKindOfClass:[UITapGestureRecognizer class]], never evaluates to true and so the failure-requirement on the new tapGesture is not being established.
Running this debugger command each time through the for-loop:
po (void)NSLog(@"%@",(NSString *)NSStringFromClass([aRecognizer class]))
reveals that the default gesture recognizers are (indeed) not UITapGestureRecognizer instances.
Instead they are private classes UIScrollViewDelayedTouchesBeganGestureRecognizer and UIScrollViewPanGestureRecognizer.
First, I can't use these explicitly without breaking the rules about Private API. Second, attaching to the UIScrollViewDelayedTouchesBeganGestureRecognizer via requireGestureRecognizerToFail: doesn't appear to provide the desired behaviour anyway — i.e. the delegate's didSelect... is still called.
How can I work with UICollectionView's default gesture recognizers to add a double tap to the collection view and prevent the default single tap from also firing the delegate's collectionView:didSelectItemAtIndexPath: method?
Thanks in advance!
A:
My solution was to not implement collectionView:didSelectItemAtIndexPath but to implement two gesture recognizers.
self.doubleTapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(processDoubleTap:)];
[_doubleTapGesture setNumberOfTapsRequired:2];
[_doubleTapGesture setNumberOfTouchesRequired:1];
[self.view addGestureRecognizer:_doubleTapGesture];
self.singleTapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(processSingleTap:)];
[_singleTapGesture setNumberOfTapsRequired:1];
[_singleTapGesture setNumberOfTouchesRequired:1];
[_singleTapGesture requireGestureRecognizerToFail:_doubleTapGesture];
[self.view addGestureRecognizer:_singleTapGesture];
This way I can handle single and double taps. The only gotcha I can see is that the cell is selected on double taps, but if this bothers you, you can handle it in your two selectors.
A:
I use the following to register a UITapGestureRecognizer:
UITapGestureRecognizer* singleTapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleSingleTapGesture:)];
singleTapGesture.delaysTouchesBegan = YES;
singleTapGesture.numberOfTapsRequired = 1; // number of taps required
singleTapGesture.numberOfTouchesRequired = 1; // number of finger touches required
[self.collectionView addGestureRecognizer:singleTapGesture];
By setting delaysTouchesBegan to YES, the custom gesture recognizer gets priority over the default collection view tap listeners by delaying the registering of other touch events. Alternatively, you can cancel touch recognition altogether by setting cancelsTouchesInView to YES.
The gesture is then handled by the following function:
- (void)handleSingleTapGesture:(UITapGestureRecognizer *)sender {
if (sender.state == UIGestureRecognizerStateEnded) {
CGPoint location = [sender locationInView:self.collectionView];
NSIndexPath *indexPath = [self.collectionView indexPathForItemAtPoint:location];
if (indexPath) {
NSLog(@"Cell view was tapped.");
UICollectionViewCell *cell = [self.collectionView cellForItemAtIndexPath:indexPath];
// Do something.
}
}
else{
// Handle other UIGestureRecognizerState's
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Spark treating null values in csv column as null datatype
My spark application reads a csv file, transforms it to a different format with sql and writes the result dataframe to a different csv file.
For example, I have input csv as follows:
Id|FirstName|LastName|LocationId
1|John|Doe|123
2|Alex|Doe|234
My transformation is:
Select Id,
FirstName,
LastName,
LocationId as PrimaryLocationId,
null as SecondaryLocationId
from Input
(I can't answer why the null is being used as SecondaryLocationId; it is a business use case)
Now spark can't figure out the datatype of SecondaryLocationId and returns null in the schema and throws the error CSV data source does not support null data type while writing to output csv.
Below are printSchema() and write options I am using.
root
|-- Id: string (nullable = true)
|-- FirstName: string (nullable = true)
|-- LastName: string (nullable = true)
|-- PrimaryLocationId: string (nullable = false)
|-- SecondaryLocationId: null (nullable = true)
dataFrame.repartition(1).write
.mode(SaveMode.Overwrite)
.option("header", "true")
.option("delimiter", "|")
.option("nullValue", "")
.option("inferSchema", "true")
.csv(outputPath)
Is there a way to default to a datatype (such as string)?
By the way, I can get this to work by replacing null with empty string('') but that is not what I want to do.
A:
use lit(null): import org.apache.spark.sql.functions.{lit, udf}
Example:
import org.apache.spark.sql.functions.{lit, udf}
case class Record(foo: Int, bar: String)
val df = Seq(Record(1, "foo"), Record(2, "bar")).toDF
val dfWithFoobar = df.withColumn("foobar", lit(null: String))
scala> dfWithFoobar.printSchema
root
|-- foo: integer (nullable = false)
|-- bar: string (nullable = true)
|-- foobar: null (nullable = true)
and it is not retained by the csv writer. If it is a hard requirement you
can cast the column to a specific type (let's say String):
import org.apache.spark.sql.types.StringType
df.withColumn("foobar", lit(null).cast(StringType))
or use a UDF like this:
val getNull = udf(() => None: Option[String]) // Or some other type
df.withColumn("foobar", getNull()).printSchema
root
|-- foo: integer (nullable = false)
|-- bar: string (nullable = true)
|-- foobar: string (nullable = true)
reposting zero323 code.
Now let's discuss your second question
Question :
"This is only when I know which columns will be treated as null datatype. When a large number of files are being read and various transformations are applied, how would I know which fields are treated as null?"
Ans :
In this case you can use Option.
The Databricks Scala style guide does not agree that null should always be banned from Scala code and says: “For performance sensitive code, prefer null over Option, in order to avoid virtual method calls and boxing.”
Example :
+------+
|number|
+------+
| 1|
| 8|
| 12|
| null|
+------+
val actualDf = sourceDf.withColumn(
"is_even",
when(
col("number").isNotNull,
isEvenSimpleUdf(col("number"))
).otherwise(lit(null))
)
actualDf.show()
+------+-------+
|number|is_even|
+------+-------+
| 1| false|
| 8| true|
| 12| true|
| null| null|
+------+-------+
https://medium.com/@mrpowers/dealing-with-null-in-spark-cfdbb12f231e
https://github.com/vaquarkhan/scala-style-guide
Q:
Understanding jQuerymobile navigation model (how to clear all state when changing pages)
I'm apparently having some trouble understanding page transitions in jquerymobile.
It seems that when I navigate from one page to another (either via a simple anchor href, or $.mobile.navigate), some of the state is passed along.
For example, let's say I declare a variable like so within the script tag of page 1:
<script>
var randomVar = 'abcd';
</script>
Then on page 2, I have the following script tag:
<script>
console.log(randomVar);
</script>
If I go straight to page 2, then an error appears on the console:
"Uncaught ReferenceError: randomVar is not defined".
This is the expected behavior for me.
But if I go to page 1, and then navigate to page 2, the console will print "abcd". So it seems the state/vars from page 1 are being passed along to page 2.
I'd like to prevent this. Is there a way to clear all state when making this transition?
I only want this for certain page transitions though. I have navigations to other pages that are modals, but I'd like them to have page 1's state.
I may be thinking about the whole jQM navigation wrong, so please correct me if I am.
Thanks
A:
jQuery Mobile loads pages using Ajax. When you go there directly, the full page is loaded (so no state is kept). When you click a page to navigate, jQuery Mobile takes over and loads the new page via Ajax. Since the page has not changed, its state is the same (it's not really "retaining it", it's just leaving things as they are).
Your option is to hook into one of the jQuery Mobile Events such as pagechange and reset/modify any variables or state elements as you desire.
Q:
Joomla 2.5 custom component: filter entries
In a custom component, in the site view, I display a list of countries, each as a link to another page, displaying persons living in that country.
This is a link:
index.php?option=com_example&view=persons&country=1&Itemid=131
What's missing:
When the persons-page is opened, all persons are listed.
What I'm looking for:
I'd like to show only persons with country as in the link, 1 in the example above.
I tried to add this condition in the model-files of persons, but failed miserably.
+++ EDIT ++++
Thanks to the accepted answer, I was able to accomplish what I needed. Unfortunately, this seems to produce side-effects:
Fatal error: Call to a member function getPagesCounter() on a non-object
in .../view/persons/tmpl/default.php (...)
The code throwing that error is
<?php echo $this->pagination->getPagesCounter(); ?>
When commenting out that line, the same error will occur with this code:
<?php echo $this->pagination->getPagesLinks(); ?>
How did that happen, and what can I do? Tried to track down that problem, but didn't know where to start.
+++ EDIT +++
Wasnm't able to solve that issue yet. Did a var_dump($this->pagination);, this is the output:
array(1) {
[0]=>
object(stdClass)#150 (20) {
["id"]=>
string(1) "6"
["name"]=>
string(11) "Fleur Leroc"
["country"]=>
string(1) "2"
(...)
["ordering"]=>
string(1) "6"
["state"]=>
string(1) "1"
["checked_out"]=>
string(3) "615"
["checked_out_time"]=>
string(19) "2013-10-10 10:53:14"
["created_by"]=>
string(10) "Super User"
["editor"]=>
string(10) "Super User"
["countriestrainers_country_828045"]=>
string(6) "France"
["countriestrainers_flag_828045"]=>
string(28) "images/trainers/flags/fr.gif"
}
}
So the object does exist, doesn't it?
A:
You were close editing the model files.
In your Persons model (ExampleModelPersons) you need to make sure you have the following elements:
Whitelist the filter name:
<?php
public function __construct($config = array())
{
if (empty($config['filter_fields'])) {
$config['filter_fields'] = array(
'country',
// other not standard filters
);
}
parent::__construct($config);
}
?>
Autopopulate the state filter:
<?php
protected function populateState($ordering = null, $direction = null)
{
$country = $this->getUserStateFromRequest($this->context.'.filter.country', 'country', '', null, false);
$this->setState('filter.country', (int) $country);
// ....Other states
}
?>
Store id for the context:
<?php
protected function getStoreId($id = '')
{
$id .= ':'.$this->getState('filter.country');
// Other states
}
?>
And the most important one, the database query
<?php
protected function getListQuery()
{
// ... Other parts of the query
if ($country = $this->getState('filter.country'))
$query->where("country = ". (int) $country);
}
?>
If you don't need saving the state in user's session this can be easily stripped into two liner in the database query.
<?php
// ... Other parts of the query
if ($country = $app->input->getInt('country'))
$query->where("country = ". (int) $country);
?>
Q:
How to resolve error "groovy.json,version=[2.4,3) -- Cannot be resolved" on Apache Sling 8 - Groovy Support?
I would like to use Groovy scripting in Apache Sling. I have installed Sling 8 and Bundle: Scripting Groovy V 1.0.2 on top. However, I am getting the following error on installed bundle.
groovy.json,version=[2.4,3) -- Cannot be resolved
groovy.lang,version=[2.4,3) -- Cannot be resolved
groovy.text,version=[2.4,3) -- Cannot be resolved
javax.script from org.apache.felix.framework (0)
org.apache.sling.commons.classloader,version=[1.0,2) from org.apache.sling.commons.classloader (84)
org.apache.sling.scripting.api,version=[2.1,3) from org.apache.sling.scripting.api (107)
org.codehaus.groovy.util,version=[2.4,3) -- Cannot be resolved
Am I missing some other dependency bundle? How to resolve this?
A:
The Groovy scripting bundle only provides the glue between Groovy and Sling. You also need to install the Groovy bundles.
The ones that we use right now for testing (see launchpad/testing: model.txt ) are
org.codehaus.groovy/groovy-all/2.4.5
org.codehaus.groovy/groovy-json/2.4.5
org.codehaus.groovy/groovy-templates/2.4.5
Q:
Selenium Webdriver finding an element in a sub-element
I am trying to search for an element in a sub-element with Selenium (Version 2.28.0), but Selenium does not seem to limit its search to the sub-element. Am I doing this wrong, or is there a way to use element.find to search a sub-element?
For an example I created a simple test webpage with this code:
<!DOCTYPE html>
<html>
<body>
<div class=div title=div1>
<h1>My First Heading</h1>
<p class='test'>My first paragraph.</p>
</div>
<div class=div title=div2>
<h1>My Second Heading</h1>
<p class='test'>My second paragraph.</p>
</div>
<div class=div title=div3>
<h1>My Third Heading</h1>
<p class='test'>My third paragraph.</p>
</div>
</body>
</html>
My python (Version 2.6) code looks like this:
from selenium import webdriver
driver = webdriver.Firefox()
# Open the test page with this instance of Firefox
# element2 gets the second division as a web element
element2 = driver.find_element_by_xpath("//div[@title='div2']")
# Search second division for a paragraph with a class of 'test' and print the content
print element2.find_element_by_xpath("//p[@class='test']").text
# expected output: "My second paragraph."
# actual output: "My first paragraph."
If I run:
print element2.get_attribute('innerHTML')
It returns the html from the second division. So selenium is not limiting its search to element2.
I would like to be able to find a sub-element of element2. This post suggests my code should work Selenium WebDriver access a sub element but his problem was caused by a time-out issue.
Can anyone help me understand what is happening here?
A:
If you start an XPath expression with //, it begins searching from the root of document. To search relative to a particular element, you should prepend the expression with . instead:
element2 = driver.find_element_by_xpath("//div[@title='div2']")
element2.find_element_by_xpath(".//p[@class='test']").text
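Selenium's XPath engine is more complete than the standard library's, but the scoping rule itself (a leading `.` anchors the search to the current element) can be sketched with Python's built-in `xml.etree.ElementTree`, using a simplified copy of the test page:

```python
import xml.etree.ElementTree as ET

# Simplified, well-formed copy of the test page so ElementTree can parse it.
body = ET.fromstring("""
<body>
  <div class="div" title="div1"><p class="test">My first paragraph.</p></div>
  <div class="div" title="div2"><p class="test">My second paragraph.</p></div>
  <div class="div" title="div3"><p class="test">My third paragraph.</p></div>
</body>
""")

# Searching from the document root finds every paragraph.
all_paragraphs = body.findall(".//p[@class='test']")
print(len(all_paragraphs))  # 3

# Anchoring the search at div2 with a leading '.' limits it to that subtree.
div2 = body.find(".//div[@title='div2']")
scoped = div2.findall(".//p[@class='test']")
print(scoped[0].text)  # My second paragraph.
```

The same principle applies in Selenium: `element2.find_element_by_xpath(".//p[@class='test']")` searches only inside `element2`, while a `//` expression always starts over from the document root.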
A:
Use the following:
element2 = driver.find_element_by_css_selector("div[title='div2']")
element2.find_element_by_css_selector("p.test").text
Please let me know if you have any problems.
Q:
Why div is taking extra space for images?
I am trying to put the text just to the side of the images, but I don't know why my image div is taking extra space
.row {
display: flex;
flex-direction: row;
}
.round img {
height: 15%;
width: 15%;
border-radius: 100%;
margin: 10px;
}
.round{
padding: 0 !important;
text-align: left;
justify-content: left;
height: 10%;
}
<div class="row">
<div class="round">
<img mat-card-image src="https://scontent-bom1-1.xx.fbcdn.net/v/t1.0-9/46463461_1157513267757941_7425556584253620224_n.jpg?_nc_cat=100&_nc_ht=scontent-bom1-1.xx&oh=3f957c2a41da24c5f0c505d61241fba5&oe=5C7550A3" alt="Card image cap">
</div>
<div>
<p><a routerLink="#">Rupesh Yadav</a></p>
<p><i>April,12,2018</i></p>
</div>
</div>
Please help me on this.
A:
You need to define the sizing CSS on the .round class, not on the img, and add width: 100% to the img CSS.
The extra space comes from the default way a flexbox distributes space among its items.
.row {
display: flex;
flex-direction: row;
}
.round {
height: 15%;
width: 15%;
margin: 10px;
padding: 0 !important;
text-align: left;
justify-content: left;
}
.round img {
width:100%;
border-radius: 100%;
}
<div class="row">
<div class="round">
<img mat-card-image src="https://scontent-bom1-1.xx.fbcdn.net/v/t1.0-9/46463461_1157513267757941_7425556584253620224_n.jpg?_nc_cat=100&_nc_ht=scontent-bom1-1.xx&oh=3f957c2a41da24c5f0c505d61241fba5&oe=5C7550A3" alt="Card image cap">
</div>
<div>
<p><a routerLink="#">Rupesh Yadav</a></p>
<p><i>April,12,2018</i></p>
</div>
</div>
Or you can just give the image a pixel width; that also works:
.row {
display: flex;
flex-direction: row;
}
.round img {
width: 150px;
margin: 10px;
border-radius: 100%;
}
<div class="row">
<div class="round">
<img mat-card-image src="https://scontent-bom1-1.xx.fbcdn.net/v/t1.0-9/46463461_1157513267757941_7425556584253620224_n.jpg?_nc_cat=100&_nc_ht=scontent-bom1-1.xx&oh=3f957c2a41da24c5f0c505d61241fba5&oe=5C7550A3" alt="Card image cap">
</div>
<div>
<p><a routerLink="#">Rupesh Yadav</a></p>
<p><i>April,12,2018</i></p>
</div>
</div>
Q:
Why am I allowed to modify properties which are readonly with object initializers?
I have this simple code:
public static void Main(String[] args)
{
Data data = new Data { List = { "1", "2", "3", "4" } };
foreach (var str in data.List)
Console.WriteLine(str);
Console.ReadLine();
}
public class Data
{
private List<String> _List = new List<String>();
public List<String> List
{
get { return _List; }
}
public Data() { }
}
So when I'm creating a Data class:
Data data = new Data { List = { "1", "2", "3", "4" } };
The list was filled with the strings "1", "2", "3", "4" even though it has no setter.
Why is this happening?
A:
Your object initializer (with collection initializer for List)
Data data = new Data { List = { "1", "2", "3", "4" } };
gets turned into the following:
var tmp = new Data();
tmp.List.Add("1");
tmp.List.Add("2");
tmp.List.Add("3");
tmp.List.Add("4");
Data data = tmp;
Looking at it this way it should be clear why you are, in fact, adding to the existing backing list rather than assigning a new one: tmp.List returns _List. You never assign to the property, you just initialize the collection that is returned. Thus you should look at the getter here, not the setter.
However, Tim is absolutely correct in that a property defined in that way doesn't make any sense. This violates the principle of least surprise and to users of that class it's not at all apparent what happens with the setter there. Just don't do such things.
A:
That is how collection initializers work internally:
Data data = new Data { List = { "1", "2", "3", "4" } };
It is basically equal to
Data _d = new Data();
_d.List.Add("1");
_d.List.Add("2");
_d.List.Add("3");
_d.List.Add("4");
Data data = _d;
And _d.List returns _List via the getter.
[*] More details in C# specification $7.6.10.3 Collection initializers
If the property had a setter, you could instead write:
Data data = new Data { List = new List<string>{ "1", "2", "3", "4" } };
which would assign a brand-new list to the property. With the getter-only property shown above, that form does not even compile, which is exactly why the collection-initializer form (which only calls the getter) is the one that works.
Q:
Were old TV shows routinely sped up for some reason?
I have recently made a habit of waking up early, and as a by-product spend my mornings catching re-runs of The Donna Reed Show. I've noticed in a number of episodes, though not necessarily all of them, that the video seems to be sped up; characters either seem to speak too quickly, or their movements look rather stop-motiony. If a specific example is needed, The Foundling was the first instance of this I noticed.
Was there a habit of speeding up TV shows back in the day? Or maybe this is a choice made by the channel broadcasting the re-runs for commercial reasons?
A:
Old television shows were shot using motion picture film of that era, and were shot at either 24 or 25 frames per second. Television video today is played at 30 frames per second.
The speed problems you see are artifacts of the conversion process. The shows you watched might have been converted to television during the 1950's when kinescopes were largely used to do the conversions.
Most television networks that show these old TV shows are using video footage that was already converted from film. If that conversion was off or done for a different format, then correcting the problem would reduce the quality of the video (which is already of poor quality).
It's like trying to fix a photocopy of a picture by making another photocopy. You're just moving away from the quality of the original.
So the television networks broadcast the video "as is" because it's the best copy they have.
Q:
Pagination and Grouping in SQL
I am trying to write an SQL script, but I am getting unexpected results. @TotalResults gives me 6 records, when I know there are 33 records returned.
Here's the code:
SELECT @TotalPages = CEILING(COUNT(a.MemberID)/@PageSize), @TotalResults = COUNT(a.MemberID)
FROM Member a
INNER JOIN MemberBusinessCat b ON b.MemberID = a.MemberID
INNER JOIN BusinessCat c ON c.BusinessCatID = b.BusinessCatID
WHERE a.SystemID = @SystemID
AND c.CategoryName LIKE '%' + @SearchStr + '%'
AND ( @ShowUnclaimed != 'N'
OR ( a.Claimed = 'Y' AND a.SBIcon = 'N' )
)
AND a.Viewable = 'Y'
GROUP BY a.MemberID, a.CreateDate, a.UserName, a.PrCity, a.MemberDisplayName, a.PrStateID, a.PrPhone, a.ShortDesc, a.PrCountryID;
WITH CoalPrepCategorySearch AS
(
SELECT ROW_NUMBER() OVER(ORDER BY a.MemberDisplayName ASC) AS RowNum,
a.MemberID,
a.UserName,
a.PrCity,
a.PrStateID,
a.PrPhone,
@TotalPages AS TotalPages,
a.MemberDisplayName AS DisplayName,
a.ShortDesc,
@TotalResults AS TotalResults,
a.PrCountryID
FROM Member a
INNER JOIN MemberBusinessCat b ON b.MemberID = a.MemberID
INNER JOIN BusinessCat c ON c.BusinessCatID = b.BusinessCatID
WHERE a.SystemID = @SystemID
AND c.CategoryName LIKE '%' + @SearchStr + '%'
AND ( @ShowUnclaimed != 'N'
OR ( a.Claimed = 'Y' AND a.SBIcon = 'N' )
)
AND a.Viewable = 'Y'
GROUP BY a.MemberID, a.CreateDate, a.UserName, a.PrCity, a.MemberDisplayName, a.PrStateID, a.PrPhone, a.ShortDesc, a.PrCountryID
)
SELECT *
FROM CoalPrepCategorySearch
WHERE RowNum BETWEEN (@PG - 1) * @PageSize + 1 AND @PG * @PageSize
ORDER BY DisplayName ASC
I am pretty sure it is related to the grouping. If it is, then how can I get the total results? What am I doing wrong?
Many thanks in advance.
neojakey
A:
Possibly this will be helpful for you -
;WITH cte AS
(
SELECT a.*
FROM dbo.Member a
JOIN dbo.MemberBusinessCat b ON b.MemberID = a.MemberID
JOIN dbo.BusinessCat c ON c.BusinessCatID = b.BusinessCatID
WHERE a.SystemID = @SystemID
AND c.CategoryName LIKE '%' + @SearchStr + '%'
AND a.Viewable = 'Y'
AND (
@ShowUnclaimed != 'N'
OR
a.Claimed + a.SBIcon = 'YN'
)
), CoalPrepCategorySearch AS
(
SELECT
ROW_NUMBER() OVER(ORDER BY a.MemberDisplayName ASC) AS RowNum,
a.MemberID,
a.UserName,
a.PrCity,
a.PrStateID,
a.PrPhone,
a.MemberDisplayName AS DisplayName,
a.ShortDesc,
a.PrCountryID
FROM (
SELECT DISTINCT
a.MemberDisplayName,
a.MemberID,
a.UserName,
a.PrCity,
a.PrStateID,
a.PrPhone,
a.ShortDesc,
a.PrCountryID
FROM cte a
) a
)
SELECT *
FROM CoalPrepCategorySearch t
CROSS JOIN (
SELECT
TotalPages = CEILING(COUNT(t2.MemberID) / @PageSize)
, TotalResults = COUNT(t2.MemberID)
FROM cte t2
GROUP BY t2.MemberID
) t2
WHERE RowNum BETWEEN (@PG - 1) * @PageSize + 1 AND @PG * @PageSize --??
ORDER BY t.DisplayName
Q:
Calling activity method in custom adapter class
So I am currently trying to refresh a recyclerview in another activity but the problem is it doesn't like the casting or the way I am calling it.
Any ideas?
A:
From the crashlog, it would appear you are passing an ApplicationContext to your recyclerview. Instead, you need to pass your activity as Context, and ensure your Activity implements the UserRView interface.
A cleaner way would be to pass a context and also pass a UserRView to your adapter, so you do not have to cast the context to UserRView, and you can continue to pass an application context if you prefer.
Edit: replace this code
adapter = new CustomRAdapter(list, getBaseContext());
with
adapter = new CustomRAdapter(list, this);
Q:
What is rev-manifest.json file in gulp?
I am new to Gulp. Can anyone explain what is rev-manifest.json file in gulp?
A:
There are only two hard things in Computer Science: cache invalidation and naming things.
-- Phil Karlton
Gulp has a plugin gulp-rev which is used to append the hash of the file content to the end of its name, so for example script.js would become something like script-134cfcc203.js.
The rev-manifest.json is the mapping that stores the original name and the current name of each file.
This is done for cache invalidation. This is how it's done:
You set the TTL (time to live) of your resources (styles, scripts, etc.) to infinite on your server (nginx, apache or whatever), which means that you're telling your clients' browsers that they should cache the resources forever.
Then when you change the content of a file (e.g. script.js) because you append its hash to its name, then the name would change. Then when the users' browsers request your page, they see it as a whole new file and therefore they're forced to re-download it. But until you don't change the contents of the file, they never re-download it and hence your page's loading speeds up.
Although note that you also need another plugin gulp-rev-rewrite, to search your html files and automatically replace the script.js with its new name, and this plugin uses the rev-manifest.json file to do that.
Hope it is clear enough. Or if it's not, feel free to ask and I will edit and clarify more
Q:
How do you protect against malicious PREPROCESSOR attacks in Oracle External Tables?
I'm a new DBA and I recently found out about the option of External Tables in Oracle using the PREPROCESSOR feature ( http://download.oracle.com/otndocs/products/database/enterprise_edition/utilities/pdf/xtables_preproc11g_1009.pdf )
Unfortunately, this feature, which seems quite useful in our lounge seems very dangerous, as someone with access to the OS (or remotely....) could exploit it to cause the database to get compromised, or even worst - the whole OS.
I have restricted the access to this feature to the minimum, and revoked any additional privileges which might allow outside access to the os (extproc, java, etc)
However, there are still times when we must use this feature, and this is where I ask you guys 2 main questions:
How do you protect against malicious attacks using this wicked feature ?
Assuming something has failed in the security mechanisms, what sort of ways are there to detect that someone used this feature in an evil way? What sort of queries (or content of them) could be seen ?
Thanks (:
A:
Oracle has had the capability of executing external code from SQL for a long time - EXTPROCs, data cartridges, and so on. You say
someone with access to the OS (or remotely....) could exploit it to
cause the database to get compromised
But what does this even mean? Someone with access to the OS as the oracle user can access your DBFs directly (they're just files on the disk), can attach directly to the SGA, can make a backup and copy it off, can snoop the network traffic (as root). In the case of a malicious developer, they can do whatever they want in PL/SQL and wrap it. I don't see how you are introducing a new vulnerability by using this feature. If it makes your job or your users' jobs easier, go for it.
A:
If a user has access to the OS and the the folder containing the PREPROCESSOR program then theoretically they can do anything the Oracle OS user can do.
Prevent this by preventing this level of access to the OS.
Monitor this by monitoring access to the OS and to the folder.
Q:
Changing the end point of a path in raphael
I am trying to change the end point of a path drawn on a raphael canvas but cannot get the syntax right. Here is the code. (The arguments of the function call arrow.attr are obviously wrong but I have tried numerous combinations to no avail):
window.onload = function() {
var paper = new Raphael(document.getElementById('canvas_container'), 500, 500);
var circle = paper.circle(100, 100, 80);
var arrow = paper.path("M 100 100 l -56.5 56.5 z");
arrow.attr({stroke: '#0a0', 'stroke-width': 3});
arrow.attr({'x2':80, 'y2':0});
} ;
The raphael reference is very limited and I would like to know if there is a better reference somewhere else.
A:
One way to modify a part of the path is to store the path details in an array, modify parts as required and reassign the path as string.
var paper = new Raphael(document.getElementById('canvas_container'), 500, 500),
pathArray = ['M', 100, 100, 'l', 100, 100, 0, 100, 'z'],
shape = paper.path(pathArray.join(' '));
// Modify the last point (0, 100) to (100, 0)
pathArray.splice(-3, 2, 100, 0);
// Reassign the path as string
shape.attr({path: pathArray.join(' ')})
Hope this helps.
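Since the path is just a string built from the array, the splice trick can be checked in isolation (plain JavaScript, no Raphael needed):

```javascript
const pathArray = ['M', 100, 100, 'l', 100, 100, 0, 100, 'z'];

// splice(-3, 2, 100, 0): start 3 elements from the end (at the old point's x),
// remove the two coordinates (0, 100) and insert (100, 0) in their place.
pathArray.splice(-3, 2, 100, 0);

const pathString = pathArray.join(' ');
console.log(pathString); // "M 100 100 l 100 100 100 0 z"
```

The resulting string is what gets passed back to `shape.attr({path: ...})`.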
Q:
Use Watir in Shoes (ruby)
I'm trying to do an application in ruby. I want to collect information from the user using some UI interface. Then use this info in my script to fill some form on a web page.
I use Shoes as UI
I use Watir as Browser "manager"
Here a simple sample of what i'm trying to do
Shoes.setup do
gem 'watir'
end
require 'watir'
Shoes.app do
stack do
edit_line do |e|
@url = e.text
end
button("Test"){
browser = Watir::Browser.new
browser.goto @url
#Do some stuff
}
end
end
But then when the application starts, it tries to install watir and freezes because of an error:
http://screencast.com/t/XWmeMmPQEBc
A:
The error says that rake requires rubygems >= 1.3.2
You either need to upgrade rubygems or downgrade rake to a version compatible with your current rubygems.
Edit: or specify a version of watir that will run with an older rubygems & rake
Q:
Spark java.lang.NullPointerException when using tuples
I am using Spark's GraphX API to build a graph and process it with the Pregel API. The error does not happen if I return the argument tuple from the vprog function, but if I return a new tuple built from the same tuple's members, I get a null pointer error.
Here is the relevant code:
val verticesRDD = cleanDtaDF.select("ChildHash", "DN").rdd.map(row => (row(0).toString.toLong, (row(1).toString.toDouble,row(0).toString.toLong)))
val edgesRDD = (rawDtaDF.select("ChildHash", "ParentHash", "dealer_code", "dealer_customer_number", "parent_dealer_cust_number").rdd
.map(row => Edge(row.get(0).toString.toLong, row.get(1).toString.toLong, (row(3) + " is a child of " + row(4), " when dealer is " + row.get(2)))))
val myGraph = Graph(verticesRDD, edgesRDD)
def vprog(vertexId: VertexId, vertexDTA:(Double, Long), msg: Double): (Double, Long) = {
(vertexDTA._1, vertexDTA._2)
}
val result = myGraph.pregel(0.0, 1, activeDirection = EdgeDirection.Out)(vprog,t => Iterator((t.dstId, t.srcAttr._2)),(x, y) => x + y)
The error does not happen if I make a simple change to vprog(...)--not access the tuples' members:
def vprog(vertexId: VertexId, vertexDTA:(Double, Long), msg: Double): (Double, Long) = {
vertexDTA
}
The error is
[Stage 101:> (0 + 0) / 200][Stage 102:> (0 + 4) / 200]18/03/10 20:43:16 WARN scheduler.TaskSetManager: Lost task 1.0 in stage 102.0 (TID 5959, ue1lslaved25.na.aws.cat.com, executor 146): java.lang.NullPointerException
at $line69.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw.vprog(<console>:60)
at $line70.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$2.apply(<console>:75)
at $line70.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$2.apply(<console>:75)
at org.apache.spark.graphx.Pregel$$anonfun$1.apply(Pregel.scala:125)
at org.apache.spark.graphx.Pregel$$anonfun$1.apply(Pregel.scala:125)
at org.apache.spark.graphx.impl.VertexPartitionBaseOps.map(VertexPartitionBaseOps.scala:61)
at org.apache.spark.graphx.impl.GraphImpl$$anonfun$5.apply(GraphImpl.scala:129)
at org.apache.spark.graphx.impl.GraphImpl$$anonfun$5.apply(GraphImpl.scala:129)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:216)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:988)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:979)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:919)
at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:979)
at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:697)
at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:334)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:285)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
A:
This issue has a simple explanation. It's not specific to Spark or GraphX.
def vprog(vertexDTA:(Double, Long)): (Double, Long) = {
(vertexDTA._1, vertexDTA._2)
}
If the arg vertexDTA is null, both vertexDTA._1 and vertexDTA._2 will throw NullPointerException.
If we change the function to
def vprog(vertexDTA:(Double, Long)): (Double, Long) = {
vertexDTA
}
When the argument is null, the function simply returns it; there is no access to the tuple's members, so no NPE.
| {
"pile_set_name": "StackExchange"
} |
Q:
Compute frequency of sinusoidal signal, c++
I have a sinusoidal-shaped signal, and I would like to compute its frequency.
I tried to implement something, but it looks very difficult. Any ideas?
So far I have a vector of (timestep, value) pairs. How can I get the frequency from this?
Thank you.
A:
If the input signal is a perfect sinusoid, you can calculate the frequency using the time between positive zero crossings. Find two consecutive instances where the signal goes from negative to positive, measure the time between them, then invert this number to convert from period to frequency. Note this is only as accurate as your sample interval, and it does not account for any potential aliasing.
Q:
Can you use そうだ (appear, seem) with kango adjectival nouns?
Of course it is common to attach そうだ to na-adjectives or i-adjectives (高そうな車, 元気そうな人 etc), but it doesn't seem to be the case for kango words which can operate as either nouns or adjectives.
For example, it sounds natural (to me) to say 良さそうな車, but it sounds unnatural (to me) to say 良質そうな車. I was thinking about describing an oncoming typhoon as 強烈そうな台風, but again it sounds a little strange to me. Is it simply that it is less common to attach そう to kango adjectival nouns, or is it actually unnatural to do so?
A:
You seem to have misunderstood the concept of the na-adjective. All na-adjectives accept -だ, so the fact that you can say 強烈だ does not mean 強烈 also works as a simple noun. A simple noun can take case particles like は/が/を, but you cannot say 強烈がある or 強烈を見る, right?
元気: a na-adjective that also works as a standalone noun.
元気な人。彼は元気だ。元気がある。元気を出す。
良質: a na-/no-adjective that can only describe a noun (attributively or predicatively). It very rarely works as a standalone noun.
良質なダイヤ。良質のダイヤ。このダイヤは良質だ。(×良質がある。)
強烈: a na-adjective that can describe a noun (attributively or predicatively) or a verb or another adjective (adverbially). It cannot work as a standalone noun.
強烈な台風。この台風は強烈だ。彼を強烈に殴る。これは強烈に辛い。(×強烈がある。)
So quite contrary to your statements, 良質 and 強烈 are "purer" na-adjectives, and your question seems baseless to me. If a word works as a na-adjective, you can attach そうな to it, at least grammatically. But it does not mean na-adj + そうな must always make sense. 良質そうな車 and 強烈そうな台風 are unnatural for different reasons.
良質そうな車 is unnatural because 良質 usually describes natural products like fruit, ore, oil, etc. But you can safely say 高級そうな車 instead although 高級 is a two-kanji kango word.
強烈そうな台風 is almost always unnatural because そう always involves a direct observation, and ordinary people cannot directly observe a typhoon and say "That typhoon appears to be furious" before knowing about it on TV or the internet. If you were an astronaut and noticed a large typhoon on the earth, you could safely say 強烈そうな台風が見えます.
And statistically speaking, I think そう tends to be used with i-adjectives more often, because observable simple concepts are covered by native Japanese vocabulary (wago). Kango words are better at describing abstract concepts, but they are unlikely to be used with そう anyway.
Q:
How many Python applications can I run at the same time per interpreter?
How many Python applications can I run at the same time per interpreter?
Is it just one?
Or could the Python interpreter run more than one at the same time?
A:
You can run only one Python application per interpreter, and you can only run one interpreter per process. If you want to run multiple applications then you will need to run multiple processes.
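For illustration, here is a minimal sketch of the "multiple processes" route using the standard library's subprocess module (the `run_app` helper is a hypothetical name, not a Python API):

```python
import subprocess
import sys

def run_app(code):
    """Run a Python snippet as a separate application: a new process,
    and therefore a new, independent interpreter."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

Each call starts a fresh `python` executable, so every "application" gets its own interpreter; `multiprocessing.Process` gives you the same per-process isolation with a friendlier API.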
Q:
Draw on HTML5 Canvas using a mouse
I want to draw on an HTML canvas using a mouse (for example, draw a signature, draw a name, ...).
How would I go about implementing this?
A:
Here is a working sample.
<html>
<script type="text/javascript">
var canvas, ctx, flag = false,
prevX = 0,
currX = 0,
prevY = 0,
currY = 0,
dot_flag = false;
var x = "black",
y = 2;
function init() {
canvas = document.getElementById('can');
ctx = canvas.getContext("2d");
w = canvas.width;
h = canvas.height;
canvas.addEventListener("mousemove", function (e) {
findxy('move', e)
}, false);
canvas.addEventListener("mousedown", function (e) {
findxy('down', e)
}, false);
canvas.addEventListener("mouseup", function (e) {
findxy('up', e)
}, false);
canvas.addEventListener("mouseout", function (e) {
findxy('out', e)
}, false);
}
function color(obj) {
switch (obj.id) {
case "green":
x = "green";
break;
case "blue":
x = "blue";
break;
case "red":
x = "red";
break;
case "yellow":
x = "yellow";
break;
case "orange":
x = "orange";
break;
case "black":
x = "black";
break;
case "white":
x = "white";
break;
}
if (x == "white") y = 14;
else y = 2;
}
function draw() {
ctx.beginPath();
ctx.moveTo(prevX, prevY);
ctx.lineTo(currX, currY);
ctx.strokeStyle = x;
ctx.lineWidth = y;
ctx.stroke();
ctx.closePath();
}
function erase() {
var m = confirm("Want to clear");
if (m) {
ctx.clearRect(0, 0, w, h);
document.getElementById("canvasimg").style.display = "none";
}
}
function save() {
document.getElementById("canvasimg").style.border = "2px solid";
var dataURL = canvas.toDataURL();
document.getElementById("canvasimg").src = dataURL;
document.getElementById("canvasimg").style.display = "inline";
}
function findxy(res, e) {
if (res == 'down') {
prevX = currX;
prevY = currY;
currX = e.clientX - canvas.offsetLeft;
currY = e.clientY - canvas.offsetTop;
flag = true;
dot_flag = true;
if (dot_flag) {
ctx.beginPath();
ctx.fillStyle = x;
ctx.fillRect(currX, currY, 2, 2);
ctx.closePath();
dot_flag = false;
}
}
if (res == 'up' || res == "out") {
flag = false;
}
if (res == 'move') {
if (flag) {
prevX = currX;
prevY = currY;
currX = e.clientX - canvas.offsetLeft;
currY = e.clientY - canvas.offsetTop;
draw();
}
}
}
</script>
<body onload="init()">
<canvas id="can" width="400" height="400" style="position:absolute;top:10%;left:10%;border:2px solid;"></canvas>
<div style="position:absolute;top:12%;left:43%;">Choose Color</div>
<div style="position:absolute;top:15%;left:45%;width:10px;height:10px;background:green;" id="green" onclick="color(this)"></div>
<div style="position:absolute;top:15%;left:46%;width:10px;height:10px;background:blue;" id="blue" onclick="color(this)"></div>
<div style="position:absolute;top:15%;left:47%;width:10px;height:10px;background:red;" id="red" onclick="color(this)"></div>
<div style="position:absolute;top:17%;left:45%;width:10px;height:10px;background:yellow;" id="yellow" onclick="color(this)"></div>
<div style="position:absolute;top:17%;left:46%;width:10px;height:10px;background:orange;" id="orange" onclick="color(this)"></div>
<div style="position:absolute;top:17%;left:47%;width:10px;height:10px;background:black;" id="black" onclick="color(this)"></div>
<div style="position:absolute;top:20%;left:43%;">Eraser</div>
<div style="position:absolute;top:22%;left:45%;width:15px;height:15px;background:white;border:2px solid;" id="white" onclick="color(this)"></div>
<img id="canvasimg" style="position:absolute;top:10%;left:52%;" style="display:none;">
<input type="button" value="save" id="btn" size="30" onclick="save()" style="position:absolute;top:55%;left:10%;">
<input type="button" value="clear" id="clr" size="23" onclick="erase()" style="position:absolute;top:55%;left:15%;">
</body>
</html>
A:
Here's the most straightforward way to create a drawing application with canvas:
Attach mousedown, mousemove, and mouseup event listeners to the canvas DOM element
on mousedown, get the mouse coordinates, and use the moveTo() method to position your drawing cursor and the beginPath() method to begin a new drawing path.
on mousemove, continuously add a new point to the path with lineTo(), and color the last segment with stroke().
on mouseup, set a flag to disable the drawing.
From there, you can add all kinds of other features like giving the user the ability to choose a line thickness, color, brush strokes, and even layers.
A:
I think the other examples here are too complicated. This one is simpler, and JS only...
// create canvas element and append it to document body
var canvas = document.createElement('canvas');
document.body.appendChild(canvas);
// some hotfixes... ( ≖_≖)
document.body.style.margin = 0;
canvas.style.position = 'fixed';
// get canvas 2D context and set him correct size
var ctx = canvas.getContext('2d');
resize();
// last known position
var pos = { x: 0, y: 0 };
window.addEventListener('resize', resize);
document.addEventListener('mousemove', draw);
document.addEventListener('mousedown', setPosition);
document.addEventListener('mouseenter', setPosition);
// new position from mouse event
function setPosition(e) {
pos.x = e.clientX;
pos.y = e.clientY;
}
// resize canvas
function resize() {
ctx.canvas.width = window.innerWidth;
ctx.canvas.height = window.innerHeight;
}
function draw(e) {
// mouse left button must be pressed
if (e.buttons !== 1) return;
ctx.beginPath(); // begin
ctx.lineWidth = 5;
ctx.lineCap = 'round';
ctx.strokeStyle = '#c0392b';
ctx.moveTo(pos.x, pos.y); // from
setPosition(e);
ctx.lineTo(pos.x, pos.y); // to
ctx.stroke(); // draw it!
}
Q:
Spark: Round to Decimal in Dataset
I have a dataset like the one below. Using a DataFrame I'm able to easily round to 2 decimal places, but I'm wondering whether there is an easier way to do the same with a typed Dataset.
Here is my code snippet:
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.expressions.scalalang.typed.{sum => typedSum}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.{DecimalType}
case class Record(BOOK: String,ID: String,CCY: String,AMT: Double)
def getDouble(num: Double) = {BigDecimal(num).setScale(2, BigDecimal.RoundingMode.HALF_UP).toDouble}
val data = Seq(
  Record("ALBIBC", "1950363", "USD", 2339055.7945),
  Record("ALBIBC", "1950363", "USD", 78264623778.813345),
  Record("ALBIBC", "1950363", "USD", 45439055.222),
  Record("ALBIBC", "1950363", "EUR", 746754759055.343),
  Record("ALBIBC", "1950363", "EUR", 343439055.88780)
).toDS() // toDS takes no column names; the Record case class supplies them
Dataframe way produces the following output:
val df: DataFrame = data.groupBy('BOOK,'ID,'CCY).agg(sum('AMT).cast(DecimalType(38,2)).as("Balance"))
df.show()
+------+-------+---+---------------+
| BOOK| ID|CCY| Balance|
+------+-------+---+---------------+
|ALBIBC|1950363|USD| 78312401889.83|
|ALBIBC|1950363|EUR|747098198111.23|
+------+-------+---+---------------+
How would I go about rounding the balance to 2 decimal places in case of dataset?
val sumBalance = typedSum[Record](_.AMT).as[Double].name("Balance")
val ds = data.groupByKey(thor => (thor.BOOK, thor.ID, thor.CCY)).agg(sumBalance.name("Balance"))
.map{case(key,value) => (key._1,key._2,key._3,getDouble(value))}
ds.show()
+------+-------+---+------------------+
| _1| _2| _3| _4|
+------+-------+---+------------------+
|ALBIBC|1950363|USD| 7.831240188983E10|
|ALBIBC|1950363|EUR|7.4709819811123E11|
+------+-------+---+------------------+
I can go the DataFrame way, but I'm curious how to do this while using Datasets.
Any advice on this, please?
Thanks
A:
Your mistake is the conversion back to Double. Floating-point representation cannot represent all decimal numbers exactly.
Redefine (and probably rename) your function to:
def getDouble(num: Double) = BigDecimal(num).setScale(
2, BigDecimal.RoundingMode.HALF_UP
)
Example:
Seq(7.831240188983E10, 7.4709819811123E11).toDS.map(getDouble).show
// +---------------+
// | value|
// +---------------+
// | 78312401889.83|
// |747098198111.23|
// +---------------+
Q:
Which algorithm to use for alphabetical sort?
A lot of sorting algorithms are based on comparisons of numbers. If I understand correctly, when we use comparison algorithms for alphabetical sorting, we compare char codes (their integer representations) and sort by those values. (That's why, in the ASCII table, the letter B has a bigger code than A.) But with this comparison we sort only by the first letter, not by the whole word. When we use a DB query with ORDER BY, we get sorting by whole words. (As I understand it, the reason is DB background mechanisms like indexes, etc.) I also heard about radix sort (sorry, I've never used it before), and as far as I can see it can help with alphabetical sorting (maybe I'm wrong).
What algorithm is better to use for sorting by the whole words?
Not correct:
Adam
Aaron
Antony
Correct:
Aaron
Adam
Antony
And am I correct with my assumptions about the whole workflow?
A:
You're not quite correct with the assumption about "Compare only the first letter". The algorithm is - if the first letters are the same, compare the next letter. And the next. And the next. Until either you find some letters that are different, or one of the strings runs out.
Also note that simply comparing by ASCII codes is not always enough. Sometimes you need to do a case-insensitive comparison where you consider A to be equal to a. Sometimes you need to do accent-insensitive comparison where you consider ā to be equal to a. And sometimes you need to account for crazy language shit where ß is equal to ss or worse.
My advice is - your programming language should probably have some mechanism for comparing strings. Use that. Don't roll out your own.
After that, any sorting algorithm will work. They all use one simple assumption - that you can compare the items that you sort. Whether they are integers, strings or complex objects, is irrelevant. As long as you can take any two objects and say "this one is bigger and this one is smaller", you're good to go.
(Note also that you need to be consistent about it. If A==B and B==C, then also you need to make sure that A==C. Similarly if A < B and B < C, then you must A < C. Etc.)
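For instance, in C++ the standard library already gives you such a mechanism: std::string's operator< compares whole words character by character, so a plain std::sort produces the ordering from the question (a sketch; sort_names is just an illustrative wrapper):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// std::string's operator< already performs the character-by-character
// (lexicographic) comparison described above, so any comparison-based
// sort works on whole words unchanged.
std::vector<std::string> sort_names(std::vector<std::string> names) {
    std::sort(names.begin(), names.end());
    return names;
}
```

sort_names({"Adam", "Aaron", "Antony"}) returns {"Aaron", "Adam", "Antony"}.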
A:
No, the sorting is not based on first character or length. Alphabetical or better to put it as lexicographical ordering are done in the following way,
In C++ the comparison function would look like this,
bool operator<(const string &a, const string &b){
    size_t l = min(a.size(), b.size());
    for(size_t i = 0; i < l; i++){
        if( a[i] > b[i]) return false; // a is greater than b
        if( b[i] > a[i]) return true;  // b is greater than a
    }
    // All compared characters are equal: the shorter string comes first,
    // and equal strings must compare as "not less".
    return a.size() < b.size();
}
Q:
How to make your paper's materials available online? (personal webpages, Institutional server, etc?)
I would like to make some of the materials related to my paper (such as source code) available online. I have the option of uploading them to my personal website or to a server at my institution (or any better way you suggest).
I am concerned that if I upload them to the institutional space, I won't have access to the server after graduating, or my page might be taken down,
and I don't want to disappoint my readers by providing a link that may expire in a few years.
A personal website seems a good option, since I fully control the materials.
I would like to know if there is a better option, or an online service recognized especially for hosting academic materials (here, my code), which I can link in my paper.
A:
Try something like GitHub, which will provide not only a public place for storing your code but will also give you version control.
Other hosting services exist, such as Google Docs, FlipDrive, Dropbox, ...
A:
I would suggest, if you are able to keep it up to date, a personal website containing a guide/summary of your work, with GitHub (or similar) repositories to host the code.
The benefit of this is that it gives you the best of both worlds: the personal site acts as the "professional" portal to your academic papers, with blurbs, an 'about me', etc., while the source code and documentation are managed on GitHub (or similar).
Q:
Check Home button is Pressed in Galaxy note
I have an app for the Galaxy Note in which I have to detect when the "home/center" key is pressed. Is there a callback for it, like the default onBackPressed function for the back button? Any help would be appreciated.
A:
There is no way to catch the home key in Android; it is intentionally not made available to developers.
But you can do the following.
onPause will be called if any of the following events occur:
a. the user presses the home key
b. the user presses the back key
c. an external interrupt occurs, such as an incoming call
Since you can handle the back press and listen for the incoming-call broadcast, you can determine in the onPause method whether it was caused by a home press.
Then parse the logcat and take the necessary action in a background service.
Q:
Change Github online editor tab indent size
I am writing a blog using Jekyll on GitHub Pages.
Instead of using offline editors, I use the GitHub online editor, which supports Markdown very well.
But when I edit posts, the GitHub editor sets the default indent size to 8. I tried to change it to 4, but after I save and re-edit a post, the editor resets the indent size to 8.
I am wondering why this happens. How can I set the default indent size to 4 for every file?
Thanks!
A:
It's not a permanent setting. You can change it for the edit mode with the dropdown you mention and for blob mode by appending ?ts=4 to the URL on a per-file basis.
If you're using chrome, you could use Sindre Sorhus' tab-size-on-github extension that does exactly what you want: Force all tabs to represent 4 spaces always.
Q:
Angular2: How to implement more than 2 method
How can I implement OnInit & OnDestroy? I tried the code below, but it didn't work for me.
I am extracting the id from the URL; I want to use OnDestroy to avoid memory problems, and I need to call a service in OnInit.
import { Component, OnInit, OnDestroy } from '@angular/core';
import { DetailService } from '../detail.service';
import { Router, ActivatedRoute } from '@angular/router';
import { Subscription } from 'rxjs';
@Component({
selector: 'app-detail-page',
templateUrl: './detail-page.component.html',
styleUrls: ['./detail-page.component.css'],
providers: [DetailService]
})
export class DetailPageComponent implements OnInit, OnDestroy {
detailData: any;
id:string;
private subscription: Subscription;
constructor(private detailService: DetailService, private router: Router, private activatedRoute: ActivatedRoute) {
this.subscription = this.activatedRoute.queryParams.subscribe(
(param: any) => this.id = param['id']
);
}
ngOnInit() {
this.detailService.getDetailData()
.subscribe(
data => this.detailData = data
);
}
ngOnDestroy(){
this.subscription.unsubscribe();
}
}
A:
Your code syntax looks fine. Did you include DetailService into the snippet?
Notice: Unsubscribing from Activated route is not necessary:
The ActivatedRoute and its observables are insulated from the Router itself. The Router destroys a routed component when it is no longer needed and the injected ActivatedRoute dies with it.
UPDATED:
Instead of activatedRoute.queryParams you might want to use activatedRoute.params
constructor(route: ActivatedRoute) {
this.params = route.params.subscribe(
params => this.id = params['id']
);
}
UPDATE2:
Use params when it's a part of the path, e.g {path: 'page/:id'}
Use queryParams when it's an optional query param, e.g: page?id=1
More info is here
Q:
How to give list view Pull Down to Refresh functionality in android
How can I create a pull down to refresh list in android?
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.pull_to_refresh);
// Set a listener to be invoked when the list should be refreshed.
((PullToRefreshListView) getListView()).setOnRefreshListener(new OnRefreshListener() {
@Override
public void onRefresh() {
// Do work to refresh the list here.
new GetDataTask().execute();
}
});
mListItems = new LinkedList<String>();
mListItems.addAll(Arrays.asList(mStrings));
ArrayAdapter<String> adapter = new ArrayAdapter<String>(this,
android.R.layout.simple_list_item_1, mListItems);
setListAdapter(adapter);
}
A:
This is not an android design pattern. However, this excellent library lets you do it easily. Have a look at the examples.
Hope I helped.
Edit -- 12/06/2015 -- Disregard the previous statement:
This is now a design pattern that is fully supported by the SDK on Android.
It is very simple, you need to use a SwipeRefreshLayout as the parent view to your list (or other data you might want to refresh). You can put any view as a child, it will create a Pull-To-Refresh animation for that view.
Afterwards, you just need to implement SwipeRefreshLayout.OnRefreshListener to handle the network code of the actual data refresh:
public class MainActivity extends FragmentActivity implements OnRefreshListener {
private SwipeRefreshLayout _pullToRefreshLayout;
@Override
protected void onCreate(Bundle savedInstanceState) {
setContentView(R.layout.activity_main);
_pullToRefreshLayout = (SwipeRefreshLayout) findViewById(R.id.swipe_layout);
_pullToRefreshLayout.setOnRefreshListener(this);
super.onCreate(savedInstanceState);
}
@Override
public void onRefresh() {
//When this is called, your view has a little loader showing to show the user that a network call is in progress
Log.i("SO17065814", "Starting refresh...");
new Handler().postDelayed(new Runnable() {
@Override
public void run() {
_pullToRefreshLayout.setRefreshing(false); //This stops the refresh animation
Log.i("SO17065814", "Ending refresh...");
}
}, 5000);
}
}
Q:
Including Gloobal variable inside function php
I'm trying to use a global variable inside a function in PHP, but it doesn't work.
I tried the following, but none of them work:
$row = fetch_row("SELECT id,name FROM $GLOBALS['var'] where id = $id");
or
$row = fetch_row("SELECT id,name FROM `$GLOBALS['var']` where id = $id");
or
$row = fetch_row("SELECT id,name FROM '$GLOBALS['var']' where id = $id");
or
$row = fetch_row("SELECT id,name FROM $GLOBALS[\"var\"] where id = $id");
A:
As you are using double quotes (" "), you can include variables inside a string; however, because you're trying to access an array value, this will throw an error.
What you can do in this situation is either concatenate the string or wrap the variable in { }.
So this:
"SELECT id,name FROM " . $GLOBALS['var']. " where id = $id"
or
"SELECT id,name FROM {$GLOBALS['var']} where id = $id"
Furthermore, you're not actually inside the function at this point, you're just passing that string to it.
Hope this helps!
Q:
Sqlite: How do I reset all database tables?
I want a debug function to do this, but I'm unaware of whether one already exists. Going through and using 'drop table' for each of my tables will be a pain.
Help appreciated.
A:
Since the database is just one file, you can indeed just erase it. If you want something more automatic, you can use the following to do it all programmatically:
Recover your schema:
SELECT group_concat(sql,';') FROM sqlite_master;
Disconnect from the database
Delete the database file
Create your schema again with what was returned from the above query
If you used any particular options for your original database (page_size, etc), they will have to be declared manually as well.
A:
to "drop database" for sqlite, simply delete the database file (and recreate if needed)
Q:
I am facing "SSH access" issue while accessing linux server
I am unable to connect from server A (10.61.8.XXX) to server B (10.61.16.XX).
Whenever I try to access server B from A, it keeps asking for a password, even after the SSH keys generated on server B were pasted into server A. However, I can access server A from B without a password. I don't want to regenerate the SSH keys, because existing Jenkins jobs run based on these old keys. Please let me know what to check, and the commands to make this work.
Thanks in advance!
A:
Did you check file and directory permissions on host B?
On host B, fix them with
$ chmod go-w $HOME $HOME/.ssh
$ chmod 600 $HOME/.ssh/authorized_keys
$ chown `whoami` $HOME/.ssh/authorized_keys
(see the OpenSSH FAQ).
Q:
Android ListView that scrolls from bottom to top?
I'd like to build a ListView that works like a normal ListView, except the first item is on the bottom and grows up, instead of the first item being on the top and growing down.
Of course, I can just use a normal ListView, reverse the order of the adapter, and have it scroll to the last item on the list... but this is inelegant in a number of ways, particularly when adding new rows.
For instance, I would like the ListView to remember the position, but the position from the bottom instead of from the top. So, if they are at the bottom, they should remain at the bottom, but if they are looking 3 up from the bottom, they should still be looking at that item when the new item is added. (Exactly how it works in ListView normally, but reversed.)
Wondering if anybody has any clever tricks to reverse the polarity of an Android ListView?
The classic use case for this would be a list of chat messages, where the most recent one is on the bottom, and items are generally added to the bottom. If someone is at the end of the list, they still want to be at the end of the list when a new message comes in. But if they have scrolled away from the bottom, they don't want their scroll position to be randomly reset.
A:
Ah, AbsListView has the attribute:
android:stackFromBottom="true"
which seems to do exactly what I wanted.
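A minimal layout sketch (the id is illustrative; the transcriptMode attribute is an optional extra that keeps the list pinned to the newest item only when the user is already at the bottom, matching the chat use case above):

```xml
<!-- Items stack from the bottom, chat-style. transcriptMode="normal"
     auto-scrolls to new items only if the last item is already visible. -->
<ListView
    android:id="@+id/chat_list"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:stackFromBottom="true"
    android:transcriptMode="normal" />
```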
Q:
Cannot resolve symbol 'google'
I recently checked out some code from Git on Android Studio. The project uses Google Maps but when I check my imports such as import com.google.android.gms.maps.GoogleMap; I get an error when I highlight over google saying 'Cannot resolve symbol 'google''. Might anyone know why? Thanks!
A:
You are supposed to add the google-play-services_lib library to your project. Also add google-play-services.jar to the "libs" folder of your project.
Right-click on your project ---> Properties ---> Android --> select the target name Google APIs. Then clean and rebuild the project.
EDITED:
Check out the Google Map Quick start.
Q:
How to identify an element using an onclick event handler
I want to send the ID of an element to the function so I know which button was pressed.
<input type="submit" id="foo" onclick="function(abcxyz)" value="abc" />
<input type="submit" id="foo" onclick="function(xyz)" value="x" />
function(str){
if(str == abcxyz){
//do this.
}
else if(str == xyz){
//do that.
}
}
Or, if it's possible, it's okay for the value abc or x to be sent to the function,
so that str = 'abc' or str = 'x'.
A:
If I have understood your problem correctly...
You are missing a function name
The two inputs have the same id
using submit for type doesn't make sense in your case
The values of the inputs aren't needed for this task
<input type="button" id="foo1" onclick="button_click('1')" />
<input type="button" id="foo2" onclick="button_click('2')" />
function button_click(str){
if(str == '1'){
alert('Button with id foo1 was pressed');
}
else if(str == '2'){
alert('Button with id foo2 was pressed');
}
}
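A variant worth considering (a sketch; the return strings are illustrative): pass the element itself with onclick="button_click(this)", so the handler can read the id or value without hard-coding a string per button:

```javascript
// el is the clicked element, e.g. onclick="button_click(this)".
// Reading el.id (or el.value) avoids a hard-coded argument per button.
function button_click(el) {
  if (el.id === "foo1") {
    return "Button foo1 pressed, value: " + el.value;
  }
  if (el.id === "foo2") {
    return "Button foo2 pressed, value: " + el.value;
  }
  return "unknown button";
}
```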
Q:
Isolating powered-off raspberry-pi gpio pin from rest of circuit (that is powered on)
First of all sorry for the long title, I hope it is descriptive enough.
I am planning to connect a TSSOP4838 (IR Receiver) to a Raspberry PI GPIO input pin. My problem is that the TSSOP4838 will (at times) be powered on while the PI is powered off. The TSSOP4838 has an open-collector output with a 33k pullup. The same output will be connected to an input pin of an ATTiny85 (also powered on). As far as I can tell this is going to be a problem, as the output of the TSSOP4838 (normally high when idle) will feed power to the PI through the input pin clamp diode. Of course the power won't be enough to bring the PI up due to the 33k pullup.
My question is how to properly isolate the PI input pin while it is powered off, so that the TSSOP4838 output can still be read by the ATTiny and yet when the PI is powered on be able to read it from the PI as well.
I have not yet tried anything, for fear of smoking my Pi.
All supplies are 3.3v so no level shifting required.
Edit:
Would a simple level shifter like this work?
Simple MOSFET level shifter
RPi side would go on the left (low voltage side). This would also take care of minor differences in supply voltage. My worry is that if RPi output is not HiZ when powered off, it could bring the level down to 0 on the other side.
A:
You aren't going to smoke your Pi with 3.3 V on a GPIO pin. I am guessing that you can hold power on a GPIO pin with the Pi off and it will be in a high-impedance mode and not affect anything; you can check the impedance with a multimeter.
If it isn't in high-impedance mode, then you should worry about power consumption rather than burning it up.
But your design is probably somewhat flawed anyway, and you should be going through a transistor or a diode, or some combination of those things (just guessing without a diagram).
Q:
How can I connect to an external database from a sql statement or a stored procedure?
When running a SQL statement or a stored procedure on a database, can you connect to an external database and pull data from there?
something like:
SELECT a.UserID, b.DataIWantToGet
FROM mydb.Users as a, externaldb.Data as b
A:
You'll need to setup a Linked Server instance. Then you can reference the external database as though it were a SQL Server database.
A:
Yep -- there are two methods: either use the function OPENROWSET, or use linked servers. OPENROWSET is useful for ad-hoc single statements, but if you're going to be doing this regularly, read up on linked servers, as they allow you to do exactly what you've specified in your SQL statement, e.g.,
SELECT database.owner.table for local data
SELECT server.database.owner.table for remote data
And yes, you can mix and match to do joins between local and remote data. Note, though, that you'll need to be careful if you do joins against large tables on the remote server, as the query could take a long time to execute...
A:
Yes, you can. You should take a look at linked servers for starters. You can also use OPENROWSET to hit them directly with no linked server.
Q:
Cloudera Hadoop : File reading/ writing in HDFS
I have Scala and Java code running in Spark on the Cloudera platform whose simple task is to perform a word count on files in HDFS. My question is: what's the difference between reading the file with this code snippet -
sc.textFile("hdfs://quickstart.cloudera:8020/user/spark/InputFile/inputText.txt")
as opposed to reading from local drive over cloudera platform?
sc.textFile("/home/cloudera/InputFile/inputText.txt")
Isn't it true that in both cases the file is stored in HDFS, making no difference to reading/writing either way? Do both read/write to HDFS? I referred to this thread, but got no clue.
Cloudera Quickstart VM illegalArguementException: Wrong FS: hdfs: expected: file:
Could you please tell me at least a single case where using hdfs:// implies something else?
Thank You!
A:
As per my knowledge:
sc.textFile("hdfs://quickstart.cloudera:8020/user/spark/InputFile/inputText.txt") - in this line, hdfs://quickstart.cloudera:8020 (the NameNode host and port) explicitly refers to the HDFS directory or file /user/spark/InputFile/inputText.txt.
sc.textFile("/home/cloudera/InputFile/inputText.txt") - in this line, the path has no scheme, so it is resolved against the configured default filesystem (fs.defaultFS). With a default configuration that is your local Unix/Linux file system; on a cluster where fs.defaultFS points at HDFS, the same call would look for that path in HDFS instead.
So if you want to read/write an HDFS file, you need to use hdfs://namenodeHost:port as per your hadoop configuration.
Hope this clarifies your doubt!
Q:
View is loading very slow for the first time in a Prism(MEF v.5) wpf application
Loading a View the first time takes 2-5 seconds depending on the view's content, but the second time it loads immediately.
The "heaviest" view contains only a RadGridView, but its assembly and all its (empty) data were already loaded from the database during initialization.
private void Navigate(NavigateInfo info)
{
_workingNavigateInfo = info;
_regionManager.RequestNavigate(MAIN_REGION_NAME, new Uri(info.NextViewName, UriKind.Relative), NavigationCompleted);
}
I initialize the view and viewmodel during the app initialization process:
var jobB = _container.GetExportedValue<ViewB>();
var jobBModel = _container.GetExportedValue<ViewBModel>();
jobB.DataContext = jobBModel;
Here an example of my ViewModels
[Export]
[PartCreationPolicy(CreationPolicy.Shared)]
public class ViewBModel : NavigationViewModel
{
private readonly IRegionManager _regionManager;
private readonly NavigationService<ViewB> _navigation;
[ImportingConstructor]
public ViewBModel(IRegionManager regionManager)
{
this._regionManager = regionManager;
this.GotoA = new DelegateCommand<object>(this.ExecuteGotoA);
this.GotoBack = new DelegateCommand<object>(this.ExecuteGotoBack);
_navigation = new NavigationService<ViewB>(regionManager);
}
public DelegateCommand<object> GotoA { get; private set; }
public DelegateCommand<object> GotoBack { get; private set; }
private void ExecuteGotoA(object notused)
{
_navigation.NavigateToPage("ViewA");
}
private void ExecuteGotoBack(object notused)
{
_navigation.NavigateBack();
}
}
and View
[Export]
public partial class ViewB : UserControl
{
public ViewB()
{
InitializeComponent();
}
}
Since navigation didn't work without the [Export("ViewB", typeof(ViewB))] attribute, I created a new MefServiceLocatorAdapter to avoid the "not found" error:
public class MyMefServiceLocatorAdapter : MefServiceLocatorAdapter
{
CompositionContainer _container;
public MyMefServiceLocatorAdapter(CompositionContainer container): base(container)
{
_container = container;
}
protected override object DoGetInstance(Type serviceType, string key)
{
IEnumerable<Lazy<object, object>> exports = this._container.GetExports(serviceType, null, key).ToList();
if ((exports != null) && (exports.Count() > 0))
{
// If there is more than one value, this will throw an InvalidOperationException,
// which will be wrapped by the base class as an ActivationException.
return exports.Single().Value;
}
var extended = this._container.Catalog.Where(x => x.ExportDefinitions.Any(y => y.ContractName.EndsWith(key))).ToList();
if ((extended != null) && (extended.Count() > 0))
{
var type = ReflectionModelServices.GetPartType(extended.Single()).Value;
var serviceTypeIdentity = AttributedModelServices.GetTypeIdentity(type);
return _container.GetExports(serviceType, null, serviceTypeIdentity).First().Value;
}
throw new ActivationException(FormatActivationExceptionMessage(new CompositionException("Export not found"), serviceType, key));
}
}
I found a nice article on how to make navigation faster, Navigate faster with Prism and WPF,
but it doesn't give me any improvements.
I used a performance profiler (Redgate's ANTS) and it shows me that during the first navigation the methods LoadContent and RequestCanNavigateFromOnCurrentlyActiveViewModel (don't understand why) run for about 1 second, but the second time they took less than 1 ms.
I tried to do LoadContent during initialization and add the views to the region, but I couldn't load and add all Views to the region. And unfortunately this strategy didn't give me any improvements.
A:
The reason it runs slow the first time if because of the Telerik grid. Once the grid has been rendered and all assemblies loaded, the second time it loads is much faster. I can almost guarantee you that if you remove the Telerik grid from your view and run your app, the view will load much faster.
Q:
How can i set right to left for asp.net respone.write?
I'm writing a simple web application, and it shows me this output:
That page uses Response.Write; everything is OK, but I want to set the Response.Write output to right-to-left, so that I get this:
How can I solve that? Thanks.
A:
There are several ways to do that, but it is not recommended to use Response.Write() to add content to the page;
instead it is better to use an HTML control or server control.
In the web form:
<div runat="server" id="contentHolder" style="text-align:right;"></div>
In the code behind:
contentHolder.InnerHtml = "your content";
the other way is write content like this:
Response.Write("<div style=\"text-align:right;\">سلام</div>");
Q:
Rails Factory Girl: Create record associated with user but with role
I have the following Factory Girl factory to create a user:
FactoryGirl.define do
factory :user do
first_name 'Colin'
last_name 'Ambler'
password 'dell13a1'
address_zip '60657'
trait :event_operator do
role Role.find_or_create_by(role_title: "Event Operator", name: "event_operator")
email "[email protected]"
end
trait :athletic_trainer do
role Role.find_or_create_by(role_title: "Athletic Trainer", name: "athletic_trainer")
email "[email protected]"
end
end
end
As you can see, I use the traits to define the role of the user and set up a different email address.
I have another model called Event. An event belongs_to an athletic_trainer (that's the name of the association; it just points to the User model), so I use Factory Girl to create the event like this:
FactoryGirl.define do
factory :event do
event_type Event.event_types[:tournament]
sport Event.sports[:lacrosse]
event_title "NXT Cup"
event_logo { Rack::Test::UploadedFile.new(File.join(Rails.root, 'spec', 'support', 'images', 'nxt_cup_logo.png')) }
gender Event.genders[:boys]
event_pay_rate 40
event_participant_numbers 3
event_feature_photo { Rack::Test::UploadedFile.new(File.join(Rails.root, 'spec', 'support', 'images', 'nxt_cup_feature.jpg')) }
event_description "We roll out the red carpet for North America's top youth talent. The nations top youth teams descend on the premier facilities in the Philadelphia region to battle it out and stake claim to one of the summers most highly coveted trophies...the NXT Cup."
event_operator
end
end
As you can see, I add the event_operator at the end of the event factory to specify the user association; however, this way I do not have access to the associated user in the factory.
Is it possible to send the athletic_trainer as a parameter to the event factory? I mean, I want to create the user using Factory Girl and then create the event but associate the user I just created. Is that possible?
A:
You can always override fields by passing them to the build or create methods. To set event_operator to your desired user, you would do this:
user = build(:user)
event = build(:event, event_operator: user)
Q:
Regex matching (greedy/ungreedy?)
I'm experiencing some trouble 'picking' this data 'apart'. Although helper functions etc. are an option, I would really like to solve this using a regex only (and processing the match groups after matching).
This is (part of) the data I have:
Belgium
Belgium M_Foo
Belgium A_Bar
Belgium M_FooBar
Belgium S_Whooptee Doo
Belgium Xxx
Belgium S_Foo Bar
United Kingdom
United Kingdom W_Foo-Bar
United Kingdom M_Yay
United Kingdom Xxx
United Kingdom S_Derp
United Kingdom F_Doh Lorem
United Kingdom S_Ipsum Dolor
United States of America L_Foo
Macedonia F.Y.R. Xxx
Macedonia F.Y.R. S_Foo Bar
Cyprus (Greek) M_Foo
Congo (Democratic Republic of)
Congo (Democratic Republic of) Q_Yolo
Essentially this is a "key / value" sort of array of strings. It contains a countryname (which is not normalized so I can't use hard-coded countrynames or 'lookups', it might as well be some other string than a countryname) and is optionally followed by either keyword Xxx or <random_upcase_char>_<random_text>.
I have come up with the following regex:
^(.+?)(?:\s+(Xxx|[A-Z]_.*)?)
or, small difference in the first matchgroup:
^(.*?)(?:\s+(Xxx|[A-Z]_.*)?)
This works fine for the first strings starting with Belgium. It returns, for these records, the following results:
Group 1 Group 2
================================
Belgium
Belgium M_Foo
Belgium A_Bar
Belgium M_FooBar
Belgium S_Whooptee Doo
Belgium Xxx
Belgium S_Foo Bar
However, the following lines cause trouble:
Group 1 Group 2
================================
United
United
United
United
United
United
United
United
Macedonia
Macedonia
Cyprus
Congo
Congo
What I'd like the regex to do is the following:
Group 1 Group 2
================================================
United Kingdom
United Kingdom W_Foo-Bar
United Kingdom M_Yay
United Kingdom Xxx
United Kingdom S_Derp
United Kingdom F_Doh Lorem
United Kingdom S_Ipsum Dolor
United States of America L_Foo
Macedonia F.Y.R. Xxx
Macedonia F.Y.R. S_Foo Bar
Cyprus (Greek) M_Foo
Congo (Democratic Republic of)
Congo (Democratic Republic of) Q_Yolo
But I can't get the first part to match. I'm pretty sure it has something to do with greedy/ungreedy options for the first matchgroup but after fiddling around for some time I can't get it to work...
I don't care if extra/other/more matchgroups are returned. The regex is intended to be used in a .Net C# application (in case you're wondering which 'dialect' this is).
Any help would be very much appreciated.
A:
Sometimes, with non-greedy matches, the anchoring is extremely important. In this case, anchoring to the end of the line solves the problem. Your regexp should be:
^(.+?)(?:\s+(Xxx|[A-Z]_.*))?$
Note that I also moved the optional (?) quantifier outside one more grouping level, so the space is optional.
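To see the effect of the end-of-line anchor, here is a quick sketch of the corrected pattern run against the sample data (shown with Python's `re` for brevity — the pattern itself is identical in .NET):

```python
import re

# Corrected pattern: the optional (Xxx | X_...) group is anchored to
# end-of-line, which forces the lazy (.+?) to absorb multi-word names.
pattern = re.compile(r'^(.+?)(?:\s+(Xxx|[A-Z]_.*))?$')

samples = [
    "United Kingdom",
    "United Kingdom W_Foo-Bar",
    "United States of America L_Foo",
    "Macedonia F.Y.R. Xxx",
    "Congo (Democratic Republic of)",
]

for line in samples:
    m = pattern.match(line)
    print(repr(m.group(1)), repr(m.group(2)))
```

Group 1 now holds the full country name and Group 2 the optional suffix (or `None` when absent).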
Q:
Change text on html button with class
I have a button that I use to call code-behind in my aspx file. I am trying to change the text on the button, to no avail.
HTML:
<button class="radius button" runat="server" id="buttonUpload" onServerClick="buttonUpload_Click" >Execute SQL</button>
Here is the javascript:
<asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
<%--<asp:ScriptManager ID="scripman1" runat="server" EnablePageMethods="True">
</asp:ScriptManager>--%>
<link rel="stylesheet" href="stylesheets/foundation.min.css">
<link rel="stylesheet" href="stylesheets/app.css">
<script src="javascripts/modernizr.foundation.js"></script>
<style >
@import "css/datatables/demo_page.css";
@import "css/datatables/demo_table.css";
@import "Table Sorter/style.css";
</style>
<script type="text/javascript" src="js/tablesorter/jquery.tablesorter.js"></script>
<script type="text/javascript" src="js/vendor/jquery.js"></script>
<script type="text/javascript">
$(document).ready(function () {
$("#buttonUpload").text('Save');
});
I have tried $("#buttonUpload").html('Save'). Nothing seems to work.
A:
I found out the problem was the runat="server" attribute:
<button class="radius button" runat="server" id="buttonUpload" onServerClick="buttonUpload_Click" >Execute SQL</button>
When you remove that the button's text updates fine.
Q:
Using list vs div for gallery like items display
I'm making a webpage for the company i'm working in, and I've been tasked with creating two display for items in content (it's a search result display page):
Gallery and List.
The thing is that I've used divs, but I've seen in Google Images that they're using unordered lists for displaying results.
I'd like to know if there's any advantage to using lists instead of divs like I did.
Thanks a lot.
Demo: http://jsfiddle.net/USX34/
Expand the HTML result so you can see clearer.
Thanks a lot.
PS: I have posted question here but I'm not certainly sure that it's the right place, but I think it will be closed if posted i
A:
Your images are part of a set of items, and a list is more semantically appropriate, as noted in the comments.
Have a look at the way the excellent Twitter Bootstrap (version 2) handles groups of thumbnails:
<ul class="thumbnails">
<li class="span3">
<a href="#" class="thumbnail">
<img src="http://placehold.it/260x180" alt="">
</a>
</li>
<li class="span3">
<a href="#" class="thumbnail">
<img src="http://placehold.it/260x180" alt="">
</a>
</li>
<li class="span3">
<a href="#" class="thumbnail">
<img src="http://placehold.it/260x180" alt="">
</a>
</li>
<li class="span3">
<a href="#" class="thumbnail">
<img src="http://placehold.it/260x180" alt="">
</a>
</li>
</ul>
Version 3 of Twitter Bootstrap, I believe, would require it to be marked up differently.
A:
I would go with using unordered lists (<ul>/<li>): you have the option to set up jQuery plugins which can sort and filter the lists. If you create dynamic IDs or classes on each list element you can work with them even more. It's better than using just <div> tags.
Q:
Do edits affect rel=nofollow addition?
I answered a question here: Parsing HTML to fix microtypography & glyph issues, and remember remarking to my "somewhat-friend" that I was happy because the links didn't contain rel=nofollow. He's done some good work, the links are relevant, and I think he deserves the search engine juice; I was ecstatic that the SO algorithms agreed.
Recently my post was edited to be in a form that the editor found more attractive. Now the links are all tagged rel=nofollow.
My questions are:
Does editing a post reset or affect the decision to apply rel=nofollow to outgoing links
Even if the links are all to the same place as they were pre-edit?
Can I revert the edit to get the original state back
(or was I just mis-reading code a while back and they've always been tagged)
A:
While I can't give out specific details of how nofollow removal works due to exploitation, I can say that an edit does push links in a post back in to nofollow territory. After a bit of time and community vetting nofollow will be removed again.
Q:
Saving many subsets as dataframes using "for"-loops
This question might be very simple, but I cannot find a good way to solve it:
I have a dataset with many subgroups which need to be analysed all together and on their own. Therefore, I want to use subsets for the groups and use them for the later analysis. Both the definition of the subsets and the analysis should be partly done with loops, in order to save space and to ensure that the same analysis has been done on all subgroups.
Here is an example of my code using an example dataframe from the boot package:
data(aids)
qlist <- c("1","2","3","4")
for (i in length(qlist)) {
paste("aids.sub.",qlist[i],sep="") <- subset(aids, quarter==qlist[i])
}
The variable which contains the subgroups in my dataset is stored as a string; therefore I added the qlist part, which would not be required otherwise.
A:
Make a list of the subsets with lapply:
lapply(qlist, function(x) subset(aids, quarter==x))
Equivalently, avoiding the subset():
lapply(qlist, function(x) aids[aids$quarter==x,])
It is likely the case that using a list will make the subsequent code easier to write and understand. You can subset the list to get a single data frame (just as you can use one of the subsets, as created below). But you can also iterate over it (using for or lapply) without having to construct variable names.
To do the job as you are asking, use assign:
for (i in qlist) {
assign(paste("aids.sub.",i,sep=""), subset(aids, quarter==i))
}
Note the removal of the length() function, and that this is iterating directly over qlist.
Q:
Trouble getting emails to send via php script/html form
I have created a simple html form to send an email using php. Currently, whenever I try to send the info I just get redirected to my process.php page with a browser error (myserver.com unable to handle this request).
I have already tried sending a test email by making my php page just the mail() function, and it does indeed work, so it has to do with my code somewhere. I'm sure it's something simple, so here is my code.
HTML (contact.html):
<!-- Contact form -->
<form id="form" action="process.php" method="post">
<div>
<label for='name'><span class='required'></span></label>
<input id="Field1" type='text' name='name' placeholder='Type your Email Here' required/>
</div>
<div>
<label for='message'><span class='required'></span></label>
<textarea id="Field2" name='message' placeholder="Type a Message for us Here" required></textarea>
</div>
<div>
<button type='submit'>SEND MESSAGE</button>
</div>
</form>
PHP (process.php):
<?php
//if "email" variable is filled out, send email
if(isset($_POST['name']) && isset($_POST['message'])) {
//Email information
$admin_email = "[email protected]";
$email = $_POST['name'];
$subject = "Email from contact form";
$comment = $_POST['message'];
//send email
if(mail($admin_email, $subject, $comment, "From:" . $email)) {
echo '<p>Success</p>';
header('Location: contact.html');
} else {
echo '<p>Error sending message</p>';
}
} else {
echo '<p>Please fully fill out the form</p>';
}
?>
A:
You have a syntax error in the PHP code you have shared - you are missing the ; at the end of the $subject = "Email from contact form" line.
Please see below -
<?php
//if "email" variable is filled out, send email
if (isset($_POST['name'], $_POST['message'])) {
//Email information
$admin_email = "[email protected]";
$email = $_POST['name'];
$subject = "Email from contact form";
$comment = $_POST['message'];
// //send email
if(mail($admin_email, $subject, $comment, "From:" . $email)) {
echo '<p>Success</p>';
header('Location: contact.html');
} else {
echo '<p>Error sending message</p>';
}
} else {
echo '<p>Please fully fill out the form</p>';
}
?>
Q:
Checking input of QT inputs
I have a fairly complex dialog whose inputs are numbers with different allowed ranges. I was wondering what the cleanest pattern is to guarantee that my QLineEdits have correct input values.
The obvious way of doing this seems to be checking input values when the user clicks the OK button. The problem I have is that some of the GUI controls depend on the values of other inputs, so the code gets a bit nasty by making me branch the logic of the controls for all the cases where an input has a wrong value.
Is there a nice pattern for this type of situation?
I was thinking about subclassing QLineEdit and using the focusOutEvent to check the dialog's input. If the input is incorrect, I would default the value and trigger the logic. This would guarantee that each line edit is responsible for its own validation. Is there an obvious pitfall in doing this?
QValidators are awesome; the problem is when their state is Intermediate.
A:
Use the signals provided by QLineEdit and build a small validation class of slots. It'll be easier than subclassing them directly, and allow you more fine grained control.
Q:
Parceler: Unable to find read/write generator for type MutableLiveData in Android ViewModel
I'm getting a compilation build error when I add a MutableLiveData object to my view model in my Android Studio project. I'm not calling getUser() or setUser() anywhere yet and I added the exact same object to a different view model in my project and haven't gotten an error, so I'm not sure what the problem is.
Error:
error: Parceler: Unable to find read/write generator for type androidx.lifecycle.MutableLiveData<com.example.demometvtest1.User> for com.example.demometvtest1.RegisterViewModel.user
RegisterViewModel.java:
import androidx.lifecycle.MutableLiveData;
import androidx.lifecycle.ViewModel;
@Parcel
public class RegisterViewModel extends ViewModel {
public MutableLiveData<User> user = new MutableLiveData<>();
public void setUser(User user) {
this.user.setValue(user);
}
public MutableLiveData<User> getUser() {
return this.user;
}
}
A:
The problem is the @Parcel annotation: you are trying to automatically generate writeToParcel() & createFromParcel(), and the annotation processor doesn't find a read/write implementation for MutableLiveData (which is not parcelable).
Remove the annotation; if you still need the class to be parcelable, make it implement the Parcelable interface and write your own implementations of the writeToParcel() & createFromParcel() methods.
Q:
Necessary conditions for this integral to vanish.
Suppose $f(s)$ is a holomorphic function of a complex variable $s$. What are the necessary requirements for $f$ such that
$\int_1^\infty f(s) \mathrm {d}s = 0$ ?
assuming that the integral is complex ?
A:
This might help: You can calculate integrals such as $\int_1^\infty f(s) ds$ with the residue theorem. See this example for an integral of the form $\int_{-\infty}^\infty g(s) ds$. This connects the value of the integral with the function's residues. By taking a certain contour you can establish certain necessary conditions for the residues of $f$ (when $f$ is nice enough so that you can use the same method as in the mentioned example). You can find in this pdf and in https://en.wikipedia.org/wiki/Methods_of_contour_integration additional examples for this method.
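As a quick sanity check (my own illustration, not part of the linked examples), here is a concrete holomorphic function for which the integral along the real axis vanishes — beyond convergence, no pointwise condition on $f$ is forced; the values merely have to cancel:

```latex
\int_1^\infty \left(\frac{1}{s^2}-\frac{2}{s^3}\right)\mathrm{d}s
  \;=\; \left[-\frac{1}{s}+\frac{1}{s^2}\right]_1^\infty
  \;=\; 0-\left(-1+1\right) \;=\; 0 .
```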
Q:
what is the difference between perspective origin and transform origin in css
As I understand it, how it looks is the perspective, and how it actually moves around in its space is the origin.
I am clearer on transform-origin and not so much on perspective-origin.
Could someone give me links or an explanation of what perspective is and how its origin matters?
A:
perspective is used to set the view angle for an element's children.
perspective-origin is the point in space from where you are looking at the element.
transform is the simple coordinate thing which is used to rotate/translate an object (element).
transform-origin sets the point about which you are translating/rotating an object.
For instance, say you want to rotate a div 45 degrees about the x-axis (horizontal axis).
You use transform: rotateX(45deg);
The div will rotate, but you still see a rectangle on screen, as you are looking at it from the z-axis with no perspective.
But when you add perspective, you will see the 3D view of the div.
The center of your eye is set by perspective-origin.
The default value is 50% 50%, meaning center.
Increasing or decreasing the x or y value will move your "eye" accordingly.
transform-origin, on the other hand, sets the point of the transform. For example, if you need to rotate a rectangle about any point other than its center, then you use the transform-origin property.
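A minimal sketch tying the four properties together (the .scene/.card class names and the numeric values are made up for illustration):

```css
.scene {
  perspective: 600px;           /* give children a 3D view; smaller = more dramatic */
  perspective-origin: 25% 50%;  /* the "eye" looks from left of center */
}
.scene .card {
  transform: rotateX(45deg);    /* rotate 45 degrees about the x-axis... */
  transform-origin: top left;   /* ...pivoting on the top-left corner, not the center */
}
```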
Q:
Groovy : Getting subLists from a List, based on a field value
How should I group a list of user objects based on their type? For example, I have a list as below.
List userList = [userA, userB, userC, userD, userE, userF, userH]
And I would like to convert this list into a list of lists,
List userList = [ [userA, userB], [userC, userD, userE, userF], [userH] ]
grouped based on the type field of the user.
A:
You can use the groupBy() method:
Map map = users.groupBy { it.type }
This returns a map of user objects grouped by type.
The map content looks like this:
[
typeA : [list of users of typeA],
typeB : [list of users of typeB],
..
]
This map can be easily transformed to a list like you need using the collect method:
List l = m.collect { key, value -> value }
Now l is a list that looks like this:
[ [list of users of typeA], [list of users of typeB], .. ]
All in one line:
def list = users.groupBy { it.type } .collect { k, v -> v }
Q:
What would it take to build a simple proxy server using ServiceStack?
I'm wondering how difficult it would be to build a proxy service upon/with ServiceStack. Considering how fast ServiceStack is with Redis / serialization / etc., and how simple it is to implement the stack in general, this seems very tempting. Anyone else attempt this? How difficult would this be?
A:
A new Proxy Feature was added in ServiceStack v4.5.10 that greatly simplifies creating a proxy where you can create a proxy by just registering a plugin, e.g:
Plugins.Add(new ProxyFeature(
matchingRequests: req => req.PathInfo.StartsWith("/proxy"),
resolveUrl:req => $"http://remote.server.org" + req.RawUrl.Replace("/proxy","/"))
{
// Use TransformResponse to rewrite response returned
TransformResponse = async (res, responseStream) =>
{
using (var reader = new StreamReader(responseStream, Encoding.UTF8))
{
var responseBody = await reader.ReadToEndAsync();
var replacedBody = responseBody.Replace(
"http://remote.server.org",
"https://external.domain.com/proxy");
return MemoryStreamFactory.GetStream(replacedBody.ToUtf8Bytes());
}
}
})
This will forward all requests starting with /proxy in your ServiceStack instance to http://remote.server.org.
Manually Creating a Reverse Proxy
The first entry in ServiceStack's Request Pipeline lets your register Raw ASP.NET IHttpHandlers which can execute raw ASP.NET Handlers and take over executing the request from ServiceStack.
This will let you use an ASP.NET IHttpHandler proxy like this one by registering it in your AppHost, e.g:
this.RawHttpHandlers.Add(httpReq =>
httpReq.PathInfo.StartsWith("/proxy")
? new ReverseProxy()
: null);
This would tell ServiceStack to execute requests starting with /proxy with the custom ReverseProxy IHttpHandler.
If you want to use it in ServiceStack's self-hosts you would also have to change ReverseProxy to also inherit from ServiceStack's convenient HttpAsyncTaskHandler base class (or just implement IServiceStackHandler), e.g:
public class ReverseProxy : HttpAsyncTaskHandler
{
public override void ProcessRequest(IRequest req, IResponse res,
string operationName)
{
var httpReq = (IHttpRequest)req; //Get HTTP-specific Interfaces
var httpRes = (IHttpResponse)res;
// Create a connection to the Remote Server to redirect all requests
var server = new RemoteServer(httpReq, httpRes);
// Create a request with same data in navigator request
HttpWebRequest request = server.GetRequest();
// Send the request to the remote server and return the response
HttpWebResponse response = server.GetResponse(request);
byte[] responseData = server.GetResponseStreamBytes(response);
// Send the response to client
res.ContentType = response.ContentType;
res.OutputStream.Write(responseData, 0, responseData.Length);
server.SetContextCookies(response); // Handle cookies to navigator
res.EndHttpHandlerRequest(); // End Request
}
public override void ProcessRequest(HttpContextBase context)
{
var httpReq = context.ToRequest("CustomAction");
ProcessRequest(httpReq, httpReq.Response, "CustomAction");
}
....
}
You would also have to refactor the implementation of RemoteServer in the example to work with ServiceStack's IHttpRequest / IHttpResponse interfaces.
If it's needed you can also access the underlying ASP.NET (or HttpListener) request objects with:
var aspNetReq = httpReq.OriginalRequest as HttpRequestBase;
var aspNetRes = httpRes.OriginalResponse as HttpResponseBase;
Q:
taking output 2D array in matrix form
I need to print out/create (in .CSV format) an array of size 1940 (rows) by 512 (columns). I have a large program where this matrix is created using several other matrices. On execution, all the data ends up on one line. I need this data in .csv or Excel format to use it.
Can anybody tell me what to do here?
The printf("\n") command works for printing on screen in matrix form, but in the output file it's all on one line.
for ( i = 0; i < 1940; i++ ) {
for ( j = 0; j < 512; j++ ) {
fprintf(pFile,",%d\t ",X7[i][j]);
//printf("%d\t ", X8[i][j]);
}
printf("\n");
}
A:
You need to write the "newline" to the file, not to the console.
Instead of
printf("\n");
write
fprintf(pFile, "\n");
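Putting it together, a small self-contained sketch of the write loop (the function name and dimensions are illustrative — substitute the real 1940x512 array; note the comma is emitted between values rather than before the first one, so the CSV has no leading commas):

```c
#include <stdio.h>

/* Write one matrix row per CSV line; the newline goes to the FILE*,
   not to the console. */
void write_matrix_csv(FILE *pFile, int rows, int cols, int m[rows][cols])
{
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++)
            fprintf(pFile, j ? ",%d" : "%d", m[i][j]); /* comma between values only */
        fprintf(pFile, "\n"); /* end the row in the file */
    }
}
```

Opening the output file with `fopen("X7.csv", "w")` and passing it as `pFile` then gives a file Excel can open directly.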
Q:
Get screen diagonal size in Qt
Is there any way to get the size of the diagonal of the screen (in inches) in Qt?
I need it to calculate how big my pixels are. Or maybe I can get the pixel size directly?
A:
In order to get screen size, you can use QScreen::physicalSize.
Also, you can get the number of pixels per inch, by using QScreen::physicalDotsPerInch
QScreen *screen = qApp->screens().at(0);
qDebug() << screen->geometry() << screen->physicalSize() << screen->physicalDotsPerInch();
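Since QScreen::physicalSize() returns millimetres and the question asks for inches, a small conversion is still needed. A sketch of that arithmetic (plain C++; the 310x174 mm example values below are made up):

```cpp
#include <cmath>

// Convert a physical screen size in millimetres (as returned by
// QScreen::physicalSize()) to a diagonal length in inches.
// 1 inch = 25.4 mm; the diagonal is the hypotenuse of the two sides.
double diagonalInches(double widthMm, double heightMm)
{
    return std::hypot(widthMm / 25.4, heightMm / 25.4);
}
```

For example, a 310 x 174 mm panel comes out at roughly 14 inches.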
Q:
How to force angular to use jqlite or manually reference the jquery object
I'm stuck with an incompatible version of jQuery in my Angular app. I can't upgrade jQuery, but I can load the latest jQuery version side by side using the noConflict method. However, I can't seem to find a way to force Angular.js to use the newer jQuery version. Is there such a method available?
Flow:
<head>
<script src="jQuery 1.3.2">
<script src="old jquery code">
</head>
<body>
…
<script src="jQuery 1.10.2"/>
<script>
var newjquery = jQuery.noConflict();
</script>
<script src="angular.js"/>
<script>
// angular code
</script>
</body>
A:
From the docs:
Does Angular use the jQuery library?
Yes, Angular can use jQuery if
it's present in your app when the application is being bootstrapped.
If jQuery is not present in your script path, Angular falls back to
its own implementation of the subset of jQuery that we call jQLite.
Try to change the position in which you import the scripts:
new jquery
angular
older jquery
I am not sure it would work, but from my understanding Angular should use the already present jQuery.
A:
With Angular 1.4.x you can use the new ng-jq directive to set which jQuery version to use, or to switch to jQLite (Angular's internal jQuery).
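For illustration, assuming the noConflict setup from the question (where the newer jQuery was stored in a global named newjquery), ng-jq takes the name of the window property holding jQuery, and an empty value forces jqLite:

```html
<!-- sketch: tell Angular 1.4+ which global jQuery to use; ng-jq="" would force jqLite -->
<html ng-app ng-jq="newjquery">
```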
Q:
jQuery function only inside a specific div ID
I have the following function that I only want to run inside <div id="imgWrapper">
I have tried a few things with parent but I cannot seem to figure this out.
<script type="text/javascript" language="javascript">
$(document).ready(
function () {
$("img").each(function () {
var src = $(this).attr("src");
if (src.substring(0, 1) == "/")
$(this).attr("src", "http://serverName.com/" + src.substring(1))
});
}
);
</script>
A:
If you put space in the selector it's a descendant-selector:
$(document).ready(
function () {
$("#imgWrapper img").each(function () { // <<<<<<<=======
var src = $(this).attr("src");
if (src.substring(0, 1) == "/")
$(this).attr("src", "http://serverName.com/" + src.substring(1))
});
}
);
descendant-selector docs:
Description: Selects all elements that are descendants of a given ancestor.
Q:
What were the odds that ʻOumuamua passed so close to the Sun?
ʻOumuamua passed its closest point at about 0.25 AU from the Sun.
Has anyone researched what the odds were that the first-ever spotted interstellar object passed so close to the Sun?
Interstellar space being almost totally empty, I would intuitively suppose that such odds would be extremely low.
A:
How could it be found otherwise? It is currently (November 2018) only at about the orbital distance of Mars and cannot be detected by the world's largest telescopes.
Such objects are small and only seen in reflected light - they have to get close to the Sun (and Earth) to be seen. There may well be a substantial population of interstellar rocks passing through the solar system unseen at greater distances.