Q:
Using intermediate value arguments at limits rather than finding explicit bounds
Again, I apologize for what looks like a very narrow question. But there's possibly a general principle at work here that I'm not grasping.
I understand the answer provided for exercise 3 in chapter 7 of Spivak's Calculus (4E, p.130), but wonder if another approach might work (and be closer to the spirit of the chapter). For example for (ii), to show that $$\sin x = x-1$$ has a solution, can't I take $$f(x)=\sin x - x+1$$ and argue that for large $|x|$, $x>0 \Rightarrow f(x)<0$ and $x<0 \Rightarrow f(x)>0$, so that I can use Theorem 1 (p. 122: $f(x)$ continuous on $[a,b]$ and $f(a)<0<f(b) \Rightarrow \exists x \in [a,b] : f(x)=0$ )?
A:
I am currently teaching a course from Spivak's Calculus, and I think your solution to this problem is entirely correct.
You considered the auxiliary function $f(x) = \sin x - x + 1$ and showed that it is continuous and takes negative values for sufficiently large, positive $x$ and also positive values for sufficiently large, negative $x$. So by IVT it must take on the value $0$.
You can be a little more explicit about the sufficiently large business though. For instance, you know that $-1 \leq \sin x \leq 1$ for all $x$, so
$-x \leq f(x) \leq -x + 2$.
From this one sees that $f(x)$ is non-negative for all $x \leq 0$ and non-positive for all $x \geq 2$. In this sort of problem, the more complicated the function gets, the more you want to call on "general principles" in order to give you the estimates you need. For instance, if you had a very complicated polynomial $p(x)$ of degree $19$ in place of $x$, you probably don't want to give explicit values but just use the fact that as $x$ approaches $\pm \infty$, so does $p(x)$, while $\sin x$ stays bounded.
Final comment: to be sure, you don't have to find explicit values for this problem. But in order to be best understood you should probably say something in the way of justification of what happens for sufficiently large $x$. Note that in the above paragraph I gave a less explicit answer for a more general class of problems, and as you know there are other problems in the text which are like the one I made up above. But -- and this is an issue of effective mathematical writing and communication rather than mathematical correctness -- there is a sort of principle of economy at work here. In order to be best understood, it's generally a good idea to use the simplest arguments you can think of that justify a given claim, and a lot of people find more explicit arguments to be simpler than less explicit ones. Anyway, not here but sometimes you do have to be explicit and concrete, so it's a good idea to cultivate the ability to do so...
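As a quick numerical sanity check of the explicit bound above (a sketch added for illustration, not part of Spivak's text; the function name `f` follows the discussion):

```python
import math

def f(x):
    # auxiliary function from the answer: f(x) = sin(x) - x + 1
    return math.sin(x) - x + 1

# the bound -x <= f(x) <= -x + 2 forces a sign change on [0, 2]
print(f(0))  # sin(0) + 1 = 1 > 0
print(f(2))  # sin(2) - 1, about -0.09 < 0
```

Since f is continuous and changes sign on [0, 2], the Intermediate Value Theorem gives a root there, exactly as in the argument above.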
| {
"pile_set_name": "StackExchange"
} |
Q:
Hit by a vehicle at a crosswalk
This happened today in L.A. County, in Los Angeles (just outside Culver City). I was walking my bicycle down the sidewalk on the left side of the road, saw the green light and white pedestrian icon. I hopped on and proceeded to bike down the crosswalk slowly. A man was stopped at the red light, looking left for about five seconds. I assumed he was waiting, since he probably saw the pedestrians behind me and the white light. He turned right without looking right and collided with me. My bike slammed against the road, and my leg twisted a bit. I had some scrapes and scratches, but I didn't land on my head. I got his information, and a couple of people saw it.
He apologized, and admitted that he didn't look before turning on red.
I read online both that it's illegal to cycle through a crosswalk at an intersection and that it isn't illegal. (So which is it?)
My bike has some damage and I want to get my foot checked out, but what exactly is the law in this case? I was walking, and then once it was green with the pedestrian icon, I hopped on and cycled.
Thanks
A:
I don't know the specific laws in CA for riding in a crosswalk, but here in AZ (both CA and AZ are in the USA) it is actually ambiguous. While on a bicycle you are no longer a pedestrian, which in AZ means that you aren't afforded the legal protections of a pedestrian, so in cases like yours there is no legal recourse.
However, riding a bicycle on a sidewalk here in AZ is illegal, and for good reason.
Section 275 of the CA motor vehicle code states that the crosswalk is an extension of the sidewalk, so if riding on the sidewalk is legal where you live, then you had the right of way.
After looking around, it appears that CA has a hodgepodge of local ordinances pertaining to the legality of riding on the sidewalk.
Culver City ordinance §7.04.250 RIDING ON SIDEWALKS. states
A. No person shall ride a bicycle upon a sidewalk within any
business district or upon the sidewalk adjacent to any public school
building, church, recreation center or playground or upon a walkway
specifically designated by resolution of the City Council as closed to
all vehicular or bicycle traffic. B. Whenever any person is riding a
bicycle upon a sidewalk such person shall yield the right-of-way to
any pedestrian and when overtaking and passing a pedestrian, after
giving an audible signal, shall at all times pass to the left of such
pedestrian.
while the ordinance for LA county, §15.76.080 Driving or riding vehicles on sidewalk. states
A person shall not operate any bicycle or any vehicle or ride any
animal on any sidewalk or parkway except at a permanent or temporary
driveway or at specific locations thereon where the commissioner finds
that such locations are suitable for, and has placed appropriate signs
and/or markings permitting such operation or riding.
and for the City of Los Angeles SEC. 56.15. BICYCLE RIDING – SIDEWALKS. states
No person shall ride, operate or use a bicycle, unicycle, skateboard, cart, wagon, wheelchair, rollerskates, or any other device
moved exclusively by human power, on a sidewalk, bikeway or boardwalk
in a willful or wanton disregard for the safety of persons or
property. (Amended by Ord. No. 166,189, Eff. 10/7/90.)
No person shall ride, operate or use a bicycle or unicycle on Ocean Front Walk between Marine Street and Via Marina within the City
of Los Angeles, except that bicycle or unicycle riding shall be
permitted along the bicycle path adjacent to Ocean Front Walk between
Marine Street and Washington Boulevard. (Amended by Ord. No. 153,474,
Eff. 4/12/80.)
No person shall operate on a beach bicycle path, or on an area of a beach which is set aside for bicycle or unicycle use, any bicycle
or tricycle which provides for side-by-side seating thereon or which
has affixed thereto any attachment or appendage which protrudes from
the side of the bicycle or tricycle and is used or designed to carry
another person or persons thereon.
For the purposes of this section motorized bicycles as defined by Section 406 of the California Vehicle Code shall be included within
the terms “motor vehicle” as defined in Section 415 of the Vehicle
Code and as used in Section 21663 of the Vehicle Code.
So it appears that it depends...
Q:
Rails nested routes not working as expected
I have nested resources in my routes like so. These work perfectly on my other rails 5 app, but not on my rails 6 app. I cannot figure out why it recognizes only the first level of nested stuff.
resources :blogs do
member do
put 'like', to: 'blogs#upvote'
put 'dislike', to: 'blogs#downvote'
end
resources :comments
member do
put 'like', to: 'comments#upvote'
put 'dislike', to: 'comments#downvote'
end
resources :notations
end
Here is what rake routes gives me:
blogs_user GET /users/:id/blogs(.:format) users#blogs
like_blog PUT /blogs/:id/like(.:format) blogs#upvote
dislike_blog PUT /blogs/:id/dislike(.:format) blogs#downvote
blog_comments GET /blogs/:blog_id/comments(.:format) comments#index
POST /blogs/:blog_id/comments(.:format) comments#create
new_blog_comment GET /blogs/:blog_id/comments/new(.:format) comments#new
edit_blog_comment GET /blogs/:blog_id/comments/:id/edit(.:format) comments#edit
blog_comment GET /blogs/:blog_id/comments/:id(.:format) comments#show
PATCH /blogs/:blog_id/comments/:id(.:format) comments#update
PUT /blogs/:blog_id/comments/:id(.:format) comments#update
DELETE /blogs/:blog_id/comments/:id(.:format) comments#destroy
PUT /blogs/:id/like(.:format) comments#upvote
PUT /blogs/:id/dislike(.:format) comments#downvote
notations GET /blogs/:id/notations(.:format) notations#index
POST /blogs/:id/notations(.:format) notations#create
new_notation GET /blogs/:id/notations/new(.:format) notations#new
edit_notation GET /blogs/:id/notations/:id/edit(.:format) notations#edit
notation GET /blogs/:id/notations/:id(.:format) notations#show
PATCH /blogs/:id/notations/:id(.:format) notations#update
PUT /blogs/:id/notations/:id(.:format) notations#update
DELETE /blogs/:id/notations/:id(.:format) notations#destroy
On my other app, for example, it would produce
/blogs/:blog_id/comments/:id/like
A:
I made a copy of your routes and replicated them in two apps (Rails 5 and Rails 6), and both produced the same routes (without the third nesting level). If you want the /blogs/:blog_id/comments/:id/like route, you must make a small change.
resources :blogs do
member do
put 'like', to: 'blogs#upvote'
put 'dislike', to: 'blogs#downvote'
end
resources :comments do
member do
put 'like', to: 'comments#upvote'
put 'dislike', to: 'comments#downvote'
end
end
resources :notations
end
Q:
Merging more than two cells using xlwt python
I am trying to merge 4 columns in a row of an excel file. This link has some suggestions on how to do it for two rows: How to write a cell with multiple columns in xlwt?
The line it uses to merge two columns and two rows:
sheet.write_merge(0, 0, 0, 1, 'Long Cell')
However, the same syntax does not work when I try to merge 4 columns only. My code:
sh.write_merge(0,4,0,5,0,6,0,7,'Start point\nCo-ordinates')
I basically need something like this.
A:
maybe this?
sh.write_merge(0, 0, 4, 7, 'Start point\nCo-ordinates')
Q:
Pandas DataFrame Style for this condition
How can I use df.style for subsets of a DataFrame based on this given condition?
df = DataFrame({'A':[3,4,5],'B':[9,10,15],'C':[3,4,5]})
df
A B C
0 3 9 3
1 4 10 4
2 5 15 5
df1 = df.eq(df.iloc[:, 0], axis=0)
df1
A B C
0 True False True
1 True False True
2 True False True
I want to highlight the cells in which it is False, but make the changes to df, not just df1.
I have edited the question. It is different from the previous questions because they only deal with element-wise coloring, while I want to color based on the condition above.
A:
You need to create a DataFrame of background colors with style.Styler.apply:
def highlight(x):
c1 = 'background-color: red'
c2 = ''
m = x.eq(x.iloc[:, 0], axis=0)
df1 = pd.DataFrame(c2, index=x.index, columns=x.columns)
#add color red by False values
df1 = df1.where(m, c1)
return df1
df.style.apply(highlight, axis=None)
Q:
What will be the SSIS Package Path?
I have created an SSIS package and ran it; it worked fine. Then I deployed the project, and it shows me the package in SSISDB.
There I executed the package and it works fine. Then I tried to execute it through an ASP.NET page using this code:
Application app = new Application();
string path = @" I don't Know the package path";
Package package = null;
try
{
package = app.LoadPackage(path, null);
package.Execute();
}
catch (Exception)
{
throw;
}
Kindly guide me: how do I get the package path?
A:
The packages live in a database, you can run them by calling a stored procedure to execute them, there's a good walkthrough here
Q:
VS2012: An Error occurred while finding the resource dictionary
I am working on a Windows Phone 8 project, project was created using C# Windows Phone Blank App from templates, I have added a simple ResourceDictionary entitled GPResources.xaml(manually created) and then referenced that file in App.xaml, the file I created is located in the root folder, code below:
<!-- VS2012 saying FontFamily and FontSize properties are not recognized or are not accessable, not sure why... ANYONE?-->
<Style TargetType="{StaticResource GPFontFamily}">
<Setter Property="FontFamily" Value="CalifR.ttf"/>
</Style>
<Style TargetType="{StaticResource GPFontSizeSmall}">
<Setter Property="FontSize" Value="12"/>
</Style>
<Style TargetType="{StaticResource GPFontSizeMedium}">
<Setter Property="FontSize" Value="18"/>
</Style>
<Style TargetType="{StaticResource GPFontSizeLarge}">
<Setter Property="FontSize" Value="22"/>
</Style>
App.Xaml:
<ResourceDictionary x:Key="GPResources">
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary Source="GPResources.xaml"/>
</ResourceDictionary.MergedDictionaries>
</ResourceDictionary>
</Application.Resources>
VS2012 keeps giving me the error: "An Error occurred while finding the resource dictionary" in the App.xaml file, I cannot think of what the problem is, can anyone point me in the right direction?
Cheers
A:
I think you created the Resource Dictionary in the wrong way. Following is an example of how to define a style named "GPTextBlock" that styles any TextBlock where it is applied to have FontSize = 12 and display text in red.
<ResourceDictionary
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<Style TargetType="TextBlock" x:Key="GPTextBlock">
<Setter Property="Foreground" Value="Red"/>
<Setter Property="FontSize" Value="12"/>
</Style>
<Style ...>
...
...
</Style>
....
....
</ResourceDictionary>
The overall structure of your GPResources.xaml content should look like the sample structure above. This is one of the resources I found explaining ResourceDictionary that you may want to take a look at: MSDN Blog
Q:
How can two different class extend same class return same type in Scala
I'd like to be able to do something like this:
Suppose we have two Scala packages A and B. In B, I have two classes, like this:
class Structure{
case class StructureA(x:String, y:String)
case class StructureB(x:Int, y:Int)
}
class OperationB extends Structure {
def optB(someData:String): Array[(StructureA,StructureB)] = {...}
}
and in A, I have one class like this:
import B.Structure
class OperationA extends Structure {
def optA(data:Array[(StructureA,StructureB)]): Array[(StructureA,StructureB)] = {...}
}
And below is my project entry:
import B.{Structure,OperationB }
import A.OperationA
object Main {
def main(args: Array[String]): Unit = {
val BInstance = new OperationB()
val BResult = BInstance.optB(someData)
val AInstance = new OperationA()
val AResult = AInstance.optA(BResult)
}
}
The problem is:
BResult is of type Array[(StructureA,StructureB)], but it can't be compiled; the error message is
type mismatch, expected Array[(A.StructureA,A.StructureB)] actual Array[(B.StructureA,B.StructureB)]
Actually, if I add this code, it can be compiled successfully, but I think that is not the best solution.
import B.{Structure,OperationB }
import A.OperationA
object Main {
def main(args: Array[String]): Unit = {
val BInstance = new OperationB()
val AInstance = new OperationA()
// here convert it into proper type.
val BResult = BInstance.optB(someData).map{
case (a,b) => (a.asInstanceOf[AInstance.StructureA],b.asInstanceOf[AInstance.StructureB])
}
val AResult = AInstance.optA(BResult)
}
}
It has been bothering me for a long time; can anyone help me?
A:
Your code makes each instance of Structure get its own StructureA and StructureB types. Since they don't access Structure, there's no point doing so. Trying to extend Structure to save on imports is just a bad idea.
Instead
package structure // or could be B.structure, or directly B
case class StructureA(x:String, y:String)
case class StructureB(x:Int, y:Int)
// in A
package A
import structure._
class OperationA {
def optA(data:Array[(StructureA,StructureB)]): Array[(StructureA,StructureB)] = {...}
}
// in B
package B
import structure._
class OperationB {
def optB(someData:String): Array[(StructureA,StructureB)] = {...}
}
Another option is
object Structure {
case class StructureA(x:String, y:String)
case class StructureB(x:Int, y:Int)
}
Q:
Disable textboxes on check changed
I have a checkbox that disables 2 textboxes when checked and enables them when unchecked. Here is the JavaScript:
function enableField() {
prePracticeCodeTextBox = document.getElementById('prePracticeCodeTextBox');
preContactTextBox = document.getElementById('preContactTextBox');
checkTheBox = document.getElementById('CheckBox1');
if (checkTheBox.checked == true) {
prePracticeCodeTextBox.disabled = false;
preContactTextBox.disabled = false;
}
else {
prePracticeCodeTextBox.disabled = true;
preContactTextBox.disabled = true;
}
}
Here is the HTML:
<dl>
<dt><label for="CheckBox1">PreAnalytical?</label></dt>
<dd> <asp:CheckBox ID="CheckBox1" runat="server" CausesValidation="false"
Visible="true" OnCheckChanged="enableField()"/></dd>
</dl>
<dl>
<dt><label for="prePracticeCodeTextBox">Practice Code:</label></dt>
<dd><asp:TextBox ID="prePracticeCodeTextBox" runat="server" Enabled="False" /></dd>
</dl>
<dl>
<dt><label for="preContactTextBox">Contact:</label></dt>
<dd><asp:TextBox ID="preContactTextBox" runat="server" Enabled="False" /></dd>
</dl>
The JavaScript function is not being called at all.
What am I doing wrong?
A:
Try using onclick instead. Use the following code to register it in your code-behind:
CheckBox1.Attributes.Add("onclick", "enableField();");
By the way, you won't be able to reach the elements the way you do now in an ASP.NET Web Forms application with default settings. You need to use the ClientIDs of the elements, which will be rendered:
function enableField() {
prePracticeCodeTextBox = document.getElementById('<%=prePracticeCodeTextBox.ClientID%>');
preContactTextBox = document.getElementById('<%=preContactTextBox.ClientID%>');
checkTheBox = document.getElementById('<%=CheckBox1.ClientID%>');
if (checkTheBox.checked == true) {
prePracticeCodeTextBox.disabled = false;
preContactTextBox.disabled = false;
}
else {
prePracticeCodeTextBox.disabled = true;
preContactTextBox.disabled = true;
}
}
If you are developing on .net 4, read the below article :
http://www.tugberkugurlu.com/archive/we-lovenet-4-clean-web-control-ids-with-clientidmode-property-to-static-and-predictable
Q:
How can i get the Avro schema object from the received message in kafka?
I try to publish/consume my java objects to kafka. I use Avro schema.
My basic program works fine. In my program i use my schema in the producer (for encoding) and consumer (decoding).
If I publish different objects to different topics (e.g. 100 topics), at the receiver I do not know what type of message I received. I would like to get the Avro schema from the received bytes and use it for decoding.
Is my understanding correct? If so, how can I retrieve it from the received object?
A:
You won't receive the Avro schema in the received bytes -- and you don't really want to. The whole idea with Avro is to separate the schema from the record, so that it is a much more compact format. The way I do it, I have a topic called Schema. The first thing a Kafka consumer process does is to listen to this topic from the beginning and to parse all of the schemas.
Avro schemas are just JSON string objects -- you can just store one schema per record in the Schema topic.
As to figuring out which schema goes with which topic, as I said in a previous answer, you want one schema per topic, no more. So when you parse a message from a specific topic you know exactly what schema applies, because there can be only one.
If you never re-use the schema, you can just name the schema the same as the topic. However, in practice you probably will use the same schema on multiple topics. In which case, you want to have a separate topic that maps Schemas to Topics. You could create an Avro schema like this:
{"name":"SchemaMapping", "type":"record", "fields":[
{"name":"schemaName", "type":"string"},
{"name":"topicName", "type":"string"}
]}
You would publish a single record per topic with your Avro-encoded mapping into a special topic -- for example called SchemaMapping -- and after consuming the Schema topic from the beginning, a consumer would listen to SchemaMapping and after that it would know exactly which schema to apply for each topic.
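A minimal sketch of the consumer side of this design (topic and schema names here are hypothetical, and plain dicts stand in for decoded Avro records): after replaying the SchemaMapping topic, the consumer holds a topic-to-schema lookup, and Avro schemas themselves are just JSON strings.

```python
import json

# an Avro schema is a JSON string; the mapping record type from the answer
schema_mapping_schema = json.loads("""
{"name":"SchemaMapping", "type":"record", "fields":[
    {"name":"schemaName", "type":"string"},
    {"name":"topicName", "type":"string"}
]}
""")

# hypothetical records replayed from the SchemaMapping topic
mapping_records = [
    {"schemaName": "UserEvent", "topicName": "user-clicks"},
    {"schemaName": "UserEvent", "topicName": "user-views"},
]

# topic -> schema name; decoding then needs no schema inside the payload
topic_to_schema = {r["topicName"]: r["schemaName"] for r in mapping_records}
print(topic_to_schema["user-clicks"])  # UserEvent
```

The lookup is built once at startup, which is why the answer has the consumer read the Schema and SchemaMapping topics from the beginning before touching any data topic.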
Q:
mdadm Raid5 gives spares missing events
I successfully built a RAID5 array on Debian testing (Wheezy). As the man pages and other sources tell, the array is created as an out-of-sync array with a fresh spare injected, which is then rebuilt into place.
That worked fine.
But after the rebuild process, I get daily messages about missing spares, although the array should be RAID5 over 3 discs without spares.
I think I only need to tell mdadm that there is -- and should be -- no spare, but how?
mdadm -D gives
Active Devices: 3
Working Devices: 3
Failed Devices: 0
Spare Devices: 0
and /proc/mdstat reads
md1: active raid5 sda3[0] sdc3[3] sdb3[1]
##### blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
Any ideas?
A:
Open the /etc/mdadm/mdadm.conf file, find the line that begins with ARRAY /dev/md1 and remove the line immediately following which states 'spares=1'. Then restart mdadm service.
If you did a mdadm --examine --scan to retrieve the array definitions while the md1 array was still rebuilding, one partition was seen as spare at that moment.
Q:
Does a thread release a lock when it finishes?
I have read in places that it isn't good programming practice to acquire a Lock object without the code following being enclosed in a try...finally block so the lock can be released even if an exception is thrown.
It might sound like a simple question: do all locks belonging to a thread automatically release when the thread finishes?
The reason I ask this question is that the program I am working on is such that once a thread acquires a lock, it has no reason to let it go until it finishes. Additionally, I'm new to using locks, so I'm wondering if there are any pitfalls I may not have considered. Do I have to worry about explicitly releasing locks in my code before the thread finishes, or can I leave it to the JVM, confident in the certain knowledge that other threads blocked on the active thread's locks will be activated as soon as the active thread stops?
A:
A simple test shows that the lock is not released upon thread termination:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.LockSupport;
import java.util.concurrent.locks.ReentrantLock;
public class LockTest {
public static void main(String[] args) {
final Lock l = new ReentrantLock();
Thread t = new Thread() {
@Override
public void run() {
System.out.println(Thread.currentThread()+": Acquire lock");
l.lock();
System.out.println(Thread.currentThread()+": Lock aquired: wait");
LockSupport.parkNanos(1_000_000_000);
System.out.println(Thread.currentThread()+"; Exiting");
}
};
t.start();
LockSupport.parkNanos(500_000_000);
System.out.println(Thread.currentThread()+": Acquire lock");
l.lock();
System.out.println(Thread.currentThread()+"; Success!");
}
}
Output:
Thread[Thread-0,5,main]: Acquire lock
Thread[Thread-0,5,main]: Lock aquired: wait
Thread[main,5,main]: Acquire lock
Thread[Thread-0,5,main]; Exiting
// "Success" is never written: stuck in dead-lock
So after the separate thread acquired the lock and then exited, the lock could not be taken by the main thread.
Q:
adding UITableView inside UITableViewCell
I am working on a custom UITableViewCell where I want to add a UITableView inside the UITableViewCell. Is there any controller available that does this?
This is an image of what I want to add in my project
This is an expandable UITableView where, after clicking the first row, the inner table and buttons are expanded.
So if any similar controller is available, please suggest it.
Thanks in advance.
A:
Apple does not recommend adding table views as subviews of other scrollable objects.
If you want to develop such thing, here are the steps for you:
Make separate section for your 'table view' inside your table view
The first row of the section - your clickable row
When the user touches a row, handle it via - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath.
Insert other cells into this section by modifying your datasource array.
Reload section or do an update,
as
[self.dataSourceArray insertObject:object atIndex:indexPath.row];
NSIndexPath *indexPath = [NSIndexPath indexPathForRow:0 inSection:0];
[self.tableView insertRowsAtIndexPaths:@[indexPath] withRowAnimation:UITableViewRowAnimationAutomatic];
A:
I'll advise against nesting table views. As a user, it's frustrating when one scrollable entity sits inside another scrollable entity. When you swipe, you're never quite sure how the content will behave. It can take several swipes to get from one end of the parent to the other. It matters how you time your swipes. It just sucks to use.
Try this approach of expanding a table view's sections Expand/collapse section in UITableView in iOS
Good luck!
Q:
Can we use Eisenstein's Irreducibility Criterion to show that $x^4+1$ is not reducible in Q?
As such:
Let $a(x)=x^4+1\in\mathbb{Q}\left[x\right]$. Then choose any prime $p$. By Eisenstein's Criterion, we see that $p\nmid 1$, $p\mid 0$ (since all coefficients of intermediate terms are 0), and $p^2\nmid 1$. Thus we conclude that $a(x)$ is not reducible in $\mathbb{Q}$.
Is this valid, or am I making some glaring omission?
My professor used the Rational Root Theorem, but it turned out to be a much longer process (with all of the testing for possible roots).
Edit -- since I have to have $p \mid 1$, I tried a different method, and substituted $x=\bar{x}+1$. Then $x^4+1=\bar x^4+4\bar x^3 + 6\bar x^2 + 4\bar x + 2$, and chose $p=2$. Then, I believe, it meets the criterion. Is this correct?
A:
To apply Eisenstein to a polynomial $P = a_n x^n + \dots + a_1 x + a_0$, you need a prime $p$ with $p \mid a_k$ for all $k < n$, $p^2 \nmid a_0$, and $p \nmid a_n$.
But in your case $P(X) = X^4+1$ has constant term $1$, which no prime divides, so the criterion cannot apply directly. You can apply Eisenstein after substituting $X = Y+1$: then
$$P(X) = Y^4+4Y^3+6Y^2+4Y+2$$ and you can use $p=2$.
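The shifted coefficients can be verified mechanically (a plain-Python sketch, independent of the text): expanding $(Y+1)^4 + 1$ with the binomial theorem and checking each of Eisenstein's conditions for $p = 2$.

```python
from math import comb

# coefficients of (Y+1)^4 in descending powers of Y: binomial C(4, k)
coeffs = [comb(4, k) for k in range(5)]
coeffs[-1] += 1          # the "+1" of x^4 + 1 lands in the constant term
print(coeffs)            # [1, 4, 6, 4, 2]

p = 2
leading, middle, constant = coeffs[0], coeffs[1:-1], coeffs[-1]
assert leading % p != 0                                # p does not divide a_n
assert all(c % p == 0 for c in middle)                 # p divides every middle coefficient
assert constant % p == 0 and constant % (p * p) != 0   # p | a_0 but p^2 does not
```

All three conditions hold, so Eisenstein at $p = 2$ applies to the shifted polynomial, and irreducibility transfers back to $x^4 + 1$.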
Q:
Angular 2 Multipart AJAX Upload
I'm using Angular 2 with Spring MVC. I currently have an Upload component that makes an AJAX call to the Spring backend and returns a response of parsed data from a .csv file.
export class UploadComponent {
uploadFile() {
var resp = this;
var data = $('input[type="file"]')[0].files[0];
this.fileupl = data;
var fd = new FormData();
fd.append("file", data);
$.ajax({
url: "uploadFile",
type: "POST",
data: fd,
processData: false,
contentType: false,
success: function(response) {
resp.response = response;
},
error: function(jqXHR, textStatus, errorMessage) {
console.log(errorMessage);
}
});
};
}
This works, I get a valid response back; however, is there a more Angular 2 way to pass this file to Spring and receive a response? I've been looking into creating an injectable service and using subscribe, but I've been struggling to get a response back.
A:
I ended up doing the following:
import { Component, Injectable } from '@angular/core';
import { Observable} from 'rxjs/Rx';
const URL = 'myuploadURL';
@Component({
selector: 'upload',
templateUrl: 'upload.component.html',
styleUrls: ['upload.component.css']
})
export class UploadComponent {
filetoUpload: Array<File>;
response: {};
constructor() {
this.filetoUpload = [];
}
upload() {
this.makeFileRequest(URL, [], this.filetoUpload).then((result) => {
this.response = result;
}, (error) => {
console.error(error);
});
}
fileChangeEvent(fileInput: any){
this.filetoUpload = <Array<File>> fileInput.target.files;
}
makeFileRequest(url: string, params: Array<string>, files: Array<File>) {
return new Promise((resolve, reject) => {
let formData: any = new FormData();
let xhr = new XMLHttpRequest();
for(let i =0; i < files.length; i++) {
formData.append("file", files[i], files[i].name);
}
xhr.onreadystatechange = () => {
if (xhr.readyState === 4) {
if (xhr.status === 200) {
resolve(JSON.parse(xhr.response));
} else {
reject(xhr.response);
}
}
};
xhr.open("POST", url, true);
xhr.send(formData);
});
}
}
I can then inject a response into my html like:
<div class="input-group">
<input type="file" id="file" name="file" placeholder="select file" (change)="fileChangeEvent($event)">
<input type="submit" value="upload" (click)="upload()" class="btn btn-primary">
</div>
<div *ngIf="response">
<div class="alert alert-success" role="alert">
<strong>{{response.myResponseObjectProperty | number}}</strong> returned successfully!
</div>
</div>
This has support for multiple file uploads. I created it as an injectable service in this plunkr:
https://plnkr.co/edit/wkydlC0dhDXxDuzyiDO3
Q:
Random spawning within a set distance in a circle
So, what I am looking for is a way to spawn a sprite at a random position 30 pixels away from the player's sprite. How would I do this?
A:
As others have already told you, you should at least make an attempt at the code; otherwise you're abusing the purpose of the website.
Nevertheless:
radius = Math.random()*maxRadius;
angle = Math.random()*2*Math.PI;
x = Math.cos(angle)*radius + x_of_circle_center;
y = Math.sin(angle)*radius + y_of_circle_center;
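If the goal is a fixed distance of exactly 30 pixels (as the question asks) rather than a point anywhere inside the disc, only the angle is random. A sketch of that variant in Python (the function and variable names are illustrative):

```python
import math
import random

def spawn_at_distance(player_x, player_y, distance=30.0):
    # pick a random direction and place the sprite exactly `distance` away
    angle = random.uniform(0.0, 2.0 * math.pi)
    x = player_x + distance * math.cos(angle)
    y = player_y + distance * math.sin(angle)
    return x, y

x, y = spawn_at_distance(100.0, 200.0)
print(math.hypot(x - 100.0, y - 200.0))  # always ~30.0
```

As a side note, sampling the radius uniformly as in the snippet above clusters points near the center; for a uniform distribution over the disc, one would use `maxRadius * sqrt(random())` for the radius instead.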
Q:
Обработчик события .on()
$('a[href^="#"]').click(function() {
// Код
return false;
});
I read in the documentation that it can be rewritten as:
$('a[href^="#"]').on('click', function() {
// Код
return false;
});
handler is the function that will be installed as the event handler. Instead of a
function, you can pass the value false, which is equivalent to installing this
function: function(){return false;}.
$('a[href^="#"]').on('click', function(false) {
// Код
});
$('a[href^="#"]').on('click', false) {
// Код
});
But nothing works at all; did I misunderstand something, or am I just writing it incorrectly?
A:
Look carefully at the browser console; errors are surely being thrown there. The jQuery documentation (in English) gives this example:
$( "a.disabled" ).on( "click", false );
Your version has extra characters:
$('a[href^="#"]').on('click', false) {
// Код
});
Q:
How to have all Jenkins slave tasks executed with nice?
We have a number of Jenkins jobs which may get executed on Jenkins slaves. Is it possible to globally set the nice level of Jenkins tasks to make sure that all of them get executed at a higher nice level?
A:
Yes, that's possible. The "trick" is to start the slave agent with the proper nice level already; all Jenkins processes running on that slave will inherit that.
Jenkins starts the slave agent via ssh, effectively running a command like
cd /path/to/slave/root/dir && java -jar slave.jar
On the Jenkins node config page, you can define a "Prefix Start Slave Command" and a "Suffix Start Slave Command" to have this nice-d. Set as follows:
Prefix Start Slave Command: nice -n -10 sh -c '
Suffix Start Slave Command: '
With that, the slave startup command becomes
nice -n -10 sh -c 'cd "/path/to/slave/root/dir" && java -jar slave.jar'
This assumes that your login shell is a bourne shell. For csh, you will need a different syntax. Also note that this may fail if your slave root path contains blanks.
I usually prefer to "Launch slave via execution of command on the Master", and invoke ssh myself from within a shell wrapper. Then you can select cipher and client of choice, and also setting niceness can be done without Prefix/Suffix kludges and without whitespace pitfalls.
Q:
Why does a wave actually diffract?
I know that waves diffract around a slit and this is due to the Huygens-Fresnel principle. But I have never understood in an intuitive way why a wave becomes a spherical wave front at the slit. Huygens' principle gives all the math behind this but does not actually explain why a wave bends around the edges.
Some might say that this happens because it is a property of a wave to form spherical wave fronts, and the final wave is a result of the superposition of those wave fronts. I know that, but it would be great if anyone could come up with an intuitive approach and actually explain why waves diffract, or why they form a different wave front at the slit. This question might be somewhat similar to: why do waves bend around corners?
Image source: https://www.ph.utexas.edu/~coker2/index.files/Diffraction.gif
A:
I think you are looking at the question in a slightly backwards way. It would be better phrased as: Why is it possible to have plane waves?
In physics all point sources, wave sources which are smaller than the wavelength, generate outgoing spherical waves. As an example consider throwing a rock into a pond; the outgoing waves are emitted equally in all directions. Generating a plane wave, such as you have at the input of your image, requires taking many of these point sources and exciting them coherently such that their individual spherical waves add up to form a plane wave travelling in one direction. In the water example this can be done by moving a large flat surface back and forth which creates an infinite number of point sources along its surface.
Diffraction is the opposite of this. You've managed to generate a plane wave by coherently combining a bunch of point sources. Now you block the plane wave except at a gap which is comparable to the wavelength, and in doing so you extract one of the spherical waves which was generating the plane wave.
A:
The best intuition is the well-defined mathematics underlying the concept. The simplest equation for the wave is
$$\frac{\partial^2}{\partial t^2} h = c^2 \frac{\partial^2}{\partial x^2} h + c^2\frac{\partial^2}{\partial y^2} h $$
Here, $c$ is the speed of the waves ("fundamental" physicists would think about the speed of light as the most well-known example of the equation).
The second time derivative of the height of the water at a given place $(x,y)$ at time $t$ is equal to $c^2$ times the Laplacian (the sum of the second $x$-derivative and $y$-derivative) of the same height.
One aspect of the intuition is to know why this equation is right for a given physical system. For example, if water has "bumps" on it, the equation says that there is a force that tries to "flatten" these bumps. Such an equation may be defined from a mechanical model of the water as a continuum, or water as a collection of many atoms, and so on.
At the end, the fundamental laws we know – the Standard Model of particle physics, for example – contain some wave equations (e.g. the Klein-Gordon equation for the Higgs field) at the fundamental level (with some extra non-linear terms). In this context, these equations can't be derived from anything "deeper" (except for string theory which has its own wave equations in the fundamental equations, too – and they can't be derived from something deeper, and if they can, I must say "and so on").
Another aspect is why the wave equation above implies the Huygens principle. It does. If you study how the function $h(x,y,t)$ changes if $t$ is changed to $t+dt$, one may see that the height at the given point is affected by the heights at the previous moment in the whole infinitesimal vicinity of the given point. That's why the disturbances propagate in all directions, whether these directions are around the corner or not.
You may imagine that the surface of the water is a grid or net of many people holding the hands of their horizontal neighbors, and holding the legs (with their legs, please be skillful) of their vertical neighbors. The wave equation says that whenever a human in the net feels that he's higher than the average of his 4 neighbors, he tries to lift the neighbors in the upward direction. So this rule makes the perturbations – bumps on the water or on any field – spread in both vertical and horizontal directions, and because the other directions are combinations that may be obtained by successive moves, the disturbances spread in all directions. Whether there is a wall or "corner" at some finite distance makes no impact on the fact that the signals are spreading in all directions.
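The net of people holding hands is literally a discrete wave equation, and a few lines of code make the spreading visible. Below is a minimal sketch in plain Python (the grid size, the number of steps, and the stability factor are arbitrary demo choices, not anything from the answer): after lifting a single point, the disturbance reaches points in every direction, even though each point only ever interacts with its four neighbours.

```python
# Leapfrog finite differences for  h_tt = c^2 (h_xx + h_yy).
# Every interior point is pulled toward the average of its 4 neighbours,
# exactly like the net of people holding hands and legs.
N = 41           # grid points per side (arbitrary demo size)
c2dt2 = 0.25     # (c*dt/dx)^2; kept small for numerical stability

prev = [[0.0] * N for _ in range(N)]
curr = [[0.0] * N for _ in range(N)]
curr[N // 2][N // 2] = 1.0   # lift one "person" in the middle

def step(prev, curr):
    nxt = [[0.0] * N for _ in range(N)]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            lap = (curr[i + 1][j] + curr[i - 1][j] +
                   curr[i][j + 1] + curr[i][j - 1] - 4 * curr[i][j])
            nxt[i][j] = 2 * curr[i][j] - prev[i][j] + c2dt2 * lap
    return curr, nxt

for _ in range(15):
    prev, curr = step(prev, curr)

# The disturbance has now spread to many points in every direction,
# even though each point only ever talked to its immediate neighbours.
moved = sum(1 for row in curr for v in row if abs(v) > 1e-12)
print(moved)
```

By symmetry of the rule, the bump spreads at the same rate along every axis — the circular wave front of the rock-in-a-pond example.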
A:
I'll take a stab at a less scientific or mathematical approach to the problem.
You can think of water molecules as wanting to make the surface of the water as flat as possible. Seeing as any body of water will eventually become still (flat surface) if no outside forces work on it, it makes sense intuitively.
Of course water molecules can only feel the forces caused by nearby water molecules. So all they are really trying to do is make their local bit of water flat, which eventually flattens the entire surface of the body of water.
A last thing to keep in mind is that it will take a while for a molecule to change direction. If its neighbours are pulling it up, it can gain quite a bit of speed. When one of its neighbours then starts going down again, it will take a while before this molecule has lost its momentum. (The two neighbouring molecules may very well stop being neighbours since their speeds will differ too much.)
So what does this mean for waves?
Well, let's imagine you pull one molecule up a bit. This molecule will consequently pull up its neighbours, which will in turn pull up their neighbours, etc. However, these neighbours are also pulling the initial molecule down (and so is gravity), so while the neighbours gain upward speed, this initial molecule gains downward speed, until it actually drops lower than its neighbours (which are still going up). At this point the initial molecule starts slowing down, since its neighbours are now pulling it up. This process repeats, with the initial molecule alternately being lower and higher than its direct neighbours. Since these neighbours also influence their neighbours and so on, this creates a wave. Since there are statistically just as many neighbours in any direction (and water molecules are extremely small), this spreads out at almost exactly the same speed in each direction, which makes a circle.
Now let's consider what a straight wave looks like. You have a long (or infinite) line of molecules that are at a maximum height, and their neighbours are lower the further away they are from the initial row of molecules, until we get far enough away, at which point the pattern repeats itself. This shape also seems to move in a direction perpendicular to the line. If we assume molecules only move up and down, this can only mean the particles to the right of the wave (if the wave is moving right) are moving up and the particles to the left of the wave are moving down. You can easily see how this would result in each molecule moving up and down periodically, which is exactly the way waves behave. Since the wave is straight, the neighbours in the direction parallel to the wave must be at the same height and have the same speed as one another, so the wave can only propagate in a direction perpendicular to the wave.
So what happens when the wave hits the wall?
When the wave hits a wall, molecules don't have any neighbours to be pulled up or down by in that direction. This allows them to move a bit more freely, which results in the wave seeming to bounce back (I won't get into this much further).
At the hole in the wall, though, the molecules inside the hole will logically start moving up and down, and in turn their neighbours will do the same. The neighbours in the direction parallel to the waves won't be at the same height as them, though (since the wave can't get through a solid wall). So this situation ends up looking a lot like the example with the one initially moving molecule, which resulted in circular waves. And that's exactly what will happen.
note:
I simplified the matter enormously, but I believe it sketches a more or less accurate idea of how simple waves work.
Q:
Trouble writing HTML in Razor view if/else condition
I'm not quite sure what I am missing here, but I am having trouble adding a class='success' to a <tr> in a Partial View while iterating through a list in my Model.
My goal is to have the first row's tr carry the attribute class='success'.
Here is the code I have:
@model List<DxRow>
<div class="row">
<table class="table" id="diagnosis">
<tr id="header-row">
<th>Dx Code</th>
<th>Dx Description</th>
<th>Dx Date</th>
<th>OnSet Or Ex</th>
<th>Dx Order (Top is Primary)</th>
</tr>
@{ bool IsFirst = true; }
@foreach (DxRow r in Model)
{
if (IsFirst)
{
Html.Raw("<tr class='success'>");
IsFirst = false;
}
else
{
Html.Raw("<tr>");
}
<tr>
<td>@r.dxCode</td>
<td>@r.dxDescription</td>
<td>@r.dxDate</td>
<td>@r.dxType</td>
<td>
<span class='up'><i class='icon-arrow-up bigger-160' style='color:green'></i></span>
<span class='down'><i class='icon-arrow-down bigger-160' style='color:red'></i></span>
<span class='top'><i class='icon-star bigger-160' style='color:goldenrod'></i></span>
<span class='delete'><i class='icon-trash bigger-160' style='color:red'></i></span>
</td>
</tr>
}
</table>
When I step through the code, it goes into the special IsFirst section the first iteration and the else section on all other iterations.
But, when I use FireBug to view the source code, it is not writing the part: <tr class='success'> , just the <tr>.
A:
I think the extra <tr> after your if statement might be causing the browser to ignore your html, as the html would actually be broken with two <tr> tags being rendered.
I would probably update the code to be more like;
@{ bool IsFirst = true; }
@foreach (DxRow r in Model)
{
<tr class="@( IsFirst ? "success" : "")">
<td>@r.dxCode</td>
<td>@r.dxDescription</td>
<td>@r.dxDate</td>
<td>@r.dxType</td>
<td>
<span class='up'><i class='icon-arrow-up bigger-160' style='color:green'></i></span>
<span class='down'><i class='icon-arrow-down bigger-160' style='color:red'></i></span>
<span class='top'><i class='icon-star bigger-160' style='color:goldenrod'></i></span>
<span class='delete'><i class='icon-trash bigger-160' style='color:red'></i></span>
</td>
</tr>
IsFirst = false;
}
Since you are setting IsFirst to false after the first loop, this keeps the code tidier.
Q:
Description for Webservice parameters
Is it possible to create a description for parameters used in an (asmx)-webservice?
I know I can set the description of the webmethod with the Description-property.
However, is it also possible to add an attribute to the parameter to create a description in the webservice for a given parameter?
[WebMethod(Description = @"Get all approved friends <br />
where rownum >= StartPage * count AND rownum < (StartPage+1) * count")]
public Friend[] GetFriendsPaged(int startPage, int count){...}
For instance in the example given above, I would like to add documentation that the StartPage is 0-based.
Thanks in advance
A:
There is no way to do this, and no standard for what to do with the information even if you could add it. It's true that a WSDL may contain annotations for any element, but there's no standard about, for instance, placing those annotations into comments in the generated proxy class.
Q:
dynamically choose API to use
I use an external tool in my Python code. In order to initialize this tool, I have to create a couple of objects. The external tool in question provides two quite different APIs, and neither of these APIs is capable of creating all the objects the tool needs. Let's say the tool is a traffic simulation tool, where car objects are created using API 1 and bikes are created using API 2.
I have played with inheritance, tried to pick an appropriate design pattern but all my solutions look ugly to me.
The most simple way to represent what I am trying to achieve is:
class ObjectHandler():
    api_1_types = ('type1', 'foo')
    api_2_types = ('type2', 'bar')

    def __init__(self):
        self.handler1 = ObjectHandler1()
        self.handler2 = ObjectHandler2()

    def create(self, obj_type):
        if obj_type in self.api_1_types:
            return self.handler1.create()
        elif obj_type in self.api_2_types:
            return self.handler2.create()
        else:
            raise NotImplementedError

class ObjectHandler1():
    def __init__(self):
        # load external module that defines API 1

    def create(self):
        # return an object created via API 1

class ObjectHandler2():
    def __init__(self):
        # load external module that defines API 2

    def create(self):
        # return an object created via API 2

if __name__ == '__main__':
    handler = ObjectHandler()
    object_1 = handler.create('type1') # must be created by ObjectHandler1
    object_2 = handler.create('type2') # must be created by ObjectHandler2
I am now searching for a good OO and pythonic way to achieve this.
A:
Your method looks ok. You should use sets for the in tests, but it doesn't really matter here. An alternative could be the following, though I don't know if it is better:
def __init__(self):
    self.handlers = dict()
    handler1 = ObjectHandler1()
    for obj_type in self.api_1_types:
        # These won't be copied but will simply be references to the object
        self.handlers[obj_type] = handler1
    # Repeat for the other one
and
def create(self, obj_type):
    try:
        return self.handlers[obj_type].create()
    except KeyError:
        raise NotImplementedError
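For completeness, here is a fully runnable sketch of this dictionary dispatch. The handler classes are stubs that return strings where the real code would call the two external APIs:

```python
class ObjectHandler1:
    def create(self):
        return "car from API 1"    # stand-in for a real API 1 call

class ObjectHandler2:
    def create(self):
        return "bike from API 2"   # stand-in for a real API 2 call

class ObjectHandler:
    api_1_types = ('type1', 'foo')
    api_2_types = ('type2', 'bar')

    def __init__(self):
        self.handlers = {}
        handler1 = ObjectHandler1()
        handler2 = ObjectHandler2()
        for obj_type in self.api_1_types:
            self.handlers[obj_type] = handler1   # shared reference, no copy
        for obj_type in self.api_2_types:
            self.handlers[obj_type] = handler2

    def create(self, obj_type):
        try:
            return self.handlers[obj_type].create()
        except KeyError:
            raise NotImplementedError(obj_type)

handler = ObjectHandler()
print(handler.create('type1'))   # dispatched to ObjectHandler1
print(handler.create('bar'))     # dispatched to ObjectHandler2
```

The dictionary turns the per-call type test into a single lookup, and adding a third API later only means registering more keys in __init__.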
Q:
javascript learning, while if
This may seem stupid to some, but please try and help me see sense here. I'm currently learning JavaScript and one of my challenges is as follows:
In countdown.js below, modify the while-loop with a conditional that will only allow a number to be printed if it is even. Your results should be the even numbers from 10 to 2 descending. Think carefully about how your code might decide if a number is even…
If I use the code below I get the answer 9, 7, 5, 3, 1, but if I change it to num + 1 in the log output it works. I thought the decrement would occur after the log output?
var num = 10;
while (num > 0) {
    if (num % 2) {
        console.log(num); // can't be right??????
    }
    num--;
}
A:
The test in your if statement, num % 2, is true when the value in "num" is not divisible by 2. The % operator gives the remainder after a division, so when the remainder is non-zero, it's true.
If you want even numbers, use !(num % 2), which will be true when the remainder is zero, or perhaps more explicitly num % 2 === 0.
A:
0 == false, and 1 == true. The % operator returns the remainder, so 3 % 2 = 1, which evaluates to true. Negate your comparison: if( !(num % 2) ); (the parentheses matter, because ! binds tighter than %).
Alternately, use if( num % 2 === 0 ), which may be more clear.
Q:
How to get the current screen resolution on windows via command line?
I'm trying get the current screen resolution on windows via command line.
Based on most of the answers I've found, I should use:
wmic desktopmonitor get screenheight, screenwidth
But this returns the max supported resolution of the display device, not the current one, which is what I need.
Example:
I'm using a 4k monitor, but currently set to display only at 1920x1080. When I run the command above, I get:
ScreenHeight ScreenWidth
2160 3840
How do I get the current screen resolution on windows via command line?
A:
Dealing with high DPI made this somewhat challenging because most Windows API functions return a scaled version of the resolution for compatibility unless the application declares high DPI awareness. Inspired by this Stack Overflow answer I wrote this PowerShell script:
Add-Type @"
using System;
using System.Runtime.InteropServices;
public class PInvoke {
[DllImport("user32.dll")] public static extern IntPtr GetDC(IntPtr hwnd);
[DllImport("gdi32.dll")] public static extern int GetDeviceCaps(IntPtr hdc, int nIndex);
}
"@
$hdc = [PInvoke]::GetDC([IntPtr]::Zero)
[PInvoke]::GetDeviceCaps($hdc, 118) # width
[PInvoke]::GetDeviceCaps($hdc, 117) # height
It outputs two lines: first the horizontal resolution, then the vertical resolution.
To run it, save it to a file (e.g. screenres.ps1) and launch it with PowerShell:
powershell -ExecutionPolicy Bypass .\screenres.ps1
Q:
Samba share yielding "invalid handle"
I have a strange behavior that came up suddenly with a samba share (arch linux) since yesterday. The only trigger that I can think of is a system update (pacman -Syu). Ever since, the root share (/) is accessible and all directories are visible but any attempt to access any of the directories triggers an "invalid handle" response in Windows. If I however share any of the directories (e.g. /data) as a separate share, it is fully accessible without trouble. Here is the share definition.
In the meantime, I have isolated the issue to the Samba server (rather than the Windows host). A second Arch Linux installation will mount the [data] share correctly, but will refuse access to the root [/data/root_ssd] share. Conversely, starting Samba on this new, virgin Arch Linux install will again lead to no sharing of the root path.
Any ideas? It seems to me that this behavior is new to a recent Samba upgrade.
[antergos1-festplatte]
comment = 20 GB Festplatte
path = /
writeable = yes
create mask = 0766
directory mask = 0777
guest ok = yes
force user = aag
browseable = yes
[data]
comment = webserver directories
path = /data
writeable = yes
create mask = 0777
directory mask = 0777
guest ok = yes
force user = aag
browseable = yes
force group = admins
A:
This behavior comes with the latest Samba security updates. I just encountered it with Debian Wheezy. It seems that fixing CVE-2015-5252 either intentionally or inadvertently blocks root level shares (/).
As a workaround, you can set in smb.conf
[global]
unix extensions = no
[share]
wide links = yes
Note: unix extensions = yes, which is the default, would disable wide links.
Q:
PHP - If one or more allowed values in one array are in another array
I have two arrays: $user_level and $allowed_levels.
What is the best way to check if the $user_level array contains one or more of the items that are in $allowed_levels?
The code can be seen here:
$user_level = array('Level 1', 'Level 2', 'Level 3', 'Level 4', 'Level 5');
$allowed_levels = array('Level 1', 'Level 2');
Thanks.
A:
The array_intersect function will create an array of the values that are present in both arrays. You can then check whether this is empty or not. If it is not empty, then $user_level contains one or more of the $allowed_levels items:
$user_level = array('Level 1', 'Level 2', 'Level 3', 'Level 4', 'Level 5');
$allowed_levels = array('Level 1', 'Level 2');
$result = array_intersect($user_level, $allowed_levels);
if (!empty($result)) {
    // Code if at least one allowed level is present
}
Q:
Can we accept such words as 'invite' when used as a noun in correct English?
So often people tell me they have had an 'invite' to something; I am wondering if the word can actually be accepted as correct English, as opposed to 'invitation'. In a similar vein people, more usually in the north of England, will use the present participle of verbs in situations which call for the past participle. E.g. a waiter in a restaurant asks you 'how would you like your eggs cooking?' instead of 'cooked'. Are these not simply incorrect expressions in English? It just seems to me that nowadays if enough people start saying something it becomes acceptable.
A:
Of course we can. Invite has been used informally as a noun since at least 1659, when it occurred in Hamon L'Estrange’s 'The alliance of divine offices exhibiting all the liturgies of the Church of England':
Bishop Cranmer . . . gives him an earnest invite to England.
Q:
Which digraphs are morally unambiguous?
All my digraphs are reflexive and have no parallel arcs.
Intuition. Think of the vertexes as people. An arc $x \rightarrow y$ means "$x$ would care about $y$ if $x$ knew and understood $y$" or, more simply "$x$ cares about $y$." The good people are the ones that care about every other good person. More formally:
Definition 0. Given a digraph, call a subset $G$ of the vertexes of that digraph a "candidate for the set of good people" iff the following two axioms hold.
0, Mutuality. Good people care about each other; i.e. for all $x,y \in G$, we have that $x \rightarrow y$.
1, Saturation. If you care about all good people, then you yourself are good; i.e. for each vertex $x$ of the digraph, if "for all $y \in G$ we have $x \rightarrow y$," then $x \in G$.
Note that these can be combined into a single axiom:
Axiom. For each vertex $x$ of the digraph, $x \in G$ iff for all $y \in G$ we have $x \rightarrow y.$
Definition 1. Call a digraph morally ambiguous iff it has many candidates for the set of good people, or none. Otherwise, it is morally unambiguous.
For example, morally speaking:
The discrete (reflexive) digraph on $2$ vertexes is ambiguous.
The complete (reflexive) digraph on $2$ vertexes is unambiguous.
Question. Is there a decent characterization of those digraphs that are morally unambiguous?
I'm also interested in necessary and sufficient conditions.
My gut feeling is that this question should be pretty easy: we should be able to solve it just by thinking about the existence of complete subgraphs enjoying certain relationships to the other vertexes. I haven't been able to make this intuition precise, though.
A:
I don't really consider this to be any nicer than the definition itself, but maybe this is what you are looking for. It is easy to see that $G$ is a candidate iff $G$ is a complete subgraph such that for any $x\not\in G$, there is some $y\in G$ such that $x\not\to y$. So a graph is unambiguous iff it contains exactly one complete subgraph with this property.
Note that the analogue for undirected graphs does have a much simpler description: then candidates are exactly the maximal complete subgraphs, so a graph is unambiguous iff it has a unique maximal complete subgraph. Since every vertex is contained in a some maximal complete subgraph, this means the graph must be complete.
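For small digraphs the characterization is easy to check by brute force. A sketch (the digraph is assumed to be given as a vertex set plus a set of ordered pairs, reflexive loops included, as in the question) that enumerates every candidate set G:

```python
from itertools import combinations

def candidates(vertices, arcs):
    """All subsets G satisfying: x in G  iff  x -> y for every y in G."""
    found = []
    vs = sorted(vertices)
    for r in range(len(vs) + 1):
        for G in combinations(vs, r):
            Gset = set(G)
            # The combined axiom from the question, checked vertex by vertex.
            if all((x in Gset) == all((x, y) in arcs for y in Gset)
                   for x in vs):
                found.append(Gset)
    return found

V = {0, 1}
# Complete reflexive digraph on 2 vertexes: morally unambiguous.
complete = {(0, 0), (1, 1), (0, 1), (1, 0)}
print(candidates(V, complete))   # [{0, 1}]

# Discrete reflexive digraph on 2 vertexes: morally ambiguous.
discrete = {(0, 0), (1, 1)}
print(candidates(V, discrete))   # [{0}, {1}]
```

Note the empty set is never a candidate here: the saturation condition holds vacuously for every vertex, which would force every vertex into G.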
Q:
Finding the closed form of $\int _0^{\infty }\frac{\ln \left(1+ax\right)}{1+x^2}\:\mathrm{d}x$
I solved a similar case which is also a very well known integral
$$\int _0^{\infty }\frac{\ln \left(1+x\right)}{1+x^2}\:\mathrm{d}x=\frac{\pi }{4}\ln \left(2\right)+G$$
My teacher gave me a hint which was splitting the integral at the point $1$,
$$\int _0^1\frac{\ln \left(1+x\right)}{1+x^2}\:\mathrm{d}x+\int _1^{\infty }\frac{\ln \left(1+x\right)}{1+x^2}\:\mathrm{d}x=\int _0^1\frac{\ln \left(1+x\right)}{1+x^2}\:\mathrm{d}x+\int _0^1\frac{\ln \left(\frac{1+x}{x}\right)}{1+x^2}\:\mathrm{d}x$$
$$2\int _0^1\frac{\ln \left(1+x\right)}{1+x^2}\:\mathrm{d}x-\int _0^1\frac{\ln \left(x\right)}{1+x^2}\:\mathrm{d}x=\frac{\pi }{4}\ln \left(2\right)+G$$
I used the values for each integral since they are very well known.
My question is, can this integral be generalized for $a>0$?, in other words can similar tools help me calculate
$$\int _0^{\infty }\frac{\ln \left(1+ax\right)}{1+x^2}\:\mathrm{d}x$$
A:
You can evaluate this integral with Feynman's trick,
$$I\left(a\right)=\int _0^{\infty }\frac{\ln \left(1+ax\right)}{1+x^2}\:dx$$
$$I'\left(a\right)=\int _0^{\infty }\frac{x}{\left(1+x^2\right)\left(1+ax\right)}\:dx=\frac{1}{1+a^2}\int _0^{\infty }\left(\frac{x+a}{1+x^2}-\frac{a}{1+ax}\right)\:dx$$
$$=\frac{1}{1+a^2}\:\left(\frac{1}{2}\ln \left(1+x^2\right)+a\arctan \left(x\right)-\ln \left(1+ax\right)\right)\Biggr|^{\infty }_0=\frac{1}{1+a^2}\:\left(\frac{a\pi \:}{2}-\ln \left(a\right)\right)$$
To find $I\left(a\right)$ we have to integrate again with convenient bounds,
$$\int _0^aI'\left(a\right)\:da=\:\frac{\pi }{2}\int _0^a\frac{a}{1+a^2}\:da-\int _0^a\frac{\ln \left(a\right)}{1+a^2}\:da$$
$$I\left(a\right)=\:\frac{\pi }{4}\ln \left(1+a^2\right)-\int _0^a\frac{\ln \left(a\right)}{1+a^2}\:da$$
To solve $\displaystyle\int _0^a\frac{\ln \left(a\right)}{1+a^2}\:da$ first IBP.
$$\int _0^a\frac{\ln \left(a\right)}{1+a^2}\:da=\ln \left(a\right)\arctan \left(a\right)-\int _0^a\frac{\arctan \left(a\right)}{a}\:da=\ln \left(a\right)\arctan \left(a\right)-\text{Ti}_2\left(a\right)$$
Plugging that back we conclude that
$$\boxed{I\left(a\right)=\:\frac{\pi }{4}\ln \left(1+a^2\right)-\ln \left(a\right)\arctan \left(a\right)+\text{Ti}_2\left(a\right)}$$
Where $\text{Ti}_2\left(a\right)$ is the Inverse Tangent Integral.
The integral you evaluated can be proved with this,
$$I\left(1\right)=\int _0^{\infty }\frac{\ln \left(1+x\right)}{1+x^2}\:dx=\frac{\pi }{4}\ln \left(2\right)-\ln \left(1\right)\arctan \left(1\right)+\text{Ti}_2\left(1\right)$$
$$=\frac{\pi }{4}\ln \left(2\right)+G$$
Here $G$ denotes the Catalan's constant.
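The boxed closed form can be sanity-checked numerically. Below is a plain-Python sketch (no external libraries); the substitution $x=\tan t$ maps the infinite range onto $[0,\pi/2]$, $\text{Ti}_2$ is computed straight from its integral definition, and the quadrature sizes and the hard-coded value of Catalan's constant are my own choices:

```python
import math

def midpoint(f, a, b, n):
    # Composite midpoint rule; never evaluates f at the endpoints,
    # which sidesteps the log singularities at t = 0 and t = pi/2.
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def lhs(a, n=100_000):
    # x = tan t turns  ∫_0^∞ ln(1+ax)/(1+x^2) dx  into  ∫_0^{π/2} ln(1+a·tan t) dt
    return midpoint(lambda t: math.log(1 + a * math.tan(t)), 0.0, math.pi / 2, n)

def Ti2(a, n=10_000):
    # Inverse tangent integral: Ti_2(a) = ∫_0^a arctan(t)/t dt
    return midpoint(lambda t: math.atan(t) / t, 0.0, a, n)

def rhs(a):
    return (math.pi / 4) * math.log(1 + a * a) - math.log(a) * math.atan(a) + Ti2(a)

for a in (0.5, 1.0, 3.0):
    print(a, lhs(a), rhs(a))   # the two columns agree to several digits

G = 0.915965594177219          # Catalan's constant
print(abs(lhs(1.0) - ((math.pi / 4) * math.log(2) + G)))
```

For $a=1$ the last line reproduces $\frac{\pi}{4}\ln 2 + G$ to the accuracy of the quadrature.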
A:
As @Dennis Orton answered, Feynman trick is certainly the most elegant approach for the solution.
What you could also do is
$$\frac 1 {1+x^2}=\frac i 2 \left( \frac 1 {x+i}-\frac 1 {x-i}\right)$$ and we face two integrals
$$I_k=\int \frac {\log(1+ax)}{x+k i}\,dx=\text{Li}_2\left(\frac{1+a x}{1-i a k}\right)+\log (a x+1) \log \left(1-\frac{1+a x}{1-i a k}\right)$$
$$J_k=\int_0^p \frac {\log(1+ax)}{x+k i}\,dx=\text{Li}_2\left(\frac{i (1+a p)}{a k+i}\right)+\log (1+a p) \log \left(\frac{a (k-i p)}{a k+i}\right)-\text{Li}_2\left(\frac{i}{a k+i}\right)$$
Computing $\frac i 2(J_1-J_{-1})$ and making $p \to\infty$, assuming $a>0$ you should end with
$$\int _0^{\infty }\frac{\log \left(1+ax\right)}{1+x^2}\,dx=\frac{1}{4} \pi \log \left(1+a^2\right)+\log (a) \cot ^{-1}(a)+\frac{1}{2} i \left(\text{Li}_2\left(-\frac{i}{a}\right)-\text{Li}_2\left(\frac{i}{a}\right)\right)$$
Q:
Spinner returning null
I have 4 spinners that are returning null and I can't figure out why. I made sure I have the ids right and followed my code that I had used in another activity to make sure it was right but it keeps crashing saying
01-25 17:02:47.464: E/AndroidRuntime(13052): java.lang.NullPointerException
01-25 17:02:47.464: E/AndroidRuntime(13052): at com.skateconnect.AllSpotsActivity$2.onClick(AllSpotsActivity.java:157)
the whole code for the class is
public class AllSpotsActivity extends ListActivity {
// Progress Dialog
private ProgressDialog pDialog;
// Creating JSON Parser object
JSONParser jParser = new JSONParser();
ArrayList<HashMap<String, String>> spotsList;
GPSTracker gps;
// url to get all products list
//private static String url_all = "http://72.83.78.137:8080/skate_connect/get_all.php";
private static String url_all = "http://skateconnect.no-ip.biz:8080/skate_connect/get_all.php";
// JSON Node names
private static final String TAG_SUCCESS = "success";
private static final String TAG_SPOTS = "spots";
private static final String TAG_PID = "pid";
private static final String TAG_NAME = "name";
private int search_trig=0;
// products JSONArray
JSONArray spots = null;
private Spinner spinner_pavement, spinner_traffic, spinner_enviro,spinner_dist;
private String str_pavement, str_traffic, str_enviro,str_dist;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.all_spots);
gps = new GPSTracker(AllSpotsActivity.this);
// Hashmap for ListView
spotsList = new ArrayList<HashMap<String, String>>();
// Loading products in Background Thread
whattosearch();
//new LoadAllSpots().execute();
// Get listview
ListView lv = getListView();
// on seleting single product
// launching Edit Product Screen
lv.setOnItemClickListener(new OnItemClickListener() {
@Override
public void onItemClick(AdapterView<?> parent, View view,
int position, long id) {
// getting values from selected ListItem
String pid = ((TextView) view.findViewById(R.id.pid)).getText()
.toString();
// Starting new intent
Intent in = new Intent(getApplicationContext(),
ViewSpotActivity.class);
// sending pid to next activity
in.putExtra(TAG_PID, pid);
// starting new activity and expecting some response back
startActivityForResult(in, 100);
}
});
}
public class CustomOnItemSelectedListener implements OnItemSelectedListener {
@Override
public void onItemSelected(AdapterView<?> parent, View view, int pos,
long id) {
Toast.makeText(parent.getContext(),
parent.getItemAtPosition(pos).toString(),
Toast.LENGTH_SHORT).show();
}
@Override
public void onNothingSelected(AdapterView<?> arg0) {
// TODO Auto-generated method stub
}
}
public void addListenerOnSpinnerItemSelection() {
spinner_pavement = (Spinner) findViewById(R.id.spinner_search_pavement);
spinner_pavement
.setOnItemSelectedListener(new CustomOnItemSelectedListener());
spinner_traffic = (Spinner) findViewById(R.id.spinner_search_traffic);
spinner_traffic
.setOnItemSelectedListener(new CustomOnItemSelectedListener());
spinner_enviro = (Spinner) findViewById(R.id.spinner_search_enviro);
spinner_enviro
.setOnItemSelectedListener(new CustomOnItemSelectedListener());
spinner_dist = (Spinner) findViewById(R.id.spinner_dist);
spinner_dist
.setOnItemSelectedListener(new CustomOnItemSelectedListener());
}
void whattosearch(){
final Dialog search = new Dialog(this);
search.setContentView(R.layout.search);
search.setTitle("What to Search For: ");
search.setCancelable(false);
spinner_pavement = (Spinner) findViewById(R.id.spinner_search_pavement);
spinner_traffic = (Spinner) findViewById(R.id.spinner_search_traffic);
spinner_enviro = (Spinner) findViewById(R.id.spinner_search_enviro);
spinner_dist = (Spinner) findViewById(R.id.spinner_dist);
Button dialogButton = (Button) search.findViewById(R.id.ButtonOK);
// if button is clicked, close the custom dialog
dialogButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
//these are returning null...
str_pavement = spinner_pavement.getSelectedItem().toString();
str_traffic = spinner_traffic.getSelectedItem().toString();
str_enviro = spinner_enviro.getSelectedItem().toString();
str_dist = spinner_dist.getSelectedItem().toString();
search_trig=1;
search.dismiss();
new LoadAllSpots().execute();
}
});
Button dialogSeeAll = (Button) search.findViewById(R.id.ButtonAll);
// if button is clicked, close the custom dialog
dialogSeeAll.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
search.dismiss();
search_trig=0;
new LoadAllSpots().execute();
}
});
search.show();
}
// Response from Edit Product Activity
// using longest equation to get least amount of error
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
// if result code 100
if (resultCode == 100) {
// if result code 100 is received
// means user edited/deleted product
// reload this screen again
Intent intent = getIntent();
finish();
startActivity(intent);
}
}
private double getDistance(double fromLat, double fromLon, double toLat,
double toLon) {
Location location1 = new Location("loc1");
location1.setLatitude(fromLat);
location1.setLongitude(fromLon);
Location location2 = new Location("loc2");
location2.setLatitude(toLat);
location2.setLongitude(toLon);
double d = location1.distanceTo(location2) * 0.000621371;// convert to
// miles
return d;
}
/**
* Background Async Task to Load all product by making HTTP Request
* */
class LoadAllSpots extends AsyncTask<String, String, String> {
/**
* Before starting background thread Show Progress Dialog
* */
@Override
protected void onPreExecute() {
super.onPreExecute();
pDialog = new ProgressDialog(AllSpotsActivity.this);
pDialog.setMessage("Loading Spots. Please wait...");
pDialog.setIndeterminate(false);
pDialog.setCancelable(false);
pDialog.show();
}
/**
* getting All products from url
* */
@Override
protected String doInBackground(String... args) {
double slat = gps.getLatitude();
double slong = gps.getLongitude();
// Building Parameters
List<NameValuePair> params = new ArrayList<NameValuePair>();
// getting JSON string from URL
JSONObject json = jParser.makeHttpRequest(url_all, "GET", params);
// Check your log cat for JSON response
Log.d("All Spots: ", json.toString());
try {
// Checking for SUCCESS TAG
int success = json.getInt(TAG_SUCCESS);
if (success == 1) {
// products found
// Getting Array of Products
spots = json.getJSONArray(TAG_SPOTS);
// looping through All Products
for (int i = 0; i < spots.length(); i++) {
JSONObject c = spots.getJSONObject(i);
// Storing each json item in variable
double elong = Double.parseDouble(c
.getString("longitude"));
double elat = Double.parseDouble(c
.getString("latitude"));
double dist = getDistance(slat, slong, elat, elong);
String distance = String.format("%.1f", dist);
String id = c.getString(TAG_PID);
String pave = c.getString("pavement");
String traffic = c.getString("traffic");
String enviro = c.getString("environment");
String name = c.getString(TAG_NAME) + ": " + distance
+ " Miles Away";
//need to create case statement here on search_trig
//0 means search all
//1 means search by inputs
// creating new HashMap
HashMap<String, String> map = new HashMap<String, String>();
// adding each child node to HashMap key => value
map.put(TAG_PID, id);
map.put(TAG_NAME, name);
map.put("distance", distance);
// adding HashMap to ArrayList
spotsList.add(map);
}
} else {
// no products found
// Launch Add New product Activity
Intent i = new Intent(getApplicationContext(),
NewSpotActivity.class);
// Closing all previous activities
i.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
startActivity(i);
}
} catch (JSONException e) {
e.printStackTrace();
}
return null;
}
/**
* After completing background task Dismiss the progress dialog
* **/
@Override
protected void onPostExecute(String file_url) {
// dismiss the dialog after getting all products
pDialog.dismiss();
// updating UI from Background Thread
runOnUiThread(new Runnable() {
@Override
public void run() {
/**
* Updating parsed JSON data into ListView
* */
ListAdapter adapter = new SimpleAdapter(
AllSpotsActivity.this, spotsList,
R.layout.list_item, new String[] { TAG_PID,
TAG_NAME, "distance" }, new int[] {
R.id.pid, R.id.name, R.id.distance });
Collections.sort(spotsList,
new Comparator<Map<String, String>>() {
@Override
public int compare(Map<String, String> o1,
Map<String, String> o2) {
String value1 = o1.get("distance");
String value2 = o2.get("distance");
return Double.valueOf(value1).compareTo(
Double.valueOf(value2));
}
});
// updating listview
setListAdapter(adapter);
}
});
}
}
}
If anyone could help me out or point me in the right direction that would be great. I think it might be because the spinners are in a dialog box but I'm not quite sure.
Thank you in advance,
Tyler
EDIT: full logcat
01-25 20:21:03.652: E/AndroidRuntime(18730): FATAL EXCEPTION: main
01-25 20:21:03.652: E/AndroidRuntime(18730): Process: com.skateconnect, PID: 18730
01-25 20:21:03.652: E/AndroidRuntime(18730): java.lang.NullPointerException
01-25 20:21:03.652: E/AndroidRuntime(18730): at com.skateconnect.AllSpotsActivity$2.onClick(AllSpotsActivity.java:157)
01-25 20:21:03.652: E/AndroidRuntime(18730): at android.view.View.performClick(View.java:4452)
01-25 20:21:03.652: E/AndroidRuntime(18730): at android.view.View$PerformClick.run(View.java:18498)
01-25 20:21:03.652: E/AndroidRuntime(18730): at android.os.Handler.handleCallback(Handler.java:733)
01-25 20:21:03.652: E/AndroidRuntime(18730): at android.os.Handler.dispatchMessage(Handler.java:95)
01-25 20:21:03.652: E/AndroidRuntime(18730): at android.os.Looper.loop(Looper.java:137)
01-25 20:21:03.652: E/AndroidRuntime(18730): at android.app.ActivityThread.main(ActivityThread.java:5083)
01-25 20:21:03.652: E/AndroidRuntime(18730): at java.lang.reflect.Method.invokeNative(Native Method)
01-25 20:21:03.652: E/AndroidRuntime(18730): at java.lang.reflect.Method.invoke(Method.java:515)
01-25 20:21:03.652: E/AndroidRuntime(18730): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:777)
01-25 20:21:03.652: E/AndroidRuntime(18730): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:593)
01-25 20:21:03.652: E/AndroidRuntime(18730): at dalvik.system.NativeStart.main(Native Method)
A:
Change the following lines in your whattosearch() method...
spinner_pavement = (Spinner) findViewById(R.id.spinner_search_pavement);
spinner_traffic = (Spinner) findViewById(R.id.spinner_search_traffic);
spinner_enviro = (Spinner) findViewById(R.id.spinner_search_enviro);
spinner_dist = (Spinner) findViewById(R.id.spinner_dist);
...to use the findViewById(...) method of your Dialog. Example...
spinner_pavement = (Spinner) search.findViewById(R.id.spinner_search_pavement);
You seem to have got your wires crossed along the way and you're trying to find the spinners in your activity's content view instead of in the dialog's.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to add react with typescript to existing asp.net core mvc application
I am trying to add react components (typescript) in the form of .tsx files to be rendered on some of my asp.net core mvc .cshtml pages but cannot seem to get things working properly. Is there a good way to do this?
A:
You basically need to use webpack and modify your .csproj to do a yarn build before running to transpile your typescript and connect all of the dependencies into a bundle.
Start by adding Nuget packages: Yarn.MSBuild and Microsoft.TypeScript.MSBuild to your project.
tsconfig.json: (put in root dir of your project)
{
"typeAcquisition": {
"enable": true
},
"compileOnSave": false,
"compilerOptions": {
"sourceMap": true,
"module": "commonjs",
"target": "es6",
"noImplicitAny": false,
"jsx": "react",
"outDir": "wwwroot/dist",
"moduleResolution": "node"
},
"exclude": [
"node_modules",
"wwwroot"
]
}
^^Note that this will make it so that your .tsx files transpile into folder: wwwroot/dist, change outDir if you want it somewhere else
package.json (put in root dir of your project)
{
"scripts": {
"webpack": "webpack"
},
"devDependencies": {
"@types/react": "^16.4.6",
"@types/react-dom": "^16.0.6",
"@types/webpack-env": "^1.13.6",
"aspnet-webpack": "^3.0.0",
"awesome-typescript-loader": "^5.2.0",
"babel-core": "^6.26.3",
"bootstrap": "4.3.1",
"clean-webpack-plugin": "^0.1.19",
"typescript": "^2.9.2",
"webpack": "^4.15.1",
"webpack-cli": "^3.0.8",
"webpack-dev-middleware": "^3.1.3",
"webpack-hot-middleware": "^2.22.2"
},
"dependencies": {
"react": "^16.4.1",
"react-dom": "^16.4.1",
"react-hot-loader": "^4.0.0"
}
}
^^add whatever npm dependencies you need in your package.json (obviously) as well as your typescript @type dependencies. These are just the minimum viable packages for react and hot reloading.
Webpack.targets (put in root dir of your project)
<Project>
<Target Name="EnsureNodeModulesInstalled"
BeforeTargets="Build"
Inputs="yarn.lock;package.json"
Outputs="node_modules/.yarn-integrity">
<Yarn Command="install" />
</Target>
<Target Name="PublishWebpack"
BeforeTargets="Publish">
<ConvertToAbsolutePath Paths="$(PublishDir)">
<Output TaskParameter="AbsolutePaths" PropertyName="AbsPublishDir" />
</ConvertToAbsolutePath>
<Yarn Command="run webpack --env.prod --env.publishDir=$(AbsPublishDir)" />
</Target>
</Project>
^^this file Webpack.targets is to be added to your .csproj file (I added it at the end) and should be added like so:
In your .csproj:
<Import Project="Webpack.targets" />
webpack.config.js: (put in root dir of your project)
const path = require('path');
const CheckerPlugin = require('awesome-typescript-loader').CheckerPlugin;
const CleanWebpackPlugin = require('clean-webpack-plugin');
module.exports = (env) => {
const isDevBuild = !(env && env.prod);
const outputDir = (env && env.publishDir)
? env.publishDir
: __dirname;
return [{
mode: isDevBuild ? 'development' : 'production',
devtool: 'inline-source-map',
stats: { modules: false },
entry: {
'App': './ClientApp/App.tsx',
},
watchOptions: {
ignored: /node_modules/
},
output: {
filename: "dist/[name].js",
path: path.join(outputDir, 'wwwroot'),
publicPath: '/'
},
resolve: {
// Add '.ts' and '.tsx' as resolvable extensions.
extensions: [".ts", ".tsx", ".js", ".json"]
},
devServer: {
hot: true
},
module: {
rules: [
// All files with a '.ts' or '.tsx' extension will be handled by 'awesome-typescript-loader'.
{
test: /\.tsx?$/,
include: /ClientApp/,
loader: [
{
loader: 'awesome-typescript-loader',
options: {
useCache: true,
useBabel: true,
babelOptions: {
babelrc: false,
plugins: ['react-hot-loader/babel'],
}
}
}
]
}
]
},
plugins: [
new CleanWebpackPlugin(path.join(outputDir, 'wwwroot', 'dist')),
new CheckerPlugin()
]
}];
};
^^Note the entry point and modules->rules->include. That means webpack is being configured to look in a folder called ClientApp (in the root dir of your project) for the .tsx files to be transpiled. If you want your .tsx files elsewhere simply change these up. The entry point also specifies that the file entry point is App.tsx, put your highest level react component (that depends on all of your other components) into here or simply include it in this file.
Now add WebpackDevMiddleware to your Startup->Configure function:
app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions
{
HotModuleReplacement = true
});
Now if in your App.tsx file you select a div (say one with id 'root') and render your component into it, all you need to do to add your React component to a .cshtml file is include the transpiled App.js (in wwwroot/dist) in the .cshtml along with a div with id 'root', and bam: you have React components that can be added to .cshtml without using any SPA bs.
Credit:
https://natemcmaster.com/blog/2018/07/05/aspnetcore-hmr/
Q:
PostgreSQL INSERT dynamic value in json
There are two tables:
CREATE TABLE user (
id bigserial,
name varchar(255),
address varchar(255)
)
CREATE TABLE user_history (
id bigserial,
user_id int8,
user_json json,
create_date timestamp
)
user.id is generated by DEFAULT.
I want to have a history record for the user being created, but i don't know, how to pass generated user.id to json. Something like this:
INSERT INTO user_history
VALUES (
DEFAULT,
(SELECT id FROM user WHERE name = 'Some name'),
'{
"id": GENERATED ID HERE,
"name": "Some name",
"address": "Some address"
}',
'2010-01-01 00:00:00'
);
Postgres 10
A:
on the top:
with c as (SELECT id FROM user WHERE name = 'Some name')
INSERT INTO user_history (user_id,user_json,create_date)
select id, concat(
'{
"id": ',id,',
"name": "Some name",
"address": "Some address"
}')::json,
'2010-01-01 00:00:00'
from c
;
also maybe use json_build_object for generating json?..
here's a working example
update
I did not realize all keys/values in the json come from the user table; below is a shorter query for it:
INSERT INTO user_history (user_id,user_json,create_date)
select id, json_build_object('id',id,'name',name,'address',address), '2010-01-01 00:00:00'
from "user"
http://sqlfiddle.com/#!17/19a8e/3
Also saving user_json does not make much sense - you can always get it by FK on user_history.user_id
t=# select to_json(u) from "user" u;
to_json
--------------------------------------------
{"id":1,"name":"Some name","address":null}
Q:
Amazon S3 java sdk - download progress
Trying to find how to output a clean count of 0 - 100% while downloading a file from amazon. There are plenty of examples of how to do this for uploading but they don't seem to directly translate to downloads.
At the moment I'm doing the following
TransferManager transferManager = new TransferManager(s3Client);
Download download = transferManager.download(s3Request, downloadedFile);
while (!download.isDone()) {
LOGGER.info("Downloaded >> " + download.getProgress().getPercentTransferred());
}
Which does work, but it spams the console with the same value many many times (I assume because it's threaded).
I know I can also do something like:
ProgressListener listener = progressEvent -> LOGGER.info("Bytes transfer >> " + progressEvent.getBytesTransferred());
GetObjectRequest s3Request = new GetObjectRequest("swordfish-database", snapshot.getKey());
TransferManager transferManager = new TransferManager(s3Client);
Download download = transferManager.download(s3Request, downloadedFile);
download.addProgressListener(listener);
download.waitForCompletion();
Which also works, but I lose the ease of using download.getProgress().getPercentTransferred(). Is this the more proper way of doing this?
Ultimately I want to be getting an int to use for a progress bar. Any advice?
A:
Well if you reserve a separate thread for progress updating then I don't see any problems with the first approach - you could just add something like TimeUnit.SECONDS.sleep(1) to update the progress every second instead of looping all the time. On the other hand in the second approach you only have to divide getBytesTransferred() by getBytes() to get percentage which also doesn't seem too difficult :-)
Q:
C++0x: iterating through a tuple with a function
I have a function named _push which can handle different parameters, including tuples, and is supposed to return the number of pushed elements.
For example, _push(5) should push '5' on the stack (the stack of lua) and return 1 (because one value was pushed), while _push(std::make_tuple(5, "hello")) should push '5' and 'hello' and return 2.
I can't simply replace it by _push(5, "hello") because I sometimes use _push(foo()) and I want to allow foo() to return a tuple.
Anyway I can't manage to make it work with tuples:
template<typename... Args, int N = sizeof...(Args)>
int _push(const std::tuple<Args...>& t, typename std::enable_if<(N >= 1)>::type* = nullptr) {
return _push<Args...,N-1>(t) + _push(std::get<N-1>(t));
}
template<typename... Args, int N = sizeof...(Args)>
int _push(const std::tuple<Args...>& t, typename std::enable_if<(N == 0)>::type* = nullptr) {
return 0;
}
Let's say you want to push a tuple<int,bool>. This is how I expect it to work:
_push<{int,bool}, 2> is called (first definition)
_push<{int,bool}, 1> is called (first definition)
_push<{int,bool}, 0> is called (second definition)
However with g++ 4.5 (the only compiler I have which supports variadic templates), I get an error concerning _push<Args...,N-1>(t) (line 3) saying that it couldn't find a matching function to call (without any further detail). I tried without the "..." but I get another error saying that the parameters pack is not expanded.
How can I fix this?
PS: I know that you can do this using a template struct (this is in fact what I was doing before), but I'd like to know how to do it with a function
PS 2: PS2 is solved, thanks GMan
A:
I don't have a compiler to test any of this, so you'll have to report any issues.
The following should allow you to iterate across a tuple calling a function. It's based off your logic, with a few minor changes. (N is a std::size_t, it's the first parameter to allow Args (and Func) to be deduced on further calls, it just calls some function instead of performing a specific task). Nothing too drastic:
namespace detail
{
// just to keep things concise and readable
#define ENABLE_IF(x) typename std::enable_if<(x)>::type
// recursive case
template <std::size_t N, typename... Args, typename Func>
ENABLE_IF(N >= 1) iterate(const std::tuple<Args...>& pTuple, Func& pFunc)
{
pFunc(std::get<N - 1>(pTuple));
iterate<N - 1>(pTuple, pFunc);
}
// base case
template <std::size_t N, typename... Args, typename Func>
ENABLE_IF(N == 0) iterate(const std::tuple<Args...>&, Func&)
{
// done
}
}
// iterate tuple
template <typename... Args, typename Func>
Func iterate(const std::tuple<Args...>& pTuple, Func pFunc)
{
detail::iterate<sizeof...(Args)>(pTuple, pFunc);
return pFunc;
}
Assuming that all works, you then just have:
struct push_lua_stack
{
// constructor taking reference to stack to push onto
// initialize count to 0, etc....
template <typename T>
void operator()(const T& pX)
{
// push pX onto lua stack
++count;
}
std::size_t count;
};
And lastly:
std::size_t pushCount = iterate(someTuple, push_lua_stack()).count;
Let me know if that all makes sense.
Since you seem to really be seriously against structs for some reason, just make a function like this:
template <typename T>
void push_lua(const T& pX)
{
// push pX onto lua stack
}
And change everything to specifically call that function:
namespace detail
{
// just to keep things concise and readable
#define ENABLE_IF(x) typename std::enable_if<(x)>::type
// recursive case
template <std::size_t N, typename... Args>
ENABLE_IF(N >= 1) iterate(const std::tuple<Args...>& pTuple)
{
// specific function instead of generic function
push_lua(std::get<N - 1>(pTuple));
iterate<N - 1>(pTuple);
}
// base case
template <std::size_t N, typename... Args>
ENABLE_IF(N == 0) iterate(const std::tuple<Args...>&)
{
// done
}
}
// iterate tuple
template <typename... Args>
void _push(const std::tuple<Args...>& pTuple)
{
detail::iterate<sizeof...(Args)>(pTuple);
}
No idea why you'd avoid generic functionality though, or be so against structs.
Oh how nice polymorphic lambdas would be. Ditch the utility push_lua_stack class and just write:
std::size_t count = 0;
iterate(someTuple, [&](auto pX)
{
// push onto lua stack
++count;
});
Oh well.
A:
I solved the problem with some hacks. Here is the code:
template<typename... Args, int N = sizeof...(Args)>
int _push(const std::tuple<Args...>& t, std::integral_constant<int,N>* = nullptr, typename std::enable_if<(N >= 1)>::type* = nullptr) {
return _push(t, static_cast<std::integral_constant<int,N-1>*>(nullptr)) + _push(std::get<N-1>(t));
}
template<typename... Args, int N = sizeof...(Args)>
int _push(const std::tuple<Args...>& t, std::integral_constant<int,N>* = nullptr, typename std::enable_if<(N == 0)>::type* = nullptr) {
return 0;
}
Don't hesitate to post if you find a better way
Q:
C++ referencing operator in declaration
I am a beginner in C++, and this must be a really basic question. I understand & stands for referencing operation. For example, a = &b assigns the address of b to a. However, what does & in a declaration such as the following mean?
className& operator=(const className&);
Do the following make sense and what is the difference between the last line and the following?
className* operator=(const className*);
From the answers below, it seems --- as I understand it --- that the following is valid too. Is it? If it is, how is it different from the version with "&"?
className operator=(const className);
After reading the following answers and some more outside, I realized part of my original confusion stems from mixing up reference as in general computer science and reference type as in C++. Thank you all who answered my question. All the answers clarify my understanding to different degrees, even though I can only pick one as the accepted answer.
A:
The token & has three distinct meanings in C++, two of which are inherited from C, and one of which is not.
It's the bitwise AND operator (unless overloaded).
It's the address operator, which acts on an lvalue to yield a pointer to it. (Unless overloaded.) This is what is happening in a = &b.
It denotes a reference type. const className& as a parameter is a const reference to an object of type className, so when the function is called, the argument will be passed by reference, but it won't be allowed to be modified by the function. The function you gave also returns a reference.
A:
Assignment Operator
Understanding is best gained by example:
class A {
int x;
public:
A(int value) : x(value) {}
A& operator=(const A& from) { // Edit: added missing '=' in 'operator='
x = from.x;
return *this;
}
};
A m1(7);
A m2(9);
m2 = m1; /// <--- calls the function above, replaces m2.x with 7
Here, we defined the assignment operator. This special method is designed to provide assignment capability to objects.
The reason that it returns a reference to *this is so you can chain assignments without excessive memory copies:
A m3(11);
m3 = m1 = m2; /// <--- now, m3.x and m1.x both set to m2.x
Expanded as follows:
m3 = ( something )
where
(something) is a reference to the object m1
by result of call to m1.operator=(m2) method
such that
the returned reference is then passed into m3.operator(...)
Chaining lets you do this:
(m1=m2).function(); // calls m1.function after calling assignment operator
Libraries such as boost leverage this pattern (for a different type of chaining example, see the program options library in boost) and it can be useful when developing a domain specific 'language'.
If instead full objects were returned from operator=, the chained assignment would at a minimum involve multiple extra copies of the objects, wasting CPU cycles and memory. In many cases things would not work properly because of the copy.
Essentially, using a reference avoids a copy.
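The chaining behaviour can be checked with a minimal compilable example (the Counter type here is made up for illustration):

```cpp
struct Counter {
    int x;
    Counter() : x(0) {}
    // Returning *this by reference is what makes c = b = a
    // parse and work as c = (b = a).
    Counter& operator=(const Counter& other) {
        x = other.x;
        return *this;
    }
};
```

If operator= instead returned Counter by value, an expression like (b = a).x = 42 would modify a temporary copy and b would be left unchanged.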
Note
In reality, (simplified explanation) a reference is just a fancy syntax for a C pointer.
In the common case, you can then write code with A.x instead of A->x.
Caution
Returning a pure reference from a method is often dangerous; newcomers can be tempted to return a reference to an object constructed locally inside the method on the stack, which depending on the object can lead to obscure bugs.
Your pointer example
It depends on what you return from the body of the method, but regardless, the following would instead return a pointer to some instance of className:
className* operator=(const className*);
This will compile and it even seems to work (if you return this from the method), but this does violate the Rule of Least Surprise, as it is likely anyone else attempting to use your code would not expect the assignment operator to return a pointer.
If you think about base types:
int x=1; int y=2; int z; z=y=x;
will never ever do anything other than return integers, so having operator= return the assigned-to object is consistent.
It also doesn't let you do this:
(m1 = m2).something
It also allows you to pass NULL which is something assignment operators don't typically want to care about.
Instead of writing
blah& operator=(const blah& x) { a = x.a; return *this; }
You would need to write:
blah* operator=(const blah* x) {
if (x) { a = x->a; return this; }
else { /* handle null pointer */ return NULL; }
}
Q:
Calling base class in C# example
Consider the code below:
public class Analyzer {
protected Func f,fd;
public delegate double Func( double x );
public Analyzer( Func f, Func fd ) {
this.f = f;
this.fd = fd;
}
public Analyzer( Func f ) {
this.f = f;
fd = dx;
}
public Analyzer( ) { }
protected double dx( double x ) {
double h = x / 50.0;
return ((f(x + h) - f(x - h)) / (2 * h));
}
public double evaluate(double x) {
return f( x );
}
public double evaluateDerived( double x ) {
return fd( x );
}
public double solve(double x0) {
double eps = 1, x1 = f(x0), x2 = fd(x0);
do x0 = x0 - ( f( x0 ) / fd( x0 ) );
while ( f( x0 ) > eps );
return x0;
}
}
public class PolyAnalyzer : Analyzer {
private double[] coefs;
public PolyAnalyzer( params double[] coef ) {
coefs = coef;
f = poly;
fd = dx;
}
private double poly( double x ) {
double sum = 0;
for ( int i = 0 ; i < coefs.Length ; i++ ) {
sum += coefs[i] * Math.Pow(x,coefs.Length-1-i);
}
return sum;
}
}
I was trying to think of a way to send poly to the constructor Analyzer(Func f); is there a way to do that here? I tried something like:
public PolyAnalyzer( params double[] coef ) : base(new Func(poly)){
coefs = coef;
}
but it doesn't compile... compilation error:
An object reference is required for the nonstatic field, method, or property 'member'
I'd appreciate a well-explained answer, and not just how it's done... :)
A:
In my opinion you're trying to combine object-oriented inheritance and functional programming, and it's not working well in this case.
I would write
public abstract class Analyzer {
protected abstract double Fd(double x);
// ...
}
and override it in the descendant class (if you want to stick with an OO hierarchy). This is classic Strategy Pattern implementation.
To address your comment, if you want Analyzer to be instantiable, use:
public class Analyzer {
protected virtual double Fd(double x)
{
// provide default implementation
}
// ...
}
If you want to stick to functional programming, I would use composition instead of inheritance:
// Does not descend from Analyzer. Could implement IAnalyzer.
public class PolyAnalyzer {
private readonly Analyzer analyzer;
private double[] coefs;
public PolyAnalyzer( params double[] coef ) {
coefs = coef;
analyzer = new Analyzer(poly);
}
public double evaluate(double x) {
return analyzer.evaluate(x);
}
// Implement evaluateDerived and solve through delegation
private double poly( double x ) {
double sum = 0;
for ( int i = 0 ; i < coefs.Length ; i++ ) {
sum += coefs[i] * Math.Pow(x,coefs.Length-1-i);
}
return sum;
}
}
Alternately, if you took @Reed Copsey's advice and switched to Func<double, double>, you could use closures in a factory method:
public static class PolyAnalyzerFactory {
public static Analyzer Create( params double[] coef ) {
var coefs = coef.ToArray(); // protect against mutations to original array
return new Analyzer(
x =>
{
double sum = 0;
for ( int i = 0 ; i < coefs.Length ; i++ ) {
sum += coefs[i] * Math.Pow(x,coefs.Length-1-i);
}
return sum;
});
}
}
Q:
What do you call the prongs of an electrical plug?
I speak German on a daily basis and I don't know what to call the prongs of an electrical plug. This makes me so uncomfortable that I have to ask in English about it. I have searched the internet to find out, but couldn't find a schematic showing the name of the prongs in German. Are they "Zinken", "Zacken", "Sporne" or even something else? Who's the electrician to tell me?
and the original image link, although I don't expect it to last for a very long time:
A:
These are called
Stifte (singular: der Stift)
or Kontaktstifte. There are Rundstifte like in continental Europe and Flachstifte like in the UK and the US.
Q:
Southwest corner of LAX has peculiar airplane parts, what is this area?
In the Southwest corner of LAX (at the intersection of Imperial Highway and Pershing Dr.) there is a strange looking area that appears to have some abandoned airplane parts. Also, it looks like a road-striper had a good time one day doing donuts around one of the artifacts. Anyone familiar with this area and what it is used for?
See Google Maps
A:
It is an aircraft fire simulator. They use it for training emergency response personnel and vehicles.
You can see them training on one in this YouTube video
Source: Leipzig Halle Airport
A:
All major airports have practice fire fighting facilities equipped with dummy aircraft that are made of steel so they don't burn.
In addition to the dummy steel aircraft, they will often use retired aircraft to use for familiarization and evacuation training.
Source
Source
Source
Q:
Building unit tests for MVC2 AsyncControllers
I'm considering re-rewriting some of my MVC controllers to be async controllers. I have working unit tests for these controllers, but I'm trying to understand how to maintain them in an async controller environment.
For example, currently I have an action like this:
public ContentResult Transaction()
{
do stuff...
return Content("result");
}
and my unit test basically looks like:
var result = controller.Transaction();
Assert.AreEqual("result", result.Content);
Ok, that's easy enough.
But when your controller changes to look like this:
public void TransactionAsync()
{
do stuff...
AsyncManager.Parameters["result"] = "result";
}
public ContentResult TransactionCompleted(string result)
{
return Content(result);
}
How do you suppose your unit tests should be built? You can of course invoke the async initiator method in your test method, but how do you get at the return value?
I haven't seen anything about this on Google...
Thanks for any ideas.
A:
As with any async code, unit testing needs to be aware of thread signalling. .NET includes a type called AutoResetEvent which can block the test thread until an async operation has been completed:
public class MyAsyncController : Controller
{
public void TransactionAsync()
{
AsyncManager.Parameters["result"] = "result";
}
public ContentResult TransactionCompleted(string result)
{
return Content(result);
}
}
[TestFixture]
public class MyAsyncControllerTests
{
#region Fields
private AutoResetEvent trigger;
private MyAsyncController controller;
#endregion
#region Tests
[Test]
public void TestTransactionAsync()
{
controller = new MyAsyncController();
trigger = new AutoResetEvent(false);
// When the async manager has finished processing an async operation, trigger our AutoResetEvent to proceed.
controller.AsyncManager.Finished += (sender, ev) => trigger.Set();
controller.TransactionAsync();
trigger.WaitOne()
// Continue with asserts
}
#endregion
}
Hope that helps :)
Q:
WPF Animation FPS vs. CPU usage - Am I expecting too much?
Working on a screen saver for my wife, http://cchearts.codeplex.com/, and while I've been able to improve FPS on lower end machines the CPU usage still seems very high.
Here's some numbers I ran from a few 5 minute sampling periods:
~60FPS 35% average CPU on Core 2 Duo T7500 @ 2.2GHz, 3GB ram, NVIDIA Quadro NVS 140M (128MB), Vista [My dev laptop]
~40FPS 50% average CPU on Pentium D @ 3.4GHz, 1.5GB ram, Standard VGA Graphics Adapter (unknown), 2003 Server [A crappy desktop]
I can understand the lower frame rate and higher CPU usage on the crappy desktop but it still seems pretty high and 35% on my dev laptop seems high as well.
I'd really like to analyze the application to get more details but I'm having issues there as well so I'm wondering if I'm doing something wrong (never profiled WPF before).
WPF Performance Suite:
Process Launch Error
Unable to attach to process:
CCHearts.exe
Do you want to kill it?
This error message occurs when I click cancel after attempting launch. If I don't click cancel it sits there idle, I guess waiting to attach.
Performance Explorer:
Could not launch C:\Projects2\CC.Hearts\CC.Hearts\bin\Debug (USEVISUAL)\CCHearts.exe. Previous attempt to profile the application finished unsuccessfully. Please restart the application.
Output Window from Performance:
Profiling started.
Profiling process ID 5360 (CCHearts).
Process ID 5360 has exited.
Data written to C:\Projects2\CC.Hearts\CCHearts100608.vsp.
Profiling finished.
PRF0025: No data was collected.
Profiling complete.
So I'm stuck wanting to improve performance but have no concrete way to determine where the bottleneck is. Have been relatively successful throwing darts at this point but I'm beyond that now :)
Summary:
Lot's of progress so far...
Performance Explorer
Problem: Wasn't working, vague error message.
Solution: Used the command line version to get a real error message that pointed me to this thread pointing out that it was a Symantec dll blocking me. Setting the following registry key fixed the problem:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SysPlant]
"Start"=dword:00000004
WPF Performance Suite
Problem: Wasn't working, no error message.
Solution: Use the WPF Performance Suite designed for .NET 4 from the Windows SDK 7.1 instead of the previous version from Windows SDK 7 (I feel silly :-P)
WPF Animation
Problem: High CPU usage compared to FPS/percieved workload.
Solution: Lots of tweaks here and there. But the HUGE improvement was Cached Composition as pointed out by Jeremiah Morrill. Check out the changesets.
A:
I downloaded your code and saw ~20-28% CPU usage on my quad-core 2.6ghz. By adding cached composition, I was able to get it down to ~6-8% CPU.
I compiled using "DEBUG", not "DEBUG(USEVISUAL)" and in Heart.xaml I added the BitmapCache to the Path:
<Path CacheMode="BitmapCache" ... />
Q:
Suppress a process in linux kernel scheduler (not kill)
In the Linux scheduler, I want to suppress some processes by modifying the scheduler code. Is it possible to suppress a process without killing it, just suppression?
A:
In the linux scheduler, I want to suppress some processes by modifying the scheduler code
Probably not possible, and certainly ill defined. The good way to think about modifying the kernel is first: don't; later: don't yet; and at last: minimally and carefully!
What exactly does "suppressing" a process mean to you? You might want to terminate it. You certainly cannot simply "suppress" a process, since the kernel carefully cleans up after it has terminated.
And why are you wanting to modify the kernel? In general, user-space and user-mode is a better place to do such things (or even systemd). You might want to also have some kernel thread (very tricky).
You might consider kernel to user-space communication with netlink(7), then try to minimize your kernel footprint. Be however aware that the scheduler is a critical, and very well tuned, piece of code inside the kernel.
In practice, I would suggest a high-priority user-land daemon. See setpriority(2), nice(2) and sched(7). We don't know what you want to achieve, but it is likely to be practically doable in user-land. And if it is not, perhaps Linux is not the right kernel for you (taking into account that you, Silvara, are a drone developer). Then look into genuine real-time operating systems, IoT OSes like Contiki, or library operating systems (unikernels) such as MirageOS.
Q:
Extra slide in between itemize environment
My MWE:
\documentclass{beamer}
\usepackage[english]{babel}
\usepackage{calc}
\usepackage[absolute,overlay]{textpos}
\usepackage{pdfsync}
\mode<presentation>
\title[AAA]{AAA}
%\subtitle
\institute[AAA]{BBB}
\author{AAA}
\date{BBB}
% Define the title of each inserted pre-subsection frame
\newcommand*\titleSubsec{Next Subsection}
% Define the title of the "Table of Contents" frame
\newcommand*\titleTOC{Outline}
\begin{document}
\section{Section 1}
\begin{frame}\frametitle{Section 1 - A}
\begin{itemize}
\item <1-> Blablabla:
\begin{itemize}
\item Blablabla
\item Blablabla
\item Blablabla
\item Blablabla
\item $\cdots\cdots\cdots$
\end{itemize}
\vspace{0.5cm}
\item <2-> Blablabla:
\begin{itemize}
\item Blablabla
\item Blablabla
\item Blablabla
\item Blablabla
\item $\cdots\cdots\cdots$
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}\frametitle{Section 1 - B}
PHOTO
\end{frame}
\begin{frame}\frametitle{Section 1 - A}
\begin{itemize}
\item <1-> Blablabla:
\begin{itemize}
\item Blablabla
\item Blablabla
\item Blablabla
\item Blablabla
\item $\cdots\cdots\cdots$
\end{itemize}
\vspace{0.5cm}
\item <2-> Blablabla:
\begin{itemize}
\item Blablabla
\item Blablabla
\item Blablabla
\item Blablabla
\item $\cdots\cdots\cdots$
\end{itemize}
\end{itemize}
\end{frame}
\end{document}
What I basically want is to get rid of page 2 and page 4 of the pdf output file. That is, on the first slide I want to show item <1>, then I want to show a slide with just a photo, and then I want to show item <1> together with item <2> on a new slide.
A:
To solve this task, you need two things:
if you want to display only a specific overlay of a frame, you can use \begin{frame}<1> ... \end{frame}
to reuse an "old" frame, \againframe<>{} is the answer
Example:
\documentclass{beamer}
\begin{document}
\section{Section 1}
\begin{frame}<1>[label=frame1]
\frametitle{Section 1 - A}
\begin{itemize}
\item <1-> Blablabla 1:
\item <2-> Blablabla 2:
\end{itemize}
\end{frame}
\begin{frame}\frametitle{Section 1 - B}
PHOTO
\end{frame}
\againframe<2>{frame1}
\end{document}
Q:
How to get my 24 word seed using bitcoin-cli
I've tried:
bitcoin-cli sethdseed 'word1 word2 ...'
but I got "invalid private key".
Please help me get my 24-word seed using Bitcoin Core.
A:
You can't.
Bitcoin Core is not BIP39 compliant.
A:
How to get my 24 word seed using bitcoin-cli
You write about getting but your commands are about setting. This is confusing.
You can't get a seed-phrase from Bitcoin-core.
You can use the sethdseed function to "Set or generate a new HD wallet seed." This marks any existing private keys as inactive (which may or may not be what you want).
Note that an HD seed and a (24 word) seed phrase are different things.
I've tried: bitcoin-cli sethdseed 'word1 word2 ...' but i got invalid private key.
I think that may be because the second argument is not a seed phrase but a WIF key.
"seed" (string, optional) The WIF private key to use as the new HD seed;
Related questions with useful answers (assuming you're trying to get seed words out of Bitcoin Core because you think that's what's needed to transfer to another wallet that uses them):
How can I transfer a Bitcoin-qt wallet to Electrum?
How To Transfer Coins From Bitcoin Core to Electrum to avoid Sync Process?
Having trouble transfering private keys from Bitcoin Core to Electrum, for want of not having to wait for sync
Q:
Why is GIF image size more than the sum of individual frame size?
I just tried to convert a few JPEGs to a GIF image using some online services. For a collection of 1.8 MB of randomly selected JPEGs, the resultant GIF was about 3.8 MB in size (without any extra compression enabled).
I understand GIF is a lossless compression format. That's why I expected the resultant output to be around 1.8 MB (the input size). Can someone please help me understand what's happening with this extra space?
Additionally, is there a better way to bundle a set of images which are similar to each other (for transmission) ?
A:
JPEG is a lossy format, but it is still compressed. When it is uncompressed into raw pixel data and then recompressed as GIF, it is logical to end up with a bigger size.
GIF is a worse compression method for photographs; it is suited mostly for flat-colored drawings. It uses LZW (a dictionary-based scheme that, like run-length encoding, rewards repetition), so in effect the compressed file contains entries that mean "repeat this value N times", and you need lots of same-colored pixels in horizontal sequence to get good compression.
If you have images that are similar to each other, maybe you should consider packing them as consecutive frames (the more similar ones should be closer) of a video stream and use some lossless video compressor (or even risk it with a lossy one), but maybe this is overkill.
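The "runs of identical pixels" idea can be sketched in a few lines of JavaScript. This is a toy illustration only (real GIF encoding uses LZW, not this simple scheme), but it shows why flat-colored scanlines shrink while photographic ones don't:

```javascript
// Toy run-length encoder: collapses runs of identical pixel values
// into [value, count] pairs.
function rleEncode(pixels) {
  const runs = [];
  for (const p of pixels) {
    const last = runs[runs.length - 1];
    if (last && last[0] === p) {
      last[1] += 1;        // extend the current run
    } else {
      runs.push([p, 1]);   // start a new run
    }
  }
  return runs;
}

// A flat-colored scanline collapses to a few runs:
console.log(rleEncode(['w', 'w', 'w', 'w', 'b', 'b', 'w']));
// [ [ 'w', 4 ], [ 'b', 2 ], [ 'w', 1 ] ]
```

A photographic scanline, where almost every pixel differs from its neighbor, produces one run per pixel and the "compressed" form ends up larger than the input.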
Q:
Using cookies with axios and Vue
I have created a Node.js express server that connects to Salesforce.com using the SOAP interface provided by 'jsforce'. It uses session cookies for authorization via the 'express-session' package. So far, it has a POST method for login and a GET to perform a simple query. Testing with Postman has proven that this server is working as expected.
As the browser interface to this server, I have written a Vue application that uses axios to perform the GET and POST. I need to save the session cookie created during the login POST, then attach the cookie to subsequent CRUD operations.
I have tried various methods to handle the cookies. One method I have tried is using axios response interceptors on the POST
axios.interceptors.response.use(response => {
update.update_from_cookies();
return response;
});
The function 'update_from_cookies' attempts to get the cookie named 'js-force' but it does not find it although I know it is being sent
import Cookie from 'js-cookie';
import store from './store';
export function update_from_cookies() {
let logged_in = Cookie.get('js-force');
console.log('cookie ' + logged_in);
if (logged_in && JSON.parse(logged_in)) {
store.commit('logged_in', true);
} else {
store.commit('logged_in', false);
}
}
I have also seen various recommendations to add parameters to the axios calls but these also do not work.
I would appreciate some advice about how to handle cookies using axios or some similar package that works with Vue
Thanks
A:
The problem has been resolved. I was using the wrong syntax for the axios call
The correct syntax has {withCredentials: true} as the last parameter:
this.axios.post(uri, this.sfdata, {withCredentials: true})
.then( () => {
this.$router.push( {name : 'home' });
})
.catch( () => {
});
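A related option worth noting: if every request to the API needs the session cookie, withCredentials can be set once on a shared axios instance instead of being passed to each call. A sketch (the baseURL is a placeholder, not from the original post):

```javascript
import axios from 'axios';

// Hypothetical base URL; replace with the real Express server address.
const api = axios.create({
  baseURL: 'http://localhost:3000',
  withCredentials: true  // send/receive cookies on every request
});

// Every call made through this instance now carries the session cookie:
// api.post('/login', sfdata).then(() => { /* ... */ });
// api.get('/query').then(response => { /* ... */ });
```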
Q:
File date returning wrong in Delphi xe5
I am using a simple command to get the file date from a file, but keep getting the wrong date.
On my computer I looked and saw the date was 14/3/2014.
But when I run the command I get 30/12/1999 no matter what file I try; the returned date stays the same.
I've tried
BackupFileDate:=FileAge(S);
originalfiledate:=FileAge(fileName);
And
BackupFileDate:=GetFileModDate(S);
originalfiledate:=GetFileModDate(Filename);
function GetFileModDate(filename : string) : TDateTime;
var
F : TSearchRec;
begin
FindFirst(filename,faAnyFile,F);
Result := F.TimeStamp;
//Result := F.Time;
FindClose(F);
end;
Both have the same result.
PS: both BackupFileDate and originalfiledate are now defined as TDate; I've already tried TDateTime as well with the same result.
I would like to get the date and time last edited of the file.
A:
FileAge returns a time stamp used by the OS to record information such as the date and time a file was modified.
You should use the FileDateToDateTime function to convert the Integer value to a more manageable TDateTime format:
FileDateToDateTime(FileAge(fileName));
Note:
function FileAge(const FileName: string): Integer; overload;
is deprecated. There is another version of FileAge
function FileAge(const FileName: string; out FileDateTime: TDateTime; FollowLink: Boolean = True): Boolean;
that returns the time stamp of FileName in the FileDateTime output parameter.
FileAge(filename, timeDate);
EDIT
Depending on the use of the data, it may be (very) important to convert from UTC to local time.
A:
tl;dr Use TFile.GetLastWriteTime or TFile.GetLastWriteTimeUtc.
Your first attempt fails because FileAge returns DOS date time value. That's completely different from a TDateTime.
Your second piece of code essentially works, modulo the fact that you neglected to check for errors. The likely explanation for the error is that you passed an invalid file name. When the call to FindFirst fails, the search record that is returned is undefined.
The TimeStamp property of TSearchRec converts the file time from UTC to local, and then converts from file time to TDateTime.
You'd want to fix the lack of error handling like this:
function GetFileModDate(const FileName: string): TDateTime;
var
F: TSearchRec;
begin
if FindFirst(filename, faAnyFile, F)<>0 then
raise SomeException.Create('...');
Result := F.TimeStamp;
FindClose(F);
end;
You should be clear that this returns a TDateTime in local time.
That said, I would do it in a platform independent way using IOUtils. Specifically TFile.GetLastWriteTime or TFile.GetLastWriteTimeUtc depending on how you want to deal with time zones.
Q:
How to use PNGJS library to create png from rgb matrix?
I'm having trouble figuring out how to create a PNG file (encoding) from the documentation here: https://github.com/niegowski/node-pngjs. The documentation gives an example of manipulating an existing PNG. As a sanity check, I've been trying to give every pixel the same RGB value. I think this issue post is relevant as well: https://github.com/niegowski/node-pngjs/issues/20. Here's the code I've tried:
var pic = new PNG({
width: cols,
height: rows
});
console.log(pic.width, pic.height);
var writeStream = fs.createWriteStream('C:\\Users\\yako\\desktop\\out.png');
pic.pack().pipe(writeStream);
writeStream.on('parsed', function() {
console.log('parsed');
});
writeStream.on('finish', function() {
fs.createReadStream('C:\\Users\\yako\\desktop\\out.png')
.pipe(new PNG({
filterType: -1
}))
.on('parsed', function() {
for (var y = 0; y < this.height; y++) {
for (var x = 0; x < this.width; x++) {
var idx = (this.width * y + x) << 2;
this.data[idx] = 255;
this.data[idx+1] = 218;
this.data[idx+2] = 185;
this.data[idx+3] = 0.5;
}
}
this.pack().pipe(fs.createWriteStream('C:\\Users\\yako\\desktop\\newOut.png'));
});
});
writeStream.on('error', function (err) {
console.error(err);
});
A:
Simple:
var fs = require('fs'),
PNG = require('pngjs').PNG;
var png = new PNG({
width: 100,
height: 100,
filterType: -1
});
for (var y = 0; y < png.height; y++) {
for (var x = 0; x < png.width; x++) {
var idx = (png.width * y + x) << 2;
png.data[idx ] = 255;
png.data[idx+1] = 218;
png.data[idx+2] = 185;
png.data[idx+3] = 128;
}
}
png.pack().pipe(fs.createWriteStream('newOut.png'));
Q:
PHP unsetting array in loop
I am trying to modify an existing array called field_data and delete the parent array of the key->value pair that has Control == OFF. So in this example I would like to unset Array[1]. However, it's not working; what am I doing wrong?
foreach ($field_data['0'] as &$subsection) {
foreach ($subsection as $key => $value)
{
if($key=='Control' && $value =='OFF')
{ echo 'match'; unset($subsection); }
}
return $field_data;
}
Field_data
----------
Array
(
    [0] => Array
        (
            [0] => Array
                (
                    [SECTION_ID] =>
                    [Control] => ON
                )
            [1] => Array
                (
                    [SECTION_ID] =>
                    [Control] => OFF
                )
        )
)
A:
You're trying to remove a variable that PHP is still using, specifically the array the inner loop is looping over.
The way you're checking, you don't even need the inner loop. I would do something like this:
foreach( $field_data[0] as $skey => $svalue ) {
if( array_key_exists('Control', $svalue) && $svalue['Control'] == 'OFF' ) {
unset($field_data[0][$skey]);
}
}
Q:
Xcode 7 linker error with cocoapods
I use Xcode 7 and CocoaPods to work with Parse. My app runs correctly on the simulator, but when I try to test it on my iPhone 5s this error appears:
ld: -undefined and -bitcode_bundle (Xcode setting ENABLE_BITCODE=YES) cannot be used together
clang: error: linker command failed with exit code 1 (use -v to see invocation)
A:
Either remove the "-undefined" linker flag or disable Bitcode.
If you don't have a good reason to use "-undefined", you should get rid of this:
Project Settings -> Target -> Build Settings -> Other Linker Flags -> delete the "-undefined" entry.
Else disabling Bitcode is the way to go:
Project Settings -> Target -> Build Settings -> Enable Bitcode -> set to "No".
Q:
Caliburn Micro Changes from Version 1.1 to 1.5.1?
Does anyone know what changes were made between Caliburn Micro 1.1 and 1.5.1, besides adding support for WinRT and Windows Phone 8?
I need this info, since I am using Caliburn Micro 1.4 in my project and want to update it to 1.5.1.
If there are any major changes, I will go for it.
Is there any change in the naming Conventions?
A:
This is taken from each of the release's changes.txt:
1.2
Improvements to EventAggregator to improve testability and re-use apart from the full Caliburn.Micro framework.
Enabled basic child containers for the SimpleContainer.
Some improvements to the nuget install script.
Improvements and bug fixes for View/ViewModel name resolution.
Fixed some NREs in the new UriBuilder. Now explicitly throws if it cannot locate the view.
Improved logging around searched for Views/ViewModels.
Fixed bugs with the WP7 version of Screen.OnViewReady. It now works consistently.
Improvements to PropertyChangedBase and BindableCollection to better support serialization.
Moved IsInDesign mode out of Bootstrapper and into the Execute class.
Added WP7 platform abstractions for vibration and sound effects, including enabling the window manager to play sounds when showing a custom modal dialog.
Fixed some bugs in the WindowManager related to bubbling actions.
Fixed some issues with the WPF navigation service.
Minor refactoring to enable the new "feature packages".
1.3
Improved serialization of PropertyChangedBase and BindableCollection
Enabled the WP7 UriBuilder to actually build a Uri without navigating.
Added SetUIThreadMarshaller method to Executor to allow customization of the framework's default thread marshalling behavior.
Added optional settings parameters to all window manager apis.
Changed FrameAdapter to inject query string parameters into the ViewModel before the conventional databinding takes place.
Added a new WinRT project. WinRT now supports Execute, BindableCollection, PropertyChangedBase, ExtensionMethods, EventAggregator and SimpleContainer.
Fixed some WPF bugs in Screen
Vast improvements and API enhancements to ViewModelLocator and ViewLocator for easier customization of location conventions.
Fixed a potential memory leak in coroutines that are cancelled and re-used.
Enabled design-time application of convention bindings (preliminary support). To turn this feature on, set the Bind.AtDesignTime attached property to true for your view. If you are using blend's design-time data generation, you can optionally replace ViewLocator.ModifyModelTypeAtDesignTime to perform custom mapping to views. It shouldn't be needed though.
Turned ConventionManager.ConfigureSelectedItem into a delegate to allow customizations.
Added ConventionManager.ConfigureSelectedItemBinding delegate aimed to allow the inspection of the proposed binding and its customization or rejection.
Added Support for WP7 Mango
Added Support for Silverlight 5
Various improvements made to the NavigationService; improvements to navigation away, tombstoning, etc.
Fixed some WPF bugs with TabControl
Some improvements to integration between the tombstoning mechanism and the IoC container.
The Application property of the Bootstrapper is no longer globally available, to help prevent misuse.
Some breaking changes in ConventionManager API related to bug fixes in ItemsControl conventions.
Enabled overriding of default services in PhoneContainer
Assemblies are now marked as CLSCompliant.
Added a new Func to ViewLocator called DeterminePackUriFromType. This function maps a View Type to pack Uri for use in navigation scenarios. Since there is no way to reliable way to determine the Uri from a type, a default implementation is provided which should work for most cases, but can be replaced for other scenarios. This function is used internally by the WP7 UriBuilder.
Updated the SL5 build to use the new native UpdateSourceTrigger.
Enabled ValidatesOnExceptions when conventional validation is turned on for a binding.
Fixed a certain long-standing bug which caused problems when conventions were applied via the Bind.Model property inside of a virtualizing control with container recycling enabled. This may have fixed some other intermitent issues related to the Bind.Model property as well.
1.3.1
Switching to Semantic Versioning.
Added some exception handling for design time bootstrapper operations.
Added a custom converter to the MessageBinder so that we can handle converting to DateTime from string.
1.4
This includes no changes.txt, so the best I could find was:
This version includes many bug fixes across all platforms, improvements to nuget support and...the biggest news of all...full support for both WinRT and WP8.
Q:
PL-SQL to_date with timezone
Is there a way to simply convert a string date such as
2018-02-15T14:00:00+01:00 to oracle date?
I tried to_date with the 'YYYY-MM-DDTHH24:MI:SS+01:00' format, but it is not valid.
Oracle always throws 'date format not recognized'.
A:
select cast(TO_TIMESTAMP_TZ('2018-02-15T14:00:00+01:00','yyyy-mm-dd"T"hh24:mi:ss"+"TZH:TZM') as date) from dual;
Oracle's DATE type has no time zone information. You have to convert the string into a TIMESTAMP WITH TIME ZONE and cast it as DATE (losing accuracy).
Q:
Jewish Money Lenders in England: What happened to Money Lendings after Expulsion?
When the Jews left England in 1290 by edict of Edward 1, who took over the role of Jews? My understanding is that the primary economic function was loaning money against future sales of farm produce -- liquidity before the grain was actually grown or sold. Was this their role and if so, who moved into this role given religious laws against "usury"? Also, were Jews supported by force of law in collecting debts?
A:
The 1290 Expulsion of the Jews involved a fairly small number of people, about 2,000, only some of whom were moneylenders.
Their places were easily taken up by the Lombards, whose laws allowed moneylending. By the mid 14th century, there were complaints or at least suspicions that some of the "Lombards" were actually returned Jews.
Q:
Dynamically Creating HTML table from JS array with jQuery
I would like to dynamically create an HTML table using jQuery. I have a JS array filled with data.
I have tried the following, but it doesn't work; nothing shows up. What am I doing wrong?
Javascript code
for(var i = 0; i < cycleArr.length; i++) {
var strTable = "<tr>"
for(var j = 0; j < cycleArr[i]; j++) {
var strTable = strTable + "<td>";
var strTable = strTable + cycleArr[i];
var strTable = strTable + "</td>";
}
var strTable = strTable + "</tr>";
}
$('#model_table').append(strTable);
HTML code
<div id="model_table">
</div>
A:
Assuming that cycleArr is a 2-dimensional array (for everything else this code wouldn't make a lot of sense, but correct me if I'm wrong), I found the following issues with your code:
You are comparing j with cycleArr[i] which is probably an array, instead of cycleArr[i].length.
In the inner loop you are accessing cycleArr[i] instead of cycleArr[i][j].
You are overwriting your strTable variable in each iteration of the outer loop because you are assigning <tr> instead of appending it.
You are declaring your variable strTable over and over again. It should be declared only once.
You are inserting <tr>s and <td>s into a <div> instead of a <table>. While this may be intended, I assumed it is not.
Here is a working version of your code:
var cycleArr = [['a', 'b', 'c'], ['d', 'e', 'f']];
var strTable = "";
for(var i = 0; i < cycleArr.length; i++) {
strTable += "<tr>"
for(var j = 0; j < cycleArr[i].length; j++) {
strTable += "<td>";
strTable += cycleArr[i][j];
strTable += "</td>";
}
strTable += "</tr>";
}
$('#model_table').append(strTable);
table {
border-collapse: collapse;
}
td {
border: 1px solid grey;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<table id="model_table">
</table>
Also, you wrote that you have a "JSON array", but it would appear you have a JS (JavaScript) array. A JSON array would be a string which encodes an array (you wouldn't be able to iterate over it before parsing it which makes it no longer JSON). I took the liberty to correct your post to avoid confusion.
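As a side note, once the bugs above are fixed, the same table string can also be built more compactly with Array.prototype.map and join, with no index bookkeeping. A sketch assuming the same two-dimensional cycleArr:

```javascript
var cycleArr = [['a', 'b', 'c'], ['d', 'e', 'f']];

// One <tr> per row, one <td> per cell, then join all the pieces.
var strTable = cycleArr.map(function (row) {
  return '<tr>' + row.map(function (cell) {
    return '<td>' + cell + '</td>';
  }).join('') + '</tr>';
}).join('');

console.log(strTable);
// <tr><td>a</td><td>b</td><td>c</td></tr><tr><td>d</td><td>e</td><td>f</td></tr>

// Appending works the same way as in the answer above:
// $('#model_table').append(strTable);
```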
Q:
Entity Framework generates short instead of int
We are using Entity Framework database-first for our Oracle database.
For some reason Number(5) becomes Int16 (short).
Max Number(5) value is 99999
Max Int16 value is 32767
The problem: is there a way to instruct the mapper to translate Number(5) to Int32?
A:
Solved it. I added this to the web.config:
<oracle.dataaccess.client>
<settings>
<add name="int16" value="edmmapping number(4,0)" />
<add name="int32" value="edmmapping number(9,0)" />
</settings>
</oracle.dataaccess.client>
Recreated the Model with the *.edmx file and...
Now Number(5) is Int32 instead of Int16
and Number(10) is Int64 instead of Int32
I hope it'll help someone else in the future...
Q:
Paraview: NameError: name 'inputs' is not defined
I'm trying to create a ProgrammableFilter in ParaView using Python. The filter should take the currently selected points and count them (the filter will be more elaborate, but this is enough to explain my problem).
In my code I'm not using any variable called 'inputs', but when I execute it I get this output (note there is an error at the end, and the code seems to be executed twice):
Generated random int: 13 using time 1419991906.3
13 Execution start
13 Selection is active
Generated random int: 59 using time 1419991906.34
59 Execution start
59 No selection is active
59 Execution end
13 Extr_Sel_raw was not None
13 Number of cells: 44
13 Execution end
Traceback (most recent call last):
File "<string>", line 22, in <module>
NameError: name 'inputs' is not defined
The code is the following, my pipeline has 2 steps, the first is a "Sphere source" and the second is the ProgrammableFilter with this code:
import paraview
import paraview.simple
import paraview.servermanager
import random
import time
a = time.time()
random.seed(a)
#time.sleep(1)
tmp_id = random.randint(1,100)
print "\nGenerated random int: %s using time %s" % (tmp_id, a)
print "%s Execution start" % (tmp_id)
proxy = paraview.simple.GetActiveSource()
active_selection = proxy.GetSelectionInput(proxy.Port)
if active_selection is None:
print "%s No selection is active" % (tmp_id)
else:
print "%s Selection is active" % (tmp_id)
Extr_Sel = paraview.simple.ExtractSelection(Selection=active_selection)
Extr_Sel_raw = paraview.servermanager.Fetch(Extr_Sel)
if Extr_Sel_raw is None:
print "%s Extr_Sel_raw was None" % (tmp_id)
else:
print "%s Extr_Sel_raw was not None" % (tmp_id)
print "%s Number of cells: %s" % (tmp_id, Extr_Sel_raw.GetNumberOfCells())
pdi = self.GetPolyDataInput()
pdo = self.GetPolyDataOutput()
pdo.SetPoints(pdi.GetPoints())
print "%s Execution end\n" % (tmp_id)
Do you know what can be causing my problem?
A:
After some work I found how to access the selected points in ParaView without generating the weird error mentioned above.
Here is the code:
import paraview
import paraview.simple
proxy = paraview.simple.GetActiveSource()
if proxy is None:
print "Proxy is None"
return
active_selection = proxy.GetSelectionInput(proxy.Port)
if active_selection is None:
print "No selection is active"
return
print "Selected points: %s" % (active_selection.IDs)
print "Amount of points: %s" % (len(active_selection.IDs) / 2)
And this is the output if I select 6 points in a Sphere Source:
Selected points: [0, 14, 0, 15, 0, 16, 0, 20, 0, 21, 0, 22]
Amount of points: 6
You can see that each selected point generates 2 IDs, the first one is the "Process ID" and the second one is the actual ID of your point.
Anyway, the reason of the original error remains unclear to me.
Q:
powershell script - TimeGenerated Last 24 hours
I have a functional PowerShell script that I'm using to capture user logons and logoffs for a single local machine. The script works fine, but I'm having a difficult time trying to pull the last 24 hours from the current date/time. I have $Date = [DateTime]::Now.AddDays(-1) at the top of my script, but it appears to be ignored.
Can anyone tell me what I'm missing?
$Date = [DateTime]::Now.AddDays(-1)
$Date.tostring("MM-dd-yyyy_HH,mm,ss")
$UserProperty = @{n="user";e={(New-Object System.Security.Principal.SecurityIdentifier $_.ReplacementStrings[1]).Translate([System.Security.Principal.NTAccount])}}
$TypeProperty = @{n="Action";e={if($_.EventID -eq 7001) {"Logon"} else {"Logoff"}}}
$TimeProperty = @{n="Time";e={$_.TimeGenerated}}
Get-EventLog System -Source Microsoft-Windows-Winlogon | select $UserProperty,$TypeProperty,$TimeProperty | export-csv -path C:\Temp\TrackLogin.csv -NoTypeInformation
A:
Your current code is very limited as it doesn't interact with the date at all. Here is a bit of code I use in a function. Should work if run as an admin.
$Days = 1
$Computer = $env:COMPUTERNAME
$events = @()
$events += Get-WinEvent -ComputerName $Computer -FilterHashtable @{
LogName='Security'
Id=@(4800,4801)
StartTime=(Get-Date).AddDays(-$Days)
}
$events += Get-WinEvent -ComputerName $Computer -FilterHashtable @{
LogName='System'
Id=@(7001,7002)
StartTime=(Get-Date).AddDays(-$Days)
}
$type_lu = @{
7001 = 'Logon'
7002 = 'Logoff'
4800 = 'Lock'
4801 = 'Unlock'
}
$ns = @{'ns'='http://schemas.microsoft.com/win/2004/08/events/event'}
$target_xpath = "//ns:Data[@Name='TargetUserName']"
$usersid_xpath = "//ns:Data[@Name='UserSid']"
If($events) {
$results = ForEach($event in $events) {
$xml = $event.ToXml()
Switch -Regex ($event.Id) {
'4...' {
$user = (Select-Xml -Content $xml -Namespace $ns -XPath $target_xpath).Node.'#text'
Break
}
'7...' {
$sid = (Select-Xml -Content $xml -Namespace $ns -XPath $usersid_xpath).Node.'#text'
$user = (New-Object -TypeName 'System.Security.Principal.SecurityIdentifier' -ArgumentList $sid).Translate([System.Security.Principal.NTAccount]).Value
Break
}
}
New-Object -TypeName PSObject -Property @{
Time = $event.TimeCreated
Id = $event.Id
Type = $type_lu[$event.Id]
User = $user
}
}
If($results) {
$results | sort Time -Descending
}
}
Q:
How to Set Initial Values to Formik using Axios with Typescript
I am new to using React, Formik, and Axios, and I'm not sure how to set the form's initial values from a database call. I have tried the code snippet below, but have not been successful. I am unable to find any TypeScript examples online of how to do this.
async function getInitialValues() {
try {
const response = await axios.get('http://localhost:53132/api/test');
//console.log(response);
return {
Division_Id: response.Divison_Id,
Year: response.Year,
Cost: response.Cost
}
//console.log(InitialValues);
//return InitialValues;
} catch (error) {
console.error(error);
}
}
<Formik initialValues={getInitialValues()}...
A:
You'll want to make your network request on mount (using the "useEffect" hook in this example). Then save those values to state (using the useState hook here, but you can use Redux or whatever state management tool you're using).
function NewForm() {
const [initialValues, setInitialValues] = useState();
useEffect(() => {
getInitialValues().then(res => setInitialValues(res));
}, []);
return initialValues ?
<Formik initialValues={initialValues}>content</Formik> :
<span>loading...</span>;
}
Q:
Pointer to Array Pointer
Consider the following C code:
char array1[1]={1};
char array2[2]={2, 2};
void viewDataPointedToBy(char* z)
{
printf("%i\n", *z);
}
int main(void)
{
viewDataPointedToBy(array1); //prints 1, as expected.
viewDataPointedToBy(array2); //prints 2, as expected.
viewDataPointedToBy(&array1); //prints 1, How??
viewDataPointedToBy(&array2); //prints 2, How??
viewDataPointedToBy(&&array1); //Compilation Error: 'array1' used but not defined, Why ?
system ("pause");
return 0;
}
I understand the outcomes of the first two calls of viewDataPointedToBy.
I need an explanation for the outcomes of the following three calls.
A:
This is because in C an array name decays to a pointer to its first element, while &array is a pointer to the whole array; both carry the same address value. So when you pass &array1, it points to the same location as passing array1.
You may try printing the addresses to confirm this behavior:
printf("%p %p\n", (void*)array1, (void*)&array1);
A:
You send the base address of the array.
(1) viewDataPointedToBy(array1)
or
viewDataPointedToBy(&array1)
In both calls you send the base address of array1, so the result of both calls is the same.
(2) In the case viewDataPointedToBy(&&array1): && is parsed as the logical AND operator, not as "address of an address" (and in GCC, && before an identifier is actually the label-address extension, which is why the error complains that 'array1' is used but not defined). Since &array1 is not an lvalue, you cannot take its address anyway, so a compilation error occurs.
Q:
How to open an app in non-retina mode for iPad?
Can we turn off retina display for iPad programmatically? I know it is possible in cocos2d by using
[[CCDirector sharedDirector] enableRetinaDisplay:NO];
But is it possible with Cocoa Touch? If yes, how?
A:
No, it's not possible. Cocos2d acts as an engine, so it can turn off the retina display: it creates its own virtual display and shows it over the normal display.
But in Cocoa Touch we cannot access such an engine, so it's not possible in Cocoa Touch.
Q:
Is Stackoverflow headed in the right direction?
I've tried asking a question, and began getting bad feedback with barely any clear comments on what's inappropriate about it. Now I no longer have access to asking new questions. I've read http://goo.gl/C1Kwu and have tried editing my questions to be more formal and clear. However nothing seems to help.
How to assign to a variable an alias
So honestly...where is this website headed?
I am not here to spam or be provocative. I'm here to be productive and to contribute to the enterprise of knowledge these forums provide, without offending its epistemology. Do these policies truly set a path for productivity? It seems rather counter-productive.
A:
First of all, you get many chances to write better questions. It's not like at your first offense (writing a bad question.. well, a question that doesn't fit our standards) you could get banned.
If you are question banned it is because you didn't take the time to learn from the previous negative feedback you received. Every user, especially a new one, will get some downvoted posts along the way. Look at my question history and you will see that the further I advance on this website, the more I learn to formulate good posts and, therefore, get more positive feedback. (Except on Meta, but here downvotes aren't interpreted the same way.)
So honestly...where is this website headed?
As far as I'm concerned, and I'm sure a lot of users share my point of view, the site is heading right the way it should be heading right now. A few things could change here and there but these are technicalities and that's what meta is for.
A:
Where StackOverflow is heading
According to the About page:
With your help, we're working together to build a library of detailed answers to every question about programming.
As long as your questions are suitable for inclusion in the "library of detailed answers" you will fit in here. If you have a history of asking questions that are not good additions to that library, the system will recognize the pattern and take appropriate action.
What you can do
Read more about what to do when you are banned.
Q:
Selecting default option value with knockout
I'm trying to select a default select option based on one of the property with which I'm populating my select option.
This code is copied straight from @rneimeyer's fiddle. I did tweak it to do what I wanted to do.
So, I have choices as my observableArray.
var choices = [
{ id: 1, name: "one", choice: false },
{ id: 2, name: "two", choice: true },
{ id: 3, name: "three", choice: false }
];
function ViewModel(choices, choice) {
this.choices = ko.observableArray(choices);
};
The difference between rneimeyer's fiddle and mine is that I have choice property added on my object inside the observableArray instead of having a separate observable for the option that we want to be default.
Here's the fiddle on my attempt.
Now I'm checking in my select element tag whether the choice attribute is true or not. And if it is then I want to set the name to the value attribute so that it becomes the default.
<select data-bind="options: choices, optionsText: 'name', value: choice"></select>
I've tested this with simple data model in my fiddle here as well which is working just as I wanted.
I guess what my real query is how to check the choice property in the data-bind. I see that optionsText is able to access the name property just fine. Not sure why it isn't the same for the choice property in the value attribute.
A:
I might have misdirected some people. Also, I apologize for not mentioning the version that I'm using. I'm currently using Knockout 3.0.0 (you'll see why this is important later).
Also, just to note that I'm not saying @XGreen's method is wrong but that wasn't exactly what I was looking for and this might be due to my poor explanation.
Let me first try to clarify what I was trying to accomplish.
First of all, I will be having an array of object with the information for the options.
[
{ id: 1, name: "one", choice: false },
{ id: 2, name: "two", choice: true },
{ id: 3, name: "three", choice: false }
]
Now, what I wanted to do was to data-bind select option to that array with choice true being the default selected one.
I'm not intending to create any extra observable except the array itself which is going to be an observableArray.
After much research I finally found optionsAfterRender attribute for options property in Knockout's Docs.
<select data-bind="options: choices,
optionsValue: 'name',
optionsAfterRender: $root.selectDefault">
</select>
So what optionsAfterRender really does is call a custom function for each rendered option; I've set that function to check whether choice is true and, if it is, to make that option the select element's value, so it becomes the default.
Do note that ko.applyBindingsToNode does not work on version 2.2.0 which I had in my original fiddle.
function ViewModel(choices) {
this.choices = ko.observableArray(choices);
this.selectDefault = function(option,item){
if(item.choice){
ko.applyBindingsToNode(option.parentElement, {value: item.name}, item);
}
};
};
ko.applyBindings(new ViewModel(choices));
And here's the fiddle for it.
A:
Ok If I understand you want to set the true choice as your default selected value.
First you need to include id in your drop down so it becomes the value of the options, as we will filter our collection based on that unique id.
<select data-bind="options: choices, optionsText: 'name', optionsValue: 'id', value: selectedChoice"></select>
As you see now you need to create a new observable called selectedChoice and we are going to populate that observable with the choice that is true using a computed.
var choices = [
{ id: 1, name: "one", choice: false },
{ id: 2, name: "two", choice: true },
{ id: 3, name: "three", choice: false }
];
function ViewModel(choices) {
var self = this;
self.choices = ko.observableArray(choices);
self.trueChoice = ko.computed(function(){
return ko.utils.arrayFirst(self.choices(), function(item){
return item.choice === true;
});
});
self.selectedChoice = ko.observable(self.trueChoice().id);
};
ko.applyBindings(new ViewModel(choices));
the new computed property trueChoice uses the arrayFirst method in order to return the first item in your choices collection that has its choice property set to true.
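For readers unfamiliar with it, ko.utils.arrayFirst returns the first element for which the predicate is truthy, or null if none matches; a plain-JavaScript sketch of the same lookup using the question's data:

```javascript
// Minimal stand-in for ko.utils.arrayFirst: first match, or null
function arrayFirst(arr, predicate) {
  for (var i = 0; i < arr.length; i++) {
    if (predicate(arr[i])) return arr[i];
  }
  return null;
}

var choices = [
  { id: 1, name: "one", choice: false },
  { id: 2, name: "two", choice: true },
  { id: 3, name: "three", choice: false }
];

// Pick the default: the first item whose choice flag is true
var trueChoice = arrayFirst(choices, function (item) { return item.choice === true; });
console.log(trueChoice.id); // 2
```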
Now that we have our true choice all we have to do is to set the selected value of the dropdown aka selectedChoice to be the id of that true choice so the item becomes selected in the drop down.
Here is also a working fiddle for this
A:
Added a Gist that disabled the first option in a select drop down list, and work nicely with KO's optionsCaption binding, using a optionsDisableDefault binding:
https://gist.github.com/garrypas/d2e72a54162787aca345e9ce35713f1f
HTML:
<select data-bind="value: MyValueField,
options:OptionsList,
optionsText: 'name',
optionsValue: 'value',
optionsCaption: 'Select an option',
optionsDisableDefault: true">
</select>
Q:
What is HIBERNATE_IDX with @JoinTable
I never do this, but someone on my project created a many to many relationship between, let's say, a Foo and a Bar. Foo and Bar both have unique system generated IDs. On Foo, I have the following code:
@ManyToMany(targetEntity = Bar.class, cascade = CascadeType.ALL, fetch = FetchType.EAGER)
@JoinTable(name = "FOO_BAR_LNK",
joinColumns = {@JoinColumn(name = "FOO_ID", referencedColumnName = "ID")},
inverseJoinColumns = {@JoinColumn(name = "BAR_ID", referencedColumnName = "ID")})
private Set<Bar> bars;
When the table gets created, it has 3 columns, HIBERNATE_IDX, FOO_ID and BAR_ID. HIBERNATE_IDX contains all zeroes.
What is HIBERNATE_IDX?
A:
It appears that HIBERNATE_IDX is part of a tie-breaker to guarantee unique indices on join tables and prevent Cartesian products. Kind of pieced together, but that's what I think.
Q:
Working with arrays and selecting a specific index
In my JS studies I have the following code:
<select name='options'>
<option value="a">A</option>
<option value="b">B</option>
<option value="c">C</option>
</select>
<input class="input" type="text" />
<input class="input" type="text" />
<input class="input" type="date" />
Based on the select, I need it to show the input corresponding to its index.
Example: if I select A, it shows input 1; if I select B, it hides input 1 and shows input 2, and so on. The idea is that only one input element appears on screen, and all the others have display: none;
However, I can't use an extra class to differentiate them. My idea was to use an array and a for loop, but I made no progress with that.
A:
Use the HTMLSelectElement.selectedIndex property as the argument to the .eq() function to select the <input> by index.
$('[name=options]').bind('change', function(){
$('.input').hide() // hide all
.eq(this.selectedIndex).show(); // select by index and show
}).trigger('change'); // force immediate execution
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<select name='options'>
<option value="a">A</option>
<option value="b">B</option>
<option value="c">C</option>
</select>
<input class="input" type="text" value="A" />
<input class="input" type="text" value="B" />
<input class="input" type="text" value="C" />
Q:
Does gravity limit the number of bosons that can occupy the same single-particle state?
QFT says that an unlimited number of bosons can occupy the same "state" (what I mean by that is that the whole system's wavefunction is composed of a product of many identical wavefunctions).
However, gravity increases monotonically with energy density. It seems that at some point, one additional boson would create a high enough energy density to create a black hole. Is this true? Could I calculate the number of bosons necessary to cause this?
A:
Yes, and Yes.
So how would you go about doing this? Let's start Newtonian. Assume everything is in a ground state. You assume a spherically symmetric wave function. For each shell of radius r, the mass enclosed is the total mass multiplied by the probability of the particle being inside r, which requires integrating the square of the wave function. From the density you can calculate how the energy changes with depth (again, involving integrals). You need to solve for these simultaneously, because the wave function depends on the gravitational field and vice versa.
As long as things stay Newtonian it's stable: if you reduce the linear size by 1/2, the gravitational well deepens by a factor of two (gravitational potential drops off like 1/x, force like 1/x^2) but the quantum "zero point" energy increases 4-fold (the particle-in-a-box energy). This means that if the star expands, gravity wins and pulls it back, and if it shrinks, the zero-point term wins and keeps it from collapsing.
Einstein makes things much more difficult. The Schrödinger equation no longer applies, there is no simple relativistic analogue, and we have to apply it in a curved space. Finally, we need to know both pressure and density as functions of "circumferential radius". However, we can still use the Newtonian calculation to get an order-of-magnitude estimate of when the whole thing collapses to a black hole.
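The Newtonian estimate can be sketched explicitly (order of magnitude only, numerical factors dropped; $N$ bosons of mass $m$ in a region of radius $R$, and $M_{\rm Pl}$ the Planck mass):

$$E(R) \sim N\,\frac{\hbar^2}{2mR^2} \;-\; \frac{G(Nm)^2}{R}.$$

Minimizing over $R$ gives a stable radius $R_* \sim \hbar^2/(GNm^3)$: the zero-point term wins at small $R$, gravity at large $R$, as described above. Collapse is expected once $R_*$ reaches the Schwarzschild radius $R_s \sim GNm/c^2$, i.e.

$$\frac{\hbar^2}{GNm^3} \sim \frac{GNm}{c^2} \quad\Longrightarrow\quad N \sim \frac{\hbar c}{Gm^2} = \left(\frac{M_{\rm Pl}}{m}\right)^{2}.$$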
Q:
How do mathematicians reconcile that an infinite set does not have to be larger than its proper subset?
If we imagine an infinite number of fractions and, within them, an infinite number of integers, doesn't the former constitute a "larger" infinite set of numbers?
This has always been paradoxical for me. I would like to know how (or if) mathematicians reconciled this.
A:
There is a long controversy as to what should count as the "size" of an infinite set, and there provably does not exist a notion that satisfies both the bijectivity principle, a.k.a. Hume's principle (bijective sets have equal size), and the part-whole principle (whole is greater than its part). So any notion of size for infinities will be counterintuitive in one way or the other. The history of philosophical debates surrounding this clash of intuitions is described by Mancosu in Measuring the size of infinite collections of natural numbers: was Cantor's theory of infinite number inevitable? (his answer is no).
Modern mathematics adopted the notion of cardinality that satisfies the bijectivity principle, but its downside is the "paradoxes of infinity" like Hilbert's hotel, which can accommodate any 10 (or 100, or 100000) additional guests at any time. It has rooms numbered by positive integers, and to free up rooms 1 to 10 (or 100, or 100000) the manager needs only to move all existing guests from room n to room n+10 (n+100, n+100000).
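The manager's move is just the injection n ↦ n + k; a quick Python sketch checking a finite initial segment of the hotel:

```python
def shift(k):
    """Hilbert-hotel move: the guest in room n goes to room n + k.
    This is a bijection from {1, 2, ...} onto {k+1, k+2, ...},
    so rooms 1..k end up free even though the hotel was full."""
    return lambda n: n + k

move = shift(10)

# Check on an initial segment: shifted rooms are distinct and all > 10
rooms = [move(n) for n in range(1, 1001)]
assert len(set(rooms)) == len(rooms)  # injective: no two guests collide
assert min(rooms) == 11               # rooms 1..10 are now free
```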
The part-whole principle can be accommodated, but only if one gives up the bijectivity principle. One version was introduced by Katz, who shared the OP's sentiment, in Sets and their Sizes:
"Cantor’s theory of cardinality violates common sense. It says, for example,
that all infinite sets of integers are the same size. This thesis criticizes the arguments for Cantor’s theory and presents an alternative. The alternative is based on a general theory, CS (for Class Size)... Because the language of CS is restricted... the notion of one-one correspondence cannot be expressed in this language, so Cantor’s definition of similarity will not be in CS, even though it is true for all finite sets."
Another alternative was introduced by Benci in 1995, and is called numerosity, "an Aristotelian notion of size", as he put it. Numerosity is always smaller for proper subsets, but... it depends on how a subset is given, it "counts" labeled subsets, not just subsets. So as long as a bijection changes the labeling too much it does not count. Bijective sets can, and do, have different numerosities. Mancosu uses numerosities to give an interesting counter to Gödel's argument that adoption of Cantor's cardinality was "inevitable":
"Gödel’s reflection aims at showing that in generalizing the notion of number from the finite to the infinite one inevitable ends up with the Cantorian notion of cardinal number. The key step in the argument is the premise and the theory of numerosities can help us see that the premise already contains in itself the Cantorian solution. In fact, the premise takes as evident the request that “the number of objects belonging to some class does not change if, leaving the objects the same, one changes in any way whatsoever their properties or mutual relations (e.g., their colors or their distribution in space).” While the premise constitutes no problem when dealing with finite sets, one might question its acceptability in the realm of the infinite.
Indeed, in the theory of numerosities we cannot grant the premise when it comes to infinite sets. For, while it is possible to abstract from the nature of the objects themselves there is one type of relation that affects the counting, namely the way in which the elements are grouped. Such grouping makes no difference in the realm of finite sets of integers. But when we move to infinite sets a rearrangement of the grouping will in general affect the approximating functions and thus the numerosity of the set. Someone committed to the counting embodied in the theory of numerosities might thus reasonably resist accepting the premise on which Gödel bases his argument and thus also resist the claim that the generalization of number from the finite to the infinite must perforce end up with the notion of cardinal number."
Q:
Disable antialising when scaling images
Possible Duplicate:
How to stretch images with no antialiasing
Is it in any way possible to disable antialiasing when scaling up an image ?
Right now, i get something that looks like this :
Using the following css code :
#bib { width:104px;height:104px;background-image:url(/media/buttonart_back.png);background-size:1132px 1360px;
background-repeat:no-repeat;}
What I would like, is something like this :
In short, any CSS flag to disable anti-aliasing from when scaling up images, preserving hard edges.
Any javascript hacks or similar are welcome too.
(Yes, I am aware that php and imagemagick can do this as well, but would prefer a css based solution.)
UPDATE
The following have been suggested :
image-rendering: -moz-crisp-edges;
image-rendering: -o-crisp-edges;
image-rendering: -webkit-optimize-contrast;
-ms-interpolation-mode: nearest-neighbor;
But that doesn't seem to work on background images.
A:
Try this,
it's a fix for removing it in all browsers.
img {
image-rendering: optimizeSpeed; /* STOP SMOOTHING, GIVE ME SPEED */
image-rendering: -moz-crisp-edges; /* Firefox */
image-rendering: -o-crisp-edges; /* Opera */
image-rendering: -webkit-optimize-contrast; /* Chrome (and eventually Safari) */
image-rendering: pixelated; /* Chrome */
image-rendering: optimize-contrast; /* CSS3 Proposed */
-ms-interpolation-mode: nearest-neighbor; /* IE8+ */
}
Sources:
http://nullsleep.tumblr.com/post/16417178705/how-to-disable-image-smoothing-in-modern-web-browsers
http://updates.html5rocks.com/2015/01/pixelated
GitaarLAB
A:
CSS that works in Firefox Only:
img { image-rendering: -moz-crisp-edges; }
It worked for me (firefox 16.0)
Q:
How do I make a full screen form in Java FX?
How do I make a full screen form in Java FX?
That should look like game window.
A:
Take a look at Stage.setFullScreen.
Q:
Do jars behave differently in unix and linux?
I have a jar which creates a filtered XML file from a huge XML file.
Now this jar is working perfectly in UNIX with Java SE 1.5.0.15, but in Linux this jar is behaving differently and creating erroneous XML.
Can this be a platform issue?
Do I require to make a new jar for LINUX?
A:
Linux is a version of Unix. Do you mean Solaris?
I suggest you use the latest version of Java, and if you have to, the latest version of Java 5.0. I would also try with Java 6 update 45 or Java 7 update 25 to see if this is a bug which has been fixed.
A:
Can this be a platform issue?
Just a wild guess... In Java the default charset is platform-dependent: it depends on the locale of the underlying operating system. When converting an XML String to bytes and writing them to a file you can get different results depending on the default charset.
byte[] xmlFileBytes = xmlString.getBytes(); // depends on default charset
To see JVM default charset call Charset.defaultCharset() in your code on both platforms for comparison. Perhaps it is different on UNIX?
Another way to determine the default charset is to use a command-line utility:
$ jinfo <processId> | grep file\.encoding
To see OS locale use comand:
$ locale
XML files should be encoded in UTF-8 charset.
You can override default charset in your code with UTF-8: xmlString.getBytes("UTF-8")
Or you can override it by providing file.encoding system property when JVM starts:
$ java -Dfile.encoding=UTF-8 -jar your-jar-file.jar
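A quick demonstration of the difference (note: StandardCharsets is Java 7+; on the Java 5 runtime mentioned in the question you would write s.getBytes("UTF-8") instead):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Shows why getBytes() without an argument is risky for XML files.
public class CharsetDemo {
    public static void main(String[] args) {
        String s = "caf\u00e9"; // 'é' is encoded differently by different charsets

        System.out.println("Default charset: " + Charset.defaultCharset());

        byte[] platformBytes = s.getBytes();                   // platform-dependent
        byte[] utf8Bytes = s.getBytes(StandardCharsets.UTF_8); // same everywhere

        // In UTF-8, 'é' takes two bytes (0xC3 0xA9), so this 4-char string is 5 bytes
        System.out.println(Arrays.toString(utf8Bytes));
        System.out.println(utf8Bytes.length); // 5
    }
}
```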
Do I require to make a new jar for LINUX?
Probably not, Java mantra is supposed to be "Write once, run anywhere"
However:
The catch is that since there are multiple JVM implementations, on top of a wide variety of different operating systems such as Windows, Linux, Solaris, NetWare, HP-UX, and Mac OS, there can be subtle differences in how a program may execute on each JVM/OS combination, which may require an application to be tested on various target platforms. This has given rise to a joke among Java developers, "Write Once, Debug Everywhere".
Q:
Python: how to speed up spatial search (nearest point)?
I'm working on a program that determines the closest location from a given point. The point cloud I'm testing against is very big (~800,000 points). However, this alone doesn't really explain why my implementation is so slow. This is my approach:
First, I created a spatial index for the point shape
pntshp.ExecuteSQL('CREATE SPATIAL INDEX ON %s' % table_name)
I defined an array of buffer distances to narrow down the search radius, which of course also means that I have to create a buffer for each point (which might be expensive).
BUFFER_DISTANCES = ( 0.001, 0.005, 0.01, 0.02, 0.05 ) # 100m, 500m, 1km, 2km, 5km
Then, the buffer is used as a spatial filter
node_lyr.SetSpatialFilter(buff)
If the filter returns None the buffer distance will be increased.
for buffer_d in BUFFER_DISTANCES:
buffr = get_buffer(xy_street,buffer_d)
...
Then I am calculating the distance to the points returned by the spatial filter
p=ogr.Geometry(ogr.wkbPoint)
p.AddPoint(xy[0],xy[1])
for feat in node_lyr:
geom = feat.GetGeometryRef()
d = p.Distance(geom)
dist.append(d)
To get the closest point:
def get_closest_pnt(dist, node, how_close):
mrg = zip(dist,node)
mrg.sort(key=lambda t: t[0])
try:
return mrg[how_close]
except IndexError, ierr:
print '%s \ndist/node tuple contain %s' % (ierr,mrg)
It all works fine but is really slow. Creating a spatial index didn't show any effect, really. To calculate 100 points this implementation takes ~6.7 seconds. The program needs to be able to calculate the closest location for more than 2000 points as fast as possible. Any ideas on how to improve my approach?
EDIT
I tried different approaches to see where it gets me. I came across something very astonishing I want to share here.
I implemented a simple lookup algorithm as described here, and one of the solutions that were suggested (the sorted set approach).
The surprising fact is that performance is not only dependent on the implementation but even more so on the OS. My original ogr/buffer algorithm turns out to be blazing fast on OSX, whereas it is painstakingly slow on Linux (hence the question here).
Here are my results (100 runs).
Method | OSX | Linux Ubuntu
ogr buffer | 0:00:01.434389 | 0:01:08.384309
sub string | 0:00:19.714432 | 0:00:10.048649
sorted set | 0:00:01.239999 | 0:00:00.600773
Specs Mac OSX
Processor 4x2.5 GHz
Memory 8 GB 1600 MHz
Specs Dell Linux Ubuntu
Processor 8x3.4GHz
Memory 7.8 GB
If someone can explain why these differences occur, please don't hesitate.
A:
Avoiding the spatial query
Since you noted buffering is computationally expensive and may be holding you back, consider this approach: Start looping through each point and round off your lat long point to a decimal place within your buffer (i.e. if your lat/long is 12.3456789/12.3456789 then get all points that begin with a lat/long of 12.34567/12.34567 or 12.34568/12.34568 or 12.34567/12.34568 or 12.34568/12.34567). Use a hash table to do this. Take this subset of points, get all distances to your input point, and the point with minimum distance is the one you want. Creating a lookup methodology will make this very efficient.
This avoids having to do expensive spatial queries and query filter setup 800,000 times. You would only be doing string/double comparisons and distance calculations in this method. The only downside that I could see to this method is that each decimal roundoff is an order of magnitude above the last, so if your spatial query didn't return any points, rounding down again may return many more points than you need, which would slow you down a bit. However, you have at least two orders of magnitude in your orignal BUFFER_DISTANCES, so I think this method may suffice for your purposes and would certainly be faster than the method you have going right now.
The hash table:
Here's a more concise and better introductory explanation to hash tables.
The concept is like so: You want to look up the definition for the word "Yes" in the dictionary. You don't look through every word in a dictionary starting from A to find the words that start with Ye, correct? You jump straight to the Y section first, then you look for the page that says Ye-Yo and then you scan all the words on that page to get the definition for Yes.
The lookup methodology to loop through all the lat/long points would be implemented in this same fashion. You'd look first for all the lat/longs that start with 12.3 in a range from, lets say 0 to 99. Then you'd look in those 10 values for 12.34, and so on. If programmed correctly, you can return a "bucket" with all of the points within your buffer, without having to execute a single spatial query or string/double comparison!
Finally, it should be noted that if you store your indexed table in an RDBMS, there may be optimization for this already. If your lat/long values are doubles and you do a simple BETWEEN SQL query, it will likely have its search function already optimized for this (if your query is written correctly).
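A minimal sketch of this bucket-lookup idea in plain Python (the function names and sample coordinates are mine, not from the question; a production version would also widen the search when all nine neighbouring buckets are empty):

```python
import math
from collections import defaultdict

def build_index(points, precision=2):
    """Bucket points by their coordinates rounded to `precision` decimals.
    Coarser precision means bigger buckets, i.e. a wider search radius."""
    index = defaultdict(list)
    for (x, y) in points:
        index[(round(x, precision), round(y, precision))].append((x, y))
    return index

def nearest(index, x, y, precision=2):
    """Collect candidates from the query point's bucket and its 8 neighbours,
    then take the minimum true distance. Returns None if all 9 are empty."""
    step = 10 ** -precision
    candidates = []
    for dx in (-step, 0, step):
        for dy in (-step, 0, step):
            key = (round(x + dx, precision), round(y + dy, precision))
            candidates.extend(index.get(key, []))
    if not candidates:
        return None
    return min(candidates, key=lambda p: math.hypot(p[0] - x, p[1] - y))

points = [(12.341, 45.672), (12.349, 45.678), (13.000, 46.000)]
idx = build_index(points)
print(nearest(idx, 12.3405, 45.6719))  # (12.341, 45.672)
```

No spatial queries, no buffers: once the index is built, each lookup is a handful of dictionary hits plus a few distance calculations.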
Q:
How can I convince management to implement a commission system for sales?
I recently started work as a salesperson at a large sailing center. The original idea was base salary + commission, but they've never done commission before and the director wasn't sure how to approach it at the time.
So I accepted just a base salary for 1-2 months (I also do other work there) and we agreed that we'd revisit the subject after that time period.
Well, it's that time now but we still have conflicting views regarding commission. Everything is very amicable and open. We talk it through rationally from both points of view and the director is up for doing it, but he doesn't see how it could be successfully implemented at the center due to their business model.
I'll explain ... we have a mixture of fixed staff and temporary staff (the sailing instructors). Everyone, in reality, does sales and a bit of everything. On the fixed staff we have someone who has designed products from scratch, marketed them and sells them constantly (either through platforms, inbound calls, talking to people in person who come into the center, reaching out to organisations, etc...) and who gets paid a fixed salary. Other people do similar things to a lesser extent.
The director wants to get 5-6 of the people who are on the fixed staff, including him and myself, doing sales as much as possible as the center needs it.
When I bring up commissions, he sees the following:
People should do their best with their base salary, i.e. if they're assigned to sales then there's no reason they shouldn't bring as many sales in as possible. Why pay them extra to do their job?
If salespeople get commission, what about the rest of staff who also do a lot of things that can be considered sales? For example the case I mentioned before. Shouldn't they also get commission?
It's complicated to implement and he thinks it's more straightforward to stick with salary.
I see it this way:
Commission will motivate people to do even better. You might reach a "peak performance" level on salary but the commission can take you further. Of course, I realise that this doesn't apply to everyone and I look at it a lot through my own personal point-of-view.
There should be clearly delineated responsibilities and a dedicated sales team. There shouldn't be friction if salespeople earn more than salaried staff as that's normal. This one is impossible though because the responsibilities everyone have are pretty fluid, excluding administration and finance.
We pay travel agents a commission similar to the one we're considering. If we already do that, it means we can do it internally with the advantage that our team is focused exclusively on selling the center's products as opposed to a TA who has lots of activity providers they earn commission from.
As you can see I understand his point of view and can actually agree with it too. But I, personally, can't do sales without a performance-based component. It's contrary to my nature.
One way out of this impasse would be to give everyone who does sales two "extreme" choices: either salary or only commission. I'd probably opt for the latter.
It's all very complicated though and that doesn't help.
How can I convince management to implement a commission system for sales?
A:
If I were management, I would be reluctant to offer commission in the situation you described. With duties being pretty fluid except administration, rewarding one duty (sales) means that it is to the employee's benefit to focus solely on that and neglect other duties. Commissions are ideal for people whose sole duty is sales and a bit of a gamble otherwise. An annual/quarterly bonus for permanent staff based on revenue or profit might make sense (as almost all job duties should lead to the bottom line), but a direct commission doesn't.
Q:
Detecting multiple collisions in SpriteKit
I'm still a beginner in Swift and I'm having some trouble with collision detections in SpriteKit. I used this question from StackOverflow, which was great in showing me how to construct things neatly. But I'm having problems with my didBegin function, which does not even get called at all. I'm hoping I missed something simple that you guys can take a look at for me.
Thanks in advance.
Here is my PhysicsCatagoies struct:
import Foundation
import SpriteKit
struct PhysicsCatagories: OptionSet {
let rawValue: UInt32
init(rawValue: UInt32) { self.rawValue = rawValue }
static let None = PhysicsCatagories(rawValue: 0b00000) // Binary for 0
static let Player = PhysicsCatagories(rawValue: 0b00001) // Binary for 1
static let EnemyBullet = PhysicsCatagories(rawValue: 0b00010) // Binary for 2
static let PlayerBullet = PhysicsCatagories(rawValue: 0b00100) // Binary for 4
static let Enemy = PhysicsCatagories(rawValue: 0b01000) // Binary for 8
static let Boss = PhysicsCatagories(rawValue: 0b10000) // Binary for 16
}
extension SKPhysicsBody {
var category: PhysicsCatagories {
get {
return PhysicsCatagories(rawValue: self.categoryBitMask)
}
set(newValue) {
self.categoryBitMask = newValue.rawValue
}
}
}
And here is how I assigned my nodes in GameScene:
player.physicsBody = SKPhysicsBody(rectangleOf: player.size)
player.physicsBody!.affectedByGravity = false
player.physicsBody!.categoryBitMask = PhysicsCatagories.Player.rawValue
player.physicsBody!.collisionBitMask = PhysicsCatagories.None.rawValue
player.physicsBody!.category = [.Enemy, .EnemyBullet, .Boss]
bullet.physicsBody = SKPhysicsBody(rectangleOf: bullet.size)
bullet.physicsBody!.affectedByGravity = false
bullet.physicsBody!.categoryBitMask = PhysicsCatagories.PlayerBullet.rawValue
bullet.physicsBody!.collisionBitMask = PhysicsCatagories.None.rawValue
bullet.physicsBody!.category = [.Enemy, .Boss]
enemy.physicsBody = SKPhysicsBody(rectangleOf: enemy.size)
enemy.physicsBody!.affectedByGravity = false
enemy.physicsBody!.categoryBitMask = PhysicsCatagories.Enemy.rawValue
enemy.physicsBody!.collisionBitMask = PhysicsCatagories.None.rawValue
enemy.physicsBody!.category = [.Player, .PlayerBullet]
enemyBullet.physicsBody = SKPhysicsBody(rectangleOf: enemyBullet.size)
enemyBullet.physicsBody!.affectedByGravity = false
enemyBullet.physicsBody!.categoryBitMask = PhysicsCatagories.EnemyBullet.rawValue
enemyBullet.physicsBody!.collisionBitMask = PhysicsCatagories.None.rawValue
enemyBullet.physicsBody!.category = [.Player]
boss.physicsBody = SKPhysicsBody(rectangleOf: boss.size)
boss.physicsBody!.affectedByGravity = false
boss.physicsBody!.categoryBitMask = PhysicsCatagories.Boss.rawValue
boss.physicsBody!.collisionBitMask = PhysicsCatagories.None.rawValue
boss.physicsBody!.category = [.Player, .PlayerBullet]
bulletSpecial.physicsBody = SKPhysicsBody(rectangleOf: bulletSpecial.size)
bulletSpecial.physicsBody!.affectedByGravity = false
bulletSpecial.physicsBody!.categoryBitMask = PhysicsCatagories.PlayerBullet.rawValue
bulletSpecial.physicsBody!.collisionBitMask = PhysicsCatagories.None.rawValue
bulletSpecial.physicsBody!.category = [.Enemy, .Boss]
Finally, this is my didBegin function, which does not seem to work at all:
func didBegin(_ contact: SKPhysicsContact) {
let contactCategory: PhysicsCatagories = [contact.bodyA.category, contact.bodyB.category]
switch contactCategory {
case [.Player, .Enemy]:
print("player has hit enemy")
case [.PlayerBullet, .Enemy]:
print("player bullet has hit enemy")
case [.PlayerBullet, .Boss]:
print("player bullet has hit boss")
case [.Player, .Boss]:
print("player has hit boss")
case [.Player, .EnemyBullet]:
print("player has hit enemy bullet")
default:
preconditionFailure("Unexpected collision type: \(contactCategory)")
}
}
A:
I've not used the OptionSet technique for categoryBitMasks, so here's how I'd do it:
Define unique categories, ensure your class is a SKPhysicsContactDelegate and make yourself the physics contact delegate:
//Physics categories
let PlayerCategory: UInt32 = 1 << 0 // b'00001'
let EnemyBulletCategory: UInt32 = 1 << 1 // b'00010'
let PlayerBulletCategory: UInt32 = 1 << 2 // b'00100'
let EnemyCategory: UInt32 = 1 << 3 // b'01000'
let BossCategory: UInt32 = 1 << 4 // b'10000'
class GameScene: SKScene, SKPhysicsContactDelegate {
physicsWorld.contactDelegate = self
Assign the categories (usually in didMove(to:)):
player.physicsBody?.categoryBitMask = PlayerCategory
bullet.physicsBody?.categoryBitMask = PlayerBulletCategory
enemy.physicsBody?.categoryBitMask = EnemyCategory
enemyBullet.physicsBody?.categoryBitMask = EnemyBulletCategory
boss.physicsBody?.categoryBitMask = BossCategory
(not sure about bulletSpecial - looks the same as bullet)
Set up contact detection:
player.physicsBody?.contactTestBitMask = EnemyCategory | EnemyBulletCategory | BossCategory
bullet.physicsBody?.contactTestBitMask = EnemyCategory | BossCategory
enemy.physicsBody?.contactTestBitMask = PlayerCategory | PlayerBulletCategory
enemyBullet.physicsBody?.contactTestBitMask = PlayerCategory
boss.physicsBody?.contactTestBitMask = PlayerCategory | PlayerBulletCategory
Turn off collisions: (on by default)
player.physicsBody?.collisionBitMask = 0
bullet.physicsBody?.collisionBitMask = 0
enemy.physicsBody?.collisionBitMask = 0
enemyBullet.physicsBody?.collisionBitMask = 0
boss.physicsBody?.collisionBitMask = 0
Implement didBegin:
func didBegin(_ contact: SKPhysicsContact) {
print("didBeginContact entered for \(String(describing: contact.bodyA.node!.name)) and \(String(describing: contact.bodyB.node!.name))")
let contactMask = contact.bodyA.categoryBitMask | contact.bodyB.categoryBitMask
switch contactMask {
case PlayerCategory | EnemyCategory:
print("player has hit enemy")
case PlayerBulletCategory | EnemyCategory:
print("player bullet has hit enemy")
case PlayerBulletCategory | BossCategory:
print("player bullet has hit boss")
case PlayerCategory | BossCategory:
print("player has hit boss")
case PlayerCategory | EnemyBulletCategory:
print("player has hit enemy bullet")
default:
print("Undetected collision occurred")
}
}
It's a bit late here, so hopefully I haven't made any stupid mistakes.
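The didBegin dispatch relies on each category being a distinct power of two, so ORing the two categories together yields a unique value regardless of which body is bodyA. Here is a minimal, language-agnostic sketch of that idea in Python (constant and function names are mine, not part of SpriteKit):

```python
# Bit-flag categories mirroring the UInt32 constants above.
PLAYER = 1 << 0
ENEMY_BULLET = 1 << 1
PLAYER_BULLET = 1 << 2
ENEMY = 1 << 3
BOSS = 1 << 4

def describe_contact(category_a, category_b):
    """OR both bodies' categories and dispatch on the combined mask.

    Because each category is a distinct power of two, every pair combines
    to a unique value, and (a, b) and (b, a) hit the same case.
    """
    mask = category_a | category_b
    handlers = {
        PLAYER | ENEMY: "player has hit enemy",
        PLAYER_BULLET | ENEMY: "player bullet has hit enemy",
        PLAYER_BULLET | BOSS: "player bullet has hit boss",
        PLAYER | BOSS: "player has hit boss",
        PLAYER | ENEMY_BULLET: "player has hit enemy bullet",
    }
    return handlers.get(mask, "Undetected collision occurred")
```

This only stays unambiguous as long as every pair of interest ORs to a distinct value, which is guaranteed here because each category uses its own bit.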
=======================
You could also include this function and then call it via checkPhysics() once you have set up all your physics bodies and collisions and contact bit masks. It will go through every node and print out what collides with what and what contacts what (it doesn't check the isDynamic property, so watch out for that):
//MARK: - Analyse the collision/contact set up.
func checkPhysics() {
// Create an array of all the nodes with physicsBodies
var physicsNodes = [SKNode]()
//Get all physics bodies
enumerateChildNodes(withName: "//.") { node, _ in
if let _ = node.physicsBody {
physicsNodes.append(node)
} else {
print("\(String(describing: node.name)) does not have a physics body so cannot collide or be involved in contacts.")
}
}
//For each node, check its category against every other node's collision and contactTest bit masks
for node in physicsNodes {
let category = node.physicsBody!.categoryBitMask
// Identify the node by its category if the name is blank
let name = node.name != nil ? node.name : "Category \(category)"
if category == UInt32.max {print("Category for \(String(describing: name)) does not appear to be set correctly as \(category)")}
let collisionMask = node.physicsBody!.collisionBitMask
let contactMask = node.physicsBody!.contactTestBitMask
// If all bits of the collisionMask are set, just say it collides with everything.
if collisionMask == UInt32.max {
print("\(name) collides with everything")
}
for otherNode in physicsNodes {
if (node != otherNode) && (node.physicsBody?.isDynamic == true) {
let otherCategory = otherNode.physicsBody!.categoryBitMask
// Identify the node by its category if the name is blank
let otherName = otherNode.name != nil ? otherNode.name : "Category \(otherCategory)"
// If the collisionMask and category match, they will collide
if ((collisionMask & otherCategory) != 0) && (collisionMask != UInt32.max) {
print("\(name) collides with \(String(describing: otherName))")
}
// If the contactMask and category match, they will contact
if (contactMask & otherCategory) != 0 {print("\(name) notifies when contacting \(String(describing: otherName))")}
}
}
}
}
It will produce output like:
Optional("shape_blueSquare") collides with Optional("Screen_edge")
Optional("shape_redCircle") collides with Optional("Screen_edge")
Optional("shape_redCircle") collides with Optional("shape_blueSquare")
Optional("shape_redCircle") notifies when contacting Optional("shape_purpleSquare")
Optional("shape_redCircle") collides with Optional("shape_greenRect")
Optional("shape_redCircle") notifies when contacting Optional("shape_greenRect")
Optional("shape_purpleSquare") collides with Optional("Screen_edge")
Optional("shape_purpleSquare") collides with Optional("shape_greenRect")
Category for Optional("shape_greenRect") does not appear to be set correctly as 4294967295
Optional("shape_greenRect") collides with Optional("Screen_edge")
Optional("shape_yellowTriangle") notifies when contacting Optional("shape_redCircle")
Optional("shape_yellowTriangle") collides with Optional("shape_greenRect")
Optional("shape_yellowTriangle") notifies when contacting Optional("shape_greenRect")
etc.
Q:
Store into a database table as row the query result
query result
Array
(
[0] => stdClass Object
(
[ingredientID] => 2
[code] => Bf
[description] => 1st Class Flour
[volume] => 8268
[price] => 750
[amount_gram] => 0.02980
[status] => Inactive
[uom_id] => 1
[flour] => Yes
)
[1] => stdClass Object
(
[ingredientID] => 3
[code] => Sf
[description] => 3rd Class Flour
[volume] => 18490
[price] => 635
[amount_gram] => 0.02540
[status] => Inactive
[uom_id] => 5
[flour] => Yes
)
..........
I want to store these results into another table as a row called inventory.
the table will look like this:
ID inventory
1 (the result)
2 (another result)
Afterwards I will query it back again so that I can display the result.
Here's what I have done so far.
store:
//getting the result
$inv = $this->db->get_where('table', array('id' => '1'))->row();
<input type="hidden" name="inventory" value="<?php print_r($inv)?>">
//storing in the new table
$this->db->insert('table2',array('inventory'=>$this->input->post('inventory')));
getting:
$inventory = $this->db->get_where('table2',array('ID'=>'1'))->row_array();
//result
array
(
[ID] => 1
[inventory] =>
array
(
[0] => stdClass Object
(
[ingredientID] => 2
...... and so on
I want to display everything in the array['inventory'] which is an array of objects.
I've done this
foreach($arr['inventory'] as $invent):
    echo $invent['ingredientID'];
endforeach;
but there's an error in the foreach part:
error: Invalid argument supplied for foreach()
What should I do?
A:
assuming:
$results = $this->db->get_where('table2',array('ID'=>'1'))->row_array();
you should use this to print it
foreach($results['inventory'] as $inventory)
{
print_r($inventory->ingredientID);
}
Q:
Transpose and aggregate SQL query in MS Access
Hi, I'm looking for help with a query. I currently have a table that looks like this:
BP I/E
AN02 I
BN02 ECN
EN89 ECN
AN02 I
BN02 ECC
EN89 ECN
AN02 ECC
BN02 ECC
EN89 ECN
AN02 ECC
BN02 ECN
EN89 ECN
AN02 I
BN02 I
EN89 ECN
I'm looking to make an SQL query that counts the number of I's, ECN's and ECC's per BP, hopefully producing output that looks like this:
BP I ECN ECC
AN02 2 3 1
BN02 1 6 9
EN89 4 2 3
Can anyone help? Thank You
A:
A crosstab may suit:
TRANSFORM Count(tbl.[I/E]) AS [CountOfI/E]
SELECT tbl.BP
FROM tbl
GROUP BY tbl.BP
PIVOT tbl.[I/E];
Result:
BP ECC ECN I
AN02 2 3
BN02 2 2 1
EN89 5
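Under the hood, this crosstab is just a count grouped by two keys. As a sanity check, here is a small Python sketch (plain Python, not Access) that tallies the sample rows from the question the same way:

```python
from collections import Counter

# (BP, I/E) pairs copied from the sample table in the question.
rows = [
    ("AN02", "I"), ("BN02", "ECN"), ("EN89", "ECN"),
    ("AN02", "I"), ("BN02", "ECC"), ("EN89", "ECN"),
    ("AN02", "ECC"), ("BN02", "ECC"), ("EN89", "ECN"),
    ("AN02", "ECC"), ("BN02", "ECN"), ("EN89", "ECN"),
    ("AN02", "I"), ("BN02", "I"), ("EN89", "ECN"),
]

# Count per (BP, I/E) pair -- the same aggregation TRANSFORM/Count performs.
counts = Counter(rows)

# Pivot into one row per BP, matching the crosstab result above.
pivot = {}
for (bp, ie), n in counts.items():
    pivot.setdefault(bp, {})[ie] = n
```

The tallies match the crosstab result shown above (blank cells in the crosstab correspond to missing keys here).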
Q:
Entity framework inserting parent-children-grandchildren object
I'm trying to insert a parent entity with n children that can further have n of their own children. When I insert an object that has no grandchildren everything works fine, but as soon as the input object contains grandchildren the following error is presented on Context.SaveChanges():
"The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted."
Parent:
public class Parent : Entity
{
public Parent()
{
this.Children = new HashSet<Child>();
}
public virtual ICollection<Child> Children { get; set; }
}
Child:
public class Child : Entity
{
public Child ()
{
this.GrandChildren = new HashSet<GrandChild>();
}
public virtual ICollection<GrandChild> GrandChildren { get; set; }
public int ParentId { get; set; }
public virtual Parent Parent { get; set; }
}
Grandchild:
public class GrandChild : Entity
{
public int ChildId { get; set; }
public virtual Child Child { get; set; }
}
Here's my DBContext:
modelBuilder.Entity<Child>().ToTable("Children")
.HasRequired<Parent>(x => x.Parent);
modelBuilder.Entity<GrandChild>().ToTable("GrandChildren")
.HasRequired<Child>(y => y.Child);
modelBuilder.Entity<Parent>().ToTable("Parents")
.HasMany(z => z.Children)
.WithRequired(i => i.Parent);
Then finally, my insert is as follows. I build a new parent-child-grandchild object conditionally, based on another input object (I'm trying to save the initial state of a questionnaire based on a similar parent-child-grandchild questionnaire object hierarchy):
public Parent Insert(List<AnotherObject> input)
{
Parent parent = new Parent();
// Set parent attributes
foreach (var x in input)
{
Child child = new Child();
// Set child attributes
// EDIT: I also set an attribute based on the list of
// entities from the input
child.OtherObjectId = x.Id;
child.Parent = parent;
if (x.Children.Count > 0)
{
foreach (var y in x.Children)
{
GrandChild grandChild = new GrandChild();
// Set grandChild attributes
grandChild.Child = child;
child.GrandChildren.Add(grandChild);
}
}
parent.Children.Add(child);
}
Context.Parents.Add(parent);
Context.SaveChanges();
    return parent;
}
I've checked the DB and entities several times so I'm hoping there's some kind of flaw in my insert logic instead.
EDIT: This is where the input list (selected) comes from in case that helps to determine something:
Random rand = new Random(DateTime.Now.ToString().GetHashCode());
var selected = diffParent.DiffChild.OrderBy(x => rand.Next()).Take(diffParent.AmountShown).ToList();
foreach (var q in selected)
{
var listOne = new List<DiffChild>();
var listTwo = new List<DiffChild>();
if (q.CountAttribute != null)
listOne = q.DiffChild.Where(c => c.Attribute == true).OrderBy(x => rand.Next()).Take((int)q.CountAttribute).ToList();
if (q.OtherCountAttribute != null)
listTwo = q.DiffChild.Where(d => d.Attribute != true).OrderBy(y => rand.Next()).Take((int)q.OtherCountAttribute).ToList();
q.DiffChildren = listOne.Concat(listTwo).ToList();
}
EDIT: The issue seems to stem from the selected list, and more specifically from the for loop where I try to select specific entities from the full list. If I pass only this:
var selected = diffParent.DiffChild.OrderBy(x => rand.Next()).Take(diffParent.AmountShown).ToList();
The insert seems to work without issues. Seems I've been hunting for issues in the wrong place.
A:
True indeed. I knew this was going to be a fixed maximum depth of 3, so I went with this design, but I'll keep what you said in mind going forward. On topic: I managed to fix this issue by mapping the diffParent to a DTO, doing the picking there, and then mapping it back to a list of entities that I passed to the insert method.
Q:
Django-Python/MySQL: How can I access a field of a table in the database that is not present in a model's field?
This is what I wanted to do:
I have a table imported from another database. The majority of the columns of one of the tables look something like this: AP1|00:23:69:33:C1:4F, and there are a lot of them. I don't think Python will accept them as field names.
I wanted to make an aggregate of them without having to list them as fields in the model. As much as possible, I want the aggregation to be triggered from within the Django application, so I don't want to resort to creating MySQL queries outside the application.
Thanks.
A:
Unless you want to write raw sql, you're going to have to define a model. Since your model fields don't HAVE to be named the same thing as the column they represent, you can give your fields useful names.
class LegacyTable(models.Model):
useful_name = models.IntegerField(db_column="AP1|00:23:69:33:C1:4F")
class Meta:
db_table = "LegacyDbTableThatHurtsMyHead"
managed = False # syncdb does nothing
You may as well do this regardless. As soon as you require the use of another column in your legacy database table, just add another_useful_name to your model, with the db_column set to the column you're interested in.
This has two solid benefits. One, you no longer have to write raw sql. Two, you do not have to define all the fields up front.
The alternative is to define all your fields in raw sql anyway.
Edit:
Legacy Databases describes a method for inspecting existing databases, and generating a models.py file from existing schemas. This may help you by doing all the heavy lifting (nulls, lengths, types, fields). Then you can modify the definition to suit your needs.
python manage.py inspectdb > legacy.py
Q:
Replacing rows in R
In R am reading a file with comments as csv using
read.data.raw = read.csv(inputfile, sep='\t', header=F, comment.char='')
The file looks like this:
#comment line 1
data 1<tab>x<tab>y
#comment line 2
data 2<tab>x<tab>y
data 3<tab>x<tab>y
Now I extract the uncommented lines using
comment_ind = grep( '^#.*', read.data.raw[[1]])
read.data = read.data.raw[-comment_ind,]
Which leaves me:
data 1<tab>x<tab>y
data 2<tab>x<tab>y
data 3<tab>x<tab>y
I am modifying this data through some separate script which maintains the number of rows/cols and would like to put it back into the original read data (with the user comments) and return it to the user like this
#comment line 1
modified data 1<tab>x<tab>y
#comment line 2
modified data 2<tab>x<tab>y
modified data 3<tab>x<tab>y
Since the data I extracted in read.data preserves the row names row.names(read.data), I tried
original.read.data[as.numeric(row.names(read.data)),] = read.data
But that didn't work, and I got a bunch of NAs
Any ideas?
A:
Does this do what you want?
read.data.raw <- structure(list(V1 = structure(c(1L, 3L, 2L, 4L, 5L),
.Label = c("#comment line 1", "#comment line 2", "data 1", "data 2",
"data 3"), class = "factor"), V2 = structure(c(1L, 2L, 1L, 2L, 2L),
.Label = c("", "x"), class = "factor"), V3 = structure(c(1L, 2L, 1L,
2L, 2L), .Label = c("", "y"), class = "factor")), .Names = c("V1",
"V2", "V3"), class = "data.frame", row.names = c(NA, -5L))
comment_ind = grep( '^#.*', read.data.raw[[1]])
read.data <- read.data.raw[-comment_ind,]
# modify V1
read.data$V1 <- gsub("data", "DATA", read.data$V1)
# rbind() and then order() comments into original places
new.data <- rbind(read.data.raw[comment_ind,], read.data)
new.data <- new.data[order(as.numeric(rownames(new.data))),]
Q:
CMake Visual Studio project dependencies
My solution consists of a static library and a console application that uses it.
The solution is generated from CMakeLists.txt files (top-level file and two files for every project)
As far as I know, project dependencies in CMake are managed by changing the add_subdirectory() order.
However, it does not work for me
Providing the complete top-level file
cmake_minimum_required(VERSION 2.8)
project(vtun CXX)
set(TARGET vtun)
set(Boost_DEBUG ON)
set(Boost_USE_STATIC_LIBS ON)
set(BOOST_ROOT ${MY_BOOST_DIR})
find_package(Boost 1.55.0)
if(NOT Boost_FOUND)
message(FATAL_ERROR "Boost libraries are required")
endif()
add_subdirectory(vtunlib)
add_subdirectory(console_client)
The vtunlib project goes first, but the generated *.sln file still does not include dependency information, and console_client is always built first.
CMake 3.0, Visual Studio 2013
A:
Project dependencies in CMake are not managed by changing add_subdirectory() order. You can specify target dependencies explicitly by add_dependencies command:
add_dependencies(<target> [<target-dependency>]...)
Make a top-level <target> depend on other top-level targets to ensure that they build
before <target> does.
or some commands like target_link_libraries do it automatically:
...the build system to make sure the library being linked is
up-to-date before the target links.
So in case console_client links vtunlib, the command target_link_libraries(console_client vtunlib) will handle build order automatically.
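For the layout described in the question, a minimal sketch of the two per-project files might look like this (target names taken from the question; source file names are placeholders):

```cmake
# vtunlib/CMakeLists.txt -- the static library
add_library(vtunlib STATIC vtunlib.cpp)        # placeholder source name

# console_client/CMakeLists.txt -- the console application
add_executable(console_client main.cpp)        # placeholder source name
# Linking also records a dependency on vtunlib, so the generated
# .sln builds vtunlib before console_client.
target_link_libraries(console_client vtunlib)
```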
Q:
Example of continuous function that is analytic on the interior but cannot be analytically continued?
I am looking for an example of a function $f$ that is 1) continuous on the closed unit disk, 2) analytic in the interior and 3) cannot be extended analytically to any larger set. A concrete example would be the best but just a proof that some exist would also be nice. (In fact I am not sure they do.)
I know of examples of analytic functions that cannot be extended from the unit disk. Take a lacuanary power series for example with radius of convergence 1. But I am not sure if any of them define a continuous function on the closed unit disk.
A:
I suggest this function: $$f(z)=\sum_{n=1}^\infty \frac{z^{n!}}{n^2}.$$ It converges uniformly on the closed unit disk, and the derivatives blow up as you approach any root of unity radially.
A:
Let $f(z) = \sum z^n/n^2$, which is continuous and bounded on the closed unit disc but not analytic near $1$. Then consider
$$\sum f(z^n)/n^2.$$
This should have a singularity at every root of unity; and should be analytic in the interior because it is uniformly convergent.
A:
Here is a very concrete example:
$$g(z) = \sum_{n=0}^{\infty}\frac{z^{2^n + 1}}{2^n + 1}.$$
The power series converges uniformly to a continuous function on the closed unit disk. Differentiating we obtain $g'(z) = f(z)$ with
$$f(z) = \sum_{n=0}^{\infty}z^{2^n}.$$
This is the standard example of a function with a natural boundary. Clearly $f(x) \rightarrow +\infty$ as $x \rightarrow 1^{-}$ on the real axis. The functional equation
$$f(z) = z + f(z^2)$$
shows that $f(x) \rightarrow + \infty$ as $x \rightarrow (-1)^{+}$ on the real axis, then
$|f(z)| \rightarrow \infty$ as $z$ tends radially to ${\pm}i$, and so on, so that $|f(z)|$ tends to $\infty$ as $z$ tends radially to any root of unity of order $2^m$. Hence $f(z)$ has a dense set of singularities on the unit circle, and so does $g(z)$, thus $g(z)$ has the unit circle as natural boundary.
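The functional equation is easy to sanity-check numerically, since the partial sums satisfy $S_N(x) = x + S_{N-1}(x^2)$ exactly, term by term. A small Python sketch (the truncation depths are arbitrary choices of mine):

```python
def lacunary_partial(x, terms):
    """Partial sum of f(x) = sum_{n>=0} x^(2^n), truncated after `terms` terms."""
    return sum(x ** (2 ** n) for n in range(terms))

x = 0.9
# Functional equation f(x) = x + f(x^2), exact for matching partial sums:
lhs = lacunary_partial(x, 20)
rhs = x + lacunary_partial(x * x, 19)

# Growth as x approaches the boundary point 1 radially:
values = [lacunary_partial(1 - 10 ** (-k), 60) for k in (1, 2, 3)]
```

The partial sums grow without bound as x moves radially toward 1, illustrating (but of course not proving) the blow-up at the roots of unity.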
Q:
How to blur a rectangle with OpenCV
Is there any way to blur a Mat object in OpenCV (for Android) so that the blur is contained within a Rect object? I am doing a face-blurring application and have tried this:
Mat mat = ...; // is initialized properly to 480*640
...
Mat blurred = new Mat();
for (Rect rect : faceDetections.toArray()) {
int xStart = Math.max(0, rect.x);
int yStart = Math.max(0, rect.y);
Imgproc.blur(mat, blurred, new Size(rect.width/2, rect.height/2), new Point(xStart, yStart));
Core.rectangle(blurred, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height),
new Scalar(0, 255, 0));
}
If I comment out the Imgproc.blur part then it works correctly by drawing a rectagle around the face. However, when I run it with this line I get the following in the logs:
11-07 17:27:54.100: E/AndroidRuntime(25665): Caused by: CvException [org.opencv.core.CvException: cv::Exception: /hdd2/buildbot/slaves/slave_ardbeg1/50-SDK/opencv/modules/imgproc/src/filter.cpp:182: error: (-215) 0 <= anchor.x && anchor.x < ksize.width && 0 <= anchor.y && anchor.y < ksize.height in function void cv::FilterEngine::init(const cv::Ptr<cv::BaseFilter>&, const cv::Ptr<cv::BaseRowFilter>&, const cv::Ptr<cv::BaseColumnFilter>&, int, int, int, int, int, const Scalar&)
This means the anchor point is out of bounds, but I have read that the (0,0) point in OpenCV is the upper-left point, so I don't think it should be going out of bounds.
Also, ideally I would like to do a Gaussian blur (instead of a plain blur) in the region, but I can't figure out how to bound that to the rectangle either: it always blurs the whole image.
Link to ImgProc docs. Any help is greatly appreciated!
A:
Ok I figured out how to do it:
Mat mask = blurred.submat(rect);
Imgproc.GaussianBlur(mask, mask, new Size(55, 55), 55); // or any other processing
Then blurred will have the blurred region. This is because submat doesn't copy the data in blurred, but rather references it, so when the blur is applied it only blurs the parts in blurred referenced by mask.
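The same shared-storage behaviour can be illustrated outside OpenCV with Python's memoryview, which slices a buffer without copying it, much like submat references the parent Mat's pixels (a loose analogy of mine, not OpenCV API):

```python
# A flat 8-bit "image" buffer; a memoryview slice is a view, not a copy.
image = bytearray(16)          # 4x4 single-channel image, all zeros
roi = memoryview(image)[5:7]   # a "sub-region" of the buffer

# Writing through the view mutates the parent buffer in place --
# analogous to why blurring the submat blurs that region of `blurred`.
roi[0] = 200
roi[1] = 100
```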
Q:
Can't see IAM user instances in AWS
I had a client get an AWS account.
I then had them create an IAM user account for me with admin privileges.
I made an EC2 instance under my IAM account.
They can't see the EC2 instance in their account.
How do I make it so they can access the instances I make given that it's really all under their account?
A:
There are a few things you can check. The client should be in the same region as the instances you are creating. If you created instances in N. Virginia but they are logged in to the console with US West selected, they won't see the instances. The second thing, just to make sure, is that the account they are using is an admin and/or has the AmazonEC2FullAccess policy attached.
Q:
Must a $R$-automorphism on $R[X]$ be of the form $X\mapsto aX+b,\ a\in R^*,b\in R$?
Let $R$ be a commutative ring. I wonder whether every $R$-automorphism (that is, a ring automorphism that fixes $R$) $\varphi$ of $R[X]$ satisfies $\varphi(X)=aX+b$, where $a$ is a unit in $R$ and $b$ is an arbitrary element of $R$.
I can prove that it holds when $R$ is reduced, that is, has no nonzero nilpotent elements.
A:
No, at least not if $R$ is not reduced: if $R=\mathbf Z/4\mathbf Z$, the $R$-homomorphism $f$ defined by $f(X)=2X^2+X$ is its own inverse, since $f(f(X))=X$.
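The identity $f(f(X))=X$ in $(\mathbf Z/4\mathbf Z)[X]$ can be checked mechanically. A short Python sketch treating polynomials as coefficient lists (index = degree) with arithmetic mod 4 (the helper names are mine):

```python
MOD = 4  # coefficients live in Z/4Z

def polymul(p, q):
    """Multiply two coefficient lists, reducing coefficients mod MOD."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % MOD
    return out

def polycompose(p, q):
    """Evaluate p at the polynomial q via Horner's rule, mod MOD."""
    result = [0]
    for coeff in reversed(p):
        result = polymul(result, q)
        result[0] = (result[0] + coeff) % MOD
    return result

def trim(p):
    """Drop trailing zero coefficients."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

f = [0, 1, 2]                 # f(X) = 2X^2 + X
ff = trim(polycompose(f, f))  # composition f(f(X)), reduced mod 4
```

The composition reduces to the coefficient list [0, 1], i.e. to $X$ itself, confirming that $f$ is an involution.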
Q:
Java libraries and frameworks overviews
Which websites would you recommend where I can find overviews of the different Java libraries and frameworks that are currently preferred for application development?
update: To be more precise, I'd like to find a site that is like a magazine about Java, with overviews, comparisons, best practices, examples and other useful information about Java (techniques, libraries, frameworks and so on) for different purposes. The aim of a magazine is not to cover everything in its subject area, but to present only the most current, interesting and useful things.
A:
I really like the design on Open Source Software in Java. They've got it laid out by type to start with, plus when you dig down you can find several competing projects for each category.
For example clicking on 'HTML Parsers' gives this (and more - this is just a partial clip):
I hope this helps.
A:
The world of Java has so many libraries that it would be practically impossible to recommend anything without some understanding of what kind of application you are actually writing. Are you writing a Blu-ray disc, a game for a Java phone, an Android app, a desktop application, a server-side computation process, a web service, a website.... the list of things Java can and does do is huge.
The same goes for frameworks. I would always say (although many people on this site disagree) that you should only look for a framework when you are finding something difficult, and the framework makes that thing easier without making other stuff harder. Some people say you should pick your framework first.....
Perhaps if you gave more details on what kind of thing you were trying to attempt, the community might be able to point you towards some useful stuff.
As a side note, remember that the internet is full of people with opinions - just because they are loud, it doesn't make them right - after all http://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog
Q:
Make every tab the same width and also expandable
I'm trying to achieve something like the tabs from a Browser. All tabs must have the same width and also be expandable so when there are a lot of them they need to resize and fit the window (exactly like Chrome or Firefox does).
The Problem:
If a tab have more text then the other tabs, the tab will be larger. Like so:
And if I spawn a lot of tabs, it will always be larger then the others.
What I have tried:
I have tried to add a stylesheet to change the width, but if I change the width to a specific number, the tabs will be static and not resize based on the number of tabs to fit the window. Also I tried to tweak with min/max width, messing with the QSizePolicy, but no chance.
I have looked at the documentation of QT5 for C++ and googled a lot, but no place talks about this or any option about this.
Maybe I need to do some calculations in python and add to a stylesheet as a argument, but not sure how. Maybe there is a simple option that I'm missing.
My code: (This is the full code, so you can copy-paste and test it if you need)
import sys
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QApplication, QWidget, QPushButton, QTabWidget,\
QVBoxLayout, QHBoxLayout, QSizePolicy
class Container(QWidget):
def __init__(self, text):
super(Container, self).__init__()
self.hbox = QHBoxLayout()
self.hbox.setSpacing(0)
self.hbox.setContentsMargins(0, 0, 0, 0)
self.setLayout(self.hbox)
self.button = QPushButton(text)
self.hbox.addWidget(self.button)
class CustomWidget (QWidget):
def __init__(self, parent=None):
super(CustomWidget, self).__init__(parent)
self.button = QPushButton("Add tab")
self.button.clicked.connect(self.buttonClicked)
self.tabs = QTabWidget()
self.tabs.setTabsClosable(True)
self.tabs.setMovable(True)
self.tabs.setDocumentMode(True)
self.tabs.setElideMode(Qt.ElideRight)
self.tabs.setUsesScrollButtons(True)
self.tabs.tabCloseRequested.connect(self.closeTab)
self.tabs.addTab(Container("Very big titleeeeeeeeee"),
"Very big titleeeeeeeeeeee")
self.tabs.addTab(Container("smalltext"), "smalltext")
self.tabs.addTab(Container("smalltext2"), "smalltext2")
vbox = QVBoxLayout()
vbox.addWidget(self.button)
vbox.addWidget(self.tabs)
self.setLayout(vbox)
self.resize(600, 600)
def closeTab(self, index):
tab = self.tabs.widget(index)
tab.deleteLater()
self.tabs.removeTab(index)
def buttonClicked(self):
self.tabs.addTab(Container("smalltext2"), "smalltext2")
app = QApplication([])
app.setStyleSheet("""
QTabBar::tab {
background: lightgray;
color: black;
border: 0;
/* min-width: 100px; */
max-width: 200px;
/* width: 150px; */
height: 20px;
padding: 5px;
}
QTabBar::tab:selected {
background: gray;
color: white;
}
""")
widget = CustomWidget()
widget.show()
sys.exit(app.exec_())
Other information:
Operating System: Windows 10
Python Version: 3.6.2
PyQt Version: 5.9.1
A:
To set the width for all tabs we must override the tabSizeHint() method, in this case we return the same width for all tabs.
class TabBar(QTabBar):
def tabSizeHint(self, index):
size = QTabBar.tabSizeHint(self, index)
w = int(self.width()/self.count())
return QSize(w, size.height())
After assigning this custom tabBar we use the setTabBar() method of QTabWidget.
self.tabs = QTabWidget()
self.tabs.setTabBar(TabBar())
Output:
The example can be found in the following link
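One small detail worth noting: int(self.width()/self.count()) truncates, so a few leftover pixels can go unused. If that matters, the remainder can be spread across the first few tabs. A sketch of that idea in pure Python, independent of Qt (function name is mine):

```python
def tab_widths(total_width, tab_count):
    """Split total_width into tab_count near-equal integer widths.

    The first `remainder` tabs get one extra pixel, so the widths
    always sum exactly to total_width.
    """
    base, remainder = divmod(total_width, tab_count)
    return [base + (1 if i < remainder else 0) for i in range(tab_count)]
```

Applying this inside tabSizeHint would mean returning a per-index width instead of one shared value.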
Q:
Configure Spring Web Flow with Java configuration
I am using Spring 3.1 and want to include Spring Web Flow 2.3. One thing I really like about Spring is that you can leave off the XML configuration in favour of Java-only configuration using @Configuration and @Bean annotations.
However, I have not yet found out how to configure Web Flow this way. The docs that turned up on my Google searches all referred to XML configuration only. Is it possible, does anyone have any pointers?
EDIT:
I was not asking about the flow definition, but rather for a replacement for the webflow-config schema. At the moment, configuration items such as flow-registry and flow-executor have to go in Spring-XML files, along with the flow handler mapping referring to them.
A:
If you are referring to the flow definitions, then no. Java-based configuration for Web Flow was to be part of Web Flow 3. The most recent status update can be found in this thread.
There is currently no set date for a Spring Web Flow 3 release. Version 3 tickets are currently being reviewed for inclusion in 2.4 or subsequent releases so once again if there are things you care about please vote, comment, and discuss. The flagship Web Flow 3 item -- Java-based flow definitions, is still under consideration although currently on hold as we move forward with other important goals mentioned above.
Q:
Cannot establish a database connection (JDBC, Tomcat)
public class Connect {
private final String URL = "jdbc:mysql://localhost:3306/gregs_list?" +
"useUnicode=true&useSSL=true&useJDBCCompliantTimezoneShift=true" +
"&useLegacyDatetimeCode=false&serverTimezone=UTC";
private final String USERNAME = "root";
private final String PASSWORD = "root";
private Connection connection;
public Connection getConnect(){
try {
connection = DriverManager.
getConnection(URL, USERNAME, PASSWORD);
} catch (SQLException e) {
e.printStackTrace();
}
return connection;
}
}
Servlets
public class ServletGlobal extends HttpServlet {
private Connect connect;
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
doGet(request, response);
}
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
try(Connection connection = connect.getConnect();
PreparedStatement prt = connection.prepareStatement("SELECT * FROM user");
ResultSet rs = prt.executeQuery()) {
while (rs.next()){
System.out.println(rs.getInt("user_id"));
System.out.println(rs.getString("user_name"));
System.out.println(rs.getInt("user_age"));
System.out.println(rs.getInt("user_salary"));
}
} catch (SQLException e) {
e.printStackTrace();
}
}
}
Error:
04-Feb-2018 22:07:39.170 SEVERE [http-nio-9000-exec-6]
org.apache.catalina.core.StandardWrapperValve.invoke Servlet.service() for
servlet [Servant] in context with path [] threw exception
java.lang.NullPointerException
at com.mycompany.app.logic.ServletGlobal.doGet(ServletGlobal.java:28)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:635)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:742)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:650)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:803)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:790)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1459)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
web.xml
<web-app>
<display-name>Archetype Created Web Application</display-name>
<servlet>
<servlet-name>Servant</servlet-name>
<servlet-class>com.mycompany.app.logic.ServletGlobal</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>Servant</servlet-name>
<url-pattern>/rer</url-pattern>
</servlet-mapping>
</web-app>
I created a minimal piece of code to test this, and even that does not work. It fails on the line where I try to obtain the connection, and the error shows up in the Tomcat localhost log.
A:
The problem is that the variable Connect connect is never initialized.
As suggested in the comments, you could initialize it with Connect connect = new Connect();, but that would not be quite right.
First, you would have to create a Connect instance everywhere a connection is needed. As a consequence, and this is the second, more serious drawback, you would have no single place to manage connections, so controlling them (for example, limiting their number) would be impossible.
Your Connect class is a typical example of the factory pattern. There are several ways to implement it; in your case the simplest is a static factory:
public final class ConnectionManager {
private static final String URL = "jdbc:mysql://localhost:3306/gregs_list?" +
"useUnicode=true&useSSL=true&useJDBCCompliantTimezoneShift=true" +
"&useLegacyDatetimeCode=false&serverTimezone=UTC";
private static final String USERNAME = "root";
private static final String PASSWORD = "root";
private static Connection connection;
private ConnectionManager() {}
public static Connection getConnect() {
try {
connection = DriverManager.getConnection(URL, USERNAME, PASSWORD);
} catch (SQLException e) {
e.printStackTrace();
}
return connection;
}
}
Now ConnectionManager manages the connections and guarantees a single shared connection instance.
Usage in code:
try(Connection connection = ConnectionManager.getConnect(); ...
Q:
Numbered file renaming algorithm
The question is marked as ASP classic but an algorithm solution is ok.
I have the following set of files which are sequentially numbered:
1.jpg, 2.jpg, 3.jpg, 4.jpg ... X.jpg
I need a function that takes two filenames as input, the fromFile and toFile parameters, and renames the affected files so that the from file is moved in the sequence to just before the toFile, with the files in between renumbered.
Examples:
Moving 1.jpg onto 4.jpg should do the following:
rename 1.jpg to 1.jpg.temp
rename 2.jpg to 1.jpg
rename 3.jpg to 2.jpg
rename 1.jpg.temp to 3.jpg
other files are unaffected by the operation
Moving 4.jpg to 2.jpg should do the following:
rename 4.jpg to 4.jpg.tmp
rename 3.jpg to 4.jpg
rename 2.jpg to 3.jpg
rename 1.jpg to 2.jpg
rename 4.jpg.tmp to 1.jpg
other files are unaffected
As input I have an array of strings containing the filenames, plus the two from/to filenames.
Can you tell me what is the best approach to the file renaming?
A:
Here is a brief approach, assuming all your files are named <number>.jpg. You will need to build your own helper functions, though:
FileExists(Filename)
RenameFile(OriginalFilename,NewFilename)
<%
Input1 = Request.Form("file1")
Input2 = Request.Form("file2")
'gets digits only
Input1Digit = CInt(Left(Input1, Instr(Input1, ".") - 1))
Input2Digit = CInt(Left(Input2, Instr(Input2, ".") - 1))
'is file1 less than file2?
If Input1Digit < Input2Digit Then
'loop through the digits frontwards 1 to 5
For x = Input1Digit to Input2Digit
'if the first loop?
If cStr(x) = cStr(Input1Digit) Then
'see if file exists here
If FileExists(Input1) Then
FileRename(Input1, Input1 & ".temp") 'Rename the file here [From, To]
OriginalFileExists = True
Else
RenameFile(Input1, Input1Digit & ".jpg")
OriginalFileExists = False
End If
'if not on the first loop?
Else
'did the original file exist '.temp'
If OriginalFileExists Then
NewFileName = cInt(x) - 1
Else
NewFileName = cInt(x)
End If
'rename each file here
RenameFile(x & ".jpg", NewFileName & ".jpg")
End If
Next
Else
'loop through the digits more to less 5 to 1
For x = Input1Digit to Input2Digit STEP -1
'if the first loop?
If cStr(x) = cStr(Input1Digit) Then
'see if file exists here
If FileExists(Input1) Then
FileRename(Input1, Input1 & ".temp") 'Rename the file here [From, To]
OriginalFileExists = True
Else
RenameFile(Input1, Input1Digit & ".jpg")
OriginalFileExists = False
End If
'if not on the first loop?
Else
'did the original file exist '.temp'
If OriginalFileExists Then
NewFileName = cInt(x) + 1
Else
NewFileName = cInt(x)
End If
'rename each file here
RenameFile(x & ".jpg", NewFileName & ".jpg")
End If
Next
End If
%>
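Since the question explicitly allows a language-agnostic algorithm, here is a sketch of the same operation in Python. Moving a file before another is a rotation of the affected slice of filenames. The function name and structure are my own, not from the answer above, and the rename callback is injectable so the sequence of renames can be inspected without touching real files:

```python
import os

def move_before(from_n, to_n, rename=os.rename):
    """Move from_n.jpg so it ends up just before to_n.jpg (at index
    to_n - 1), rotating the files in between by one position.
    `rename` defaults to os.rename but can be replaced with any
    callable, e.g. to dry-run the operation."""
    temp = f"{from_n}.jpg.tmp"
    rename(f"{from_n}.jpg", temp)
    if from_n < to_n:
        # e.g. moving 1 onto 4: 2 -> 1, 3 -> 2, then temp -> 3
        for n in range(from_n + 1, to_n):
            rename(f"{n}.jpg", f"{n - 1}.jpg")
    else:
        # e.g. moving 4 onto 2: 3 -> 4, 2 -> 3, 1 -> 2, then temp -> 1
        for n in range(from_n - 1, to_n - 2, -1):
            rename(f"{n}.jpg", f"{n + 1}.jpg")
    rename(temp, f"{to_n - 1}.jpg")

# Dry-run of the question's first example, recording the renames:
log = []
move_before(1, 4, rename=lambda a, b: log.append(f"{a} -> {b}"))
print(log)
# ['1.jpg -> 1.jpg.tmp', '2.jpg -> 1.jpg', '3.jpg -> 2.jpg', '1.jpg.tmp -> 3.jpg']
```

This reproduces both rename sequences from the question's examples.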
Q:
Assign a custom binding configuration for wsHttpBinding
I'm running into some issues with what I believe is a pretty simple problem.
When trying to send "large" messages over WCF, I am getting the error:
The maximum message size quota for incoming messages (65536) has been
exceeded. To increase the quota, use the MaxReceivedMessageSize
property on the appropriate binding element.
To remedy this, I created a custom binding configuration with a higher value for MaxReceivedMessageSize. But I still get the error, as if the value is not being read.
Here is my Server App.config:
<system.serviceModel>
<bindings>
<wsHttpBinding>
<binding name="largeBinding" maxReceivedMessageSize="2147483647">
<readerQuotas maxArrayLength="2147483647"/>
</binding>
</wsHttpBinding>
</bindings>
<services>
<service name="EMS.Services.TradeController">
<endpoint address="http://localhost:9002/TradeService"
binding="wsHttpBinding"
bindingConfiguration="largeBinding"
contract="EMS.Contracts.Services.ITradeService"/>
</service>
</services>
</system.serviceModel>
And here is my Client App.config:
<system.serviceModel>
<bindings>
<wsHttpBinding>
<binding name="largeBinding" maxReceivedMessageSize="2147483647">
<readerQuotas maxArrayLength="2147483647"/>
</binding>
</wsHttpBinding>
</bindings>
<client>
<endpoint address="http://localhost:9002/TradeService"
binding="wsHttpBinding"
bindingConfiguration="largeBinding"
contract="EMS.Contracts.Services.ITradeService"/>
</client>
Is there a part I am missing to assign the binding properly?
A:
Was the client's configuration generated automatically by the Add Service Reference tool? I suspect the client proxy may not be using the service endpoint created with wsHttpBinding; otherwise there seems to be nothing wrong with your current configuration.
Besides, please consider configuring the other size-related attributes:
<wsHttpBinding>
<binding name="largeBinding" allowCookies="true"
maxReceivedMessageSize="20000000"
maxBufferSize="20000000"
maxBufferPoolSize="20000000">
<readerQuotas maxDepth="32"
maxArrayLength="200000000"
maxStringContentLength="200000000"/>
</binding>
</wsHttpBinding>
The MaxBufferSize and ReaderQuotas properties also need to be configured:
MaxBufferSize
ReaderQuotas
Feel free to let me know if the problem still exists.
Q:
How to replace a word in a sentence based on a condition in ReactJS
I have a React component for the header of my website that says: "Good morning, I'm John. Thank you for visiting my website; please contact me if you need more information."
The first line says "Good morning". I am trying to change it based on the user's current time so that it says "Good afternoon" or "Good evening". I already have the code below, but I am not sure how to change only that one word while keeping the rest of my paragraph.
let today = new Date()
let curHr = today.getHours()
if (curHr < 12) {
<h1> Good morning </h1>
} else if (curHr < 18) {
<h1> Good afternoon </h1>
} else {
<h1> Good evening </h1>
}
import React, { Component } from "react";
class MainHeader extends Component {
render() {
return (
<div className="my_header">
<h3> Good morning, I'm john Thank you for visiting my website, please contact me if you need more information </h3>
</div>
);
}
}
export default MainHeader;
A:
You can move the code above into a reusable function, compute a new greeting on each render, and then display it in the JSX with curly braces.
function generateGreeting(){
let today = new Date()
let curHr = today.getHours()
if (curHr < 12) {
return 'Good morning'
} else if (curHr < 18) {
return 'Good afternoon'
} else {
return 'Good evening'
}
}
import React, { Component } from "react";
class MainHeader extends Component {
render() {
const greeting = generateGreeting();
return (
<div className="my_header">
<h3>{greeting}, I'm John. Thank you for visiting my website, please contact me if you need more information</h3>
</div>
);
}
}
export default MainHeader;
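The hour-based branching itself is language-agnostic; here is the same decision sketched in Python, with the hour passed in explicitly so it is easy to test (the function name is hypothetical):

```python
def greeting_for_hour(hour):
    """Return the greeting for a 24-hour clock value, mirroring the
    branching in the JavaScript answer above."""
    if hour < 12:
        return "Good morning"
    elif hour < 18:
        return "Good afternoon"
    return "Good evening"

print(greeting_for_hour(9), "/", greeting_for_hour(14), "/", greeting_for_hour(21))
# Good morning / Good afternoon / Good evening
```

Passing the hour in as a parameter (instead of reading the clock inside the function) is what makes the boundary cases (11 vs. 12, 17 vs. 18) straightforward to verify.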
Q:
Tips and tricks to installing cabinet doors and mounting hinges
What are some tips for installing cabinet doors and hinges so that they are positioned correctly and look good? I've been to some showrooms (IKEA) where the doors look slightly crooked, aren't perfectly symmetrical, or the gaps between left/right doors vary from cabinet to cabinet.
A:
Euro style hinges adjust in all 3 directions that you'd want: up/down, in/out (relative to the cabinet wall), and front/back.
If you start with a cabinet carcase that's square or nearly square, it's usually a matter of tweaking some of those 3 adjustments on the hinges. (If the cabinet is desperately racked, you might exceed the adjustment range of the hinges. In this case, you need to correct the cabinet, then fiddle with the hinges.)
Note that the screws on those hinges are often Pozidriv, so if you're doing a lot of this, you'd do well to get the right screwdriver. (Failing that, use a Phillips, but be careful about stripping the screw heads.)
Q:
How to custom CALayer using subclassing
On UIView there are number of CALayers. Each CALayer consists of CAShapeLayers. CAShapeLayers' path property consists of UIBezierPath.
My objective: when I tap on a CALayer, I want to get the points I used to draw its UIBezierPath. For that I thought of subclassing CALayer with an NSMutableArray to store the points, saving them while drawing in a particular CALayer; then whenever I tap a layer I can retrieve the points associated with it. I also want to give each CALayer a tag, but I do not know how to do this.
The second way would be a built-in CALayer property that gives me the points captured in the layer, but I am not aware of any such property.
Please share your ideas how to accomplish this task?
A:
You will need the UIView that contains the CALayer to handle the touch events; there are no built-in touch events for CALayers. - (CALayer *)hitTest:(CGPoint)thePoint returns the "deepest" CALayer containing the point from the touch event, so calling [self.layer hitTest:point] will check all your sublayers and return the correct CALayer:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
CGPoint point = [[touches anyObject] locationInView:self];
CALayerSubclass *taplayer = (CALayerSubclass *)[self.layer hitTest:point];
NSArray *points = [taplayer getPoints];
}
There is no way to get back out of a CGPath the points you fed into it; the best you can do are these methods for getting limited info from the path. So, like you said, your best bet is to subclass CALayer and put all the info you need in a data structure to retrieve later.
// .h
@interface CALayerSubclass : CALayer
@property (nonatomic, strong) NSMutableArray *points;
@end
// .m
-(void)drawInContext:(CGContextRef)ctx {
...
[bezierPath addCurveToPoint:controlPoint1:point1 controlPoint2:point2];
[points addObject:[NSValue valueWithCGPoint:point1]];
[points addObject:[NSValue valueWithCGPoint:point2]];
...
}
Just store all the CGPoints (or most other Core Graphics structures) in NSValues and throw them into an array.
Q:
Java parseInt() question
Why is it that when I use parseInt for this:
private String certainNumber;
public int getNumber()
{
return Integer.parseInt(certainNumber);
}
It compiles.
But If I were to do this:
public String getStreetNumber()
{
return streetNumber;
}
and parseInt the returned value like so:
@Override
public int compareTo(Object o)
{
Address tempAddress = (Address)o;
if(Integer.parseInt(getStreetNumber()) < tempAddress.Integer.parseInt(getStreetNumber()))
{
return -1;
}
... // etc.
}
It does not compile?
edit: tried the suggestions... still not compiling?
edit2: Thanks for the help guys!
A:
That is because parseInt is a method that belongs to the Integer class: you have to call it as Integer.parseInt(value). I highly doubt that you have a parseInt function in either your custom class (I suspect this is all part of an Address class?) or the tempAddress instance.
Try this:
public int compareTo(Object o)
{
Address tempAddress = (Address)o;
if(Integer.parseInt(getStreetNumber()) <
// you need to parse the return value of tempAddress's getStreetNumber()
// not get the tempAddress's parseInt of this.getStreetNumber()
Integer.parseInt(tempAddress.getStreetNumber()))
{
return -1;
}
// etc...
}
A:
Because you called parseInt() not Integer.parseInt()
Q:
HTTPS ACTIVATED FOR WEB SITE
I need to confirm that my web application's connection to the JBoss 6 server is made via HTTPS only.
What are some technical checks that I can make ?
This link was useful to me link
Regards,
A:
After completing the configuration changes for JBOSS, you must restart JBoss Web as you normally do. You should be able to access any web application supported by JBoss Web via SSL. For example, try:
https://yourdomain:8443
It should land you to your start page.
If you want more info, please comment. :)
Q:
Django: How to send RFC 822 compliant email? (users can't reply because of whitespace)
A user has recently notified me that he cannot reply to my email messages because of white space in the address. He also mentioned the raw FROM field not being RFC 822 compliant - I don't know much about it and can't verify.
Here's the raw From field that he received:
From: SiteName [email protected]
This is the way I'm currently sending these emails:
msg_plain = render_to_string('email_template.txt', context)
msg_html = render_to_string('email_template.html', context)
EMAIL_FROM_FIELD = 'SiteName [email protected]'
mail_was_sent = send_mail(
email_subject,
msg_plain,
EMAIL_FROM_FIELD,
[profile.user.email],
html_message=msg_html,
)
What am I doing wrong?
A:
Unless I'm missing something, I think you need to change this:
EMAIL_FROM_FIELD = 'SiteName [email protected]'
To this:
EMAIL_FROM_FIELD = 'SiteName <[email protected]>'
The general rule is that wherever there may be linear-white-space (NOT simply LWSP-chars), a CRLF immediately followed by AT LEAST one LWSP-char may instead be inserted.
This is from: https://www.w3.org/Protocols/rfc822/
A:
You may define the From field as follows:
EMAIL_FROM_FIELD = 'SiteName <[email protected]>'
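Rather than hand-building the header string, Python's standard library can produce (and parse back) a compliant From value; a quick sketch, where info@example.com stands in for the address redacted in the post:

```python
from email.utils import formataddr, parseaddr

# info@example.com is a placeholder; the post's real address is redacted.
from_field = formataddr(("SiteName", "info@example.com"))
print(from_field)  # SiteName <info@example.com>

# parseaddr splits a From value back into (display name, address).
print(parseaddr(from_field))  # ('SiteName', 'info@example.com')
```

formataddr also takes care of quoting the display name if it contains characters that are special in RFC 822 headers.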
Q:
$\beta^k$ is a cycle $\iff \gcd(k, o(\beta)) = 1$
GIVEN:
$\beta$ is a cycle in $S_n$;
then $\beta^k$ is a cycle iff $\gcd(k, o(\beta)) = 1$.
Let $o(\beta) = m$ and $\beta = (a_1\, a_2 \,\ldots\, a_m)$.
While trying to prove the converse, I assumed that $(k,m)=1$
Then, since $o(\beta^k) = m/\gcd(m,k) = m$,
Does this mean $\beta^k$ is also a cycle?
I'm stuck here (and in general), can you please give me a hint on how to prove the statement assuming that $\beta^k$ is a cycle first?
A:
Consider $\beta=(12)(34)$ in $S_4$: its order is $2$ and $\beta$ is not a cycle, yet $\beta^3=\beta$ even though $\gcd(2,3)=1$. So having full order does not by itself make $\beta^k$ a cycle; you really need the hypothesis that $\beta$ is a cycle.
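The counterexample can also be checked mechanically. Below is a small Python sketch; the dict-as-permutation representation and the helper names are ad hoc for this check, not standard notation:

```python
from math import gcd

beta = {1: 2, 2: 1, 3: 4, 4: 3}  # the permutation (12)(34) in S_4

def power(p, k):
    """k-th compositional power of permutation p (k >= 0)."""
    result = {x: x for x in p}
    for _ in range(k):
        result = {x: p[result[x]] for x in result}
    return result

def orbits(p):
    """Return the orbits (cycles) of p as a set of frozensets."""
    seen, out = set(), set()
    for x in p:
        if x in seen:
            continue
        y = x
        orbit = []
        while y not in seen:
            seen.add(y)
            orbit.append(y)
            y = p[y]
        out.add(frozenset(orbit))
    return out

identity = power(beta, 0)
order = next(k for k in range(1, 25) if power(beta, k) == identity)
print(order)                   # 2
print(gcd(order, 3))           # 1, yet ...
print(power(beta, 3) == beta)  # True: beta^3 = beta
print(len(orbits(beta)))       # 2 nontrivial orbits -> not a single cycle
```

The last line confirms $\beta$ (and hence $\beta^3$) decomposes into two disjoint transpositions rather than one cycle.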
Q:
Are continuous self-bijections of connected spaces homeomorphisms?
I hope this doesn't turn out to be a silly question.
There are lots of nice examples of continuous bijections $X\to Y$ between topological spaces that are not homeomorphisms. But in the examples I know, either $X$ and $Y$ are not homeomorphic to one another, or they are (homeomorphic) disconnected spaces.
My Question: Is there a connected topological space $X$ and a continuous bijection $X\to X$ that is not a homeomorphism?
For the record, my example of a continuous bijection $X\to X$ that is not a homeomorphism is the following. Roughly, the idea is to find an ordered family of topologies $\tau_i$ (
$i\in \mathbb Z$) on a set $S$ and use the shift map to create a continuous bijection from $\coprod_{i\in \mathbb Z} (S, \tau_i)$ to itself. Let $S = \mathbb{Z} \coprod \mathbb Z$. The topology $\tau_i$ is as follows: if $i<0$, then the left-hand copy of $\mathbb Z$ is topologized as the disjoint union of the discrete topology on $[-n, n]$ and the indiscrete topology on its complement, while the right-hand copy of $\mathbb Z$ is indiscrete. The space $(S, \tau_0)$ is then indiscrete. For $i>0$, the left-hand copy of $\mathbb Z$ is indiscrete, while the right-hand copy is the disjoint union of the indiscrete topology on $[-n, n]$ with the discrete topology on its complement. Now the map $\coprod_{i\in \mathbb Z} (S, \tau_i)\to \coprod_{i\in \mathbb Z} (S, \tau_i)$ sending $(S, \tau_i) \to (S, \tau_{i+1})$ by the identity map of $S$ is a continuous bijection, but not a homeomorphism.
A:
Here's a nice geometric example. Let $X\subset\mathbb{R}^2$ be the union of the $x$-axis, the line segments $\{n\}\times[0,2\pi)$ for $n\in \{\ldots,-3,-2,-1,0\}$, and circles in the upper half plane of radius $1/3$ tangent to the $x$-axis at the points $(1,0),(2,0),\ldots$.
Note that $X$ is connected.
Define a map $f\colon X\to X$ by
$$
f(x,y) \;=\; \begin{cases}(x+1,y) & \text{if }x\ne 0 \\ \left(1+\frac{\sin y}{3},\frac{1-\cos y}{3}\right) & \text{if }x=0\end{cases}.
$$
That is, $f$ translates most points to the right by $1$, and maps the line segment $\{0\}\times[0,2\pi)$ onto the circle that's tangent to the $x$-axis at the point $(1,0)$. Then $f$ is continuous and bijective, but is not a homeomorphism.
A:
Zipping up halfway gives a continuous bijection from your pants with the fly down to your pants with the fly at half mast and this is not a homeomorphism. However, the two spaces are homeomorphic no?
One can well-imagine this phenomena persists for various other "manifolds with tears" - even in higher dimensions.
A:
Yes (to your body question, not your title question; it is confusing when people do this). Take $X = \mathbb{Z}$ with the topology generated by an open set containing $n$ for every positive integer $n$. (This space is connected because the smallest open set containing a non-positive integer is the entire space.) Consider the continuous bijection given by sending $x$ to $x - 1$.
Here is what might be a Hausdorff example: take $X = \mathbb{R}$ with the topology generated by the usual topology together with the open set $(0, \infty) \cap \mathbb{Q}$, and again consider the continuous bijection $x \to x-1$. Unfortunately I am not sure if $X$ is connected.
The most general situation I know where a continuous bijection $X \to Y$ is automatically a homeomorphism is if $X$ is compact and $Y$ is Hausdorff. This is a nice exercise and extremely useful.
Q:
Are multiple roles allowed in the @Secured annotation with Spring Security
I would like to allow access to a particular method to more than one group of users. Is it possible in Spring Security 3.x to do such a thing using the @Secured annotation? Consider two groups (roles) OPERATOR and USER, would this code be valid:
@Secured("ROLE_OPERATOR", "ROLE_USER")
public void doWork() {
// do useful processing
}
A:
You're almost there. Syntactically, you need to write it like this:
@Secured({"ROLE_OPERATOR", "ROLE_USER"})
public void doWork() { ... }
This is because you're supplying multiple values to a single array attribute of the annotation. (Java syntactically special-cases handing in a single value, but now you need to do it “properly”.)
A:
@Donal Fellows answer is correct for Spring apps. However, if you're working in Grails, you need to use the Groovy syntax for lists so the code would look like this
@Secured(["ROLE_OPERATOR", "ROLE_USER"])
public void doWork() { ... }
Q:
MySQL - Selecting rows with a minimum number of occurences
I have this query:
SELECT DISTINCT brand_name FROM masterdata WHERE in_stock = '1' ORDER BY brand_name
It works well, except that I get far too many results. How do I limit this such that rather than just looking for distinct entries, it will only give me distinct entries that exist a minimum of 3 times (for example)?
Basically, if the column had this data...
brand_name
==========
apple
banana
apple
apple
orange
banana
orange
orange
...my current query would return "apple, banana, orange". How do I get it such that it only returns "apple, orange" (ignoring banana because it has less than three occurrences)?
I'm using PHP to build the query, if it matters.
Thanks!
A:
Something like this (off-the-cuff and untested):
SELECT brand_name
FROM masterdata
WHERE in_stock = '1'
GROUP BY brand_name
HAVING COUNT(*) >= 3
ORDER BY brand_name
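To see the GROUP BY / HAVING approach in action against the sample data, here is a quick check using Python's sqlite3 module; the query syntax shown is common to MySQL and SQLite:

```python
import sqlite3

# In-memory table mirroring the fruit example from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE masterdata (brand_name TEXT, in_stock TEXT)")
rows = ["apple", "banana", "apple", "apple",
        "orange", "banana", "orange", "orange"]
conn.executemany("INSERT INTO masterdata VALUES (?, '1')",
                 [(r,) for r in rows])

result = conn.execute("""
    SELECT brand_name FROM masterdata
    WHERE in_stock = '1'
    GROUP BY brand_name
    HAVING COUNT(*) >= 3
    ORDER BY brand_name
""").fetchall()
print([r[0] for r in result])  # ['apple', 'orange']
```

banana is filtered out because its group has only two rows, which is exactly the behavior asked for.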
Q:
Groupby with Apply Method in Pandas : Percentage Sum of Grouped Values
I am trying to develop a program to convert daily data into monthly or yearly data and so on.
I have a DataFrame with datetime index and price change %:
% Percentage
Date
2015-06-02 0.78
2015-06-10 0.32
2015-06-11 0.34
2015-06-12 -0.06
2015-06-15 -0.41
...
I had success grouping by some frequency. Then I tested:
df.groupby('Date').sum()
df.groupby('Date').cumsum()
That would work for simple sums, but the problem is that percentage changes don't add; they compound as (1 + x0) * (1 + x1) * ... - 1. Then I tried:
def myfunc(values):
p = 0
for val in values:
p = (1+p)*(1+val)-1
return p
df.groupby('Date').apply(myfunc)
I can't understand how apply() works. It seems to apply my function to each row individually rather than to the grouped items.
A:
Your apply is applying to all rows individually because you're grouping by the date column. Your date column looks to have unique values for each row, so each group has only one row in it. You need to use a Grouper to group by month, then use cumprod and get the last value for each group:
# make sure Date is a datetime
df["Date"] = pd.to_datetime(df["Date"])
# add one to percentages
df["% Percentage"] += 1
# use cumprod on each month group, take the last value, and subtract 1
df.groupby(pd.Grouper(key="Date", freq="M"))["% Percentage"].apply(lambda g: g.cumprod().iloc[-1] - 1)
Note, though, that this applies the percentage growth as if the steps between your rows were the same, but it looks like sometimes it's 8 days and sometimes it's 1 day. You may need to do some clean-up depending on the result you want.
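As a sanity check on the compounding itself, the question's iterative fold agrees with the cumulative-product form (1 + x1) * (1 + x2) * ... * (1 + xn) - 1 that cumprod computes. A pure-Python sketch, no pandas needed; the values are treated as fractions (e.g. 0.78% would be 0.0078):

```python
def compound(values):
    """The question's iterative fold: p = (1+p)*(1+val) - 1."""
    p = 0.0
    for val in values:
        p = (1 + p) * (1 + val) - 1
    return p

def product_form(values):
    """Cumulative product of (1 + x), minus 1."""
    prod = 1.0
    for val in values:
        prod *= 1 + val
    return prod - 1

june = [0.0078, 0.0032, 0.0034, -0.0006, -0.0041]
print(abs(compound(june) - product_form(june)) < 1e-12)  # True
```

This is why the answer above can replace the fold with cumprod on the (1 + x) column and take the last value of each group.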
Q:
jQuery prepend() Bug?
I'm using prepend() and the result seems to be buggy.
$('#element').prepend('<div><a href="http://google.com"><a href="http://test.com">Test.com</a> - A site</a></div>');
And the html result (also viewed with Firebug) is buggy:
<div>
<a href="http://google.com"></a>
<a href="http://test.com">Test.com</a> - A site
</div>
(The links are just example links)
A:
You can't have an anchor inside an anchor...so it's not "buggy", it's behaving unexpectedly with invalid HTML, but when HTML is invalid that's...well, expected.
Think about it this way, if you clicked on the inside anchor, where should your browser go? You clicked on http://test.com and http://google.com.
Q:
TextView set multiple string into a setText(int resid)
Good morning,
How can I put multiple string resources inside setText so they are displayed in order?
I have a layout with a TextView (id: TxtDisp) and a Button (id: NextSentence) that changes the text when I click it.
NextSentence.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
TxtDisp.setText(R.string.sentence_2);
}
});
Where or how can I put four to six string resources so that they are displayed in order when the button is clicked?
Thanks in advance !
A:
You could put the string resources in an array and get the string from that. First add a class member to track which sentence is next:
private int nextSentenceId = 0;
then in onCreate use code like this:
final int[] sentences = new int[]{R.string.sentence_1, R.string.sentence_2, R.string.sentence_3};
NextSentence.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if( nextSentenceId < sentences.length ) {
TxtDisp.setText(sentences[nextSentenceId]);
++nextSentenceId;
}
}
});
Make sure to catch when you are at the last sentence or you will get an array out of bounds error.