Q:
Key-Value Stores vs. RDBMs vs. "Cloud" DBs (SDB)
I'm comfortable in the MySQL space, having designed several apps over the past few years and continuously refined their performance and scalability. I also have some experience working with memcached to provide application-side speed-ups on frequently queried result sets. And recently I implemented Amazon SDB as my primary "database" for an ecommerce experiment.
To oversimplify, the quick justification I went through in my mind for using the SDB service was that a schema-less database structure would allow me to focus on the logical problem of my project and rapidly accumulate content in my data store. That is, don't worry about setting up and normalizing all possible permutations of a product's attributes beforehand; simply start loading in the products and SDB will remember everything that is available.
Now that I have managed to get through the first few iterations of my project and I need to set up simple interfaces to the data, I am running into issues that I had taken for granted when working with MySQL. Ex: grouping in select statements and limit syntax to query "items 50 to 100". The ease advantage I gained using the schema-free architecture of SDB, I lost to the performance hit of querying/looping over a result set with just over 1800 items.
Now I'm reading about projects like Tokyo Cabinet that are extending the concept of in-memory key-value stores to provide pseudo-relational functionality at ridiculously faster speeds (14x, I read somewhere).
My question:
Are there some rudimentary guidelines or heuristics that I, as an application designer/developer, can go through to evaluate which DB tech is the most appropriate at each stage of my project?
Ex: At a prototyping stage where logical/technical unknowns of the application make data structure fluid: use SDB.
At a more mature stage where user deliverables are a priority, use traditional tools where you don't have to spend dev time writing sorting, grouping or pagination logic.
Practical experience with these tools would be very much appreciated.
Thanks SO!
Shaheeb R.
A:
The problems you are finding are why RDBMS specialists view some of the alternative systems with a jaundiced eye. Yes, the alternative systems handle certain specific requirements extremely fast, but as soon as you want to do something else with the same data, the fleetest suddenly becomes the laggard. By contrast, an RDBMS typically manages the variations with greater aplomb; it may not be quite as fast as the fleetest for the specialized workload which the fleetest is micro-optimized to handle, but it seldom deteriorates as fast when called upon to deal with other queries.
A:
The new solutions are not silver bullets.
Compared to a traditional RDBMS, these systems make improvements in some aspects (scalability, availability or simplicity) by trading off other aspects (reduced query capability, eventual consistency, horrible performance for certain operations).
Think of them not as replacements for the traditional database, but as specialized tools for a known, specific need.
Take Amazon SimpleDB for example: SDB is basically a huge spreadsheet. If that is what your data looks like, then it will probably work well, and the superb scalability and simplicity will save you a lot of time and money.
If your system requires very structured and complex queries but you insist on one of these cool new solutions, you will soon find yourself in the middle of re-implementing an amateurish, ill-designed RDBMS, with all of its inherent problems.
In this respect, if you do not know whether these will suit your needs, I think it is actually better to do your first few iterations in a traditional RDBMS, because it gives you the best flexibility and capability, especially in a single-server deployment and under modest load (see the CAP theorem).
Once you have a better idea of what your data will look like and how it will be used, you can match your needs with an alternative solution.
If you want the simplicity of a cloud-hosted solution but need a relational database, you can check out Amazon Relational Database Service.
Q:
docker repository name component must match
I am trying to build my image using this plugin: https://github.com/spotify/docker-maven-plugin#use-a-dockerfile
When I run mvn clean package docker:build
I get this error:
[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.2.3:build (default-cli) on project demo: Exception caught: Request error: POST https://192.168.99.100:2376/v1.12/build?t=DevOpsClient: 500: HTTP 500 Internal Server Error -> [Help 1]
When I check the docker daemon logs, I see this:
Handler for POST /build returned error: repository name component must match \"[a-z0-9]+(?:[._-][a-z0-9]+)*\"" statusCode=500
Here is the doc for the naming convention: https://docs.docker.com/registry/spec/api/
Apparently you cannot have any upper case letters.
I am trying to build using Spring boot my following this guide: https://spring.io/guides/gs/spring-boot-docker/
I am using a SNAPSHOT release of spring boot and I have a directory named demo-0.1.1-SNAPSHOT. I believe this may be causing the problem.
Also I am working on windows and my project directory path is like:
C:\Users\myname\UserRegistrationClient\git\..... etc
Would this also affect the repository naming convention?
And how would I change it?
A:
This regular expression: [a-z0-9]+(?:[._-][a-z0-9]+)* doesn't allow any upper case letters, so you should change your image name to devopsclient.
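With the Spotify plugin, the image name typically comes from the plugin configuration in your pom.xml, so lower-casing it there should fix the build. A minimal sketch (the exact surrounding configuration depends on your setup; dockerDirectory is shown as an assumption for a Dockerfile-based build):
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.2.3</version>
  <configuration>
    <imageName>devopsclient</imageName>
    <dockerDirectory>${project.basedir}</dockerDirectory>
  </configuration>
</plugin>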
Q:
Find multiple substrings in strings and record location
The following is the script for finding consecutive substrings in strings.
use strict;
use warnings;
my $file="Sample.txt";
open(DAT, $file) || die("Could not open file!");
#worry about these later
#my $regexp1 = "motif1";
#my $regexp2 = "motif2";
#my $regexp3 = "motif3";
#my $regexp4 = "motif4";
my $sequence;
while (my $line = <DAT>) {
    if ($line =~ /(HDWFLSFKD)/g) {
        print "its found index location: ",
            pos($line), "-", pos($line) + length($1), "\n";
        if ($line =~ /(HD)/g) {
            print "motif found and its location is: ",
                pos($line), "-", pos($line) + length($1), "\n\n";
        }
        if ($line =~ /(K)/g) {
            print "motif found and its location is: ",
                pos($line), "-", pos($line) + length($1), "\n\n";
        }
        if ($line =~ /(DD)/g) {
            print "motif found and its location is: ",
                pos($line), "-", pos($line) + length($1), "\n\n";
        }
    } else {
        $sequence .= $line;
        print "came in else\n";
    }
}
It matches substring1 with the string and prints out the position where substring1 matched. The problem lies in finding the rest of the substrings. For substring2 it starts again from the beginning of the string (instead of starting from the position where substring1 was found). The problem is that every time it calculates a position it starts from the beginning of the string instead of the position of the previously found substring. Since the substrings are consecutive (substring1, substring2, substring3, substring4), their positions have to occur after the previous one respectively.
A:
I'm not a Perl expert, but you can use $-[0] and $+[0] to track the index locations of the last regex match found.
Below is code built on top of your code that explains this.
use strict;
use warnings;

my $file = "sample.txt";
open(DAT, $file) || die("Could not open file!");
open(OUTPUTFILE, '>data.txt');

my $sequence;
my $someVar = 0;
my $sequenceNums = 1;
my $motif1 = "(HDWFLSFKD)";
my $motif2 = "(HD)";
my $motif3 = "(K)";
my $motif4 = "(DD)";

while (my $line = <DAT>)
{
    $someVar = 0;
    print "\nSequence $sequenceNums: $line\n";
    print OUTPUTFILE "\nSequence $sequenceNums: $line\n";
    if ($line =~ /$motif1/g)
    {
        &printStuff($sequenceNums, "motif1", $motif1, "$-[0]-$+[0]");
        $someVar = 1;
    }
    if ($line =~ /$motif2/g and $someVar == 1)
    {
        &printStuff($sequenceNums, "motif2", $motif2, "$-[0]-$+[0]");
        $someVar = 2;
    }
    if ($line =~ /$motif3/g and $someVar == 2)
    {
        &printStuff($sequenceNums, "motif3", $motif3, "$-[0]-$+[0]");
        $someVar = 3;
    }
    if ($line =~ /$motif4/g and $someVar == 3)
    {
        &printStuff($sequenceNums, "motif4", $motif4, "$-[0]-$+[0]");
    }
    else
    {
        $sequence .= $line;
        if ($someVar == 0)
        {
            &printWrongStuff($sequenceNums, "motif1", $motif1);
        }
        elsif ($someVar == 1)
        {
            &printWrongStuff($sequenceNums, "motif2", $motif2);
        }
        elsif ($someVar == 2)
        {
            &printWrongStuff($sequenceNums, "motif3", $motif3);
        }
        elsif ($someVar == 3)
        {
            &printWrongStuff($sequenceNums, "motif4", $motif4);
        }
    }
    $sequenceNums++;
}

sub printStuff
{
    print "Sequence: $_[0] $_[1]: $_[2] index location: $_[3]\n";
    print OUTPUTFILE "Sequence: $_[0] $_[1]: $_[2] index location: $_[3]\n";
}

sub printWrongStuff
{
    print "Sequence: $_[0] $_[1]: $_[2] was not found\n";
    print OUTPUTFILE "Sequence: $_[0] $_[1]: $_[2] was not found\n";
}

close(OUTPUTFILE);
close(DAT);
Sample input:
MLTSHQKKFHDWFLSFKDSNNYNHDSKQNHSIKDDIFNRFNHYIYNDLGIRTIA
MLTSHQKKFSNNYNSKQNHSIKDIFNRFNHYIYNDLGIRTIA
MLTSHQKKFSNNYNSKHDWFLSFKDQNHSIKDIFNRFNHYIYNDL
Q:
Application process not ending when I close the form? (C#)
I am experimenting with the TcpClient and TcpListener classes, and for some reason, when I have a couple of threads running and I close the form, the process does not end but the form disappears.
I have to manually kill the process with the VS IDE or task manager.
Nothing in the form is still running from what I can tell when I close the program, but the process does not end. I inserted breakpoints everywhere, and even the console output says the threads exited.
Anyone know what's going on here?
A:
The main thread of your application is waiting for the threads you spawned to finish. You can set the IsBackground property of your threads to true so they do not stop your process from terminating:
From MSDN:
A thread is either a background thread or a foreground thread.
Background threads are identical to foreground threads, except that
background threads do not prevent a process from terminating. Once all
foreground threads belonging to a process have terminated, the common
language runtime ends the process. Any remaining background threads
are stopped and do not complete.
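As a minimal sketch (assuming you create your listener threads by hand; ListenLoop is a hypothetical worker method):
Thread listener = new Thread(ListenLoop);
listener.IsBackground = true; // a background thread won't keep the process alive
listener.Start();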
Q:
IWindsorInstaller in an assembly and resolving local dependencies
I have a WPF MVVM application that has a model and services assembly. I'm trying to figure out how to use the Windsor container to resolve local (services in the service layer) dependencies, but the only thing I can figure out feels kludgy and incorrect.
Services installer:
public class ServicesInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        //Services
        container.Register(
            Component.For<IServiceA>().ImplementedBy<ServiceA>().LifeStyle.Singleton,
            Component.For<IServiceB>().ImplementedBy<ServiceB>().LifeStyle.Singleton
        );
    }
}
Service consumer (located in services):
public class ServiceConsumer
{
    public void SomeMethodThatUsesServiceAOnlyOccasionally()
    {
        //buncha logic.
        if (allThatFailed)
        {
            ??? ResolveServiceA ???
        }
    }
}
Because I'm not dependent on ServiceA that often, I don't want to pass it in via constructor injection or property injection. I'd add a static container instance to the Installer, but I have to believe that there's a more idiomatic solution than that.
A:
There are common DI patterns for what you want.
If your service has an optional dependency (in other words, it can do its thing in the absence of the dependency), you should use property injection. If your service needs that dependency to work properly, you should use constructor injection (because it is a required dependency). If, however, the creation of that service is time consuming, you should hide that dependency behind a proxy that can create the dependency lazily when the proxy is called for the first time.
In your case however, ServiceA is registered as a singleton, so it is only created once during the lifetime of the application. In other words, there is no reason to use a proxy and since your service can't live without it, you should just use constructor injection, because that clearly communicates that IServiceA is a required dependency.
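A minimal sketch of the constructor-injection option (the Recover method and the placeholder condition are illustrative, not from the question):
public class ServiceConsumer
{
    private readonly IServiceA serviceA;

    public ServiceConsumer(IServiceA serviceA)
    {
        this.serviceA = serviceA; // resolved once by Windsor; cheap, since IServiceA is a singleton
    }

    public void SomeMethodThatUsesServiceAOnlyOccasionally()
    {
        //buncha logic.
        bool allThatFailed = true; // placeholder for the real condition
        if (allThatFailed)
        {
            serviceA.Recover(); // hypothetical method on IServiceA
        }
    }
}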
Q:
Nesting Inputbox If Statements
This is probably a simple question, but if I need to collect data at the start of a sub, using several input boxes, which one of these is the right way?
Example 1:
InputText1 = InputBox("Enter your name")
If InputText1 = "" Then Exit Sub
InputText2 = InputBox("Enter your age")
If InputText2 = "" Then Exit Sub
'Do something
Example 2:
InputText1 = InputBox("Enter your name")
If Not InputText1 = "" Then
    InputText2 = InputBox("Enter your age")
    If Not InputText2 = "" Then
        'Do something
    End If
End If
A:
I think a better way would be to create a form asking for all of the data.
However, both your sets of code work. It depends on whether you believe that there should only be one exit point in a procedure. Your second example only has one exit. The reasoning for that is that you always know where it exits. However, the downside is that the code becomes nested and more complex visually. I prefer to exit if the condition is simple and the subroutine is ending with an error exit, i.e. not doing something. So I would prefer Example 1.
Q:
Is it possible to display one object multiple times in a VirtualStringTree?
I realize that I really need to rewrite my program's data structure (not now, but soon, as the deadline is Monday), as I am currently using a VST (VirtualStringTree) to store my data.
What I would like to achieve is a contact list structure. The root nodes are the categories, and the children are the contacts. There is a total of 2 levels.
The thing is, though, that I need a contact to display in more than 1 category, but they need to be synchronized, particularly the check state.
Currently, to maintain sync, I loop through my whole tree to find nodes that have the same ID as the one that was just changed. But doing so is very slow when there is a huge amount of nodes.
So, I thought: would it be possible to display one instance of the contact object in multiple categories?
Note: Honestly, I am not 100% familiar with the terminology - what I mean by instance is one object (or record), so I will not have to look through my entire tree to find contact objects with the same ID.
Here is an example:
As you see, Todd Hirsch appears in Test Category and in All Contacts. But behind the scenes, those are 2 PVirtualNodes, so when I change a property on one of the nodes (like CheckState), or something in the node's data record/class, the 2 nodes are not synchronized. And currently the only way I can synchronize them is to loop through my tree, find all the nodes that house the same contact, and apply the changes to them and their data.
To summarize: What I am looking for is a way to use one object/record and display it in several categories in my tree - and whenever one node gets checked, so will every other node that houses the same Contact object.
Do I make any sense here?
A:
Of course you can. You need to separate nodes and data in your mind. Nodes in TVirtualStringTree do not need to hold the data; they can simply be used to point to an instance where the data can be found. And of course you can point two nodes to the same object instance.
Say you have a list of TPerson's and you have a tree where you want to show each person in different nodes. Then you declare the record you use for your nodes simply as something like:
TNodeRecord = record
  ... // anything else you may need or want
  DataObject: TObject;
  ...
end;
In the code where the nodes are initialized, you do something like:
PNodeRecord.DataObject := PersonList[SomeIndex];
That's the gist of it. If you want a general NodeRecord, like I showed above, then you would need to cast it back to the proper class in order to use it in the various Get... methods. You can of course also make a specific record per tree, where you declare DataObject to be of the specific type of class that you display in the tree. The only drawback is that you then limit the tree to showing information for that class of objects.
I should have a more elaborate example lying around somewhere. When I find it, I'll add it to this answer.
Example
Declare a record to be used by the tree:
RTreeData = record
  CDO: TCustomDomainObject;
end;
PTreeData = ^RTreeData;
TCustomDomainObject is my base class for all domain information. It is declared as:
TCustomDomainObject = class(TObject)
private
  FList: TObjectList;
protected
  function GetDisplayString: string; virtual;
  function GetCount: Cardinal;
  function GetCDO(aIdx: Cardinal): TCustomDomainObject;
public
  constructor Create; overload;
  destructor Destroy; override;
  function Add(aCDO: TCustomDomainObject): TCustomDomainObject;
  property DisplayString: string read GetDisplayString;
  property Count: Cardinal read GetCount;
  property CDO[aIdx: Cardinal]: TCustomDomainObject read GetCDO;
end;
Please note that this class is set up to be able to hold a list of other TCustomDomainObject instances. On the form which shows your tree you add:
TForm1 = class(TForm)
  ...
private
  FIsLoading: Boolean;
  FCDO: TCustomDomainObject;
protected
  procedure ShowColumnHeaders;
  procedure ShowDomainObject(aCDO, aParent: TCustomDomainObject);
  procedure ShowDomainObjects(aCDO, aParent: TCustomDomainObject);
  procedure AddColumnHeaders(aColumns: TVirtualTreeColumns); virtual;
  function GetColumnText(aCDO: TCustomDomainObject; aColumn: TColumnIndex;
    var aCellText: string): Boolean;
protected
  property CDO: TCustomDomainObject read FCDO write FCDO;
public
  procedure Load(aCDO: TCustomDomainObject);
  ...
end;
The Load method is where it all starts:
procedure TForm1.Load(aCDO: TCustomDomainObject);
begin
  FIsLoading := True;
  VirtualStringTree1.BeginUpdate;
  try
    if Assigned(CDO) then begin
      VirtualStringTree1.Header.Columns.Clear;
      VirtualStringTree1.Clear;
    end;
    CDO := aCDO;
    if Assigned(CDO) then begin
      ShowColumnHeaders;
      ShowDomainObjects(CDO, nil);
    end;
  finally
    VirtualStringTree1.EndUpdate;
    FIsLoading := False;
  end;
end;
All it really does is clear the form and set it up for a new CustomDomainObject which in most cases would be a list containing other CustomDomainObjects.
The ShowColumnHeaders method sets up the column headers for the string tree and adjusts the header options according to the number of columns:
procedure TForm1.ShowColumnHeaders;
begin
  AddColumnHeaders(VirtualStringTree1.Header.Columns);
  if VirtualStringTree1.Header.Columns.Count > 0 then begin
    VirtualStringTree1.Header.Options := VirtualStringTree1.Header.Options
      + [hoVisible];
  end;
end;
procedure TForm1.AddColumnHeaders(aColumns: TVirtualTreeColumns);
var
  Col: TVirtualTreeColumn;
begin
  Col := aColumns.Add;
  Col.Text := 'Breed(Group)';
  Col.Width := 200;
  Col := aColumns.Add;
  Col.Text := 'Average Age';
  Col.Width := 100;
  Col.Alignment := taRightJustify;
  Col := aColumns.Add;
  Col.Text := 'CDO.Count';
  Col.Width := 100;
  Col.Alignment := taRightJustify;
end;
AddColumnHeaders was separated out to allow this form to be used as a base for other forms showing information in a tree.
The ShowDomainObjects looks like the method where the whole tree will be loaded. It isn't. We are dealing with a virtual tree after all. So all we need to do is tell the virtual tree how many nodes we have:
procedure TForm1.ShowDomainObjects(aCDO, aParent: TCustomDomainObject);
begin
  if Assigned(aCDO) then begin
    VirtualStringTree1.RootNodeCount := aCDO.Count;
  end else begin
    VirtualStringTree1.RootNodeCount := 0;
  end;
end;
We are now mostly set up and only need to implement the various VirtualStringTree events to get everything going. The first event to implement is the OnGetText event:
procedure TForm1.VirtualStringTree1GetText(Sender: TBaseVirtualTree; Node:
  PVirtualNode; Column: TColumnIndex; TextType: TVSTTextType; var CellText:
  string);
var
  NodeData: ^RTreeData;
begin
  NodeData := Sender.GetNodeData(Node);
  if GetColumnText(NodeData.CDO, Column, {var}CellText) then
  else begin
    if Assigned(NodeData.CDO) then begin
      case Column of
        -1, 0: CellText := NodeData.CDO.DisplayString;
      end;
    end;
  end;
end;
It gets the NodeData from the VirtualStringTree and uses the obtained CustomDomainObject instance to get its text. It uses the GetColumnText function for this, and that was done, again, to allow for using this form as a base for other forms showing trees. When you go that route, you would declare this method virtual and override it in any descendant forms. In this example it is simply implemented as:
function TForm1.GetColumnText(aCDO: TCustomDomainObject; aColumn: TColumnIndex;
  var aCellText: string): Boolean;
begin
  if Assigned(aCDO) then begin
    case aColumn of
      -1, 0: begin
        aCellText := aCDO.DisplayString;
      end;
      1: begin
        if aCDO.InheritsFrom(TDogBreed) then begin
          aCellText := IntToStr(TDogBreed(aCDO).AverageAge);
        end;
      end;
      2: begin
        aCellText := IntToStr(aCDO.Count);
      end;
    else
      // aCellText := '';
    end;
    Result := True;
  end else begin
    Result := False;
  end;
end;
Now that we have told the VirtualStringTree how to use the CustomDomainObject instance from its node record, we of course still need to link the instances in the main CDO to the nodes in the tree. That is done in the OnInitNode event:
procedure TForm1.VirtualStringTree1InitNode(Sender: TBaseVirtualTree;
  ParentNode, Node: PVirtualNode; var InitialStates: TVirtualNodeInitStates);
var
  ParentNodeData: ^RTreeData;
  ParentNodeCDO: TCustomDomainObject;
  NodeData: ^RTreeData;
begin
  if Assigned(ParentNode) then begin
    ParentNodeData := VirtualStringTree1.GetNodeData(ParentNode);
    ParentNodeCDO := ParentNodeData.CDO;
  end else begin
    ParentNodeCDO := CDO;
  end;
  NodeData := VirtualStringTree1.GetNodeData(Node);
  if Assigned(NodeData.CDO) then begin
    // CDO was already set, for example when added through AddDomainObject.
  end else begin
    if Assigned(ParentNodeCDO) then begin
      if ParentNodeCDO.Count > Node.Index then begin
        NodeData.CDO := ParentNodeCDO.CDO[Node.Index];
        if NodeData.CDO.Count > 0 then begin
          InitialStates := InitialStates + [ivsHasChildren];
        end;
      end;
    end;
  end;
  Sender.CheckState[Node] := csUncheckedNormal;
end;
As our CustomDomainObject can have a list of other CustomDomainObjects, we also set the InitialStates of the node to include HasChildren when the Count of the list is greater than zero. This means that we also need to implement the OnInitChildren event, which is called when the user clicks on a plus sign in the tree. Again, all we need to do there is tell the tree how many nodes it needs to prepare:
procedure TForm1.VirtualStringTree1InitChildren(Sender: TBaseVirtualTree; Node:
  PVirtualNode; var ChildCount: Cardinal);
var
  NodeData: ^RTreeData;
begin
  ChildCount := 0;
  NodeData := Sender.GetNodeData(Node);
  if Assigned(NodeData.CDO) then begin
    ChildCount := NodeData.CDO.Count;
  end;
end;
That's all folks!!!
As I have shown an example with a simple list, you still need to figure out which data instances you need to link to which nodes, but you should have a fair idea now of where you need to do that: the OnInitNode event where you set the CDO member of the node record to point to the CDO instance of your choice.
Q:
Ruby modify file instead of creating new file
Say I have the following Ruby code which, given a hash of insert positions, reads a file and creates a new file with extra text inserted at those positions:
insertpos = {14=>25,16=>25}

File.open('file.old', 'r') do |oldfile|
  File.open('file.new', 'w') do |newfile|
    oldfile.each_with_index do |line, linenum|
      inserthere = insertpos[linenum]
      if !inserthere.nil? then
        line.insert(inserthere, "foo")
      end
      newfile.write(line)
    end
  end
end
Now, instead of creating that new file, I would like to modify this original (old) file. Can someone give me a hint on how to modify the code? Thanks!
A:
At a very fundamental level, this is an extremely difficult thing to do, in any language, on any operating system. Envision a file as a contiguous series of bytes on disk (this is a very simplistic scenario, but it serves to illustrate the point). You want to insert some bytes in the middle of the file. Where do you put those bytes? There's no place to put them! You would have to basically "shift" the existing bytes after the insertion point "down" by the number of bytes you want to insert. If you're inserting multiple sections into an existing file, you would have to do this multiple times! It will be extremely slow, and you will run a high risk of corrupting your data if something goes awry.
You can, however, overwrite existing bytes, and/or append to the end of the file. Most Unix utilities give the appearance of modifying files by creating new files and swapping them with the old. Some more sophisticated schemes, such as those used by databases, allow inserts in the middle of files by 1. reserving space for such operations (when the data is first written), 2. allowing non-contiguous blocks of data within the file through indexing and other techniques, and/or 3. copy-on-write schemes where a new version of the data is written to the end of the file and the old version is invalidated by overwriting an indicator of some kind. You are most likely not wanting to go through all this trouble for your simple use case!
Anyway, you've already found the best way to do what you're trying to do. The only thing you're missing is a FileUtils.mv('file.new', 'file.old') at the very end to replace the old file with the new. Please let me know in the comments if I can help explain this any further.
(Of course, you can read the entire file into memory, make your changes, and overwrite the old file with the updated contents, but I don't believe that's what you're asking here.)
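For completeness, a sketch of that last step (note that FileUtils needs a require):
require 'fileutils'

# ... the read/write loop from the question ...
FileUtils.mv('file.new', 'file.old') # swap the new file in place of the old one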
Q:
Using SVM with different kernels as a last layer in CNN network
I'm trying to replace the last fully connected layer of a CNN network with an SVM using PyTorch in a multi-classification problem. I've done some research and it says that I should just replace the nn.CrossEntropyLoss with nn.MultiMarginLoss.
How does changing only the criterion actually correspond to the "replacing the fully connected layer with SVM" task? Another thing is that I'd like to use the SVM with a different kernel, for example a quadratic one.
A:
This question can actually be interpreted as the difference between Logistic regression and SVM in classification.
We can naively look at your whole deep learning platform as if you have a magician: the magician accepts the input data and gives you a set of engineered features, and you use those features to do the classification.
Depending on which loss you minimize, you can solve this classification issue with different sorts of functions. If you use cross-entropy, it is like you are applying a logistic regression classification. On the other hand, if you minimize the margin loss, it is actually equal to finding the support vectors, which is indeed how SVM works.
You need to read about the role of kernels in the calculation of the loss (for example, here), but the TL;DR is that for the loss computation you have a component K(xi, xj), which is actually the kernel function and indicates the similarity of xi and xj.
So you can implement a custom loss, where you have a polynomial kernel (quadratic in your case), and imitate the margin loss calculation there.
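As a rough sketch of the first part (the margin-loss idea, i.e. a linear SVM on the learned features; the network itself is a toy stand-in for your CNN):
import torch
import torch.nn as nn

# Toy CNN; the final nn.Linear produces raw class scores (no softmax).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)
criterion = nn.MultiMarginLoss()  # multi-class hinge (margin) loss

x = torch.randn(8, 3, 32, 32)   # dummy batch
y = torch.randint(0, 10, (8,))  # dummy labels
loss = criterion(model(x), y)
loss.backward()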
Q:
Get max bookings count in range
I have a ParkingLot model. Parking Lots have a number of available lots. Users can then book a parking lot for one or more days. Hence I have a Booking model.
class ParkingLot
has_many :bookings
end
class Booking
belongs_to :parking_lot
end
Simplified Usecase
ParkingLot
Given a parking lot with 5 available lots:
Bookings
Bob books a place from Monday to Sunday
Sue makes one booking each on Monday, Wednesday and Friday
Henry books only on Friday.
Since the weekend is busy, 4 other people book from Saturday to Sunday.
Edit
The bookings have a start_date & an end_date, so Bob's bookings only has one entry. Mon-Sun.
Sue on the other hand really has three bookings, all starting and ending on the same day. Mon-Mon, Wed-Wed, Fri-Fri.
This gives us following booking data:
For simplicity, instead of the user_id (1) & the date (2015-5-15), I will use the initial (B) and the week days (Mon).
––––––––––––––––––––––––––––––––––––––––––
| id | user_id | start_date| end_date| ... |
|––––––––––––––––––––––––––––––––––––––––––|
| 1 | B | Mon | Sun | ... |
|––––––––––––––––––––––––––––––––––––––––––|
| 2 | S | Mon | Mon | ... |
| 3 | S | Wed | Wed | ... |
| 4 | S | Fri | Fri | ... |
|––––––––––––––––––––––––––––––––––––––––––|
| 5 | H | Fri | Fri | ... |
|––––––––––––––––––––––––––––––––––––––––––|
| 6 | W | Sat | Sun | ... |
| 7 | X | Sat | Sun | ... |
| 8 | Y | Sat | Sun | ... |
| 9 | Z | Sat | Sun | ... |
––––––––––––––––––––––––––––––––––––––––––
This gives us the following week:
–––––––––––––––––––––––––––––––––––––––––
| Mon | Tue | Wed | Thu | Fri | Sat | Sun |
|–––––––––––––––––––––––––––––––––––––––––|
| B | B | B | B | B | B | B |
|–––––––––––––––––––––––––––––––––––––––––|
| S | - | S | - | S | - | - |
|–––––––––––––––––––––––––––––––––––––––––|
| - | - | - | - | H | - | - |
|–––––––––––––––––––––––––––––––––––––––––|
| - | - | - | - | - | W | W |
| - | - | - | - | - | X | X |
| - | - | - | - | - | Y | Y |
| - | - | - | - | - | Z | Z |
|=========================================|
| 2 | 1 | 2 | 1 | 3 | 5 | 5 | # Bookings Count
|=========================================|
| 3 | 4 | 3 | 4 | 2 | 0 | 0 | # Available lots
–––––––––––––––––––––––––––––––––––––––––
These bookings are already in the database, so when a new user wants to book from Monday to Friday, there is space to do so. But when he wants to book from Monday to Saturday, this will not be possible.
My goal is to query for the max number of bookings in a given time range, ultimately leading to the available lots:
# Mon - Thursday => max bookings: 2 => 3 available lots
# Mon - Friday => max bookings: 3 => 2 available lots
# Mon - Sunday => max bookings: 5 => 0 available lots
A simple, but wrong approach of mine was to get all bookings that fall in the given time range:
scope :in_range, ->(range) { where("end_date >= ?", range.first).where("start_date <= ?", range.last) }
But this is by no means correct. Querying from Monday to Friday returns 5 bookings: one from Bob, one from Henry and three from Sue. This would falsely suggest the parking lot is full.
How would I create such a query to get the max count of bookings in a given time range?
This can also be pure SQL, I'll be happy to translate it into AR lateron.
A:
There is a simple way using a calendar table. If you don't have one already, you should create one; it has multiple uses.
select
c.calendar_date
,count(b.start_date) -- number of occupied lots
from calendar as c
left join bookings as b -- need left join to get dates where no lot is already booked
on c.calendar_date between b.start_date and b.end_date
-- restrict to the searched range of dates
where calendar_date between date '2015-05-10' and date '2015-05-18'
group by c.calendar_date
order by c.calendar_date
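Since the question ultimately asks for the maximum number of simultaneous bookings in the range, you can wrap the per-day counts in an outer aggregate; a sketch building on the query above:
select max(occupied) as max_bookings
from (
  select c.calendar_date, count(b.start_date) as occupied
  from calendar as c
  left join bookings as b
    on c.calendar_date between b.start_date and b.end_date
  where c.calendar_date between date '2015-05-10' and date '2015-05-18'
  group by c.calendar_date
) as per_day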
Edit:
Vladimir Baranov suggested adding a link on how to create and use a calendar table. The actual implementation is always user- and DBMS-specific (e.g. MS SQL Server), so searching for "calendar table" + yourDBMS will probably reveal some source code for your system.
In fact, the easiest way to create a calendar table is to do the calculation for the range of years you need in a spreadsheet (Excel etc. have all the functions you need, like Easter calculation) and then push it to the database; it's a one-time operation :-)
Rails use case¹
First, create the CalendarDay model. I've added more columns than just the day, which may come in handy for future scenarios.
db/migrate/201505XXXXXX_create_calendar_days.rb
class CreateCalendarDays < ActiveRecord::Migration
  def change
    create_table :calendar_days, id: false do |t|
      t.date :day, null: false
      t.integer :year, null: false
      t.integer :month, null: false
      t.integer :day_of_month, null: false
      t.integer :day_of_week, null: false
      t.integer :quarter, null: false
      t.boolean :week_day, null: false
    end
    execute "ALTER TABLE calendar_days ADD PRIMARY KEY (day)"
  end
end
Then, after running rake db:migrate, add a rake task to populate your model:
lib/tasks/calendar_days.rake
namespace :calendar_days do
  task populate: :environment do
    (Date.new(2010,1,1)..Date.new(2049,12,31)).each do |d|
      CalendarDay.create(
        day: d,
        year: d.year,
        month: d.month,
        day_of_month: d.day,
        day_of_week: d.wday,
        quarter: ((d.month - 1) / 3) + 1,
        week_day: ![0,6].include?(d.wday)
      )
    end
  end
end
And run rake calendar_days:populate
Lastly, you can use ActiveRecord to perform complex queries like the one above:
CalendarDay.select("calendar_days.day, count(b.departure_time)")
.joins("LEFT JOIN bookings as b on calendar_days.day BETWEEN b.departure_time and b.arrival_time")
.where(:day => start_date..end_date)
.group(:day)
.order(:day)
# => SELECT "calendar_days"."day", count(b.departure_time)
# FROM "calendar_days"
# LEFT JOIN bookings as b on calendar_days.day BETWEEN b.departure_time and b.arrival_time
# WHERE ("calendar_days"."day" BETWEEN '2015-05-04 13:41:44.877338' AND '2015-05-11 13:42:00.076805')
# GROUP BY day
# ORDER BY "calendar_days"."day" ASC
1 - Use case added by TheChamp
Q:
Why this Firebase Listener is not being called?
Fellow programmers, I'm developing a new app with Firebase and Flutter, and I have run into an issue when retrieving some data from my Realtime Database.
My main issue is that this part of the code is not called at all: .listen((Event event) {//...}).
This is the part of the code that I'm using to retrieve the data:
static Future<StreamSubscription<Event>> getTodoStream(String todoKey,
    void onData(Todo todo)) async {
  String accountKey = await Preferences.getAccountKey();
  StreamSubscription<Event> subscription = FirebaseDatabase.instance
      .reference()
      .child("")
      .child(account)
      .child("")
      .child(Key)
      .onValue
      .listen((Event event) {
    var todo = new Todo.fromJson(event.snapshot.key, event.snapshot.value);
    onData(todo);
  });
}
I followed this tutorial:
https://www.youtube.com/watch?v=Bper2K92bd8&feature=youtu.be
And this is the code that I used as example:
https://gist.github.com/branflake2267/ea80ce71179c41fdd8bbdb796ca889f4
However, as I said, the listen is not being triggered at all. Do any of you know why it's not working? Thanks for your advice.
A:
The main issue was the configuration of the database because it wasn't set as read/write.
Q:
Extracting weight importance from One-Layer feed-forward network
My question involves a simple neural network architecture (as simple as my expertise in neural networks):
$n_I$ input nodes (binary, ordinal, real, integers);
$n_H$ hidden nodes;
$n_O$ output nodes (count data).
There are no cycles/loops in the neural network; all nodes of a layer are connected to the previous and next ones. The architecture is a classical feed-forward one.
My question is this: I'm trying to understand which inputs are more relevant to each output. Is it sufficient to start from the output node of interest, look at the largest hidden-node weights going into it, and then do the same between those hidden nodes and the inputs? (I performed a standardization of the inputs, of course.)
Furthermore, if so: does one follow the same process when considering deeper nets?
Thank you in advance
A:
No! It is insufficient to use such a greedy approach. Consider the following very simple example:
For simplicity, assume that all neurons are linear. Then it is easy to see that the following holds:
Y = 405A + 500B
Obviously, B is more important, but your algorithm selects A as the winner. Note that the above network is a really simple one, whereas in the real world we usually have very non-linear neurons. Anyway, this example shows that your greedy algorithm is too simple to determine the importance of inputs.
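To make this concrete, here is a hypothetical set of linear weights that reproduces the numbers above (the values are invented for illustration):
import numpy as np

# Two inputs (A, B), two linear hidden units (h1, h2), one linear output Y.
W1 = np.array([[81.0,   1.0],    # weights into h1 from A, B
               [ 0.0, 247.5]])   # weights into h2 from A, B
W2 = np.array([5.0, 2.0])        # weights from h1, h2 into Y

print(W2 @ W1)  # [405. 500.] -> Y = 405A + 500B, so B matters more

# Greedy trace: the largest output weight (5.0) points to h1,
# and the largest weight into h1 (81.0) points to A - the wrong answer.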
Q:
iOS - UITableView section header part of grouped table
I want to add a section header to my tableview when it is in editing mode. Basically, I would just want it to be part of the datasource so it has the same look as the rest of the table (see the image below for the wanted result). But inserting an object ("Add Contact") into the data source leads to a lot of micromanagement when switching in and out of editing mode, and it's actually not part of the data source; it is more of a header.
I tried using the following code snippet to achieve the same effect, but it didn't turn out right (it just added the Add Contact text on top of the section, not as part of it as a grouped table cell).
Anyone have any clues on what I'm missing?
- (UIView *)tableView:(UITableView *)tableView viewForHeaderInSection:(NSInteger)section {
    UIView *headerView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, tableView.bounds.size.width, 30)];
    if (section == 1) {
        UITableViewCell *addContactCell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:nil];
        addContactCell.textLabel.text = @"Add Contact";
        addContactCell.textLabel.opaque = NO;
        return addContactCell;
    } else {
        return nil;
    }
}
A:
You don't need to add the 'Add Contact' row to the datasource. You just have to lie in your tableView:numberOfRowsInSection: and tableView:cellForRowAtIndexPath: methods.
tableView:numberOfRowsInSection: needs to return the number of rows + 1.
tableView:cellForRowAtIndexPath: returns your insert cell if the row number is 0, and otherwise returns the cell for the data at indexPath.row - 1.
You'd have to have a little extra handling in didSelect etc., but there shouldn't be a lot of micromanagement involved.
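A sketch of the offset-by-one datasource (self.contacts and the cell-building helpers are hypothetical):
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return self.contacts.count + 1; // +1 for the synthetic "Add Contact" row
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    if (indexPath.row == 0) {
        return [self addContactCell]; // the pseudo-header row
    }
    return [self cellForContact:self.contacts[indexPath.row - 1]]; // shift real data by one
}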
Q:
How to remove quotes from some elements of a list in Python?
I have the following list in Python:
[43, '1', '1', '3', '11', 1.00, "['24466500', '5650000']", 100000000.00, 'AS6', "['92100000', '40000000']"]
I need to remove the double quotes from the list-type elements of the list above, i.e. so that the list ends up like this:
[43, '1', '1', '3', '11', 1.00, ['24466500', '5650000'], 100000000.00, 'AS6', ['92100000', '40000000']]
I tried replace(), but it doesn't work in this case.
Thanks for your answers.
A:
You can't remove the double quotes, because they are not actually part of the object (the string). They are only added when printing or repr-ing the str object so the user can identify it as a string. What you need is to convert that string representing a list into a list itself. This would be easier to do before building this list, since it is a mixed list and you have to filter out the objects that are strings and that are valid representations of a list.
You can use a combination of a regex and ast.literal_eval to do what you want:
import ast
import re
lista = [43, '1', '1', '3', '11', 1.00, "['24466500', '5650000']", 100000000.00, 'AS6', "['92100000', '40000000']"]
patt = re.compile(r"\[.*\]")
res = [ast.literal_eval(e) if isinstance(e, str) and patt.fullmatch(e) else e for e in lista]
print(res)
Output:
[43, '1', '1', '3', '11', 1.0, ['24466500', '5650000'], 100000000.0, 'AS6', ['92100000', '40000000']]
For each element of the list, we filter those that are strings with isinstance. Those that are pass a second filter to determine whether or not they are representations of lists, using the regular expression. The elements that satisfy both conditions are passed to ast.literal_eval, which evaluates the string and returns the appropriate Python object (a list).
Unlike eval, ast.literal_eval is safe against code injection, since the expressions it evaluates are strictly limited to strings, numbers, tuples, lists, dicts, booleans and None.
The regular expression is a bit crude; it could be refined to discard strings that start with "[" and end with "]" but are not actually valid Python lists.
Q:
Can you fix the scrollbar on the screen?
I have a div whose height is bigger than the screen, and I want to leave that as is, but this div also has a horizontal scrollbar at the bottom, and it is not visible until the user scrolls all the way down, which is a bit strange. You have to scroll down, scroll to the right and then scroll back up to see what you need. So I was wondering if there is a way to fix the bottom scrollbar to the bottom of the screen so that it's available to the user at any time.
Here are two prints to demonstrate my problem
What I have right now: http://imageshack.us/a/img132/2087/38417747.png
How I want to make it look: http://imageshack.us/a/img267/7452/90387247.png
I was going to design a scrollbar myself using jQuery, but I want to know if this is doable without that much effort.
Example: Here you go: jsfiddle.net/jy3HK/
Also, please try to answer my question without modifying it. I want my application to be as customizable as possible. There is already an option to cancel the bottom scroll but I want to add this option too. Thank you.
A:
It is very possible, but requires a tricky JS approach - there is no CSS solution to your question as far as I can reckon.
I have created a fiddle here - http://jsfiddle.net/teddyrised/szuzk/, which is more or less a proof-of-concept modification to your original fiddle, but before you go off, I implore you to understand my approach first :)
The problem you have is that the scrollbar is at the bottom of the scrolling element, regardless of the height of the viewport. In order to bring the scrollbar to the bottom of the viewport (not the scrolling element), you will have to resize the element such that the bottom of the element is always at the bottom of the viewport, and not any lower.
This is done by detecting the .scroll event, and subsequently recalculating the element height on the go. However, this also means that the element will not take up the original 1500px height intended - so we create a wrap-around element and assign that height to it.
$(document).ready(function () {
    // Create wrap around element
    $('#big').wrap('<div id="scroll-wrap" />');

    // Function: resize element when scrolling
    function recalcHeight() {
        $('#big').css({
            'height': $(window).height() + $(window).scrollTop() - $('#big').offset().top
        });
    }

    // Resize wrap around element upon resize
    $(window).resize(function () {
        $('#scroll-wrap').css({
            'height': $('#big').css('max-height')
        });
    }).resize(); // Fires resize() event onload

    // Recalculate height upon load, and upon scroll
    recalcHeight();
    $(window).scroll(function () {
        recalcHeight();
    });
});
That's all ;)
Q:
This set is a manifold
Let $S$ be the set of pairs $(x,y)$ where $x,y$ are orthogonal unit vectors in $\mathbb R^3$. I am trying to show this is a topological manifold. For starters, one needs to define a suitable topology on it. I was thinking: let a set $U$ be open in $S$ iff $U \cap S^2$ (intersection with the sphere) is open in $S^2$ in the subspace topology? Am I going in the right direction? I'd really appreciate some help. Thank you.
This is really about topological manifolds, sorry for not stating it earlier.
Here is the definition: a manifold is a second countable Hausdorff space that is locally Euclidean.
A:
Your set $S$ is a subset of $\mathbb R^6$, so give it the subspace topology. That ensures it's 2nd countable and Hausdorff.
To show it's a manifold, notice $S$ is the pairs $(x,y) \in \mathbb R^3 \times \mathbb R^3$ such that:
$$ |x|^2 =1,\ \ |y|^2=1, \ \ x\cdot y = 0 $$
This is the same as saying $S = f^{-1}(1,1,0)$ where $f(x,y) = (|x|^2, |y|^2, x \cdot y)$.
So the idea would be to show $(1,1,0)$ is a regular value of $f$, then apply the preimage theorem as Qiaochu cites. The preimage theorem is basically just the implicit function theorem from calculus, but recast in a convenient formalism for saying things are manifolds.
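For instance, one way to check the regular-value condition: the differential of $f$ at $(x,y)$ is $$Df_{(x,y)}(u,v) = (2\,x\cdot u,\ 2\,y\cdot v,\ x\cdot v + y\cdot u),$$ and at a point of $S$ (where $|x|=|y|=1$ and $x\cdot y=0$) the choices $(u,v)=(x,0)$, $(0,y)$, $(y,x)$ map to $(2,0,0)$, $(0,2,0)$, $(0,0,2)$ respectively, so $Df_{(x,y)}$ is surjective and $(1,1,0)$ is a regular value.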
Q:
How to remove specific rows when a condition is fulfilled?
I am trying to remove/filter out some specific rows when a condition on two columns is met; if it is not met, the column EP is flagged as 1. What is the specific code for this?
For example: in the dataframe df_NC, when the column "Population_type" (binary) is equal to 1 and the column NC (binary) is equal to 0, remove the rows where this condition is satisfied; else flag EP as 1.
df_ep <- df_NC %>% mutate(EP = case_when(
  df_NC$Population_Type == 1 & df_NC$NC == 0 ~ 1,
  TRUE ~ 0
))
A:
From your code I'm assuming you are using the dplyr package. A couple of mistakes there.
You don't need to use the base notation like df_NC$NC inside dplyr functions; just use the name of the variable.
I don't see a reason to create the column EP if you are filtering on one of the values (0/FALSE).
df_NC %>%
  mutate(EC = if_else(Population_Type == 1 & NC == 0, 1, 0)) %>%
  filter(EC == 1)

# Or shorter, considering my second point
df_NC %>%
  filter(Population_Type == 1, NC == 0) # Equivalent to EC == 1
Also, try to use booleans (TRUE/FALSE) instead of integers 1/0 to work with a "binary" data type.
Q:
Multiple css3 transition types not using 'all'
I'm trying to transition both the scale and the opacity using CSS3 transitions - I can't work out how to transition multiple things without using all
transition-function: all;
transition-duration: 1s;
transition-timing-function: ease-in;
works, as does:
transition: all 1s ease-in;
and
transition-function: opacity;
or
transition-function: scale;
but not
transition-function: scale, opacity;
See the example here: http://jsfiddle.net/5PCGs/7/
Any help would be really appreciated! Thanks :) !
Edit:
I have worked out it's transition-property (thanks Simone), but now it's only animating opacity in Firefox, not both - http://jsfiddle.net/5PCGs/9 - compare this in FF and Chrome side-by-side
A:
Thanks to Boris Zbarsky and Simone Vittori.
The answer was to use transition-property, and to not specify all the individual things you're transforming in there; just put transform in as one of the values, and let the differences in the transforms between the classes take care of themselves.
transition-property: transform,opacity;
transition-duration: 1s;
transition-timing-function: ease-in;
EDIT: Don't forget to add any prefixes you need to these. For Webkit browsers, for example:
-webkit-transition-property: -webkit-transform,opacity;
-webkit-transition-duration: 1s;
-webkit-transition-timing-function: ease-in;
Thanks again!
A:
Try to use transition-property instead of transition-function, which actually doesn't exist. :)
Each of the transition properties accepts a comma-separated list, allowing multiple transitions to be defined.
Q:
Akka.NET not recognising my custom logger and defaulting to BusLogger
I am learning Akka.NET. I am trying to create a custom logger. I followed a tutorial blog post here. I have been unable to get Akka to wire up my custom logger. Apparently the line var sys = ActorSystem.Create("AkkaCustomLoggingActorSystem"); reads in the Akka HOCON and configures logging as per the settings. When I check the value of sys after the actor system is created, I can see the configuration string saved, but the logger is of type BusLogger instead of my custom logger.
I checked the Akka.NET source for the ActorSystemImpl class. At line 441 the logger is set to BusLogging and I cannot see anywhere that the logger set in the config is used.
I have created an extremely simple Akka.NET project targeting .NET Core 2.0 that demonstrates the issue. The complete source is below. What am I missing here? Why can I not wire up my custom logger as the tutorial describes?
Program.cs:
using Akka.Actor;
using Akka.Event;
using System;

namespace AkkaCustomLogging
{
    class Program
    {
        static void Main(string[] args)
        {
            var sys = ActorSystem.Create("AkkaCustomLoggingActorSystem");
            var logger = Logging.GetLogger(sys, sys, null); // Always bus logger
            Console.ReadLine();
        }
    }
}
CustomLogger.cs:
using Akka.Actor;
using Akka.Event;
using System;

namespace AkkaCustomLogging
{
    public class CustomLogger : ReceiveActor
    {
        public CustomLogger()
        {
            Receive<Debug>(e => this.Log(LogLevel.DebugLevel, e.ToString()));
            Receive<Info>(e => this.Log(LogLevel.InfoLevel, e.ToString()));
            Receive<Warning>(e => this.Log(LogLevel.WarningLevel, e.ToString()));
            Receive<Error>(e => this.Log(LogLevel.ErrorLevel, e.ToString()));
            Receive<InitializeLogger>(_ => this.Init(Sender));
        }

        private void Init(IActorRef sender)
        {
            Console.WriteLine("Init");
            sender.Tell(new LoggerInitialized());
        }

        private void Log(LogLevel level, string message)
        {
            Console.WriteLine($"Log {level} {message}");
        }
    }
}
app.config:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="akka" type="Akka.Configuration.Hocon.AkkaConfigurationSection, Akka" />
  </configSections>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6.1" />
  </startup>
  <akka>
    <hocon>
      <![CDATA[
        loggers = [ "AkkaCustomLogging.CustomLogger, AkkaCustomLogging" ]
        loglevel = warning
        log-config-on-start = on
        stdout-loglevel = off
        actor {
          debug {
            receive = on
            autoreceive = on
            lifecycle = on
            event-stream = on
            unhandled = on
          }
        }
      ]]>
    </hocon>
  </akka>
</configuration>
A:
Your issue is that your HOCON is missing the akka namespace - it should read:
<akka>
  <hocon>
    <![CDATA[
      akka {
        loggers = [ "AkkaCustomLogging.CustomLogger, AkkaCustomLogging" ]
        loglevel = warning
        log-config-on-start = on
        stdout-loglevel = off
        actor {
          debug {
            receive = on
            autoreceive = on
            lifecycle = on
            event-stream = on
            unhandled = on
          }
        }
      }
    ]]>
  </hocon>
</akka>
Q:
Using PaintEventHandler - Visual C++ (Studio 2010) Windows Forms Application
So, I am creating a Windows Forms Application in Visual C++ 2010, and I want to add an event to a text box. When the program loads, a letter A is printed onto the screen. When you enter the text box, the letter is supposed to turn red.
The name of the textbox is AngleA, and this is the code I have so far:
this->AngleA->Enter += gcnew System::Windows::Forms::PaintEventHandler(this, &Form1::AngleA_Enter);
//many lines later
this->Controls->Add(this->AngleA);
//many lines later
public: System::Void Form1::AngleA_Enter(System::Object^ sender, PaintEventArgs^ e)
{
    System::Drawing::Font^ textFontA = gcnew System::Drawing::Font("Arial", 16);
    System::Drawing::SolidBrush^ textBrushA = gcnew System::Drawing::SolidBrush(Color::Red);
    e->Graphics->DrawString("A", textFontA, textBrushA, 300, 120);
}
The original drawing of the letter happens in a separate function, here:
public: virtual Void Form1::OnPaint(PaintEventArgs^ pe) override
{
    Graphics^ g = pe->Graphics;
    System::Drawing::Font^ textFont = gcnew System::Drawing::Font("Times New Roman", 16);
    SolidBrush^ textBrushA = gcnew SolidBrush(Color::Black);
    g->DrawString("A", textFont, textBrushA, 300, 120);
}
So, the drawing of the original letter works great, but every time I try to build the program with the Enter event, I get the following error:
error C2664: 'System::Windows::Forms::Control::Enter::add' : cannot convert parameter 1 from 'System::Windows::Forms::PaintEventHandler ^' to 'System::EventHandler ^'
1> No user-defined-conversion operator available, or
1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast
It seems to me that the Form1 object (the default name for the class in Windows Forms apps) will only accept an EventHandler parameter for the this->AngleA->Enter += gcnew ... line, and not a PaintEventHandler, but I don't understand why. Is there any way to create an Enter event function that will allow me to paint after the program has already loaded, based on an event?
Thanks for the help, I hope I was clear in my question :)
A:
You can only add a PaintEventHandler to the Paint event; not to the Enter event.
You probably want to add a normal EventHandler to the Enter event and call Invalidate() in the handler.
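A sketch of that change (inAngleA is a hypothetical bool field on Form1 that OnPaint would consult when choosing the brush colour):
this->AngleA->Enter += gcnew System::EventHandler(this, &Form1::AngleA_Enter);

public: System::Void Form1::AngleA_Enter(System::Object^ sender, System::EventArgs^ e)
{
    inAngleA = true;    // remember that the textbox has focus...
    this->Invalidate(); // ...and ask Windows to call OnPaint again
}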
Q:
When do the HttpApplication.BeginRequest and EndRequest events fire relative to ActionFilter.OnResultExecuted in the MVC pipeline?
The reason I ask is that I am trying to run MVC side-by-side with WebForms.
For WebForms we use an HttpModule to open and close an NHibernate session once per request, opening the session during HttpApplication.BeginRequest and closing it during HttpApplication.EndRequest. For MVC, the recommended method is to use an ActionFilterAttribute, but to keep things simple I have decided to stick with the HttpModule.
However, I am using an ActionFilter to open and close Transactions.
I am getting some strange intermittent problems though, and I'm curious whether the issue could be that the HttpApplication.EndRequest event fires before the ActionFilter's OnResultExecuted method.
A:
Here's the order of execution:
1. HttpApplication.BeginRequest
2. ActionFilterAttribute.OnResultExecuted
3. HttpApplication.EndRequest
The HttpApplication.EndRequest can never fire before ActionFilterAttribute.OnResultExecuted.
Q:
Issue with SQL Data Adapter Search function
This is my code, i'm trying to implement a search function im my application:
private void button1_Click(object sender, EventArgs e)
{
    SqlConnection conn = new SqlConnection(@"Data Source=(LocalDB)\v11.0;AttachDbFilename=c:\users\dido\documents\visual studio 2012\Projects\CourseProjectCars\CourseProjectCars\DataCars.mdf;Integrated Security=True;Connect Timeout=30");
    SqlDataAdapter SDA = new SqlDataAdapter("SELECT * FROM SuperCars where Car like " + textBox1.Text, conn);
    DataTable dt = new DataTable();
    SDA.Fill(dt);
    dataGridView1.DataSource = dt;
}
When I try to search my database, for example for "Bugatti", it says "Invalid column name 'Bugatti'." Maybe it's a simple mistake of mine, but I cannot find it.
A:
Problem: You are not providing the search parameter properly; you are ignoring the single quotes.
Solution: You need to enclose string values within single quotes.
Suggestion: Your SELECT query is open to SQL injection attacks. You need to use parameterised SQL queries to avoid this.
Parameterised queries help to pass the parameters with valid types implicitly.
For example, you don't need to enclose string types within single quotes when passing parameters using parameterised queries.
Solution 1: without using parameterised queries
SqlDataAdapter SDA = new SqlDataAdapter("SELECT * FROM SuperCars where Car like '" + textBox1.Text+"'", conn);
Solution 2: using parameterised queries
SqlCommand sqlcmd = new SqlCommand("SELECT * FROM SuperCars where Car like @Car", conn);
sqlcmd.Parameters.AddWithValue("@Car", "%" + textBox1.Text + "%");
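Putting that together with the original button handler, a sketch (connectionString stands in for the full string from the question):
private void button1_Click(object sender, EventArgs e)
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        SqlCommand sqlcmd = new SqlCommand("SELECT * FROM SuperCars where Car like @Car", conn);
        sqlcmd.Parameters.AddWithValue("@Car", "%" + textBox1.Text + "%");

        SqlDataAdapter SDA = new SqlDataAdapter(sqlcmd);
        DataTable dt = new DataTable();
        SDA.Fill(dt); // Fill opens and closes the connection itself
        dataGridView1.DataSource = dt;
    }
}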
Q:
Why does worker node not see updates to accumulator on another worker nodes?
I'm using a LongAccumulator as a shared counter in map operations. But it seems that I'm not using it correctly because the state of the counter on the worker nodes is not updated. Here's what my counter class looks like:
public class Counter implements Serializable {
    private LongAccumulator counter;

    public Long increment() {
        log.info("Incrementing counter with id: " + counter.id() + " on thread: " + Thread.currentThread().getName());
        counter.add(1);
        Long value = counter.value();
        log.info("Counter's value with id: " + counter.id() + " is: " + value + " on thread: " + Thread.currentThread().getName());
        return value;
    }

    public Counter(JavaSparkContext javaSparkContext) {
        counter = javaSparkContext.sc().longAccumulator();
    }
}
As far as I understand the documentation, this should work fine when the application is run on multiple worker nodes:
Accumulators are variables that are only “added” to through an associative and commutative operation and can therefore be efficiently supported in parallel. They can be used to implement counters (as in MapReduce) or sums. Spark natively supports accumulators of numeric types, and programmers can add support for new types.
But here is the result when the counter is incremented on 2 different workers; as it looks, the state is not shared between the nodes:
INFO Counter: Incrementing counter with id: 866 on thread: Executor task launch worker-6
INFO Counter: Counter's value with id: 866 is: 1 on thread: Executor task launch worker-6
INFO Counter: Incrementing counter with id: 866 on thread: Executor task launch worker-0
INFO Counter: Counter's value with id: 866 is: 1 on thread: Executor task launch worker-0
Do I understand the accumulator concept wrong, or is there some setting that I must start the task with?
A:
It shouldn't work:
Tasks running on a cluster can then add to it using the add method. However, they cannot read its value. Only the driver program can read the accumulator’s value, using its value method.
Each task has its own copy of the accumulator, which is updated locally and merged with the "shared" copy on the driver once the task has finished and its result has been reported.
The old Accumulator API (now wrapping AccumulatorV2) actually threw an exception when value was used from within a task, but for some reason that check has been omitted in AccumulatorV2.
What you experience is actually similar to the old behavior described here How to print accumulator variable from within task (seem to "work" without calling value method)?
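A minimal sketch of the intended pattern, assuming an already-configured JavaSparkContext named jsc: call add() inside tasks, and call value() only on the driver once the action has finished.
import java.util.Arrays;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.util.LongAccumulator;

LongAccumulator counter = jsc.sc().longAccumulator();
// add() inside the task is fine; per-task copies are merged on the driver.
jsc.parallelize(Arrays.asList(1, 2, 3, 4)).foreach(x -> counter.add(1L));
// Only here, on the driver and after the action has completed, is value() reliable.
System.out.println("Total: " + counter.value());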
Q:
How to rename a table column in Oracle 10g
I would like to know:
How to rename a table column in Oracle 10g?
A:
SQL> create table a(id number);
Table created.
SQL> alter table a rename column id to new_id;
Table altered.
SQL> desc a
Name Null? Type
----------------------------------------- -------- -----------
NEW_ID NUMBER
A:
The syntax of the query is as follows:
Alter table <table name> rename column <column name> to <new column name>;
Example:
Alter table employee rename column eName to empName;
To rename a column name without space to a column name with space:
Alter table employee rename column empName to "Emp Name";
To rename a column with space to a column name without space:
Alter table employee rename column "emp name" to empName;
A:
alter table table_name rename column oldColumn to newColumn;
Q:
How to scrape data from last table on website
I'm trying to scrape the "team per game stats" table from this website using this code:
from urllib.request import urlopen as uo
from bs4 import BeautifulSoup as BS
import pandas as pd
url = 'https://www.basketball-reference.com/leagues/NBA_2020.html'
html = uo(url)
soup = BS(html, 'html.parser')
soup.findAll('tr')
headers = [th.getText() for th in soup.findAll('tr')]
headers = headers[1:]
print(headers)
rows = soup.findAll('tr')[1:]
team_stats = [[td.getText() for td in rows[i].findAll('td')]
for i in range(len(rows))]
stats = pd.DataFrame(team_stats, columns=headers)
But it returns this error:
AssertionError: 71 columns passed, passed data had 212 columns
A:
The problem is that the data is hidden in a commented section of the HTML. The table you want to extract is rendered with Javascript in your browser. Requesting the page with requests or urllib just yields the raw HTML.
So be aware that you have to examine the source code of the page with "View page source" rather than the rendered page with "Inspect Element" if you search for the proper tags to find with BeautifulSoup.
Try this:
import requests
from bs4 import BeautifulSoup
import pandas as pd
url = 'https://www.basketball-reference.com/leagues/NBA_2020.html'
html = requests.get(url)
section_start = '<span class="section_anchor" id="team-stats-per_game_link" data-label="Team Per Game Stats">'
block_start = html.text.split(section_start)[1].split("<!--")[1]
block = block_start.split("-->")[0]
soup = BeautifulSoup(block, 'html.parser')
data = [th.get_text(",") for th in soup.findAll('tr')]
header = data[0]
header = [x.strip() for x in header.split(",") if x.strip() !=""]
data = [x.split(",") for x in data[1:]]
pd.DataFrame(data, columns=header)
Explanation: You first need to find the commented section by simply splitting the raw HTML just before the section. You extract the section as text, convert to soup and then parse.
Q:
Set thickness on QLCDNumber
I am new to Qt and I am currently playing with the Trolltech tutorial.
I managed to customize the QLCDNumber color and background using the setPalette method, but I can't seem to change the thickness. I understand that there are no letters, so the setFont method can't be used; I tried setBrush, but still no progress.
Is it possible to set the thickness somehow? If not, please help me understand why.
edit
as @webclectic points out, it is necessary to reimplement paintEvent. But it calls drawString, which calls drawDigit, which calls drawSegment, which is in the QLCDNumberPrivate class (??); see the source. I have no idea how to reimplement it: since no methods are virtual, I'd have to rewrite them all, am I right? Is there any other way except rewriting the class completely?
A:
I don't think you can achieve what you want since the width of the LCD number is hardcoded (check this thread).
One solution is to subclass QLcdNumber and reimplement the paintEvent where you will specify the desired width.
Q:
How to add a custom header to Spring Boot HttpHeaders?
Currently Spring Boot HttpHeaders header only takes <String, String> pairs. How can I add a header with <String, Long>?
I need to integrate with an external product API in my Java program. To access this API, it needs a token header, and for this product all of the APIs only take numbers (long). It is out of the ordinary and gives me a lot of headaches. I am using RestTemplate and I've searched a lot of places with no luck.
Did any of you have done anything similar to this? Need some help.
A:
You can create a generic header using a MultiValueMap that accepts String as the key and Object as the value:
MultiValueMap<String, Object> map = new LinkedMultiValueMap<>();
map.add("Header1", 11111);
And then create a generic HttpEntity of object type by passing the MultiValueMap as headers:
HttpEntity<Object> entity = new HttpEntity<>(map);
System.out.println(entity);
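One caveat worth adding (not from the original answer): the HttpEntity constructor that takes headers expects a MultiValueMap<String, String>, and HTTP header values travel as text on the wire anyway. A hedged alternative sketch, where the header name X-Token and the URL are purely illustrative:
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.web.client.RestTemplate;

long token = 11111L;
HttpHeaders headers = new HttpHeaders();        // HttpHeaders is a MultiValueMap<String, String>
headers.set("X-Token", Long.toString(token));   // the numeric token is formatted as text

HttpEntity<Void> entity = new HttpEntity<>(headers);
String body = new RestTemplate()
        .exchange("https://example.com/api", HttpMethod.GET, entity, String.class)
        .getBody();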
Q:
How can I display a value of NULL where the join did not find any existing values?
How can I display a value of NULL where the join did not find any existing values?
SELECT u.display_name Associate
, ROUND(SUM(CASE WHEN w.startdate BETWEEN NOW() AND NOW() + INTERVAL 30 DAY THEN w.timeworked END/3600)) '30 Days'
, ROUND(SUM(CASE WHEN w.startdate BETWEEN NOW() AND NOW() + INTERVAL 60 DAY THEN w.timeworked END/3600)) '60 Days'
, ROUND(SUM(CASE WHEN w.startdate BETWEEN NOW() AND NOW() + INTERVAL 90 DAY THEN w.timeworked END/3600)) '90 Days'
FROM worklog w
JOIN cwd_user u
ON u.user_name = w.author
JOIN cwd_membership m
ON m.directory_id = u.directory_id
AND m.lower_child_name = u.lower_user_name
WHERE m.membership_type = 'GROUP_USER'
AND m.lower_parent_name = 'atl_servicedesk_it_agents'
AND w.startdate BETWEEN NOW() AND DATE_ADD(NOW(), INTERVAL 90 DAY)
GROUP
BY u. display_name
ORDER
BY u.last_name;
So on my join u.user_name = w.author I want to show all values where there is a u.user_name, even if there is not a w.author. The display_name should still show up, but the values for 30, 60, and 90 days would be NULL. Ideally I want to change the NULL to be 0 instead. Users don't appear in the worklog table unless they have logged work, so right now it only shows two rows for the two people who have. I still want to show everyone that exists in m.lower_parent_name = 'atl_servicedesk_it_agents' to know that they have not logged anything.
Anyone have any ideas?
A:
You can use LEFT JOIN, but you need to be careful about all the joins and the WHERE conditions:
SELECT u.display_name as Associate,
ROUND(SUM(CASE WHEN w.startdate BETWEEN NOW() AND NOW() + INTERVAL 30 DAY THEN w.timeworked END/3600)) as `30 Days`,
ROUND(SUM(CASE WHEN w.startdate BETWEEN NOW() AND NOW() + INTERVAL 60 DAY THEN w.timeworked END/3600)) as `60 Days`,
ROUND(SUM(CASE WHEN w.startdate BETWEEN NOW() AND NOW() + INTERVAL 90 DAY THEN w.timeworked END/3600)) as `90 Days`
FROM cwd_user u JOIN -- Not sure if this should be LEFT JOIN or not
cwd_membership m
ON m.directory_id = u.directory_id AND
m.lower_child_name = u.lower_user_name AND
m.membership_type = 'GROUP_USER' AND
m.lower_parent_name = 'atl_servicedesk_it_agents' LEFT JOIN
worklog w
ON u.user_name = w.author AND
w.startdate BETWEEN NOW() AND DATE_ADD(NOW(), INTERVAL 90 DAY)
GROUP BY u. display_name
ORDER BY u.last_name;
I'm not sure if the join to m should be an inner join or left join. It depends on whether you want filtering based on that table as well.
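To show 0 instead of NULL, as the question asks, each aggregate can be wrapped in COALESCE. A sketch for one of the three columns (the other two follow the same pattern):
COALESCE(ROUND(SUM(CASE WHEN w.startdate BETWEEN NOW() AND NOW() + INTERVAL 30 DAY
                        THEN w.timeworked END / 3600)), 0) as `30 Days`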
Q:
Incorrect signature on Rest api unittest
I have made a (POST) webservice. I am trying to write a unit test, but I keep getting the error incorrect signature: void createInvoice() from the type InvoiceManager before I can save my class. What am I missing?
Webservice:
@RestResource(urlMapping='/invoices/*')
global with sharing class InvoiceManager {
@HttpPost
global static ID createInvoice (
String customerId,
String addressId,
String invoiceId,
String invoiceType,
String invoiceTypeLocalized,
String invoiceDate,
String paymentDueDate,
String invoiceNumber,
String startDate,
String endDate,
String periodDescription,
Double amount,
Double vatAmount,
Double totalAmount
) {
invoice__c thisinvoice = new invoice__c (
customerId__c = customerId,
addressId__c = addressId,
invoiceId__c = invoiceId,
invoiceType__c = invoiceType,
invoiceTypeLocalized__c = invoiceTypeLocalized,
invoiceDate__c = Date.valueOf(invoiceDate.replace('T',' ')),
paymentDueDate__c = Date.valueOf(paymentDueDate.replace('T',' ')),
invoiceNumber__c = invoiceNumber,
startDate__c = Date.valueOf(startDate.replace('T',' ')),
endDate__c = Date.valueOf(endDate.replace('T',' ')),
periodDescription__c = periodDescription,
amount__c = amount,
vatAmount__c = vatAmount,
totalAmount__c = totalAmount
);
insert thisInvoice;
return thisInvoice.Id;
}
}
Unittest:
static testMethod void testPostRestService(){
String customerId;
String addressId;
String invoiceId;
String invoiceType;
String invoiceTypeLocalized;
String invoiceDate ='2015-02-13T00:00:00' ;
String paymentDueDate ='2015-02-20T00:00:00' ;
String invoiceNumber;
String startDate ='2015-01-01T00:00:00';
String endDate ='2020-01-01T00:00:00';
String periodDescription;
Double amount;
Double vatAmount;
Double totalAmount;
invoice__c thisinvoice = new invoice__c (
customerId__c = customerId,
addressId__c = addressId,
invoiceId__c = invoiceId,
invoiceType__c = invoiceType,
invoiceTypeLocalized__c = invoiceTypeLocalized,
invoiceDate__c = Date.valueOf(invoiceDate.replace('T',' ')),
paymentDueDate__c = Date.valueOf(paymentDueDate.replace('T',' ')),
invoiceNumber__c = invoiceNumber,
startDate__c = Date.valueOf(startDate.replace('T',' ')),
endDate__c = Date.valueOf(endDate.replace('T',' ')),
periodDescription__c = periodDescription,
amount__c = amount,
vatAmount__c = vatAmount,
totalAmount__c = totalAmount
);
insert thisInvoice;
String JsonMsg=JSON.serialize(thisInvoice);
Test.startTest();
RestRequest req = new RestRequest();
RestResponse res = new RestResponse();
req.requestUri = 'https://eu10.salesforce.com/services/apexrest/invoices/'; //Request URL
req.httpMethod = 'POST';//HTTP Request Type
req.requestBody = Blob.valueof(JSONMsg);
RestContext.request = req;
RestContext.response= res;
InvoiceManager.createInvoice('1','8212BJ154','70ec3a54a43d014aa9e8','AdvancePayment','Voorschot','2015-02-13T00:00:00','2015-02-20T00:00:00','157005888','2015-03-01T00:00:00','2015-04-01T00:00:00','Maart 2015',165.29,34.71,200.00);
Test.StopTest();
}
A:
Part 1: Calling Static Methods
In unit tests, you don't test RestResource by calling it like a regular rest call, but instead call your code directly, just as if it were any other method. Your unit test would look more like this:
InvoiceManager.createInvoice('value a','value b','value c', ...);
You can read more about how to call methods in the Class Methods documentation.
Part 2: Decimal vs Double
In your method, you specified the Double data type, but in your unit test, you're using the default Decimal data type. For some reason, the compiler won't allow you to pass in Decimal values, so you need to either change the parameter type to Decimal, or specify Double numbers:
InvoiceManager.createInvoice(
'1', '8212BJ154', '70ec3a54a43d014aa9e8', 'AdvancePayment', 'Voorschot',
'2015-02-13T00:00:00','2015-02-20T00:00:00','157005888',
'2015-03-01T00:00:00','2015-04-01T00:00:00','Maart 2015',
165.29d, 34.71d, 200.00d);
The suffix d specifies that this is a Double value, not a Decimal value.
Q:
New homepage navigation breaks back button
On the new Stack Overflow start page, clicking on a question link, followed by pressing the “back” button in the browser, won’t return to the previously shown question list — instead it will show a new question list.
This is a shame, because I often click on a link and then notice that there’s another question that sounds interesting, too. So I go back and open that link as well. Except, I can no longer do this, since the question list is dynamically generated at every page load, rather than a static list (I’m assuming).
The same happens for when I (accidentally) close the question list tab and undo its closing in Chrome. I’m generally not sure why the list of shown questions is that unstable — I’m showing “recommended”, “recently active” questions on the “new” tab, the question list should be more stable than it is.
A:
I'm experiencing similar issues with the Back button, when navigating back and forth around a tagged question list.
I can consistently recreate an issue, via the following steps in Chrome:
Click on one of my favorite tags: sql-server
This navigates to: https://stackoverflow.com/questions/tagged/sql-server
I guess it takes into account my preferred settings in the new nav and redirects to: https://stackoverflow.com/questions/new/sql-server?show=all&sort=newest
This shows the following (accurate list):
If I then click on the tag again at the top of the page: sql-server, it just reloads the same page as you'd expect.
If I then click the back button, I'd expect nothing to change, but it loads an older version of the tagged question list:
Notice the first question here is the last question on the previous list and everything is old: Views, Answered, Asked Time.
I've experienced the same when going into questions and back to the list via a tag.
Q:
Retrofit return null in body() method in Android
I cannot find where the semantic error is in these lines of code:
PuntoGpsRecorridoDTO puntoGpsRecorridoDto = new PuntoGpsRecorridoDTO();
puntoGpsRecorridoDto.setDescripcion(puntoGpsRecorrido.getDescripcion());
puntoGpsRecorridoDto.setIdRecorrido(37);
puntoGpsRecorridoDto.setDemoraSeg(2);
puntoGpsRecorridoDto.setPrecisionMts(10);
puntoGpsRecorridoDto.setEstado(puntoGpsRecorrido.getEstado());
puntoGpsRecorridoDto.setFechaHora(puntoGpsRecorrido.getFechaHora());
puntoGpsRecorridoDto.setIdDispositivo("943953977-OFICINA");
puntoGpsRecorridoDto.setLatitud(puntoGpsRecorrido.getLatitud());
puntoGpsRecorridoDto.setLongitud(puntoGpsRecorrido.getLongitud());
puntoGpsRecorridoDto.setPrecisionMts(puntoGpsRecorrido.getPrecisionMts());
Gson gson = new GsonBuilder().registerTypeAdapter(Date.class, new JsonDateDeserializer()).create();
Retrofit retrofit = new Retrofit.Builder().baseUrl(Util.URL_WS).addConverterFactory(GsonConverterFactory.create(gson)).build();
LocationService locationService = retrofit.create(LocationService.class);
Call<EstadoDTO> callEstadoDto = locationService.enviarPuntoGpsRecorrido(puntoGpsRecorridoDto);
Response<EstadoDTO> exec = callEstadoDto.execute();
estadoDto = exec.body(); // <<<------ body() return NULL
LocationService Interface for retrofit client:
public interface LocationService
{
@POST("recorrido/sending")
Call<EstadoDTO> enviarPuntoGpsRecorrido(@Body PuntoGpsRecorridoDTO puntoGpsRecorridoDto);
}
Does it connect to the service? Yes.
Does it work another way? Yes, with a SoapUI test.
Server? Apache Tomcat + MySQL + Hibernate
PuntoGpsRecorridoDTO class:
public class PuntoGpsRecorridoDTO
{
private Integer idRecorrido;
private String idDispositivo;
private Double latitud;
private Double longitud;
private Boolean estado;
private String descripcion;
private Integer precisionMts;
private Integer demoraSeg;
private Date fechaHora;
public PuntoGpsRecorridoDTO()
{
}
}
PuntoGpsRecorrido class:
@DatabaseTable(tableName = "PuntoGpsRecorrido")
public class PuntoGpsRecorrido
{
@DatabaseField(generatedId = true)
private Integer idRecorrido;
@DatabaseField(foreign = true, canBeNull = false)
private Dispositivo dispositivo;
@DatabaseField
private Double latitud;
@DatabaseField
private Double longitud;
@DatabaseField
private Boolean estado;
@DatabaseField
private String descripcion;
@DatabaseField
private Integer precisionMts;
@DatabaseField
private Integer demoraSeg;
@DatabaseField
private Date fechaHora;
public PuntoGpsRecorrido()
{}
}
EstadoDTO Class:
public class EstadoDTO
{
public static final String EXITO="001";
public static final String ERROR="000";
private String code;
private String msg;
private String extra;
public EstadoDTO()
{}
}
Error:
Testing with SoapUI goes very well:
What am I doing wrong? Please let me know if you need more information.
Thanks in advance.
A:
This is the solution!
Detected error: the Date attribute of the PuntoGpsRecorridoDTO object
sent to the web service was being serialized incorrectly.
Gson gson = new GsonBuilder().registerTypeAdapter(Date.class, new JsonDateDeserializer()).create();
Solution: replace it with:
Gson gson = new GsonBuilder().registerTypeAdapter(Date.class, new JsonDateSerializer()).create();
Here are the classes:
JsonDateDeserializer class:
public class JsonDateDeserializer implements JsonDeserializer<Date>
{
@Override
public Date deserialize(JsonElement json, Type typeOfT, JsonDeserializationContext context) throws JsonParseException
{
String s = json.getAsJsonPrimitive().getAsString();
long l = Long.parseLong(s); //long l = Long.parseLong(s.substring(6, s.length() - 2));
Date d = new Date(l);
return d;
}
}
JsonDateSerializer class:
public class JsonDateSerializer implements JsonSerializer<Date>
{
@Override
public JsonElement serialize(Date src, Type typeOfSrc, JsonSerializationContext context)
{
long l = src.getTime();
JsonElement json = new JsonPrimitive(l);
return json;
}
}
Happy programming...
Q:
Simplification of the sum $\sum_{k=0}^M x^k\binom{M}{k}\binom{r}{k}?$
For any $r\in\mathbb{R}$ and $k\in\mathbb{N}$ let
$$\binom{r}{k}=\frac{r(r-1)(r-2)...(r-k+1)}{k!}$$
be a generalized binomial coefficient.
For $k, M\in\mathbb{N}$ and $r,x\in\mathbb{R}$ is there a way to calculate/simplify the expression
$$\sum_{k=0}^M x^k\binom{M}{k}\binom{r}{k}?$$
A:
$$\sum_{k=0}^M \binom{M}{k}\binom{r}{k}x^k=\, _2F_1(-M,-r;1;x),$$ where ${}_2F_1$ is the Gaussian hypergeometric function.
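As a hedged aside (an addition, not part of the original answer): at $x=1$ the Chu–Vandermonde theorem collapses the series to a single binomial coefficient,
$$\sum_{k=0}^M \binom{M}{k}\binom{r}{k} = \, _2F_1(-M,-r;1;1) = \frac{(1+r)_M}{M!} = \binom{M+r}{M}.$$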
Q:
Set the binding value directly
Is it possible to set the value behind a two-way binding directly, without knowing the bound property?
I have an attached property that is bound to a property like this:
<Element my:Utils.MyProperty="{Binding Something}" />
Now I want to change the value that is effectively stored in Something from the perspective of the attached property. So I cannot access the bound property directly, but only have references to the DependencyObject (i.e. the Element instance) and the DependencyProperty object itself.
The problem when simply setting it via DependencyObject.SetValue is that this effectively removes the binding, but I want to change the underlying bound property.
Using BindingOperations I can get both the Binding and the BindingExpression. Now is there a way to access the property behind it and change its value?
A:
Okay, I have solved this now myself using a few reflection tricks on the binding expression.
I basically look at the binding path and the responding data item and try to resolve the binding myself. This will probably fail for more complex binding paths but for a simple property name as in my example above, this should work fine.
BindingExpression bindingExpression = BindingOperations.GetBindingExpression(dependencyObj, dependencyProperty);
if (bindingExpression != null)
{
PropertyInfo property = bindingExpression.DataItem.GetType().GetProperty(bindingExpression.ParentBinding.Path.Path);
if (property != null)
property.SetValue(bindingExpression.DataItem, newValue, null);
}
Q:
Do they have a HTML/XAML like concept in the Flash/Flex world?
In Silverlight graphic artists are expected to know XAML. With webstites it is HTML/CSS. What do the graphic artists using Flash/Flex use?
A:
I don't really agree with you on the "are expected to know" part. There is really no authority to decide what's expected of graphic artists, other than being good graphic artists.
But since Flash comes with extensive graphics functionality, it is probably most important for graphic artists working in a Flash/Flex-related production process to know how to use these. Also, they should be instructed on how to prepare bitmap or video content for embedding in Flash, and to have some idea of the object model used in Flash, so they are able to create and manage library items in a way that can be effectively used and/or extended by ActionScript programmers.
Apart from that, for anyone working with more than "just" graphics, i.e. graphic artists who do active programming on UI components in Flash/Flex, it is handy to know:
MXML: The Flex xml dialect used to create views.
HTML: Flash allows for a limited number of HTML tags (a, b, p, font, span, br, img, i, ul, li, u, plus the generic "textformat") when using styled text.
CSS: Flash can use CSS to style text. As with HTML, this is not a complete implementation, but restricted to font and character formats
Also, the Flash player is often embedded in an HTML page - for this, some JavaScript and whatever other languages used on the web server will also come in handy.
It is also good to have a firm knowledge of XML and JSON when working with ActionScript, but on the graphics end of the production process, these will probably not be as important.
Q:
Add Currency Sign £, $ to certain fields ORACLE
I want to display the dollar or pound sign in my fields that hold job_salary. Does anyone know how to do this?
A:
Using $ in the format mask hardcodes a dollar sign (after all, Oracle is an American corporation). Other currencies are determined by the setting for NLS_TERRITORY. Use C to see the ISO currency abbreviation (GBP) and L for the symbol (£).
SQL> select to_char(sal, 'C999,999.00') as ISO
2 , to_char(sal, 'L999,999.00') as symbol from emp
3 /
ISO SYMBOL
------------------ ---------------------
GBP3,500.00 £3,500.00
GBP3,750.00 £3,750.00
...
Q:
Java RecursiveTask fork vs compute
I have code for a Fibonacci algorithm using RecursiveTask, which I found at
https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/RecursiveTask.html
Code 1
public class Fibonacci extends RecursiveTask<Integer> {
final int n;
public Fibonacci (int n) { this.n = n; }
public Integer compute() {
if (n <= 1)
return n;
Fibonacci f1 = new Fibonacci(n - 1);
f1.fork();
Fibonacci f2 = new Fibonacci(n - 2);
return f2.compute() + f1.join();
}
public static void main(String args[]){
ForkJoinPool fjpool = new ForkJoinPool();
RecursiveTask task = new Fibonacci(30);
long startTime1=System.currentTimeMillis();;
Integer O=(Integer) fjpool.invoke(task);
long endTime1 = System.currentTimeMillis();
long duration1 = (endTime1 - startTime1);
System.out.println(duration1);
}}
This code executed in 83 ms. I modified it:
Code 2
public class Fibonacci extends RecursiveTask<Integer> {
final int n;
public Fibonacci (int n) { this.n = n; }
public Integer compute() {
if (n <= 1)
return n;
Fibonacci f1 = new Fibonacci(n - 1);
Fibonacci f2 = new Fibonacci(n - 2);
return f2.compute() + f1.compute();
}
public static void main(String args[]){
ForkJoinPool fjpool = new ForkJoinPool();
RecursiveTask task = new Fibonacci(30);
long startTime1=System.currentTimeMillis();;
Integer O=(Integer) fjpool.invoke(task);
long endTime1 = System.currentTimeMillis();
long duration1 = (endTime1 - startTime1);
System.out.println(duration1);
}}
Now this code executes in 20 ms. Can someone explain why the second version is faster? I read the documentation, and it said that fork executes asynchronously, so why does it run slower than using compute?
A:
The first solution is likely to perform poorly because the smallest subtasks are too small to be worthwhile splitting up.
RecursiveTask task = new Fibonacci(30);
Instead, as is the case for nearly all fork/join applications, you'd pick some minimum granularity size (for example 30 here) for which you always sequentially solve rather than subdividing.
Remember, the Fork/Join framework shines when there are bigger data sets to parallelize.
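A minimal sketch of such a sequential cutoff (the threshold of 10 is an assumed value, not from the original answer); this compute() would replace the one in the question's Fibonacci class:
public Integer compute() {
    if (n <= 10)                       // small enough: solve sequentially, no task overhead
        return sequentialFib(n);
    Fibonacci f1 = new Fibonacci(n - 1);
    f1.fork();                         // only fork work that is worth the scheduling cost
    Fibonacci f2 = new Fibonacci(n - 2);
    return f2.compute() + f1.join();
}

private static int sequentialFib(int n) {
    return n <= 1 ? n : sequentialFib(n - 1) + sequentialFib(n - 2);
}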
Source
https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/RecursiveTask.html
Q:
Google Maps: Unable to add auto search options in map contains marks from database
I am trying to add store markers from a database to a Google map, plus an auto-search option to zoom to a particular location.
I am able to add the markers from the database, but I am unable to add the auto-search option to a map that contains markers from the database.
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Maplocator.aspx.cs" Inherits="Maplocator" %>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title>Show/Add multiple markers to Google Maps from database in asp.net website</title>
<style type="text/css">
html {
height: 100%
}
body {
height: 100%;
margin: 0;
padding: 0
}
#map_canvas {
height: 100%
}
</style>
<script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?v=3.exp&libraries=places"></script>
<script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?key=MyKey&sensor=false">
</script>
<script type="text/javascript">
function initialize() {
var markers = JSON.parse('<%=ConvertDataTabletoString() %>');
var mapOptions = {
center: new google.maps.LatLng(markers[0].lat, markers[0].lng),
zoom: 5,
mapTypeId: google.maps.MapTypeId.ROADMAP
};
var infoWindow = new google.maps.InfoWindow();
var map = new google.maps.Map(document.getElementById("map_canvas"), mapOptions);
var input = (document.getElementById('txtsearch'));
map.controls[google.maps.ControlPosition.TOP_LEFT].push(input);
var searchBox = new google.maps.places.SearchBox((input));
for (i = 0; i < markers.length; i++) {
var data = markers[i]
var myLatlng = new google.maps.LatLng(data.lat, data.lng);
var marker = new google.maps.Marker({
position: myLatlng,
map: map,
title: data.title
});
(function(marker, data) {
// Attaching a click event to the current marker
google.maps.event.addListener(marker, "click", function(e) {
infoWindow.setContent(data.Descriptions);
infoWindow.open(map, marker);
});
})(marker, data);
google.maps.event.addListener(map, 'bounds_changed', function() {
var bounds = map.getBounds();
searchBox.setBounds(bounds);
});
}
}
</script>
</head>
<body onload="initialize()">
<form id="form1" runat="server">
<div style="width: 500px; height: 200px"> </div>
<input id="txtsearch" class="apply" type="text" placeholder="Enter Search Place e.g C# Corner Noida" />
<div id="map_canvas" style="width: 600px; height: 500px"></div>
</form>
</body>
</html>
Code behind:
public string ConvertDataTabletoString()
{
DataTable dt = new DataTable();
using (SqlConnection con = new SqlConnection(constr))
{
using (SqlCommand cmd = new SqlCommand("select title=City,lat=latitude,lng=longitude,Descriptions from LocationDetails", con))
{
con.Open();
SqlDataAdapter da = new SqlDataAdapter(cmd);
da.Fill(dt);
System.Web.Script.Serialization.JavaScriptSerializer serializer = new System.Web.Script.Serialization.JavaScriptSerializer();
List<Dictionary<string, object>> rows = new List<Dictionary<string, object>>();
Dictionary<string, object> row;
foreach (DataRow dr in dt.Rows)
{
row = new Dictionary<string, object>();
foreach (DataColumn col in dt.Columns)
{
row.Add(col.ColumnName, dr[col]);
}
rows.Add(row);
}
return serializer.Serialize(rows);
}
}
}
A:
As mentioned in Place Autocomplete Results
The Place Autocomplete response does not include the scope or alt_ids fields that you may see in search results or place details. This is because Autocomplete returns only Google-scoped place IDs. It does not return app-scoped place IDs that have not yet been accepted into the Google Places database.
Additionally, when you add a place,
the new place also enters a moderation queue to be considered for Google Maps.
That means you only get to see those added markers or places in the auto-search option if they have passed the moderation process and been accepted into the Google Places database.
Important Note:
To make it more likely that the place will pass the moderation process and be added to the Google Maps database, the add request should include as much information as possible. In particular, the following fields are most likely to improve the chances of passing the moderation process: phone number, address and website.
Q:
How to find absolute path within a project
I have a mavenproject (java).
Now there is a file in that project. A file, NOT an object of the class File. (A txt file, actually.) In the end, this file will be bundled within the jar. I want to use it during the execution of the project, and I would need the absolute path to that file.
Of course that differs depending on the computer I am using.
If we were talking about a File object, I would simply find the absolute path by using myfile.getAbsolutePath().
Is there also a way to find the path in that case?
A:
this file will be bundled within the jar.
That file is not a standalone file on the filesystem anymore. It has no unique filesystem path that points to it exclusively.
It's part of a packaged file (similar to a zip) after packaging. That is the main reason methods like Class.getResourceAsStream() and ClassLoader.getResourceAsStream() are provided.
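A minimal sketch of reading such a bundled file through the classpath instead of an absolute path (the class name MyClass and the resource path /myjavafiles/data.txt are illustrative assumptions):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Reads a text file that is packaged inside the jar, with no filesystem path involved.
try (InputStream in = MyClass.class.getResourceAsStream("/myjavafiles/data.txt");
     BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
    String line;
    while ((line = reader.readLine()) != null) {
        System.out.println(line);
    }
} catch (IOException e) {
    e.printStackTrace();
}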
Q:
Use post_type in admin_menu action callback with wordpress
How can I access post_type in the callback function for admin_menu action?
add_action( 'admin_menu' , 'remove_post_thumb_meta_box' );
function remove_post_thumb_meta_box()
{
global $pagenow, $_wp_theme_features;
if ( in_array( $pagenow,array('post.php','post-new.php')))
{
unset( $_wp_theme_features['post-thumbnails']);
}
}
I need to hide featured image metabox only for post_type location.
A:
You can do something like this.
Add the following line to your theme's functions.php file:
remove_post_type_support( 'location', 'thumbnail' );
This will only affect your location post type.
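If calling it at the top level runs too early (the post type must already be registered when the line executes), hooking it to init is a common pattern. A sketch, with 'location' being the post type from the question:
// Remove featured image support for the 'location' post type,
// after the post type itself has been registered.
add_action( 'init', function () {
    remove_post_type_support( 'location', 'thumbnail' );
}, 11 );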
Hope it will help you.
Q:
Theme.AppCompat.Dialog theme creates empty (dead) space
In my application, I am using an Activity with the theme "Theme.AppCompat.Dialog" to display it as a dialog. That works out well, however, the dialog fills the entire screen height, leaving a lot of space empty. To illustrate my issue, here is a picture of opening the dialog (on an unusually high resolution to demonstrate the issue better):
The higher the resolution, the greater this space.
Here is a code snippet:
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical"
xmlns:tools="http://schemas.android.com/tools">
<!--This is the yellow box-->
<LinearLayout
android:id="@+id/dialog_button_bar"
android:layout_alignParentBottom="true"
style="?android:buttonBarStyle"
android:layout_width="match_parent"
android:layout_height="wrap_content">
[Buttons...]
</LinearLayout>
<!--This is the red box-->
<ScrollView
android:layout_above="@id/dialog_button_bar"
android:layout_width="match_parent"
android:layout_height="wrap_content">
[LinearLayout containing rows...]
</ScrollView>
If I remove the android:layout_alignParentBottom="true" and the android:layout_above="@id/dialog_button_bar" attributes, the whole layout jumps to the top and now the empty space is below my layout.
What am I doing wrong? :(
A:
It seems like this is some kind of intended behavior. The standard Android app installation dialog seems to behave the same way (leaving a lot of blank space between the permission part and the buttons) so I guess I'll keep it this way...
Q:
Problems when creating containers in CodenameOne
For my main App I would need a custom container for navigation purpose, see here (Stackoverflow).
As the solution posted did not work for me (I tried with a simple System.out.println), I began a new project to understand how container embedding works, but it does not work the way I expected.
So the new app is the Hi World application with the orange color.
I created a new blank container in the GUI designer and added 3 buttons.
I added an actionListener to them within the container
I added the container to the form
My StateMachine looked like this:
@Override
protected void onButtonCont_ButtonAction(Component c, ActionEvent event) {
System.out.println("button1 clicked");
}
@Override
protected void onButtonCont_Button1Action(Component c, ActionEvent event) {
System.out.println("button2 clicked");
}
@Override
protected void onButtonCont_Button2Action(Component c, ActionEvent event) {
System.out.println("button3 clicked");
}
(The container I created was named ButtonCont.) Nothing else was modified in StateMachine or elsewhere!
Now I started the app and clicked the buttons - but nothing happened.
So I
Opened the GUI Builder
Selected MainForm
selected the three Buttons one after another
added ActionListeners to each one
Now my StateMachine looks like this:
@Override
protected void onMain_Button2Action(Component c, ActionEvent event) {
System.out.println("button3 -now- clicked");
}
@Override
protected void onMain_Button1Action(Component c, ActionEvent event) {
System.out.println("button2 -now- clicked");
}
@Override
protected void onMain_ButtonAction(Component c, ActionEvent event) {
System.out.println("button1 -now- clicked");
}
(in addition to the previous onButtonCont- methods)
Starting the app and clicking the buttons results in this output:
button1 -now- clicked
button3 -now- clicked
button2 -now- clicked
What am I doing wrong?
A:
I found the answer myself.
Instead of adding the container to the Form under the section "user defined", you simply need to add an embedded container and point "embedded | [null]" to your own container.
Q:
C# uwp record two webcams at 60 frames per second
What I need to do is record 2 USB webcams at 60 fps in 1280x720 format in UWP C#.
The cameras are currently previewed in a CaptureElement.
This is how the preview starts:
public async Task StartPreviewSideAsync(DeviceInformation deviceInformation)
{
if (deviceInformation != null)
{
var settings = new MediaCaptureInitializationSettings {VideoDeviceId = deviceInformation.Id};
try
{
_mediaCaptureSide = new MediaCapture();
var profiles = MediaCapture.FindAllVideoProfiles(deviceInformation.Id);
Debug.WriteLine(MediaCapture.IsVideoProfileSupported(deviceInformation.Id) + " count: " + profiles.Count);
var match = (from profile in profiles
from desc in profile.SupportedRecordMediaDescription
where desc.Width == 1280 && desc.Height == 720 && Math.Abs(Math.Round(desc.FrameRate) - 60) < 1
select new {profile, desc}).FirstOrDefault();
if (match != null)
{
settings.VideoProfile = match.profile;
settings.RecordMediaDescription = match.desc;
}
await _mediaCaptureSide.InitializeAsync(settings);
SideCam.Source = _mediaCaptureSide;
await _mediaCaptureSide.StartPreviewAsync();
_displayRequestSide = new DisplayRequest();
_displayRequestSide.RequestActive();
DisplayInformation.AutoRotationPreferences = DisplayOrientations.Landscape;
CameraManager.GetCameraManager.CurrentSideCamera = deviceInformation;
IsPreviewingSide = true;
}
catch (UnauthorizedAccessException)
{
Debug.WriteLine("The app was denied access to the camera");
IsPreviewingSide = false;
}
catch (Exception ex)
{
Debug.WriteLine("MediaCapture initialization failed. {0}", ex.Message);
IsPreviewingSide = false;
}
}
}
And this is the method that starts the recording:
public IAsyncOperation<LowLagMediaRecording> RecBackCam(StorageFile fileBack)
{
var mp4File = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.HD720p);
if (mp4File.Video != null)
mp4File.Video.Bitrate = 3000000;
return _mediaCaptureBack.PrepareLowLagRecordToStorageFileAsync(mp4File, fileBack);
}
But it does not record at 60 fps, because no matching profile is found (in the preview method).
And when I use this (in the recording method):
mp4File.Video.FrameRate.Numerator = 3600;
mp4File.Video.FrameRate.Denominator = 60;
it records 60 frames per second, but frames 1 and 2 are the same, and 3 and 4, and so on. I need 60 actually distinct frames per second.
All the basics of the code come from the MSDN website:
link to code on msdn.
A:
To set the frame rate for MediaCapture, we can use the VideoDeviceController.SetMediaStreamPropertiesAsync method, like the following:
await _mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.VideoRecord, encodingProperties);
We can get the encodingProperties from the VideoDeviceController.GetAvailableMediaStreamProperties method, which returns a list of the supported encoding properties for the video device. If your camera supports 60 fps at 1280x720 resolution, then you should be able to find the corresponding VideoEncodingProperties in this list.
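A sketch of picking a matching 1280x720 at 60 fps entry from that list, to be run inside an async method (variable names follow the question's code; the matching tolerance is an assumption):
using System;
using System.Linq;
using Windows.Media.Capture;
using Windows.Media.MediaProperties;

var props = _mediaCaptureSide.VideoDeviceController
    .GetAvailableMediaStreamProperties(MediaStreamType.VideoRecord)
    .OfType<VideoEncodingProperties>()
    .FirstOrDefault(p => p.Width == 1280 && p.Height == 720
        && p.FrameRate.Denominator != 0
        && Math.Abs((double)p.FrameRate.Numerator / p.FrameRate.Denominator - 60) < 1);

if (props != null)
{
    // Ask the driver for this exact record format before starting the recording.
    await _mediaCaptureSide.VideoDeviceController.SetMediaStreamPropertiesAsync(
        MediaStreamType.VideoRecord, props);
}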
For more info, please see the article Set format, resolution, and frame rate for MediaCapture and also Camera resolution sample on GitHub.
Q:
Combine multiple row data using sum() over(Partition by) in mysql
SELECT orders.Stock ,lflayouts.sides, count(*) as Quantity FROM dash_relationship
JOIN orders ON orders.UID = dash_relationship.form_id
JOIN lfitems ON lfitems.uid = orders.UID
Join lflayouts ON lflayouts.id = lfitems.layout_id
WHERE dash_relationship.machine_id='108'
GROUP BY orders.stock,lflayouts.sides;
The above query outputs as follows
STOCK SIDES QUANTITY
paper1 1 214
paper1 2 210
paper2 1 7
paper3 1 2
Now my question is: what if I want to get the total for each individual stock across the different sides? I tried the query below, and it threw an error saying check the manual that corresponds to your MySQL server version for the right syntax to use near '(partition' at line 1.
SELECT orders.Stock ,lflayouts.sides, count(*) as Quantity, SUM(lflayouts.sides) OVER(partition by orders.stock) as Total FROM dash_relationship
JOIN orders ON orders.UID = dash_relationship.form_id
JOIN lfitems ON lfitems.uid = orders.UID
Join lflayouts ON lflayouts.id = lfitems.layout_id
WHERE dash_relationship.machine_id='108'
GROUP BY orders.stock,lflayouts.sides;
EXPECTED OUTPUT
STOCK SIDES QUANTITY TOTAL
paper1 1 214 414 or 214
paper1 2 210 414
paper2 1 7 7
paper3 1 2 2
A:
MySQL 5.5.62 does not support window functions.
You can use a standard join to achieve it, but the query will look more complex.
SELECT T1.Stock
,T1.sides
,Sum(T2.Quantity) as RunningTotal
FROM (SELECT orders.Stock ,lflayouts.sides, count(*) as Quantity
FROM
dash_relationship
JOIN orders ON orders.UID = dash_relationship.form_id
JOIN lfitems ON lfitems.uid = orders.UID
JOIN lflayouts ON lflayouts.id = lfitems.layout_id
WHERE dash_relationship.machine_id='108'
GROUP BY orders.stock,lflayouts.sides
) T1
INNER JOIN
(
SELECT orders.Stock ,lflayouts.sides, count(*) as Quantity
FROM
dash_relationship
JOIN orders ON orders.UID = dash_relationship.form_id
JOIN lfitems ON lfitems.uid = orders.UID
JOIN lflayouts ON lflayouts.id = lfitems.layout_id
WHERE dash_relationship.machine_id='108'
GROUP BY orders.stock,lflayouts.sides
) T2
ON T1.sides >= T2.sides
AND T1.Stock = T2.Stock
GROUP BY T1.Stock
,T1.sides
Order BY T1.Stock
,T1.sides
Q:
Were the aliens from Pitch Black a bio-weapon?
A species reacting so badly to sunlight seems unlikely to have evolved on a planet with continual sunlight. Furthermore, the planet seems to have had large animals at one point in the past, but has been stripped bare, presumably by the monsters.
Sounds like a bio-weapon to me. Perhaps the planet was involved in a war? I know that there is some extended stuff with Pitch Black...was there ever any discussion of this?
A:
It is doubtful. A planet that receives perpetual sunlight for twenty two years followed by a month of darkness implies a solar system with a complex configuration (similar to Asimov's Nightfall). Such systems are not as stable as the sort of solar system that we live in. Some time in the past, after the creatures evolved, the system must have gone through some form of orbital perturbation resulting in the current seasonal cycles.
During the first month of darkness, the creatures very likely devoured everything that the lack of sunlight didn't already kill. After this, the creatures adapted from being daily nocturnal predators to being predators that hibernate between periods of darkness and then wake up extremely cranky.
It is very doubtful that someone would engineer a bio-weapon with two glaring weaknesses -- inability to tolerate even a flashlight and a blind spot smack at the center of its field of vision.
A:
In the canon notes for the movie, Pitch Black, the alien creatures were just there. No explanation is given. The movie is a film about survival and what a person might do to survive. With that out of the way, let's speculate on whether the creatures could actually BE bio-engineered weapons of mass destruction.
What makes a good bio-weapon, anyway?
What makes a good bio-weapon, especially if your targets are human, space-faring, and perhaps not always as well armed as modern humans are today? You want your bio-weapon to be:
Fear-causing, first-striking, and mysterious appearing to humans
Strong, fast and aggressive, difficult to drive away. Strong hunting instincts so that the creature seeks the target as fast as possible.
Durable or plentiful. In an ideal world, both.
Well adapted to its environment, giving it an advantage especially against humans. The ability to work in low-light conditions gives the weapons an advantage against human populations.
High level of maneuverability, including flight
Just enough intelligence to make them tenacious seeking prey
Able to be transported, perhaps it hibernates or has a dormant state which can be triggered artificially.
The care and feeding of your bio-weapon
Yet you would want an easy means of controlling or destroying your bio-weapon once it's done its job. Or worse, you just want to make a planet a wasteland, scorching it and making a government gather resources to reclaim its planet. They also make fine shock troops for night attacks and night operations.
Light vulnerability makes sense if you wanted to make it possible to reclaim areas where the creature has spread.
You could reclaim territory after the targets or other life-forms are destroyed, using trained and prepared reclamation teams who know all the weaknesses of the creatures.
Who among the normal population would think a flashlight might be able to temporarily deter the creatures? Bomb their local powerplant, turn off the lights, unleash your beasties after sunset, and let them go to work!
Who would you use your bio-weapon on?
With these parameters, I could see an excellent reason for considering them a bio-weapon. But what would you target with such a weapon? A trick question:
unarmed or lightly-armed populations with longer darkness periods
populations on planetary moons with unusual or erratic rotations or periods of darkness
underground populations (negates movement advantages) but allows sonar to work just fine.
But where would you test such a weapon?
A rogue moon, off the beaten path, with no people, but a thriving ecosystem. (Sound familiar, yet?) Unleash the creatures, study the results, leave the creatures to starve once the tests are done. You don't tell anyone where you do this work and if a survey team happens to blunder onto your test area and doesn't report this, or if the creatures don't die on schedule, there might be unfortunate, unreported accidents...
A:
I don't know about biological weapons, but from an evolutionary point of view they seem pretty unlikely. No species could survive by eating every living thing in an ecosystem (in this case, the entire planet) without leaving some species of prey behind to reproduce to provide more prey. As for the roots of plants: a) there do not appear to be any plants on the planet, and b) the aliens' teeth and jaws are obviously carnivorous. Cannibalism, as HNL says, cannot work indefinitely as the 'trade-off' between the energy used to reproduce and the energy gained from eating offspring does not add up.
Good film though ;)
Q:
Type Mismatch Unit and String in Scala
I am trying to take, from a list of tuples (e.g. List[(String, String)]), the words whose difference in number of syllables is smaller than 2.
If that is OK, I return them. However, I have an issue: I get Unit found and String expected.
def ecrire():String = {
// Choose deux output: List[(Word, Word)]
// I take every tuple of the list and proceed as the element "b"
for (b <- choose_deux()){
val x = b._1
val y = b._2
val diff = Math.abs(x.syllabes - y.syllabes)
// Check if difference between syllables is smaller than 2
if(diff <= 2)
return x.toString() + "\n" + y.toString()
}
}
Now I know that probably I have to do a yield at the bottom, but yield what exactly? The idea is that if the condition shown in the "if" is respected, I write the string made of these two elements.
The error is shown at the for loop: type mismatch; found: Unit; required: String
Could you please help me a bit? I am still new and learning!
A:
The type mismatch error occurs because your for loop doesn't produce a value: a plain for loop (without yield) is a statement, so its result type is Unit, i.e. (), while you have declared the method's return type as String.
Defining the functions in the following way should solve your issue:
def diff(x: (Word, Word)) = Math.abs(x._1.syllabes - x._2.syllabes)
for (b <- choose_deux() if(diff(b) <= 2)) yield b._1.toString() + "\n" + b._2.toString()
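Putting it together so the method still returns a single String, a sketch (assuming choose_deux() returns List[(Word, Word)] as in the question's comment):
def ecrire(): String = {
  val pairs = for {
    b <- choose_deux()
    if Math.abs(b._1.syllabes - b._2.syllabes) <= 2
  } yield b._1.toString + "\n" + b._2.toString
  pairs.mkString("\n")   // join all matching pairs into one String
}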
Q:
Command + C does not always work on first try on MacOS
Command + C does not always work on first try on any of my four Macs. I often highlight text, press Command + C, head to another window, press Command + V only to find that the original text did not copy.
I recently installed a fresh copy of macOS High Sierra and I continue to run into this issue, particularly in RStudio.* The issue never occurs on my PCs.
Do others have this issue? How can I resolve this?
*I use bettersnaptool and enable three finger dragging, but the issue still appears when I disable them.
A:
I think it's possible for Apple's Continuity Universal Clipboard feature to cause something like this, depending on the quality of the signal connection with any nearby iOS devices.
Perhaps try turning off "Allow Handoff between this Mac and your iCloud devices" in System Preferences > General and see if that helps.
Q:
Groovy script cannot import java code
The problem I have is larger, but I will simplify the concept that is failing.
I am working on Ubuntu.
Here is my directory structure:
~/mydirectory
--/groovy
--/myjavafiles
I have a script, script.groovy, that lives inside ~/mydirectory/groovy and a java file called Hello.java that lives inside ~/mydirectory/myjavafiles. script.groovy has the following inside:
#!/usr/bin/env groovy
package groovy;
import myjavafiles.Hello;
println("hello");
Hello.java has this:
package myjavafiles;
public class Hello {
public Hello() {
System.out.println("hello");
}
}
I have tried running:
$./script.groovy
as well as
$groovy script.groovy
But I only get an error, "unable to find class".
Here are the steps I have taken to fix this error:
i. set CLASSPATH = ~/mydirectory, that didn't work.
ii. used
$jar cf myjavafiles.jar myjavafiles
and placed myjavafiles.jar in ~/.groovy/lib, that didn't work.
iii. As mentioned here, I tried to modify script.groovy as follows:
#!/bin/bash
//usr/bin/env groovy
package groovy;
import myjavafiles.Hello;
println("hello");
That also did not work.
Other, maybe relevant
If it's any help, I'm using Ubuntu, Java 7 and Groovy 2.1.5
GROOVY_HOME=/opt/groovy/groovy-2.1.5/
and $GROOVY_HOME/bin is in my PATH
I would greatly appreciate any help.
A:
Change script.groovy to:
package groovy
import myjavafiles.Hello
println "hello"
Compile the java code with:
javac myjavafiles/Hello.java
Then run
groovy groovy/script.groovy
Q:
Methods a subclass of HttpServlet override
I have read in HttpServlet documentation that
A subclass of HttpServlet must override at least one method, usually one of these:
doGet, if the servlet supports HTTP GET requests
doPost, for HTTP POST requests
doPut, for HTTP PUT requests
doDelete, for HTTP DELETE requests
init and destroy, to manage resources that are held for the life of the servlet
getServletInfo, which the servlet uses to provide information about itself
What rule helps the compiler check that at least one of the above methods is overridden in a subclass?
A:
Nothing. You (actually, the end user) will just face an HTTP 405 Method Not Allowed error when requesting the servlet using a non-overridden HTTP method, because that's what the default implementation does.
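For illustration, a minimal subclass that overrides doGet, so that GET requests stop answering 405 (the class name and message are illustrative):
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("hello");   // GET now returns 200 instead of 405
    }
}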
Q:
implement an algorithm from its pseudocode in c++
I'm trying to implement a recursive algorithm for the change problem from the following pseudocode,
yet I couldn't figure out how to implement it correctly. My goal is to learn how to read pseudocode more than how to solve the change problem.
Here is my code in C++:
int recChange(int money){
int coins [6] = { 50, 25, 20, 10, 5, 1 };
if (money == 0) return 0;
int minNumberCoins;
for (int i=0; i < 6; ++i){
if (money >= coins[i]) {
int numberCoins = recChange(money - coins[i]);
if (numberCoins + 1 < minNumberCoins ){
minNumberCoins = numberCoins + 1;
}
}
}
return minNumberCoins;
}
A:
The pseudocode:
MinNumCoins ← ∞
This states "Initialize MinNumCoins to an infinity".
Your equivalent code:
int minNumberCoins;
This declares minNumberCoins, but leaves it uninitialized. This is not only not what the pseudocode states, but since the code subsequently uses the uninitialized variable, this results in undefined behavior.
There are two basic approaches to implement this pseudocode:
1) Use a second variable that indicates whether the value has been set or not. The intent of initializing a variable to infinity is to find the smallest value that gets computed for it. On each attempt, a potential candidate value for MinNumCoins gets computed, and if it's less than the current value of MinNumCoins, the new value replaces it. MinNumCoins ends up having the smallest value that gets computed for it.
By initializing the variable with a positive infinity, this has the effect of taking the first computed value for MinNumCoins, and setting it (since the first computed value will always be less than infinity).
The replacement logic uses a second variable, a flag, to indicate whether the value has been set. If it is not, the value gets set no matter what it is; but if the variable has been set, the code compares it to the new computed value, as usual, and updates it if the computed value is less than the existing value.
2) The second approach is to initialize the variable to the highest possible value. There is no "infinity" value that an int can be set to. The closest candidate is the maximum value an int can possibly hold. This would be:
#include <limits>
int MinNumCoins = std::numeric_limits<int>::max();
Whether this somewhat hacky approach is an acceptable solution for your problem is something for you to decide.
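For reference, a minimal sketch of the second approach applied to the question's function (the coin set is taken from the question; std::min replaces the hand-written comparison):
#include <algorithm>
#include <limits>

int recChange(int money) {
    static const int coins[6] = {50, 25, 20, 10, 5, 1};
    if (money == 0) return 0;
    // std::numeric_limits<int>::max() plays the role of infinity here.
    int minNumberCoins = std::numeric_limits<int>::max();
    for (int c : coins) {
        if (money >= c) {
            int numberCoins = recChange(money - c);
            minNumberCoins = std::min(minNumberCoins, numberCoins + 1);
        }
    }
    return minNumberCoins;
}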
Q:
Why is formic acid a stronger acid than acetic acid?
I am told that because the methyl group is electron donating in the conjugate base of acetic acid, this destabilizes the conjugate base by exacerbating the existing negative formal charge on the deprotonated oxygen, while in formic acid the electron donating methyl is absent in lieu of a hydrogen, which is neither withdrawing nor donating.
Is this reasoning correct, and are there any other reasons why formic acid might be stronger than acetic acid?
A:
We are discussing the following equilibrium
We can make the acid a stronger acid by pushing the equilibrium to the right. To push the equilibrium to the right we can
destabilize the starting acid pictured on the left side of the equation, and/or
stabilize the carboxylate anion pictured on the right side of the equation.
Comparing acetic acid ($\ce{R~ =~ CH3}$) to formic acid ($\ce{R~ =~ H}$), the methyl group is electron releasing compared to hydrogen. Therefore the methyl group will stabilize the dipolar resonance form of the starting acid where there is a partial positive charge on the carbonyl carbon. This should stabilize the starting acid. Further, this electron releasing ability of the methyl group will tend to destabilize the resultant carboxylate anion which already has a full unit of negative charge.
Therefore, because the methyl group 1) stabilizes the starting acid and 2) destabilizes the carboxylate anion product, the methyl group will push the equilibrium to the left, compared to the case where the methyl group is replaced by a hydrogen. Consequently, acetic acid is a weaker acid than formic acid.
A:
The strength of a carboxylic acid can be discussed in terms of the positive inductive effect. In ethanoic acid there is an electron-releasing inductive effect from the alkyl group, and the larger the positive inductive effect, the weaker the acid. In formic acid there is no alkyl group, so no such inductive effect takes place. Hence formic acid is a stronger acid than ethanoic acid.
A:
Yes, the reasoning is correct. The more the charge on a molecule is increased, the more it gets destabilised.
Q:
How to remove ImageButton's padding programmatically?
image = BitmapFactory.decodeResource(res, R.drawable.image);
button = new ImageButton(this);
button.setImageBitmap(image);
I want to remove padding between image and border of button.
How can I do that?
A:
You can use setPadding() to try to remove the space between the image and the border.
button.setPadding(0, 0, 0, 0);
Otherwise I suggest using a regular ImageView with an OnClickListener.
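If zero padding alone doesn't close the gap, the remaining inset usually comes from the default button background drawable; removing or replacing it is a common follow-up (a sketch, not part of the original answer):
button.setPadding(0, 0, 0, 0);
button.setScaleType(ImageView.ScaleType.FIT_XY); // stretch the image to the button bounds
button.setBackground(null);                      // drop the default background and its built-in insets (API 16+)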
Q:
Check if modulo of a number exists in a tuple
I'm trying to check the modulo of a number against a tuple of numbers; if the modulo is equal to one of the values in the tuple I want to return True, else return False.
This is what I had tried so far:
def check(y):
k = (2, 5, 8, 10, 13, 16, 19, 21, 24, 27, 29)
for i in range(0, len(k)):
if k[i] == y % 30:
return True
else:
return False
def main():
print(check(1439))
main()
It always returns false.
A:
This always returns False for your input because only the first item is checked: the function returns on the very first iteration either way. If the first item were a match, it would return True; for example, if y is 32 it returns True. You need to return False only after checking all values, i.e. outside of the for loop. An even better solution is to use the in operator.
def check(y):
k = (2, 5, 8, 10, 13, 16, 19, 21, 24, 27, 29)
return y % 30 in k
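With the fixed version, the example from the question behaves as intended, since 1439 % 30 is 29 and 29 is in the tuple:
>>> 1439 % 30
29
>>> check(1439)
True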
Q:
When installing 1C using an SLS file, the service does not get installed
There is a self-written SLS file. It is planned to be used for installing 1C at remote sites. The server component needs to be installed on the servers at the sites, but not on the cash registers; the conditions for this are described below. The problem is that when installing on a machine where the 1C server should be, the service is not created. The administration console gets installed, and ragent.exe is present in the installation path; only the service is missing.
An example SLS file:
1c_32:
{% for version in ['8.3.12.1714',
                   '8.3.12.1685',
                   '8.3.10.2667',
                   '8.3.10.2252',
                   '8.3.8.2137',
                   '8.3.6.2152',
                   '8.2.19.83'] %}
{% if grains['windowsdomain'] == 'WORKGROUP' %}
{% set install_location = 'APTEKA' %}
{% else %}
{% set install_location = 'srv-fs' %}
{% endif %}
  '{{ version }}':
    full_name: '1C:Предприятие 8 ({{ version }})'
    installer: '//{{ install_location }}/install/1c/{{ version }}/windows/x32/1CEnterprise 8.msi'
{% if grains['host'] == 'APTEKA' %}
    install_flags: '/lv D:\log.txt /qr TRANSFORMS=1049ph-2.mst DESIGNERALLCLIENTS=1 SERVER=1 SERVERCLIENT=1 LANGUAGES=RU'
{% else %}
    install_flags: '/lv D:\log.txt /qr TRANSFORMS=1049ph-2.mst DESIGNERALLCLIENTS=1 LANGUAGES=RU'
{% endif %}
    uninstaller: '//{{ install_location }}/install/1c/{{ version }}/windows/x32/1CEnterprise 8.msi'
    uninstall_flags: '/qn /norestart '
    reboot: False
    msiexec: True
{% endfor %}
If I understand correctly, the resulting command line ends up looking like this:
"\COMPUTERNAME\install\1c\VERSION\windows\x32\1CEnterprise 8.msi" /lv D:\log.txt /qr TRANSFORMS=1049ph-2.mst DESIGNERALLCLIENTS=1 THINCLIENTFILE=0 THINCLIENT=0 WEBSERVEREXT=0 SERVER=1 CONFREPOSSERVER=0 CONVERTER77=0 SERVERCLIENT=1 LANGUAGES=RU
However, if I run this same command from the command line, everything installs as it should, including the service.
A:
The problem turned out to be in the 1049ph file itself: a mistake was made while editing it, namely the installation of the service had been disabled. Given the parameters in 1049ph, everything was in fact installing correctly. After fixing the file, everything worked as expected.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is there a library out there to analyze VB.Net code complexity from my C# code?
Given VB.Net code in a string, is there a library (or a command line tool) out there that could calculate Cyclomatic Complexity and LOC?
This has to be done within my C# code.
Thanks.
A:
There is Refactor!, which does supply some extensibility and also supplies the measurements (and an extensibility point).
Besides that, there is also NDepend, which allows you to query your code for such information:
http://www.ndepend.com/Features.aspx
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Best practice to extend Model in View Model
I'm facing an issue where I have a Model which I want to implement a GUI for. As I'm exploring MVVM and WPF I will of course use a View Model between the View and Model. I will also need to add coordinates to some model classes that represent objects that will be draggable in the GUI. At this time, this is the only addition to these particular model classes that I need, at least at this moment. At the same time almost all, if not all, information contained in the model objects is of interest to the view.
The most important model classes are:
Table
Columns
Dependencies
Tables know of its columns.
Columns know which Table they belongs to.
Columns know 1..* Dependencies
Dependencies know 1..* column it represents a dependency to.
It would of course be possible to modify how these objects know of each other, if needed.
The real question here though, is; How do I make the model classes available to the View, when at the same time adding coordinates to Tables, Columns and Dependencies?
One way might be to create something like this:
public class TableViewModel
{
public Model.Table ModelTable { get; private set; }
public List<ColumnViewModel> Columns { get; set; }
public float X { get; set; }
public float Y { get; set; }
}
Is this a preferred way to forward an "extended" Model object to the View or are there patterns specifically engineered for this task?
Would regular inheritance between the objects in Model and View Model work? I'm not sure though, I like the coupling that would cause.
Or should I create View Model classes that are tailored to fit the View and then map properties from the Model object to the View Model object? This would completely remove the coupling between View and Model, but create two objects in memory, and in this case these might be very big models. Will the garbage collector drop the model objects since there will be no more references to them after they have been mapped to the view model?
A:
The real question here though, is; How do I make the model classes available to the View, when at the same time adding coordinates to Tables, Columns and Dependencies?
Your example ViewModel should do just fine. Another way that I've seen it accomplished is with a BaseViewModel<T> class that all ViewModels inherit from and takes the type of model and then exposes that as T Model { get; }
Is this a preferred way to forward an "extended" Model object to the View or are there patterns specifically engineered for this task?
The ViewModel has the responsibility of getting whatever models/data and keeping and modifying whatever UI state the View needs. Your example ViewModel seems to fit this responsibility. However, I would only use a ViewModel class within a ViewModel if I'm also presenting a "sub view" that goes along with that ViewModel. For instance, I would only use List<ColumnViewModel> if I was going to be looping through them and dynamically adding or including a control that's tied to the ColumnViewModel.
Would regular inheritance between the objects in Model and View Model work?
Although it could work, I think inheriting your ViewModel from the Model would be confusing. Plus, what if your Model is really a list of Model objects? Inheritance would be trickier in that case because you would need to make your base class a List<Model> and you would be exposing a bunch of unnecessary methods to the View. I think wrapping is a better alternative.
Or should I create View Model classes that are tailored to fit the View and then map properties from the Model object to the View Model object?
ViewModels are generally tailored to fit the specific needs of a View. I tend to think of ViewModels as the code-behind for the view. The mapping of the model properties is generally done through the binding in the WPF View.
Will the garbage collector drop the model objects since there will be no more references to them after they have been mapped to the view model?
This all depends on your context, any references held by the objects in question and a variety of other factors. Generally, your view model should have the same life-time as your view and that includes the data. However, if there are specific performance concerns, you can always manually clear out the Model once it's been loaded.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Slicing numpy ndarray with padding independent of array dimension
Given an array image which might be 2D, 3D or 4D, but preferably nD, I want to extract a contiguous part of the array around a point, with a list denoting how far I extend along each axis, and pad the array with a pad_value where the extension falls outside the image.
I came up with this:
def extract_patch_around_point(image, loc, extend, pad_value=0):
    offsets_low = []
    offsets_high = []
    for i, x in enumerate(loc):
        offset_low = -np.min([x - extend[i], 0])
        offsets_low.append(offset_low)
        offset_high = np.max([x + extend[i] - image.shape[1] + 1, 0])
        offsets_high.append(offset_high)
    upper_patch_offsets = []
    lower_image_offsets = []
    upper_image_offsets = []
    for i in range(image.ndim):
        upper_patch_offset = 2*extend[i] + 1 - offsets_high[i]
        upper_patch_offsets.append(upper_patch_offset)
        image_offset_low = loc[i] - extend[i] + offsets_low[i]
        image_offset_high = np.min([loc[i] + extend[i] + 1, image.shape[i]])
        lower_image_offsets.append(image_offset_low)
        upper_image_offsets.append(image_offset_high)
    patch = pad_value*np.ones(2*np.array(extend) + 1)
    # This is ugly
    A = np.ix_(range(offsets_low[0], upper_patch_offsets[0]),
               range(offsets_low[1], upper_patch_offsets[1]))
    B = np.ix_(range(lower_image_offsets[0], upper_image_offsets[0]),
               range(lower_image_offsets[1], upper_image_offsets[1]))
    patch[A] = image[B]
    return patch
Currently it only works in 2D because of the indexing trick with A, B etc. I do not want to check for the number of dimensions and use a different indexing scheme. How can I make this independent of image.ndim?
A:
Based on my understanding of the requirements, I would suggest creating a padded version of the array and then using slice notation to keep it generic in the number of dimensions, like so -
def extract_patch_around_point(image, loc, extend, pad_value=0):
    extend = np.asarray(extend)
    image_ext_shp = image.shape + 2*np.array(extend)
    image_ext = np.full(image_ext_shp, pad_value)
    # use a tuple of slices; indexing with a list of slices is deprecated in newer numpy
    insert_idx = tuple(slice(i, -i) for i in extend)
    image_ext[insert_idx] = image
    region_idx = tuple(slice(i, j) for i, j in zip(loc, extend*2 + 1 + loc))
    return image_ext[region_idx]
Sample runs -
2D case :
In [229]: np.random.seed(1234)
...: image = np.random.randint(11,99,(13,8))
...: loc = (5,3)
...: extend = np.array([2,4])
...:
In [230]: image
Out[230]:
array([[58, 94, 49, 64, 87, 35, 26, 60],
[34, 37, 41, 54, 41, 37, 69, 80],
[91, 84, 58, 61, 87, 48, 45, 49],
[78, 22, 11, 86, 91, 14, 13, 30],
[23, 76, 86, 92, 25, 82, 71, 57],
[39, 92, 98, 24, 23, 80, 42, 95],
[56, 27, 52, 83, 67, 81, 67, 97],
[55, 94, 58, 60, 29, 96, 57, 48],
[49, 18, 78, 16, 58, 58, 26, 45],
[21, 39, 15, 93, 66, 89, 34, 61],
[73, 66, 95, 11, 44, 32, 82, 79],
[92, 63, 75, 96, 52, 12, 25, 14],
[41, 23, 84, 30, 37, 79, 75, 33]])
In [231]: image[loc]
Out[231]: 24
In [232]: out = extract_patch_around_point(image, loc, extend, pad_value=0)
In [233]: out
Out[233]:
array([[ 0, 78, 22, 11, 86, 91, 14, 13, 30],
[ 0, 23, 76, 86, 92, 25, 82, 71, 57],
[ 0, 39, 92, 98, 24, 23, 80, 42, 95], <-- At middle
[ 0, 56, 27, 52, 83, 67, 81, 67, 97],
[ 0, 55, 94, 58, 60, 29, 96, 57, 48]])
^
3D case :
In [234]: np.random.seed(1234)
...: image = np.random.randint(11,99,(13,5,8))
...: loc = (5,2,3)
...: extend = np.array([1,2,4])
...:
In [235]: image[loc]
Out[235]: 82
In [236]: out = extract_patch_around_point(image, loc, extend, pad_value=0)
In [237]: out.shape
Out[237]: (3, 5, 9)
In [238]: out
Out[238]:
array([[[ 0, 23, 87, 19, 58, 98, 36, 32, 33],
[ 0, 56, 30, 52, 58, 47, 50, 28, 50],
[ 0, 70, 93, 48, 98, 49, 19, 65, 28],
[ 0, 52, 58, 30, 54, 55, 46, 53, 31],
[ 0, 37, 34, 13, 76, 38, 89, 79, 71]],
[[ 0, 14, 92, 58, 72, 74, 43, 24, 67],
[ 0, 59, 69, 46, 68, 71, 94, 20, 71],
[ 0, 61, 62, 60, 82, 92, 15, 14, 57], <-- At middle
[ 0, 58, 74, 95, 16, 94, 83, 83, 74],
[ 0, 67, 25, 92, 71, 19, 52, 44, 80]],
[[ 0, 74, 28, 12, 12, 13, 62, 88, 63],
[ 0, 25, 58, 86, 76, 40, 20, 91, 61],
[ 0, 28, 42, 85, 22, 45, 64, 35, 66],
[ 0, 64, 34, 69, 27, 17, 92, 89, 68],
[ 0, 15, 57, 86, 17, 98, 29, 59, 50]]])
^
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Rendering equation - why unsolvable directly?
Why is the rendering equation, introduced by Kajiya in 1986, not solvable directly/analytically?
A:
I'm sadly not able to add a comment to the answer above (not enough reputation), so I will do it like this.
I'd like to point out that what Dragonseel describes is simply an integral equation (specifically a Fredholm equation of the second kind). There are many such equations which do have an analytic solution; even some forms of the rendering equation have one (e.g. the solution of a white furnace can be given using a simple convergent geometric series, even though the rendering equation is infinitely recursive).
It is also not necessary to bias the estimated solution by bounding the number of recursions. Russian Roulette provides a useful tool for giving us an unbiased solution to an infinitely recursive rendering equation.
The main difficulty lies in the fact that the functions for reflectance (BRDF), emitted radiance and visibility are highly complex and often contain many discontinuities. In these cases there often is no analytic solution, or it is simply unfeasible to find such a solution. This is also true in the one dimensional case; most integrals lack analytic solutions.
Finally I'd like to note that even though most cases of the rendering equation do not have analytic solutions, there is a lot of research in forms of the rendering equation which do have an analytic solution. Using such solutions (as approximations) when possible can significantly reduce noise and can speed up rendering times.
A:
The rendering equation is as follows:
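In one common form it reads
$$L(x,\omega_o) = L_e(x,\omega_o) + \int_{\Omega} f_r(x,\omega_i,\omega_o)\, L(x',\omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i,$$
where $x'$ is the point visible from $x$ in direction $\omega_i$, $f_r$ is the BRDF, and $n$ is the surface normal at $x$.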
Now, the integral is over the sphere around the point $x$.
You integrate over some attenuated light, incoming from every direction.
But how much light comes in? This is the light $L(x',\omega_i)$ that some other point $x'$ reflects in the direction $\omega_i$ of point $x$.
Now you have to calculate how much light that new point $x'$ reflects, which requires solving the rendering equation for that point. And the solution for that point depends on a huge number of other points, including $x$.
In short, the rendering equation is infinitely recursive.
You cannot solve it exactly and analytically because it expands into infinitely many nested integrals.
But since light gets weaker each time it is reflected, at some point a human simply cannot notice the difference any more. And so you do not actually solve the rendering equation, but you limit the number of recursions (say reflections) to something that is 'close enough'.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Possible grouping of "Analysis", "System analysis" and "Business analysis" tags
There are currently several analysis tags which significantly overlap, but some having relatively few questions:
Analysis: 80 questions - 8 this year
Business-analysis: 26 questions - 2 this year
17 were related to a requirement tag or had requirement in the title
4 more had another analysis or system-analysis tag or referred to SA in the title
Systems-analysis: 5 questions - 1 this year
4 had another analysis or business-analysis tag
the last referred in the title to "software analysis" and in its content to analysis and modeling techniques
Would it be possible to group all the 3 tags under the single more general "analysis" ?
Additional remarks
The OOAD tag (object oriented analysis and design) is sufficiently specific and distinguishable from the general analysis tag
Other "-analysis" tags are more specific to code, algorithms and performance: algorithm-analysis (96), static-analysis (53), code-analysis (30), and dependency-analysis(13) and do not require action in my opinion.
"System analysis" is, despite the word "system" which sounds technical, a discipline that analysis business and business organization as well.
"Business analysis" seems frequently used for questions more related to functional requirements or requirement gathering from users. On the other hand, a "requirement" tag is already used very consistently. In this perspective, after the grouping, the combination "analysis" and "requirement", would seem almost equivalent to "business-analysis" and "requirement" IMO.
A:
There are only 7 tags about analysis. As you pointed out, algorithm-analysis, static-analysis, and dependency-analysis are already well-defined.
Looking at code-analysis, this could use a little work, too. Some of the questions are really about static analysis. Others are about code quality. Others are about managing source code. There are likely to be better tags than code-analysis. I'm guessing this tag probably isn't even necessary and everything can be retagged.
I'd like to do something with business-analysis and systems-analysis. Looking at the Wikipedia pages for business analysis and systems analysis, I would favor getting a definition into the systems analysis tag and moving business analysis into it. Based on our scope, I think we're closer to the definition of systems analysis. I don't think we're in a position to help people "determine solutions to business problems" (with the exception of problems related to the processes and methods used to build software), but as software engineers, we do create the systems and procedures that solve business problems. I'd like to hear more feedback from the community before doing this, though.
Once we sort out code-analysis, business-analysis, and systems-analysis, I'd like to take a final pass at analysis since there will probably be better tags at that point.
Status updates:
19 Nov 16: I wrote a tag description for algorithm-analysis and linked to some resources in the body of the tag wiki. I then went through the 90+ questions and did a cleanup to remove the tag where it wasn't appropriate and do any other cleanup (adding other tags, removing other tags, editing). I also merged business-analysis into systems-analysis. Next steps: (1) review systems-analysis questions (2) review static-analysis questions (3) review dependency analysis questions (4) review code analysis questions (5) review analysis questions
25-Nov-16: I went through the dependency-analysis questions. They looked good, but I did some minor edits to a couple of questions for other reasons. I also merged code-analysis into static-analysis as most of the questions were actually about static analysis. This will be the next clean up to make sure it's 100% accurate. I also cleaned up the systems-analysis questions post-merge. Next steps: (1) review static-analysis questions (2) review analysis questions
10-Dec-16: I finished the static-analysis tags. I only have to review analysis questions next - there are 66 questions to review.
12-Dec-16: We're down to less than 20 questions with the analysis tag.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Using SignInManager without calling app.UseIdentity()
I need to use the basic SignIn mechanism to log a user in to my website:
var result = await _signInManager.PasswordSignInAsync(username, password, false, lockoutOnFailure: false);
Unfortunately it throws an error:
No authentication handler is configured to handle the scheme: Identity.Application
The reason probably is: I'm not calling app.UseIdentity(); in Startup.cs.
I'm not calling it, because I want to configure cookies on my own. So instead of UseIdentity I use this:
app.UseJwtBearerAuthentication(new JwtBearerOptions
{
AutomaticAuthenticate = true,
AutomaticChallenge = false,
TokenValidationParameters = tokenValidationParameters
});
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
AutomaticAuthenticate = true,
AutomaticChallenge = false,
AuthenticationScheme = "Cookie",
CookieName = "access_token",
TicketDataFormat = new CustomJwtDataFormat(
SecurityAlgorithms.HmacSha256,
tokenValidationParameters)
});
If I call app.UseIdentity();, PasswordSignInAsync works correctly, but CookieAuthentication behaves like it is configured in UseIdentity (it redirects to /Account/Login, this behaviour is disabled in my configuration).
As I see in the source code, UseIdentity doesn't do more than I do (it simply uses UseCookieAuthentication, much as I do). Why, then, does my solution cause problems?
What should I do to make PasswordSignInAsync work without using app.UseIdentity()?
A:
It does do some things differently: it uses IdentityOptions to set up the cookie middleware, and those same IdentityOptions are used elsewhere within SignInManager when authenticating a user.
Specifically, it uses "Identity.Application" as the name of the auth scheme for the application cookie. Since it also uses that scheme when it tries to authenticate the user, it throws an error: there is no cookie middleware matching the auth scheme, because you are naming yours differently.
If you name your AuthenticationScheme and CookieName "Identity.Application" then it should get past that error.
Or you can configure the IdentityOptions to match the authentication scheme and cookie name of your choice, so that matching values are passed in from SignInManager when it tries to authenticate the user.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Recording audio inside ionic 4 app using media-capture cordova plugin
I'm trying to record audio inside an Ionic 4 app using the media-capture Cordova plugin, but the problem is that when I hit the record button it opens an external recorder app and records the audio, which I can then play inside my app. I would like to record the audio inside my app itself without opening an external recorder. Please help!
Here's my home.page.html:
<ion-header>
<ion-toolbar>
<ion-title>
Audio Testing App
</ion-title>
</ion-toolbar>
</ion-header>
<ion-content>
<div class="ion-padding">
<ion-button color="primary" (click)="record()">Record</ion-button>
<ion-list>
<ion-item-sliding *ngFor="let f of files">
<ion-item (click)="openFile(f)">
<ion-icon name="mic" slot="start"></ion-icon>
<ion-label class="ion-text-wrap">
{{ f.name }}
<p>{{ f.fullPath }}</p>
</ion-label>
</ion-item>
<ion-item-options side="start">
<ion-item-option (click)="deleteFile(f)" color="danger">
<ion-icon name="trash" slot="icon-only"></ion-icon>
</ion-item-option>
</ion-item-options>
</ion-item-sliding>
</ion-list>
</div>
</ion-content>
home.page.ts file:
import { Component, OnInit } from '@angular/core';
import { NativeAudio } from '@ionic-native/native-audio/ngx';
import { MediaCapture, MediaFile, CaptureError } from '@ionic-native/media-capture/ngx';
import { File, FileEntry } from '@ionic-native/File/ngx';
import { Platform } from '@ionic/angular';
import { Media, MediaObject } from '@ionic-native/media/ngx';
const MEDIA_FOLDER_NAME = 'audio_app_media';
@Component({
selector: 'app-home',
templateUrl: 'home.page.html',
styleUrls: ['home.page.scss'],
})
export class HomePage implements OnInit {
files = [];
constructor(private nativeAudio: NativeAudio,
private mediaCapture: MediaCapture,
private file: File,
private media: Media,
private plt: Platform) { }
ngOnInit() {
this.plt.ready().then(() => {
let path = this.file.dataDirectory;
this.file.checkDir(path, MEDIA_FOLDER_NAME).then(
() => {
this.loadFiles();
},
err => {
this.file.createDir(path, MEDIA_FOLDER_NAME, false);
}
);
});
}
loadFiles() {
this.file.listDir(this.file.dataDirectory, MEDIA_FOLDER_NAME).then(
res => {
this.files = res;
},
err => console.log('error loading files: ', err)
);
}
record() {
this.mediaCapture.captureAudio().then(
(data: MediaFile[]) => {
if (data.length > 0) {
this.copyFileToLocalDir(data[0].fullPath);
}
},
(err: CaptureError) => console.error(err)
);
}
openFile(f: FileEntry) {
const path = f.nativeURL.replace(/^file:\/\//, '');
const audioFile: MediaObject = this.media.create(path);
audioFile.play();
}
deleteFile(f: FileEntry) {
const path = f.nativeURL.substr(0, f.nativeURL.lastIndexOf('/') + 1);
this.file.removeFile(path, f.name).then(() => {
this.loadFiles();
}, err => console.log('error remove: ', err));
}
copyFileToLocalDir(fullPath) {
let myPath = fullPath;
// Make sure we copy from the right location
if (fullPath.indexOf('file://') < 0) {
myPath = 'file://' + fullPath;
}
const ext = myPath.split('.').pop();
const d = Date.now();
const newName = `${d}.${ext}`;
const name = myPath.substr(myPath.lastIndexOf('/') + 1);
const copyFrom = myPath.substr(0, myPath.lastIndexOf('/') + 1);
const copyTo = this.file.dataDirectory + MEDIA_FOLDER_NAME;
this.file.copyFile(copyFrom, name, copyTo, newName).then(
success => {
this.loadFiles();
},
error => {
console.log('error: ', error);
}
);
}
}
Here's the screenshots of the app:
Image 1: home page
Image 2: External recorder, which opens up on clicking the record button
(DON'T WANT TO GO HERE, INSTEAD RECORD DIRECTLY INSIDE MY APP)
A:
You should try cordova-plugin-media.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Puppet: nested hash/dict,array: how to access in erb?
playing with puppet, I have ended up in a nested dictionary/hash - which looks more or less like
$settings =
{
"var1" =>
{
"ip" => "0.0.0.0",
"port" => "1234",
"option" => ["foo", "bar"],
"machines" =>
{
"maschine-1" => { "ip" => "1.2.3.4", "port" => "1234"},
"maschine-2" => { "ip" => "1.2.3.5", "port" => "1235"},
}
}
}
However, I have not managed to parse it properly in the corresponding ERB template.
<% @settings.each_pair do |settings_key, settings_value_hash| %>
<%= settings_value_hash['ip']%>:<%= settings_value_hash['port'] %>
option <% @settings_value_hash['option'].each do |option| -%> <%= option %> <% end -%>
<% @{settings_value_hash['machines']}.each_pair do |machine_key, machine_value_hash| %>
server <%= machine_key %> <%= machine_value_hash['ip'] %>:<%= machine_value_hash['port'] %>
<% end %>
Thus, I can get the values in my top dictionary without problems, i.e., "ip" and "port".
However, Puppet throws compile errors when I try to get to the array "option" or the dict "machines" within the top dict.
My guess at the moment is that arrays and dicts/hashes are not "hashable" in Ruby/Puppet. Is that right?
Cheers and thanks for ideas,
Thomas
A:
Not sure what you are doing, but there are a few glaring issues. @settings_value_hash is not defined; it should be settings_value_hash, the block-piped variable, not an instance variable. @{settings_value_hash['machines']} is incorrect as well. What happens if you run this?
<% @settings.each do |settings_key, settings_value_hash| %>
<%= "#{settings_value_hash['ip']}:#{settings_value_hash['port']}" %>
option
<% settings_value_hash['option'].each do |option| %>
<%= option %>
<% end %>
<% settings_value_hash['machines'].each do |machine_key, machine_value_hash| %>
server <%= "#{machine_key} #{machine_value_hash['ip']}:#{machine_value_hash['port']}" %>
<% end %>
<% end %>
Also, why is your initial hash set to a global $settings while you are accessing it through an instance variable @settings?
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Header Files Cross Project
So I have two projects, A and B, where B is dependent on A (A is a library, while B is a console application). A uses the boost library, and has been configured to include the header and library files, but B has not.
Visual studio throws an error saying the Boost Header files cannot be found (in project B). For example:
error C1083: Cannot open include file: 'boost/asio.hpp': No Such file or directory [Project: B]
My question is: Is there a way such that B does not have to include the Boost library as well?
A:
Is there a way such that B does not have to include the Boost library as well?
Yes, but only if you can avoid using the features as part of A's type/function definitions. If they can be used truly implementation-only, then you can avoid the header dependency -- you'll still need to link against compiled libraries (asio requires boost-system).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Users not marking answers,voting, or commenting?
Often I regret spending the time to answer a question because the user who asked the question goes missing. No comments/replies, no up/downvotes, no marking an answer or anything.
Should there be a rule or penalty established for this type of behavior?
Perhaps, not visiting or participating in your Stackexchange will cause demotions and loss of reputation?
Just some ideas...
What can we do to motivate all users to participate?
A:
This has been discussed before. Most users are looking for an immediate answer and will not want to be a part of a community with voting and acknowledging contributions.
They ask, you answer, they learn. But they will not necessarily acknowledge your answer. If you can't handle that, stop answering. If you want to contribute solutions, regardless of reputation, keep answering.
see: Are SharePoint.SE users lazy voters?
A:
Even if you did punish them, "they" wouldn't care because they're not coming back, so you're only hurting yourself. Perhaps you could just make it more apparent that it's considered polite to upvote if an answer was helpful.
edit:
I think it behooves the entire Stack environment to resist the urge to be seen as snotty and elitist as much as possible. 100 people may come and go, but if you can keep just one with a good attitude (as frustrating as the other 99 may be), it is a step in the right direction.
A:
I'm new to this site and I have asked a question, but with my low reputation, I have no way to reward the answers to this question by upvoting.
I noticed that compared to other stack exchange sites, questions are only very rarely upvoted. But that's another topic, I think.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
When giving feedback to interviewers, should I be honest?
I recently had a job interview at company x that went terribly. I was late to the interview by less than 5 minutes (due to cell signal issues I had trouble finding the location using my cell phone), the company did not send me the technical test before the interview, and the interviewers reviewed some of my GitHub code that was about 18 months old and doesn't reflect my current knowledge or ability. The senior developer interviewer constantly interrupted me; the original interviewer mostly sat through this until the end, whereupon he engaged in a rather odd line of questioning that went something like this.
"Just one last question, whose fault was it you were late today?"
"Mine, my apologies again."
"It's just that the receptionist said you blamed it on the signal, I don't agree with that at all. I would have come down here a day early to scout it out and make sure I wouldn't be late."
A few days later I received an offer from another company I interviewed with and accepted it. Company x then sent me the technical test they had originally failed to send, to complete retrospectively. I replied to company x's email saying that I am withdrawing my application, thanking all parties involved for their time. They responded, and among the response was a request for feedback regarding the interviewers, the interview format, etc., to see if that had anything to do with it.
In short the interviewer did influence my decision. I am not sure whether I am just sensitive, but I just found it to not be a good sign when a prospective manager makes me uncomfortable at the get-go. Is it a good idea to be honest with HR and mention how I felt, or just to leave it?
A:
Probably better to just leave it. You never know if you could end up in a position where you have to work with, or for, the senior developer in company X at some time in the future. It could prove awkward or personally costly at that time if he hears what you had to say about him.
A:
In America, we have a saying, "**** 'em". Be honest. 100% honest. If you have something to lose at this point, then you're probably better off losing it.
Mention the rude interviewer. Mention the 18 month old code thing too. They probably didn't know how to interview at all if that's how they went about it. I'm guessing they had a bad read on the market too. You'd probably be doing whoever has to interview with this company after you a lot of good if you just gave this company the means to improve their interviewing process.
A:
A possible benefit of providing honest feedback is that the company in question improves their interview process based on your experience. That would be lovely for them, but you'll never see any benefit from it yourself.
A possible negative consequence of doing so is that the company and the interviewers decide that you are in some way a disruptive or abrasive person. Any future applications you make to that company, or to another company after the problematic interviewer changes jobs, or to someone he knows at a different company, might be tainted as a result.
I do not comment on which outcome is more likely.
There is very minimal upside for you, if any, in providing the requested feedback. There is non-zero risk for you, however. On that basis, it would seem sensible to either not offer feedback at all, or avoid being too committal (withdrawing your application and thanking those involved for their time, as you have done, seems plenty good enough to me).
On a related note, companies will often decline to offer feedback to disappointed candidates for very similar reasons (but in reverse).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why didn't Dr. Strange tell Quill about Gamora in advance?
I see a lot of "Why didn't Dr. Strange do this and that?" questions and the justification for some was that Dr. Strange knew it would've failed either way as he saw millions of outcomes, only one of which goes to their favour. He knows that Tony Stark needs to survive for some reason, hence why he gave up the Time stone to Thanos.
However, unless there is a pretty damn good justification in the sequel, I refuse to believe that this was the best course of action to take, especially considering the info Dr. Strange must have after seeing so many outcomes.
For example, he must've known that Thanos killed Gamora. I know that Quill is very impulsive, but surely he could've handled it better if Dr. Strange just told him in advance, before they had Thanos on the ropes? If Quill didn't lash out like he did, they could've taken the Infinity Gauntlet away from Thanos and he would've been a lot weaker without it. You can even see how he just barely manages to get it back after it was taken off of his arm for a second. How could Thanos have possibly won if they took away the Gauntlet?
Unless there is some strange rule about him not being able to tell others specific details about what he saw, I don't see why Dr. Strange didn't just tell Quill.
A:
Boring Answer: We don't know until Infinity War II Comes out
Presumably included in the futures Dr. Strange saw is the one where he tells Quill about Gamora ahead of time. Maybe your question (and the countless similar ones we are all coming up with) will be addressed in the movie, but there's probably no way to account for all of them.
Of course, since Dr. Strange told Tony as he was disappearing that This was the only way, we have to presume that the one unique winning future hadn't been ruled out by that moment, meaning Quill's actions didn't damn the universe.
Better Answers
Part of the fun of addressing alleged plot holes in movies is coming up with counter-examples for them, right? So let's have fun and throw some options out:
1. Quill wouldn't have been able to contain himself during the fight
Maybe Dr. Strange saw every possible future where an attempt was made to get an informed Quill to contain his emotions, but in each scenario the fight was lost.
2. Thanos couldn't have been defeated without Quill
Dr. Strange couldn't just take out Quill (e.g., infinite falling a la Loki in Ragnarok) ahead of time because the tide of the battle would have turned to Thanos' favor without Quill's contributions.
3. Maybe Thanos needed to almost lose the gauntlet in order for future events to transpire
Thanos was within about a second of losing everything he had worked for. Maybe in that moment, as happens cinematically with a near-death experience, his life choices had been flashed in front of him. Maybe the recollection of that will influence his choices in the next film.
4. Maybe defeating Thanos on the spot would have been certain doom in other ways
I know it's perverse to think of our Avengers this way, but maybe Dr. Strange saw the futures where the gauntlet does come off and land on the ground in all sorts of ways. Subsequently, mere sight and reach of unimaginable power corrupts one, or several, or all of the present Avengers (looking at the rabbit in particular, here). Maybe one of them grabs it and does terrible things. Or maybe none of them do and the gems get put in some "safe place" that is eventually raided and a worse outcome than the Thanos one happens. I mean, really, in the MCU where are you going to safely stash the gauntlet that someone won't come looking for it?
Or something else? We will be in a better place to answer questions about Dr. Strange's decision after the next (and hopefully last) Infinity War movie comes out. Until then, hopefully the above suggestions are enough to ease your mind that there are possibilities that the writers didn't just botch this whole thing.
A:
While Dr. Strange had seen all possible outcomes, there is no way for him to know exactly which future was happening while they were planning. There is nothing to suggest that, even though there is only 1 successful future out of 14 million, they have actual control over making this happen (though they will try).
The more he sees things unfold with the Thanos battle on Titan, and as more information is presented, he can narrow down the number of possible futures that are taking place. By the time he gives the time stone to Thanos, he has enough information to know the best possible action to make the successful outcome possible.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Are data encrypted or just hashed in the Corda Ledger?
Are the transactions and states stored encrypted or just hashed in the H2 database?
A:
Corda doesn't control whether transactions and states are encrypted. This functionality is delegated to the node's database:
If the node's database supports encryption of data at rest, this can be used to store the transactions and states in an encrypted form
If the node's database supports encrypted JDBC connections, these can be used to transmit the transactions and states to the database in an encrypted form
For example, Azure SQL DBs are encrypted at rest.
Once SGX is integrated with Corda, transactions will always be encrypted using the enclave's private key, with only the SGX enclave able to decrypt them.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Would it be efficient to sort with indexes
So I was thinking of a new sorting algorithm that might be efficient but I am not too sure about that.
1) Imagine we have an array a of only positive numbers.
2) We go through the array and find the biggest number n.
3) We create a new array b of the size n+1.
4) We go through every entry in the unsorted array and increase the value in the second array at the index of the number of the unsorted array we are looking at by one. (In pseudo-code this means: b[a[i]]++; where a[i] is the number we are currently looking at)
5) Once we have done this with every element in a, the array b stores at every index the exact amount of numbers of this index. (For example: b[0] = 3 means that we had 3 zeros in the initial array a)
6) We go through the whole array b and skip all the empty fields and create a new List or Array out of it.
So I can imagine that this algorithm can be very fast and efficient for smaller numbers only since at the end we have to go through the whole array b to build the sorted one, which is going to be really time consuming.
If we for example have an array a = {1000, 1} it would still check 1001 elements in array b whether or not they are 0, even though we only have 2 elements in the initial array.
With smaller numbers, however, we should almost get an O(n) result? I am not too sure about that and that's why I am asking you. Maybe I am even missing something really important. Thanks for your help in advance :)
A:
Congratulations on independently re-discovering the counting sort.
This is indeed a very good sorting strategy for situations when the range is limited and the number of items is significantly greater than the range of values in your array.
In situations when the range is greater than the number of items in the array a traditional sorting algorithm would give you better performance.
Algorithms of this kind are called pseudo-polynomial.
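A minimal sketch of the idea in Python, assuming non-negative integers as in the question (function and variable names are illustrative):
def counting_sort(a):
    counts = [0] * (max(a) + 1)         # the "array b" from the question
    for value in a:
        counts[value] += 1              # b[a[i]]++
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)  # empty slots contribute nothing
    return result

print(counting_sort([1000, 1]))  # [1, 1000]; still walks 1001 counters for 2 items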
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Wait for handler for every item in with_lines - Ansible
Ansible Version: ansible 2.4.2.0
I want to start VMs sequentially depending on the role (master/backup). Multiple VM IDs are stored in 2 files, master and backup. The control flow should be like below:
Iterate VM IDs one by one from a file
For every iteration, the handler should be notified, i.e. the iteration should WAIT for the handler to complete
The iteration should not move forward if the handler failed (or is in WAITING state).
For reference, you see the below playbook
- name: Performs Power Actions VMs
  hosts: localhost
  vars:
    - status: "{% if action=='stop' %}SHUTOFF{% else %}ACTIVE{% endif %}" # For checking VM status
  tasks:
    - name: Starting Master VM
      shell: |
        echo {{ item }} > /tmp/current
        echo "RUN nova start {{ item }} HERE!!!"
      when: action == "start"
      with_lines: cat ./master
      notify: "Poll VM power status"
    - name: Starting Backup VM
      shell: |
        echo {{ item }} > /tmp/current
        echo "RUN nova start {{ item }} HERE!!!"
      when: action == "start"
      with_lines: cat ./backup
      notify: "Poll VM power status"
  handlers:
    - name: Poll VM power status
      shell: openstack server show -c status --format value `cat /tmp/current`
      register: cmd_out
      until: cmd_out.stdout == status
      retries: 5
      delay: 10
With the above playbook, what I see is that the handler is notified only after the entire iteration is complete.
PLAY [Performs Power Actions on ESC VMs] **********************************************************************************************
TASK [Stopping Backup VM] *********************************************************************************************************
skipping: [localhost] => (item=Test)
TASK [Stopping Master VM] *********************************************************************************************************
skipping: [localhost] => (item=Test)
TASK [Starting Master VM] **********************************************************************************************************
changed: [localhost] => (item=Test)
TASK [Starting Backup VM] *********************************************************************************************************
changed: [localhost] => (item=Test)
TASK [Removing tmp files] *************************************************************************************************************
changed: [localhost] => (item=./master)
changed: [localhost] => (item=./backup)
RUNNING HANDLER [Poll VM power status] ********************************************************************************************
FAILED - RETRYING: Poll ESC VM power status (5 retries left).
^C [ERROR]: User interrupted execution
Is there any better approach to solve this problem? Or any suggestion on how to fit a block into this playbook to solve it?
PS: The dummy command in tasks RUN nova start {{ item }} HERE!!! doesn't wait. That's why I have to check the status manually.
A:
By default, handlers are run at the end of a play.
You can however force the already notified handlers to run at a given time in your play by using the meta module.
- name: force running of all notified handlers now
meta: flush_handlers
In your case, you just have to add it in between your two vm start tasks
Edit: This will actually work in between your two tasks, but not for each iteration in a single task, so it is not actually answering your full requirement.
Another approach (to be developed) would be to include your check command directly in your task, so that the task does not return until the conditions are met; see the sketch below.
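Something along these lines might work (an untested sketch; it relies on until/retries applying to each loop item and reuses the nova/openstack commands and the status variable from the question):
- name: Start master VMs and wait for them to become ACTIVE
  shell: |
    nova start {{ item }} || true   # tolerate VMs that are already running
    openstack server show -c status --format value {{ item }}
  register: cmd_out
  until: cmd_out.stdout_lines[-1] == status
  retries: 5
  delay: 10
  with_lines: cat ./master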
Have you considered exploring the galaxy of openstack related modules? They might solve your current problems as well.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Does event.target work differently on mobiles?
I'm currently creating a toolbar component with an "overflow" menu. When somebody clicks outside the menu, I want the menu to close, so I've attached a simple click handler to the document which checks to see if the target of the click is inside the menu or outside of it. The check looks like this:
var eventOutsideTarget = (overflowEl[0] !== event.target)
&& (overflowEl.find(event.target).length === 0);
So, this works in all instances in Chrome on my PC. If you click outside of the menu it is set to true. If you click on another menu to open, then the original menu closes and the new one opens, as expected.
On Chrome Android and iOS Safari the behavior is different though. If you click anywhere on the page that is not a menu it closes any open menus; however, if you click on a different menu it opens the new one, but the old one stays open.
I suspect this is to do with the second part of the check: overflowEl.find(event.target).length === 0.
This does not find the element on desktop, but on mobile it evaluates to true, even if you're clicking in a different menu.
This seems like a bug to me, but it is strange that it is happening on Android and iOS but not on Chrome desktop.
Any help would be greatly appreciated.
Edit: Adding a bit more of my code for completeness
angular.module('s4p.directives').directive('s4pToolbar', function ($compile, $document) {
return {
restrict: 'E',
scope: {},
controller: 's4pToolbarCtrl',
transclude: true,
template: '<s4p-toolbar-main><div transclude-main></div></s4p-toolbar-main>' +
'<s4p-toolbar-overflow-button ng-class="{\'is-open\': overflowOpen}">' +
'<s4p-button button-style="circle" icon="/images/iconSprite.svg#dot-menu" ng-click="toggleOverflow()"></s4p-button>' +
'<s4p-toolbar-overflow ng-show="overflowOpen" class="ng-hide" ng-cloak><div transclude-overflow></div></s4p-toolbar-overflow>' +
'</s4p-toolbar-overflow-button>'
,
link: function (scope, element, attrs, controller, transclude) {
// Copy the contents of the toolbar into both slots in the template
transclude(scope, function(clone) {
element.find('[transclude-main]').replaceWith(clone);
});
transclude(scope, function(clone) {
element.find('[transclude-overflow]').replaceWith(clone);
});
// Handle clicking anywhere on the page except the overflow to close it.
var overflowEl = element.find('s4p-toolbar-overflow-button');
var documentClickHandler = function (event) {
var eventOutsideTarget = (overflowEl[0] !== event.target) && (overflowEl.find(event.target).length === 0);
if (eventOutsideTarget) {
scope.$apply(function () {
scope.overflowOpen = false;
});
}
};
$document.on("click", documentClickHandler);
scope.$on("$destroy", function () {
$document.off("click", documentClickHandler);
});
// Update the visibility of all the sections
controller.updateSectionsVisibility();
}
};
})
A:
OK, so the answer had nothing to do with event.target, although that didn't stop me wasting 3 hours thinking it did!
The problem was that clicks on the document body simply weren't registering when you clicked on another menu button. Although the click on the menu button was firing and opening the menu, the click on the document body was being ignored, despite the fact that it did work when clicking on other parts of the document.
The fix is that this line
$document.on("click", documentClickHandler);
needed to be this...
$document.on("click touchstart", documentClickHandler);
I still don't fully understand why, or why the original version worked on most elements on the page (maybe elements that didn't have their own events?), but it works. Apologies to anybody who comes here looking for an answer to the original question.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Import Git repo with signed commits into Launchpad
I need to import this GitHub repository with signed commits into Launchpad to build packages for this PPA.
A direct import will not work, due to this bzr-git bug, but a fast-export/fast-import is a workaround, according to https://bugs.launchpad.net/ubuntu/+source/bzr-git/+bug/1084403/comments/9.
I want to use my Raspberry Pi, which runs 24/7 as a web server, to make the conversion to bzr using fast-export/fast-import then have Launchpad automatically import it.
What is the best way to do this?
The conversion needs to be able to be run as a cron job, and the converted repository needs to be able to be imported by Launchpad automatically.
A:
This is no longer an issue because, as of November 2016, Launchpad supports direct git-to-git code imports and mirroring of git repositories hosted elsewhere. Simply mirror/import via git instead of bzr and everything will be copied across, signed commits and all.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MISRA Rule 10.4 Violation
I'm getting the following MISRA error:
Rule-10.4 The operands of this equality operator are expressions of different 'essential type' categories (Boolean and unsigned).
The code is showed below:
#define TRUE (1!=0)
#define FALSE (0!=0)
typedef unsigned char boolean;
boolean active;
getActive(&active);
if (TRUE == active) <<<<<<<<<<<< Here is the conflicting line
{
// DO Something
}
If I remove the TRUE :
if (active)
MISRA Rule 14.4 appears: "Controlling expression is not an 'essentially Boolean' expression"
So I cannot figure out the solution,
I see that using
#define TRUE 1U
#define FALSE 0U
solves the problem, but I'm afraid I cannot afford this solution since I'm using a big inherited code base from a 3rd party that uses the (1!=0) expression.
I guess that expression is meant to be 'smart' and portable, since on some systems the meaning of TRUE/FALSE might change to 0/1, but I wonder if I can keep the:
#define TRUE (1!=0)
#define FALSE (0!=0)
and write my conditional expressions in a manner to cope with the MISRA issues
A:
Your MISRA checker is unable to determine that these are your boolean type.
In case you are stuck with C90 and only then: you need to inform your tool somehow about which custom bool type you are using. Otherwise it won't be able to tell what these macros are for.
Otherwise, simply use stdbool.h. There are very few excuses not to in the year 2019.
A:
Thanks to all for the answers and comments.
Unfortunately I'm not allowed to use stdbool.h; I'm using an AUTOSAR (automotive) stack and they don't use any of the standard libraries.
I'd prefer not to trick the MISRA tool into recognising the boolean type; I think it would be something we'd have to export to every PC of the team (an ad-hoc solution).
Thanks to @R for the (1 != 0) clarification; it makes no sense. C is C (not bash or whatever, where true might be 0), and true will always be 1, so unless you are ignorant of the bool value in your programming language, defining it as an expression is useless.
I think that for my porpuses the best solution would be to redefine the macros as:
#define TRUE (1U)
#define FALSE (0U)
I've seen that the AUTOSAR stack gives me the option to redefine these values in an Integration file.
This way I keep compatibility with all my Application code and the existing AUTOSAR stack, and I also don't need to change anything in the inherited code.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to remove Headers and Footers programmatically in IE while printing, instead of using IE Page Setup?
How to remove Headers and Footers programmatically in IE while printing, instead of using IE Page Setup?
A:
Answer from Microsoft:
Users can easily change page margins, header and footer settings, and the default Internet Explorer printer through the Internet Explorer user interface. However, there are no methods under Internet Explorer or the WebBrowser control to change these settings programmatically.
You cannot use the ExecWB command to set page margins and the header or footer. These values are stored in the registry.
There might be a need to change the print settings of Internet Explorer or the WebBrowser control programmatically. The only settings that can be changed are page margins, and header and footer information. There is no supported way to change other settings like page orientation or printer.
Important This section, method, or task contains steps that tell you how to modify the registry. However, serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs. For more information about how to back up and restore the registry, click the following article number to view the article in the Microsoft Knowledge Base:
322756 How to back up and restore the registry in Windows
This is how Microsoft Internet Explorer accesses the printing settings:
For Page Margins, Microsoft Internet Explorer first tries to get the values from this registry key:
HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\PageSetup
If there is no such key, Internet Explorer creates this key by copying the values from the following:
HKEY_LOCAL_MACHINE\Software\Microsoft\Internet Explorer\PageSetup
If there is no such key, default values are provided.
For the Header and Footer, the values are picked up from the following:
HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\PageSetup
If there is no such key, default values are provided.
The defaults are 0.75 for margins,
For the Internet Explorer default printer, default values are provided from:
HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\PageSetup\printer
The developer could alter the above registry entries for the printing settings accordingly.
Please note that these values are system-wide and affect all instances of the WebBrowser control and Internet Explorer for the current user.
So:
I think you can offer users a .reg file to run (a beautiful solution for the legendary IE).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
NSDate - Changing the year value
I need to change an NSDate object. What I am basically doing is changing the year value.
for example:
NSString *someYear = @"2093";
NSDate *date = [NSDate date]; // Gets the current date.
... Create a new date based upon 'date' but with specified year value.
So with 'date' returning 2011-03-06 22:17:50 +0000 from init, I would like to create a date with 2093-03-06 22:17:50 +0000.
However I would like this to be as culturally neutral as possible, so it will work whatever the timezone.
Thanks.
A:
Here's my code for setting the UIDatePicker limits for a Date Of Birth selection. Max age allowed is 100yrs
_dateOfBirth.maximumDate = [NSDate date];
//To limit the datepicker year to current year -100
NSDate *currentDate = [NSDate date];
NSUInteger componentFlags = NSYearCalendarUnit;
NSDateComponents *components = [[NSCalendar currentCalendar] components:componentFlags fromDate:currentDate];
NSInteger year = [components year];
NSLog(@"year = %d",year);
[components setYear:-100];
NSDate *minDate = [[NSCalendar currentCalendar] dateByAddingComponents:components toDate:currentDate options:0];
_dateOfBirth.minimumDate = minDate;
A:
Take a look at NSCalendar, especially components:fromDate: and dateFromComponents: methods.
A:
I managed to figure out the answer with the pointer Hoha gave me.
NSNumber *newYear = [[NSNumber alloc] initWithInt:[message intValue]];
NSCalendar* gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];
unsigned int unitFlags = NSYearCalendarUnit | NSDayCalendarUnit | NSMonthCalendarUnit;
NSDateComponents* dateComponents = [gregorian components:unitFlags fromDate:[NSDate date]];
[dateComponents setYear:[newYear intValue]];
NSDate *newDate = [gregorian dateFromComponents:dateComponents];
[newYear release];
|
{
"pile_set_name": "StackExchange"
}
|
Q:
PHP: tcpdf security considerations of cache folder with write permission
I have installed tcpdf on my web server and use it to generate pdf invoices. It has a cache folder and my web server user group www-data can create and delete files.
Could a hacker
a) create files in that folder and
b) execute them as php?
Should I move the cache folder outside of the www directory? I tried to cd into the folder but get a permission error with my own username, so I was wondering if that step is necessary.
A:
If you have not made any changes to your user groups, the www-data group is only used for logging purposes and cannot be accessed by the browser. The www-data user will be able to create files, but it should not be deleting anything. As for worrying about hackers accessing your site: as long as you have not changed any permissions for this user, no.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
cryptopp plaintext fixed length limit
When I pass initialText86, the code below works as it should. When I pass initialText87 it fails to construct StringSource ss1, and we get an invalid argument exception!
How can I encode a string of length 87?
#include <string>
#include <new>
using namespace std;
#include <../pub/cryptopp/rsa.h>
#include <../pub/cryptopp/osrng.h>
#include <../pub/cryptopp/oaep.h>
#include <../pub/cryptopp/sha.h>
using namespace CryptoPP;
AutoSeededRandomPool& rng_get() {
static AutoSeededRandomPool defRng;
return defRng;
}
string rsa_encode( const string& plainText, const RSA::PublicKey& pubKey ) {
RSAES_OAEP_SHA_Encryptor rsaEnc( pubKey );
string cipherText;
StringSource ss1( reinterpret_cast< const byte* >( plainText.c_str() ), plainText.size(), true,
new PK_EncryptorFilter( rng_get(), rsaEnc,
new StringSink( cipherText )
) // PK_EncryptorFilter
); // StringSource
return move( cipherText );
}
string rsa_decode( const string& cipherText, const RSA::PrivateKey& secretKey ) {
RSAES_OAEP_SHA_Decryptor rsaDec( secretKey );
string plainText;
StringSource ss2( reinterpret_cast< const byte* >( cipherText.c_str() ), cipherText.size(), true,
new PK_DecryptorFilter( rng_get(), rsaDec,
new StringSink( plainText )
) // PK_DecryptorFilter
); // StringSource
return move( plainText );
}
static const size_t keyLength = 1024;
RSA::PrivateKey _secretKey;
RSA::PublicKey _pubKey;
bool test( const string& initialText ) {
auto cipherText = rsa_encode( initialText, _pubKey );
auto plainText = rsa_decode( cipherText, _secretKey );
return plainText == initialText;
}
int main() {
_secretKey.GenerateRandomWithKeySize(rng_get(), keyLength );
new( &_pubKey ) RSA::PublicKey( _secretKey );
string initialText87 = "111111111111111111111111111111111111111111111111111111111111111111111111111111111111111";
string initialText86 = "11111111111111111111111111111111111111111111111111111111111111111111111111111111111111";
auto testResult = test( initialText87 );
assert( testResult );
return testResult ? 0 : -1;
}
A:
The length of data that can be encrypted using RSA is determined primarily by the size of the key you're using. You appear to be using OAEP, so the maximum length is:
keyLength - 2 - 2 * hashLength
Where keyLength is the length of the RSA modulus in bytes. You're using a 1024 bit key so:
keyLength = 1024 / 8 = 128
And since you're using OAEP with SHA-1
hashLength = 20
So the maximum you can encrypt is:
128 - 2 - 2 * 20 = 86
Which is exactly what your example shows.
To encrypt more data you could use a larger RSA key, but RSA really shouldn't be used for encrypting large amounts of data. Instead, it is usually used as part of a hybrid cryptosystem: RSA encrypts a randomly generated key for a symmetric algorithm like AES, and the symmetric algorithm then encrypts the actual data, avoiding the length limits associated with RSA.
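As an illustration, Crypto++ exposes this limit directly on the encryptor, so the question's rsa_encode could guard against oversized input before building the StringSource. A minimal sketch (the choice of exception is an assumption, and it needs #include <stdexcept>):
// Inside rsa_encode, after constructing rsaEnc:
// FixedMaxPlaintextLength() returns 86 for a 1024-bit key with OAEP+SHA-1.
if( plainText.size() > rsaEnc.FixedMaxPlaintextLength() )
    throw std::invalid_argument( "plaintext too long for this RSA key/padding" );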
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Converting Numpy Lstsq residual value to R^2
I am performing a least squares regression as below (univariate). I would like to express the significance of the result in terms of R^2. Numpy returns a value of unscaled residual; what would be a sensible way of normalizing this?
field_clean,back_clean = rid_zeros(backscatter,field_data)
num_vals = len(field_clean)
x = field_clean[:,row:row+1]
y = 10*log10(back_clean)
A = hstack([x, ones((num_vals,1))])
soln = lstsq(A, y)
m, c = soln[0]
residues = soln[1]
print residues
A:
See http://en.wikipedia.org/wiki/Coefficient_of_determination
Your R2 value =
1 - residual / sum((y - y.mean())**2)
which is equivalent to
1 - residual / (n * y.var())
As an example:
import numpy as np
# Make some data...
n = 10
x = np.arange(n)
y = 3 * x + 5 + np.random.random(n)
# Note that polyfit is an easier way to do this...
# It would just be "model, resid = np.polyfit(x,y,1,full=True)[:2]"
A = np.vstack((x, np.ones(n))).T
model, resid = np.linalg.lstsq(A, y)[:2]
r2 = 1 - resid / (y.size * y.var())
print r2
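As a quick sanity check (reusing x, y and r2 from above): for a univariate fit with an intercept, R^2 equals the squared Pearson correlation, so the two numbers below should agree to floating-point precision:
r2_check = np.corrcoef(x, y)[0, 1] ** 2
print r2, r2_check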
|
{
"pile_set_name": "StackExchange"
}
|
Q:
When is a homomorphism an epimorphism?!
I want to prove the following characterization of an $R$-module homomorphism $g$ to be surjective:
"whenever the composition of $g:M→N$ and $k:N→Y$ is zero, then $k=0$".
One direction is easy: if $g$ is surjective then for any $n\in N$ we have $n=g(m)$ for some $m\in M$. Now, if the composition is zero we would have $k(n)=k(g(m))=0$. But for the converse I could not choose a suitable $Y$ and $k$. Thanks, in advance, for any cooperation!
A:
Let us assume that $g$ has the quoted property.
Let $Y = N/\text{im}(g)$ and $ k : N \rightarrow Y$ the natural projection. Then $k \circ g = 0$, so $k = 0$ by assumption. But $k$ is surjective, so we have to have $Y = 0$, hence $N = \text{im}(g)$ and so $g$ is surjective.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to compare two array objects and remove matched objects from one array object
My Code Scenario is:
var Employees= [{name:"Ram",htno:1245},{name:"mohan",htno:1246},
{name:"madhu",htno:1247},{name:"ranga",htno:1248}]
var seletedEmployees= [{name:"mohan"},{name:"ranga"}];
var employeesdataAfterremoveSelected = [?];
A:
You can store the selected employees' names in an array and then filter the Employees array, keeping only the employees whose name is not in this array:
var employees= [{name:"Ram",htno:1245},{name:"mohan",htno:1246},{name:"madhu",htno:1247},{name:"ranga",htno:1248}]
var selectedEmployees= ["mohan","ranga"];
var result = employees.filter(emp => !selectedEmployees.includes(emp.name));
console.log(result);
To programmatically get an array of strings instead of an array of objects, you can use map:
var selectedEmployees = [{name:"mohan"},{name:"ranga"}].map(emp => emp.name);
|
{
"pile_set_name": "StackExchange"
}
|
Q:
First POST to Azure MVC3 app very slow
As the title implies, the first "cold" POST to our MVC3 app in the Azure cloud is very slow. Once it "spins up", the normal requests are blazing fast. The first spin-up after a brief period of rest takes a few seconds. Subsequent requests can be measured in milliseconds.
How can we keep this thing awake?
A:
This is probably due to the application pool unloading after a period of inactivity. The next request has to take the overhead of starting it up again.
To confirm this, you need to turn on the performance counters and look at the number of app domain loads and unloads.
Either way, this blog post explains how to fix it.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
If $A$ is infinite and bounded, the infimum of the difference set of $A$ is zero.
Let $A$ be a non-empty subset of $\mathbb{R}$. Define the difference set to be
$A_d := \{b-a\;|\;a,b \in A \text{ and } a < b \}$
If $A$ is infinite and bounded then $\inf{A_d} = 0$.
Since $a < b$ we have $b - a > 0$. Thus zero is a lower bound for $A_d$ and $\inf(A_d) \geq 0$. I then want to show that if $\inf(A_d) = \epsilon > 0$ and $A$ is bounded, then $A$ is finite.
Let $\inf(A) = \beta$ and $\sup(A) = \alpha$. Then there can be at most $\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1$ real numbers in $A$. Suppose that there were more than $\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1$ numbers in $A$. Since $b - a \geq \epsilon$ for all $a, b \in A$ with $a < b$, we would have $\alpha - \beta \geq (\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1)\epsilon$. However this is a contradiction, since $(\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1)\epsilon > (\frac{\alpha - \beta}{\epsilon})\epsilon = \alpha - \beta$. Thus the cardinality of $A$ must be less than or equal to $\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1$, and $A$ is thus finite.
We have shown that if $\inf(A_d) > 0$ and $A$ is bounded, then $A$ cannot be infinite.
One question I have is whether this would be enough to prove the theorem. I'm sure that there are more efficient ways to formulate the above argument. I feel like this is a good opportunity for the pigeonhole principle but I don't really know how to "invoke" it. Critique is welcomed and appreciated.
A:
Your argument is ok for me. If you want to apply the Pigeon-Hole Principle: We have $A\subset [\inf A, \sup A]=[x,y]$ with $x<y$. For any $r>0$ take $n\in N$ such that $(y-x)/n<r.$ The set of $n$ intervals $S= \{[x+j(y-x)/n,x+(j+1)(y-x)/n] : 0\leq j<n\}$ covers $[x,y].$ Take any set $B$ of $n+1$ members of $A.$ At least two distinct $c,d\in B $ belong to the same member of $S.$ So $\exists c,d\in A\;(0<|c-d|\leq (y-x)/n<r).$
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Android - Checkbox not responding to clicks inside of FrameLayout
I'm working on a music player application, and I've created a FrameLayout which works as a "card," displaying information about each song (album art, title, artist, etc.) I'm trying to add checkboxes in this card, which I plan to customize to create upvote and downvote buttons. However, the checkboxes within the card do not respond to clicks. If I draw a checkbox in the main activity of the application (where I'm drawing the cards), it works fine.
Here is the onDraw() method of the card view ("MusicCardView"):
protected void onDraw(Canvas canvas) {
super.onDraw(canvas);
//Draw the voting controls
try {
CheckBox upvoteButton = new CheckBox(getContext());
upvoteButton.setX(20);
upvoteButton.setTop(-60);
upvoteButton.setOnClickListener(new View.OnClickListener() {
public void onClick(View view) {
// Is the button now checked?
Log.d("MainActivity", "upvote clicked");
}
});
CheckBox downvoteButton = new CheckBox(getContext());
downvoteButton.setX(20);
downvoteButton.setTop(-150);
downvoteButton.setOnClickListener(new View.OnClickListener() {
public void onClick(View view) {
// Is the button now checked?
Log.d("MainActivity", "downvote clicked");
}
});
upvoteButton.draw(canvas);
downvoteButton.draw(canvas);
} catch (Exception e) {
Log.e("MusicCardView", "exception", e );
}
}
The xml of the main activity ("PartyActivity"):
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/main_activity_container"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context="com.musique.PartyActivity">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
android:id="@+id/main_linear_view"
android:layout_marginBottom="56dp">
<com.musique.MusicCardView
android:layout_width="match_parent"
android:id="@+id/now_playing_view"
android:layout_height="80dp"
android:background="#ccc"
android:paddingBottom="5dp"
android:paddingLeft="5dp"
app:exampleColor="#000000"
app:songTitle="Money"
app:artist="Pink Floyd"
app:album="The Dark Side of the Moon"
app:exampleDimension="18sp"
app:exampleDrawable="@drawable/darkside"
app:exampleString="example" />
<Space
android:layout_width="match_parent"
android:layout_height="80dp"
android:id="@+id/controller_space"/>
<EditText
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/query"
android:layout_gravity="center"
android:layout_marginTop="20dp" />
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/searchBtn"
android:layout_gravity="center"
android:layout_marginTop="20dp"
android:text="Search"
android:background="#5eca99"/>
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/tvData"/>
<ScrollView
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
android:animateLayoutChanges="true"
android:id="@+id/main_scroll_view">
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="vertical"
android:id="@+id/scroll_child">
</LinearLayout>
</ScrollView>
</LinearLayout>
<android.support.design.widget.BottomNavigationView
android:id="@+id/navigation"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_marginEnd="0dp"
android:layout_marginStart="0dp"
android:background="?android:attr/windowBackground"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintLeft_toLeftOf="parent"
app:layout_constraintRight_toRightOf="parent"
app:menu="@menu/navigation" />
</android.support.constraint.ConstraintLayout>
And an example of a MusicCardView being the draw in PartyActivity
LayoutInflater vi = (LayoutInflater) getApplicationContext().getSystemService(Context.LAYOUT_INFLATER_SERVICE);
MusicCardView v = (MusicCardView) vi.inflate(R.layout.music_card_template, null);
v.setLayoutParams(new ViewGroup.LayoutParams(LayoutParams.MATCH_PARENT, 210));
LinearLayout linearLayout = findViewById(R.id.scroll_child);
v.setSongAttributes(song);
new RetrieveArtTask(v).execute(song);
Log.d("PartyActivity", "SongAttributes Set");
linearLayout.addView(v);
I've had a look at this question and this question, but had no luck with the solutions posted, and would greatly appreciate any advice you might have.
A:
If you draw the controls directly to the canvas they are basically just bitmaps; you need to add them to the view hierarchy (activity/fragment layout) for them to receive touch events.
As a side note, it's a big no-no to instantiate things in onDraw, especially views.
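For illustration, a minimal sketch of the suggested fix, placed in MusicCardView's constructor instead of onDraw (this assumes the card extends FrameLayout, as the question suggests; the usual android.widget/android.util imports are omitted):
// Add the CheckBox as a real child view so it participates in layout
// and receives touch events.
CheckBox upvoteButton = new CheckBox(getContext());
upvoteButton.setOnClickListener(new View.OnClickListener() {
    public void onClick(View view) {
        Log.d("MusicCardView", "upvote clicked");
    }
});
addView(upvoteButton, new FrameLayout.LayoutParams(
        FrameLayout.LayoutParams.WRAP_CONTENT,
        FrameLayout.LayoutParams.WRAP_CONTENT));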
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Tesseract Appears to be learning characters as you perform more OCRs, how do I save the learning data between uses?
I have a particular set of 10 images on which I perform OCR. They are all digits; somewhat short, about 20 digits in each image. There is one particular image: if I run it first, it will have some mismatches; however, if I run other tests first, then come back to that one, all characters match.
I am inclined to conclude that Tesseract is learning the characters as more OCR operations are performed, which makes me very happy. Now the question is whether it's possible for me to save the learning data, so Tesseract would know to pick it up the next time I use it.
A:
You can set classify_save_adapted_templates to 1 in your Tesseract config file to save the adapted templates, and set classify_use_pre_adapted_templates to 1 to load the templates the next time you run Tesseract.
The code that specifies the behavior of these options is here:
http://code.google.com/p/tesseract-ocr/source/browse/trunk/classify/classify.cpp?r=570
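In config-file form that is just two name/value lines (a sketch; Tesseract config files are plain "parameter value" pairs, and these two variable names come from the answer above):
classify_save_adapted_templates 1
classify_use_pre_adapted_templates 1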
|
{
"pile_set_name": "StackExchange"
}
|
Q:
excel 2010 Blank cell in place of 0
How do I extend this formula so that a 0 result shows as a blank ("") rather than 0?
=SUMIF(NotReady!AA17:AA50,5,NotReady!AB17:AB50)
I would like to do it in the formula rather than with cell formatting, please.
Thanks in advance!
A:
Let's call your formula x
x => "=SUMIF(NotReady!AA17:AA50,5,NotReady!AB17:AB50)"
If x = 0 then return "", otherwise display value. The formula for this would be
=if(x=0,"",x)
So, replace the above x's in the formula with your formula to get
=If(SUMIF(NotReady!AA17:AA50,5,NotReady!AB17:AB50)=0,"",SUMIF(NotReady!AA17:AA50,5,NotReady!AB17:AB50))
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Rebranding a GPL'd app as SaaS
Just a quick question since I'm a little iffy on exactly how the GPL works. Say I am developing a hosted software-as-a-service application, and I've found a free GPL app that does 90% of what I was going to write myself. Can I:
A) Take the code from the app, rebrand it by changing the name and/or logo and, without modifying a single line of code, sell it to people as a hosted service? Would I have to say something like "We are using Project X" with a link to its site? Or does nobody have to know that I'm using an open-source application unless I want them to?
B) Change the structure of the application, add in my own stuff (an extra module that the original app doesn't have, for instance) and not merge the code back into the main branch if the app will only ever be hosted, and not distributed to people?
C) Scrap the front-end entirely and write my own using another technology (Flex, for instance) but use the existing code (possibly modified as with scenario B above) as the back-end?
Can I do any of these? All of them? I'm really not 100% sure but it seems a shame to have to reinvent the wheel if there's an open source app that already does most of what my project would do; it seems a lot easier to be able to take that and add onto it to provide a better solution.
A:
IANAL, nor do I play one on the Internet. Get competent legal advice in your jurisdiction before going ahead with something like this.
A) Yes, No, Yes.
B) Yes.
C) Yes.
Now, as to whether it's moral to do all those things, well, that's a different question. I would expect that people would probably get a mite upset if you didn't contribute your changes back (it being not in the spirit of the GPL) but providing the output of a program (HTML, etc) isn't counted as redistribution. The AGPL, on the other hand, does have this restriction, to stop exactly this sort of thing.
A:
The GPL requires you to deliver the source code of the entire application under the GPL if you redistribute your software publicly. You are not doing this, so you won't have to. Companies like Google and Yahoo make handy use of this.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why is $|\sigma(\{A_1,A_2,\dots,A_N\})|\leq 2^{2^N}$?
I am trying to understand mathematically why $|\sigma(\mathcal{M})|\leq 2^{2^{N}}$ where $\mathcal{M}=\{A_{1},A_{2},\dots,A_{N}\}$ is a finite system of subsets of $X$. I found this below Definition 3 on page 1. If the proof is hard, please don't spend your time proving it, since I am not really good at understanding some proofs in this subject.
I have no problem imagining what the $\sigma$-algebra generated by a single subset of $X$ looks like. It is harder to imagine a $\sigma$-algebra generated by a system of several subsets of $X$.
If we have $N=1$, then $\sigma(\{A_{1}\})=\{\emptyset,A_{1},X\setminus A_{1},X\}$.
It would be problematic to construct a $\sigma$-algebra for $N=2$, since we don't know what $A_{1}$ and $A_{2}$ look like. If we assume that $\{A_{1},A_{2}\}$ is a set partition of $X$, then we have $\sigma(\{A_{1},A_{2}\})=\mathcal{P}(X)$.
What would the construction of the $\sigma$-algebra look like, without actually constructing one, if $A_{1}\cap A_{2}\neq \emptyset$? For example, is there a system that contains $\sigma(\{A_{1},A_{2}\})$? I agree that there are many sets in $\sigma(\{A_{1},A_{2}\})$.
My incomplete imagination is $\sigma(\{A_{1},A_{2}\})\subseteq\mathcal{P}(X)\subseteq \mathcal{P}(\mathcal{P}(\{A_{1},A_{2}\}))$ and then I would conclude that
$$|\sigma(\{A_{1},A_{2}\})|\leq |\mathcal{P}(\mathcal{P}(\{A_{1},A_{2}\}))|=2^{|\mathcal{P}(\{A_{1},A_{2}\})|}\leq 2^{2^{2}}.$$
I just feel like $\mathcal{P}(\mathcal{P}(\{A_{1},A_{2}\}))$ isn't a good idea, since some of its elements have nothing to do with $\sigma(\{A_{1},A_{2}\})$.
A:
Say $E$ is an atom if $$E=\bigcap_{j=1}^NB_j,$$where for every $j$ either $B_j=A_j$ or $B_j=A_j^c$. (Those need not be "real" atoms; call them that anyway.) There are at most $2^N$ atoms. You can check that the set of finite unions of atoms (including the empty union) is an algebra; hence it's the algebra generated by $A_1,\dots,A_N$, which hence has no more than $2^{2^N}$ elements.
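For a concrete check with $N=2$: the atoms are $A_1\cap A_2$, $A_1\cap A_2^c$, $A_1^c\cap A_2$ and $A_1^c\cap A_2^c$ (at most $2^2=4$ of them, some possibly empty), and every element of $\sigma(\{A_1,A_2\})$ is a union of some subset of these atoms, giving at most $2^4=2^{2^2}=16$ elements.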
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why does wrapping the Data.Binary.Put monad create a memory leak? (Part 2)
As in my previous question, I'm trying to wrap the Data.Binary.Put monad in another monad so that later I can ask it questions like "how many bytes is it going to write" or "what is the current position in the file".
Before, I thought that understanding why it leaks memory while using a trivial (IdentityT?) wrapper would lead me to solving my problem. But even though you guys have helped me resolve the problem with the trivial wrapper, wrapping it with something useful like StateT or WriterT still consumes too much memory (and usually crashes).
For example, this is one way I'm trying to wrap it and which leaks memory for big input:
type Out = StateT Integer P.PutM ()
writeToFile :: String -> Out -> IO ()
writeToFile path out = BL.writeFile path $ P.runPut $ do runStateT out 0
return ()
Here is a more complete code sample that demonstrates the problem.
What I would like to know is this:
What is happending inside the program that causes the memory leak?
What can I do to fix it?
For my second question I think I should explain in more detail what I intend the data to look like on disk: it is basically a tree structure where each node of the tree is represented as an offset table to its children (plus some additional data). So to calculate the offset of the n-th child in the offset table I need to know the sizes of children 0 to n-1 plus the current offset (to simplify things, let's say each node has a fixed number of children).
Thanks for looking.
UPDATE:
Thanks to nominolo I can now create a monad that wraps around Data.Binary.Put, tracks the current offset and uses almost no memory. This is done by dropping the StateT transformer in favor of a different state-threading mechanism that uses continuations.
Like this:
type Offset = Int
newtype MyPut a = MyPut
{ unS :: forall r . (Offset -> a -> P.PutM r) -> Offset -> P.PutM r }
instance Monad MyPut where
return a = MyPut $ \f s -> f s a
ma >>= f = MyPut $ \fb s -> unS ma (\s' a -> unS (f a) fb s') s
writeToFile :: String -> MyPut () -> IO ()
writeToFile path put =
BL.writeFile path $ P.runPut $ peal put >> return ()
where peal myput = unS myput (\o -> return) 0
getCurrentOffset :: MyPut Int
getCurrentOffset = MyPut $ \f o -> f o o
lift' n ma = MyPut $ \f s -> ma >>= f (s+n)
However I still have a problem with tracking how many bytes MyPut is going to write to disk. In particular, I need a function with a signature like this:
getSize :: MyPut a -> MyPut Int
or
getSize :: MyPut a -> Int
My approach was to wrap the MyPut monad inside the WriterT transformer (something like this). But that started to consume too much memory again. As sclv mentions in comments under nominolo's answer, WriterT somehow cancels out the effect of continuations. He also mentions that getting the size should be possible directly from the MyPut monad that I already have, but all my attempts to do so ended in non-compilable code or an infinite loop :-|.
Could someone please help further?
A:
It looks like the monad transformer is too lazy. You can create a heap profile (without having to build it specially) by running the program with:
$ ./myprog +RTS -hT
$ hp2ps myprog.hp
$ open hp2ps.ps # Or whichever viewer you have
In this case it's not particularly helpful, because it only shows lots of PAPs, FUN_1_0s and FUN_2_0s. This means the heap is made up of lots of partially applied functions, and functions of one argument and two arguments. This usually means that something is not evaluated enough. Monad transformers are somewhat notorious for this.
The workaround is to use a stricter monad transformer written in continuation passing style. (This requires {-# LANGUAGE Rank2Types #-}.)
newtype MyStateT s m a =
MyStateT { unMyStateT :: forall r. (s -> a -> m r) -> s -> m r }
Continuation passing style means that instead of returning a result directly, we call another function, the continuation, with our result, in this case s and a. The instance definitions look a bit funny; to understand them, read the link above (Wikipedia).
instance Monad m => Monad (MyStateT s m) where
return x = MyStateT (\k s -> k s x)
MyStateT f >>= kk = MyStateT (\k s ->
f (\s' a -> unMyStateT (kk a) k s') s)
runMyStateT :: Monad m => MyStateT s m a -> s -> m (a, s)
runMyStateT (MyStateT f) s0 = f (\s a -> return (a, s)) s0
instance MonadTrans (MyStateT s) where
lift act = MyStateT (\k s -> do a <- act; k s a)
type Out = MyStateT Integer P.PutM ()
Running it now gives constant space (the "maximum residency" bit):
$ ./so1 +RTS -s
begin
end
8,001,343,308 bytes allocated in the heap
877,696,096 bytes copied during GC
46,628 bytes maximum residency (861 sample(s))
33,196 bytes maximum slop
2 MB total memory in use (0 MB lost due to fragmentation)
Generation 0: 14345 collections, 0 parallel, 3.32s, 3.38s elapsed
Generation 1: 861 collections, 0 parallel, 0.08s, 0.08s elapsed
The downside of using such strict transformers is that you can no longer define MonadFix instances and certain laziness tricks no longer work.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Arduino + SIM900 Module - Not getting the command response correctly
I'm having trouble getting the correct response from a SIM900 module; if I use the code as it is in the example it works just fine.
for this command:
void GetContacts(){
mySerial.print("AT+CPBF=\"Mailbox\"");
delay(100);
mySerial.println();
}
and with this print code:
if (mySerial.available()){
Serial.write(mySerial.read());
}
I get:
AT+CPBF="Mailbox"
+CPBF: 1,"+584125089112",145,"Mailbox3"
+CPBF: 2,"+584264273127",145,"Mailbox1"
+CPBF: 3,"+584147373665",145,"Mailbox2"
OK
which is perfect, but if I try to read the output and then print it like this:
if (mySerial.available()){
int intValue = mySerial.read();
String stringOne;
stringOne = String(intValue, HEX); //int to HEX
char charConversion;
charConversion = hexNibbleToChar(stringOne[0]) * 16 + hexNibbleToChar(stringOne[1]); //HEX to Char
contactString += charConversion;
Serial.println(contactString);
}
char hexNibbleToChar(char nibble){
if (nibble >= '0' && nibble <= '9')
return nibble - '0';
else if (nibble >= 'a' && nibble <= 'f')
return 10 + nibble - 'a';
else
return 10 + nibble - 'A';
}
I get:
AT+CPBF="Mailbox"
+CPBF: 1,"+584125089112",145,"Mailbox3"
+CPBF: 2,"+58426
It suddenly stops there and I have no idea why. I've tried just reading and printing right after the int intValue = mySerial.read(); line, and when I convert the decimal string that I got there to char with any online converter the result is the same.
Does any of you see what I'm doing wrong?
Thanks,
Juan Docal
A:
Well, for those of you with the same problem or something related, my solution was to save the whole response in a String variable and then process it once mySerial wasn't available anymore:
String contactString = "";
if(mySerial.available()){
contactString += (char) mySerial.read();
}
else{
if(contactString != ""){
//Process response
}
contactString = "";
}
I was processing all the data within mySerial.available() and somehow I was truncating the response...
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Recursive AngularJS directive
Currently I am working on a project using AngularJS and I need to create a directive which is able to display (and alter) a "recursive structure" - something similar to a family.
The structure defined in the controller looks like this:
$scope.members = [
{ firstName: 'Andrei', lastName: 'Croitoriu', age: 32 },
{ firstName: 'John', lastName: 'Doe', age: 25, members: [
{
firstName: 'Jane', lastName: 'Doe', age: 24
}
]}
];
I've managed to implement the directive which displays the structure, but now I am struggling to implement an "Add Member" feature. Basically, for each member I want to have a button next to it and be able to add a new member to that node. I would like to have a single method defined in the controller. All my attempts so far failed, so in the plnkr code attached I've removed them (and left just a simple addMember method which adds a new member to the top-level collection).
Can anybody suggest any idea on how to pass the addMember method to the directives and implement the behavior I expect?
My code can be found here: http://plnkr.co/edit/hRzQiW?p=info
Thank you in advance!
Andrei
A:
A fast fix is to put addMember in the familyMember directive: as the directive scope is isolated, the function on the controller's scope is not accessible.
see the plunker here: http://plnkr.co/edit/h6vZu7?p=preview
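If you would rather keep the single addMember method on the controller, a rough sketch of the alternative is to pass it into the isolate scope with an '&' binding (the directive name comes from the answer; the template and attribute names here are assumptions):
// Sketch: expose the controller's addMember to the isolated scope.
app.directive('familyMember', function() {
  return {
    restrict: 'E',
    scope: { member: '=', addMember: '&' },
    template: '<button ng-click="addMember({parent: member})">Add member</button>'
  };
});
// Usage: <family-member member="m" add-member="addMember(parent)"></family-member>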
|
{
"pile_set_name": "StackExchange"
}
|
Q:
how to update Records using nsupdate?
We know that we can update a record (its IP) by doing these steps:
nsupdate
server ns.bar44.com
zone bar44.com
update delete somehost.bar44.com. A
update add somehost.bar44.com. 86400 A 10.10.10.1
show
send
As we can see, we know that somehost.bar44.com. exists in the DB. This will work if I want to update the IP of an existing record, but what if I want to change the hostname, not the IP? E.g., if I want to make 10.10.10.1 the IP of somehost22.bar44.com., what will let me know that the IP is already taken by somehost.bar44.com.?
Also, is there a way to delete the whole DB of a certain zone using nsupdate?
A:
DISCLAIMER: use this script at your own risk
What does it do?
As the OP wants: let's say he wants to add somedomain.bar44.com and somedomain44.bar44.com already exists in the zone; then the script should remove somedomain44.bar44.com and add somedomain.bar44.com to the zone. Tested with bind9 on Ubuntu.
In short, it will add xyz.bar.com if it does not exist; if it does exist, it will remove it (xyz*.bar.com) and re-add it with the new information provided by you.
Script :
#!/bin/bash
#
## Update DNS Records Interactive
## Rahul Patil <http://www.linuxian.com>
#
## Functions
#
ask() {
while [[ $ans == "" ]]
do
read -p "${@}" ans
done
echo $ans
}
forward_zone_update() {
local rr=${@}
echo "
server $DNS_SERVER
zone $DNS_ZONE
update add $rr
show
send" | nsupdate
}
delete_record() {
local rr=${@}
echo "
server $DNS_SERVER
zone $DNS_ZONE
update delete $rr
show
send" | nsupdate
}
#
## Global Variable
#
DNS_IP="127.0.0.1"
DNS_SERVER="ns1.rahul.local"
DNS_ZONE="rahul.local"
DIG_CMD='dig +noquestion +nocmd +nostat +nocomments'
update_rr_a=$( ask "Enter FQDN of Record (Ex. xyz.${DNS_ZONE}) :-")
update_rr=$( ask "Enter IP of Record :-")
found_rr=$($DIG_CMD @${DNS_IP} AXFR ${DNS_ZONE} | grep ^"${update_rr_a%.$DNS_ZONE}" | tee /tmp/rr.tmp )
echo "Checking ${update_rr_a}..."
if [[ -z "${found_rr}" ]]
then
echo "${update_rr_a} does not exist"
echo "${update_rr_a} adding to ${DNS_ZONE}"
forward_zone_update "${update_rr_a} 86400 IN A ${update_rr}"
echo "Done!!"
else
echo "${update_rr_a} already exists"
ans=$(ask "Do you want to Delete RR and want to re-add(y/n?)")
case $ans in
[yY]|[yY][eE][sS]) while read r;
do delete_record $r ;
done < /tmp/rr.tmp ;;
[nN]|[nN][oO]) exit 1 ;;
esac
forward_zone_update "${update_rr_a} 86400 IN A ${update_rr}"
echo "Done!!"
fi
|
{
"pile_set_name": "StackExchange"
}
|
Q:
NHibernate data retrieve problem
This code works well when saving data.
But it is unable to retrieve data from the b_TeacherDetail table. For example:
TeacherRepository tRep = new TeacherRepository();
Teacher t = tRep.Get(12);
Here, t.TeacherDetail is null. But I know that there is an entry in the b_TeacherDetail-table for teacher-id 12.
Why?
My tables are:
Teacher {ID, Name, IsActive, DesignationID, DepartmentID}
TeacherDetail {ID, TeacherID, Address, MobileNo}
Teacher.cs
public class Teacher
{
public virtual int ID { get; set; }
public virtual string Name { get; set; }
public virtual bool IsActive { get; set; }
public virtual TeacherDetail TeacherDetail { get; set; }
public virtual Designation Designation { get; set; }
public virtual Department Department { get; set; }
}
TeacherDetail.cs
public class TeacherDetail
{
public virtual int ID { get; set; }
public virtual Teacher Teacher { get; set; }
public virtual string Address { get; set; }
public virtual string MobileNo { get; set; }
}
Teacher.hbm.xml
<class name="Teacher" table="b_Teacher">
<id name="ID" column="ID">
<generator class="native"/>
</id>
<property name="Name" column="Name" />
<property name="IsActive" column="IsActive" />
<one-to-one class="TeacherDetail" name="TeacherDetail" cascade="all" />
<many-to-one name="Department" class="Department" unique="true" column="DepartmentID" />
<many-to-one name="Designation" class="Designation" unique="true" column="DesignationID" />
</class>
TeacherDetail.hbm.xml
<class name="TeacherDetail" table="b_TeacherDetail">
<id name="ID" column="ID">
<generator class="native"/>
</id>
<property name="Address" column="Address" />
<property name="MobileNo" column="MobileNo" />
<many-to-one name="Teacher" class="Teacher" column="TeacherID" unique="true" />
</class>
Repository.cs
public class Repository<T> : IRepository<T>
{
... ... ...
public T Get(object id)
{
T obj = default(T);
try
{
if (!_session.Transaction.IsActive)
{
_session.BeginTransaction();
obj = (T)_session.Get<T>(id);
_session.Transaction.Commit();
_session.Flush();
}
else
{
throw new Exception(CustomErrorMessage.TransactionAlreadyInProgress);
}
}
catch (Exception)
{
_session.Transaction.Rollback();
_session.Clear();
throw;
}
return obj;
}
... ... ...
}
TeacherRepository .cs
public class TeacherRepository : Repository<Teacher>
{
}
A:
You are missing the reference to TeacherDetail from the Teacher point of view in your mapping (NHibernate does not know how to fetch the entity).
So in Teacher.hbm.xml change the one-to-one element to:
<one-to-one class="TeacherDetail" name="TeacherDetail" cascade="all" property-ref="Teacher" />
which tells it to fetch the TeacherDetail whose Teacher property id value is equal to this (Teacher) class's id value.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Contextual Selectors in CSS2
I'm wondering why styling an element within a specific class, like this:
.reddish H1 { color: red }
is shown as an example of correct syntax in the CSS1 specification under Contextual selectors:
Cascading Style Sheets, level 1
but it's not shown as an example in the CSS2 spec:
Cascading Style Sheets, Level 2
At least I can't find an example of it. Have the syntax rules for this changed in CSS2, or is it simply inferred as correct syntax?
A:
That syntax is correct, but the example may have changed for a couple of reasons.
Firstly it is not best practice to name classes by the description of what they do. In the case of .reddish h1, the example CSS shows that it is to be coloured red. However, if in a later design change the h1 should in fact be blue then
.reddish h1 { color: blue; }
makes little sense. You should name your classes by their function or purpose on the page and not by what style they are supposed to represent.
Secondly, it isn't advised to use keywords for colours, as the colour you receive is down to browser interpretation. Instead of 'red' you should use the hex code '#ff0000' to get an accurate colour in all browsers. (Red may not be the best example here, but there are some strange colour keywords out there.)
While neither of these things is that bad, they both could add up to why the example has changed in the spec.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can someone remove an upvote from an answer that was posted a long time ago, and hasn't been edited?
Possible Duplicate:
How long can you change your vote?
Re: Problem with swfobject_api or jwplayer module in Drupal
I just received an 'unupvote' for the above answer. My understanding (from experience) is that once you have upvoted, you cannot remove said upvote after a certain amount of time has passed, unless the post has been edited.
The answer was posted in September last year, and hasn't been edited.
Who (apart from devs) has the power to do this? Or is this something available to everyone and I've just not worked out how to do it yet?
A:
One really fancy way to do this is to edit the post, remove your upvote, and then manually revert your edit (which destroys the revision, so there's no evidence of the edit).
Note: This method is only available to people with edit privileges and will likely not be around for long.
Interesting point: The timeline for that post doesn't show that the answer was ever upvoted, so apparently that is stricken from history as well.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What does knockout mean?
I've been reading the How To Brew book, which is generally good, but the author uses the term "knockout" without definition a couple of times. What does it mean?
A:
According to this page, it's the very end of the boil. Minute 0.
A:
Yep. It's when you shut off the burner (or heat source) on your boil kettle. You "knock-out" the flame.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Assign a specific user to a signature block in Docusign
I am using the docusign-csharp-client to enable electronic signatures in our application.
I have a PDF that requires a customer signature as well as a sales person's signature.
We use the C# client to perform the embedded signature process for the customer and the salesperson. With the C# client we create the envelope, upload the PDF to be signed, and also add the signers and their info to the envelope. Everything seems to be working except I can't figure out how to get signature box 1 to show up for the customer to sign and signature box 2 to show up for the salesperson to sign.
The signature boxes have been added to the PDF. But when each of the two parties gets into DocuSign, no signature box appears for either of them.
How do I link the customer to SignatureBox1 and the salesperson to SignatureBox2?
Below is the class I created for making the call to Docusign. The method SendContractToDocusign() is what kicks off the process.
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Core.Domain.Models;
using Core.Interfaces;
using DocuSign.eSign.Api;
using DocuSign.eSign.Client;
using DocuSign.eSign.Model;
using Configuration = DocuSign.eSign.Client.Configuration;
namespace Infrastructure.Econtract
{
public class DocuSignContract : IDigitalContract
{
private readonly IConfigurationSettings _configurationSettings;
public DocuSignContract(IConfigurationSettings configurationSettings)
{
_configurationSettings = configurationSettings;
var username = configurationSettings.DocusignUserName;
var password = configurationSettings.DocusignPassword;
var integratorKey = configurationSettings.DocusignIntegratorKey;
var clientRestUrl = configurationSettings.DocusignClientRestUrl;
var apiClient = new ApiClient(clientRestUrl);
Configuration.Default.ApiClient = apiClient;
if (!Configuration.Default.DefaultHeader.ContainsKey("X-DocuSign-Authentication"))
{
// configure 'X-DocuSign-Authentication' header
string authHeader = "{\"Username\":\"" + username + "\", \"Password\":\"" + password + "\", \"IntegratorKey\":\"" + integratorKey + "\"}";
Configuration.Default.AddDefaultHeader("X-DocuSign-Authentication", authHeader);
}
}
private static string Authenticate()
{
var authApi = new AuthenticationApi();
LoginInformation loginInfo = authApi.Login();
//user might be a member of multiple accounts
string accountId = loginInfo.LoginAccounts[0].AccountId;
return accountId;
}
private static Document CreateDocusignDocument(ContractDocument document)
{
return new Document
{
DocumentBase64 = Convert.ToBase64String(document.Document),
Name = document.FormCode ?? "DocuSignContract.pdf",
DocumentId = document.TrackingIdentity.ToString(),
TransformPdfFields = "false"
};
}
private EnvelopeDefinition CreateEnvelope(Document doc)
{
//Set envelope status to "sent" to instruct Docusign to immediately send the signature request upon receipt
var envDef = new EnvelopeDefinition
{
EmailSubject = "[DocuSignContract] - Please eSign this doc",
Documents = new List<Document> { doc },
Status = "sent"
};
return envDef;
}
private static void AddSigners(EnvelopeDefinition envelope, List<ContractSigner> signers)
{
envelope.Recipients = new Recipients
{
Signers = new List<Signer>()
};
signers.ForEach(x => envelope.Recipients.Signers.Add(
new Signer
{
Name = x.FullName,
Email = x.Email,
RecipientId = ((int)x.Type).ToString(),
ClientUserId = x.UserId
}
));
}
private string SendContractToDocusign(string accountId, EnvelopeDefinition envelope, List<ContractSigner> signers)
{
if (signers == null) throw new ArgumentNullException(nameof(signers));
// Use the EnvelopesApi to create and send the signature request
var envelopesApi = new EnvelopesApi();
EnvelopeSummary envelopeSummary = envelopesApi.CreateEnvelope(accountId, envelope);
return envelopeSummary.EnvelopeId;
}
public async Task<byte[]> GetSignedDocument(string envelopeId)
{
string accountId = Authenticate();
var envelopesApi = new EnvelopesApi();
var docList = await envelopesApi.ListDocumentsAsync(accountId, envelopeId);
using (var docStream = (MemoryStream)envelopesApi.GetDocument(accountId, envelopeId, docList.EnvelopeDocuments[0].DocumentId))
{
return docStream.ToArray();
}
}
public string SendContractToDocusign(ContractDocument contractData, List<ContractSigner> signers)
{
//Authenticate with Docusign and get account Id
string accountId = Authenticate();
//Create Docusign document from contract
Document doc = CreateDocusignDocument(contractData);
//Create an envelope and insert the document
EnvelopeDefinition envelope = CreateEnvelope(doc);
//Add signers
AddSigners(envelope, signers);
//Send the document to docusign
var envelopeId = SendContractToDocusign(accountId, envelope, signers);
return envelopeId;
}
public string GetSigningUrl(string envelopeId, ContractSigner signer, string returnUrl)
{
//Authenticate with Docusign and get account Id
string accountId = Authenticate();
var envelopesApi = new EnvelopesApi();
var viewOptions = new RecipientViewRequest()
{
ReturnUrl = returnUrl,
ClientUserId = signer.UserId,
AuthenticationMethod = "Password",
UserName = signer.FullName,
Email = signer.Email
};
// create the recipient view (aka signing URL)
ViewUrl recipientView = envelopesApi.CreateRecipientView(accountId, envelopeId, viewOptions);
return recipientView.Url;
}
}
}
A:
If you have a PDF with form fields and want DocuSign to automatically convert your PDF form fields to DocuSign tabs, then you need to create the PDF with DS standard form field names. The PDF Transform Standards will help you in knowing what your PDF form field names should be.
Also, to convert PDF fields to DS tabs automatically, you need to set TransformPdfFields = "true"; this tells DocuSign to convert the PDF fields to DS tabs.
Or, if you are not using the way described above, you need to add the DS tabs yourself, either using an anchor string or X/Y positions. DS has recipes available at Recipes.
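Applied to the question's CreateDocusignDocument method, that first option is a one-line change (a sketch; everything else stays as in the question):
return new Document
{
    DocumentBase64 = Convert.ToBase64String(document.Document),
    Name = document.FormCode ?? "DocuSignContract.pdf",
    DocumentId = document.TrackingIdentity.ToString(),
    // "true" tells DocuSign to convert the PDF's form fields into DS tabs;
    // the PDF field names must follow the DS transform naming standards.
    TransformPdfFields = "true"
};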
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to convert percentile rank into z score in R
I have a vector of percentile ranks. I want to convert them into z-scores, so the data will be on an interval scale. I have to do it in R, but I could not find a function or package that can do this. Does anyone have any idea?
A:
You would apply the inverse cdf to the percentile ranks to convert them to quantiles, so if you want standard normals, $z=\Phi^{-1}(p)$ should do what you seem to be asking for.
However, this transformation won't of itself make an ordinal scale into an interval scale.
In R, you would do this as:
z <- qnorm(p)
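For example (assuming the ranks are expressed as proportions in (0,1); if they run from 0 to 100, divide by 100 first):
p <- c(0.025, 0.25, 0.5, 0.75, 0.975)
qnorm(p)
# -1.9600 -0.6745  0.0000  0.6745  1.9600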
|
{
"pile_set_name": "StackExchange"
}
|
Q:
malloc has too many arguments
I malloc a 2d array. The 2d array is part of a struct, and when I try to malloc it I get an error that malloc has too many arguments.
malloc(world->representation, sizeof(int *) * mapHeight);
int i;
for (i = 0; i < mapHeight, i++ )
{
malloc(world->representation[i], sizeof(int) * mapWidth);
}
How should this be malloced if it's part of a struct?
A:
You are using malloc incorrectly. The proper usage is:
world->representation = malloc(sizeof(int *) * mapHeight);
and
world->representation[i] = malloc(sizeof(int) * mapWidth);
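Putting it together with the loop header corrected (the question's for loop uses a comma where a semicolon is required, so it won't compile as written; error checking of the malloc results is omitted in this sketch):
/* Allocate mapHeight row pointers, then mapWidth ints per row. */
world->representation = malloc(sizeof(int *) * mapHeight);
int i;
for (i = 0; i < mapHeight; i++)
{
    world->representation[i] = malloc(sizeof(int) * mapWidth);
}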
|
{
"pile_set_name": "StackExchange"
}
|
Q:
bullets are not displaying with
Bullets are not displaying. When I tried the same thing in jsFiddle, it worked. What might be the reason?
<div id="mainInfo" style="display: none;" title="Customer Information">
<div id="greenCheck" style="display: none;">
<img src= "IMAGE_SOURCE">
</div>
<div id="subInfo" style="display: none;">
</div>
</div>
<div>
//button definition onclick="show()"
</div>
//javascript
function show()
{
var htmlstr = "<p><b>The customer provided the following:</b></p><br/><ul>";
if(true)//some condition which i made true for testing
{
htmlstr = htmlstr + "<li>Date of Birth</li>";
htmlstr = htmlstr + "<li>Social Security Number</li>";
}
htmlstr = htmlstr + "</ul>";
document.getElementById('mainInfo').style.display = 'block';
document.getElementById('greenCheck').style.display = 'block';
document.getElementById('subInfo').style.display = 'block';
jQuery("#subInfo").html(htmlstr);
jQuery("#mainInfo").dialog({
//dialog code
}
}
list-style-type:none is applied through a CSS file. I modified the local file and it worked :)
A:
You could add this CSS to make sure you get bullet points only in the list that you are dynamically adding:
#subInfo ul li {
list-style-type: disc;
}
Here are your options:
Formal syntax: disc | circle | square | decimal | decimal-leading-zero
| lower-roman | upper-roman | lower-greek | lower-latin | upper-latin
| armenian | georgian | lower-alpha | upper-alpha | none
Source: https://developer.mozilla.org/en-US/docs/Web/CSS/list-style-type
As for your updated question, there are a lot of different approaches that would work, from putting the image and the bullet list in a table (which I tend to think is a bad idea) to using containing DIVs. However, if you didn't want to change the HTML (and the JavaScript), then perhaps add a padding-left to the li in the CSS to push the list items so that they appear to line up with the text. It's kind of a hack, but it would be easy to reason about and require no change to the HTML.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Analyzing Server Logs
I currently have a medium-sized website that probably has a few security flaws. That's probably normal, I think. You can't catch everything. The problem is, I also have a couple of script kiddies who think it's fun to try day and night to crack my website, to do who knows what. Probably something like deleting the DB. I have backups, but it's still a toll on RAM and CPU, and I'd prefer to stop it. Is there any way I can analyze the server logs to easily find out which entries are caused by the script kiddies? They'd probably be identified by multiple hits per minute, but it's a pain to go through and pick out those entries when I could be doing something worthwhile.
A:
cat access_log | awk '{print $1}' | sort | uniq -c |sort -g
should produce an ordered list of ip addresses that are hitting your site, the first column will be the number of hits, the second the ip.
You might have to change the value $1; this is the position of the IP address field in the logfile line. On my webserver it's first, hence $1. Fields are defined as 'separated by white space', so the next entry is $2, etc.
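If you only want the heaviest hitters (assuming the same log format), append a threshold filter, e.g. to show only addresses with more than 100 hits:
cat access_log | awk '{print $1}' | sort | uniq -c | sort -g | awk '$1 > 100'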
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Confusing box2d forces
Hello Friends.
This is my demo game screenshot. Here I am using three buttons. The bottom-right button is used for jumping and the bottom-left buttons are used for moving left and right.
I have some questions
1) Should I use a linear impulse to make the body jump?
2) For moving right and left, which type of force should I apply?
Please tell me; I am confused about whether to use a linear impulse, apply a force, or set the linear velocity.
Thanks in advance
A:
For moving left/right you can use velocity. When you need to stop your character, just set the velocity to (0.f, 0.f). For jumps, use an impulse. If your b2World has gravity, it will calculate the jump trajectory by itself.
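A rough sketch with the Box2D C++ API (the body pointer and magnitudes are placeholders; newer Box2D versions also take a wake flag in ApplyLinearImpulse):
// Move right: set the horizontal velocity, keep the vertical component.
b2Vec2 vel = body->GetLinearVelocity();
body->SetLinearVelocity(b2Vec2(5.0f, vel.y));   // use -5.0f to move left

// Stop: zero the velocity, as described above.
body->SetLinearVelocity(b2Vec2(0.0f, 0.0f));

// Jump: a one-off vertical impulse at the center of mass; gravity does the rest.
body->ApplyLinearImpulse(b2Vec2(0.0f, 10.0f), body->GetWorldCenter());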
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Hashing an uploaded file in HTML
I am a beginner in HTML and JavaScript. I want to upload an image using HTML5 and get its hash with JavaScript. After that, I want to send the hash value to a PHP server. I uploaded an image with a web browser, but I don't know how I can hash the image. I saw many guides but none helped me. My HTML code is below. Please fill it in and help me.
HTML:
<!DOCTYPE html>
<html>
<body>
<input type="file" id="customerFile" name="Documents"/>
<script>
// Get hash value
// Send it to the server
</script>
</body>
</html>
I should hash it at the HTML (client) level. I shouldn't send the server any file.
A:
Please check the snippet below. You can send the hash value to the server by sending a POST request with the result.
function calculateMD5Hash(file, bufferSize) {
let def = Q.defer();
let fileReader = new FileReader();
let fileSlicer = File.prototype.slice || File.prototype.mozSlice || File.prototype.webkitSlice;
let hashAlgorithm = new SparkMD5();
let totalParts = Math.ceil(file.size / bufferSize);
let currentPart = 0;
let startTime = new Date().getTime();
fileReader.onload = function (e) {
currentPart += 1;
def.notify({
currentPart: currentPart,
totalParts: totalParts
});
let buffer = e.target.result;
hashAlgorithm.appendBinary(buffer);
if (currentPart < totalParts) {
processNextPart();
return;
}
def.resolve({
hashResult: hashAlgorithm.end(),
duration: new Date().getTime() - startTime
});
};
fileReader.onerror = function (e) {
def.reject(e);
};
function processNextPart() {
let start = currentPart * bufferSize;
let end = Math.min(start + bufferSize, file.size);
fileReader.readAsBinaryString(fileSlicer.call(file, start, end));
}
processNextPart();
return def.promise;
}
function calculate() {
let input = document.getElementById('file');
if (!input.files.length) {
return;
}
let file = input.files[0];
let bufferSize = Math.pow(1024, 2) * 10; // 10MB
calculateMD5Hash(file, bufferSize).then(
function (result) {
// Success
console.log(result);
// SEND result TO THE SERVER
},
function (err) {
// There was an error,
});
}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>test</title>
</head>
<body>
<script src="https://cdnjs.cloudflare.com/ajax/libs/q.js/1.4.1/q.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/spark-md5/2.0.2/spark-md5.min.js"></script>
<div>
<input type="file" id="file"/>
<input type="button" onclick="calculate();" value="Calculate Hash" class="btn primary"/>
</div>
</body>
</html>
|
{
"pile_set_name": "StackExchange"
}
|