_id | partition | text | language | title
---|---|---|---|---|
d7301 | train | Yes, SICP is still a great book! The second edition, from 1996, is available online. Although, if you just want to learn Scheme rather than fundamental computer science, you might be better off with Teach Yourself Scheme in Fixnum Days.
A: I strongly encourage you to check out the book How to Design Programs. It focuses on the fundamentals of programming, not on the specific language, but it also uses Scheme as its language. It's also available free online.
You can also check out the current release of the second edition, which is in preparation (or the less-stable but more up-to-date current draft).
A: Firstly, you're looking at the first edition. The second edition is from 1996.
You should VERY MUCH tackle the book. I've gone through about half and my mind is blown. I can't begin to explain how amazing it is. Not only will you develop an appreciation for elegance in programming, but you'll see the line blurred between coding and computer science.
Don't approach this book like a programming book. Approach it as if you want to learn the fundamentals of computation and computer science using programming as a means of expression.
A: SICP is one of the best books I've read for learning how to write programs well. I never used scheme outside of the work I did in that book, but it's well worth your time. | unknown | |
d7302 | train | If it is a saved password, you will be able to find it in the security and password settings (see link for more information). If you've saved a password on a different portion of the site (say you have a form on www.site.com/login.html and www.site.com/admin.html for example), it could be pulling it from there. | unknown | |
d7303 | train | First, you need the right permission to delete a registry key (try running the Perl script from a CMD with admin privileges); second, according to the documentation, you can only delete a key provided it does not contain any subkeys:
You can use the Perl delete function to delete a value from a Registry
key or to delete a subkey as long as that subkey contains no subkeys of
its own.
Third, even if you run with admin privileges there can still be keys that you cannot delete, see this Q&A for more information.
So if you want to delete an entire subtree, you need to iterate bottom up through the tree and delete each subkey separately. Here is an example:
use feature qw(say);
use warnings;
use strict;
use Data::Dumper qw(Dumper);
use Win32::RunAsAdmin qw(force);
use Win32API::Registry qw(regLastError KEY_READ KEY_WRITE);
use Win32::TieRegistry( Delimiter=>"/", ArrayValues=>0 );
{
# Choose the top root node that should be deleted
# Note: This and all its subkeys will be deleted from the registry.
# Note: Will only succeed if you have permission to write to each sub key
my $top_key_name = "HKEY_CLASSES_ROOT/Directory/Background/shell/Foo";
my $tree = $Registry->Open(
$top_key_name,
{ Access=>KEY_READ()|KEY_WRITE(), Delimiter=>"/" }
);
die "Could not open key $top_key_name: $^E" if !defined $tree;
delete_subtree( $tree, my $level = 0);
}
sub delete_subtree {
my ($tree, $level) = @_;
my $path = $tree->Path();
my @subkeys = $tree->SubKeyNames();
for my $name (@subkeys) {
my $subtree = $tree->{$name."/"};
if (!defined $subtree) {
die "Cannot access subkey $name for $path: " . regLastError() . ". Abort.";
}
if (ref $subtree) {
delete_subtree($subtree, $level + 1);
}
else {
die "Subkey $name for $path is not a hash ref. Abort.";
}
}
# assuming the previous recursive code has deleted all sub keys of the
# current key, we can now try delete this key
say "Trying to delete $path ..";
my $res = delete $Registry->{$path};
if (!defined $res) {
die "..Failed to delete key : $^E";
}
else {
say " -> success";
}
} | unknown | |
d7304 | train | Use Array.reduce():
let test=[{section:"business",name:"Bob"},{section:"business",name:"John"},{section:"H&R",name:"Jen"},{section:"H&R",name:"Bobby"}];
let newArray = test.reduce((acc,cur) => {
if(acc.some(el => el.section === cur.section)){
acc.forEach((el,idx) => {
if(el.section === cur.section){
acc[idx].name.push(cur.name)
}
})
}else{
cur.name = [cur.name]
acc.push(cur)
}
return acc
},[])
console.log(newArray)
A: you can try this
const test = [{
section: "business",
name: "Bob"
},
{
section: "business",
name: "John"
},
{
section: "H&R",
name: "Jen"
},
{
section: "H&R",
name: "Bobby"
},
];
// gather sections
const sections = {};
test.forEach(t => {
sections[t.section] = sections[t.section] || [];
sections[t.section].push(t.name);
});
// convert sections to array
const newArray = Object.keys(sections).map(k => {
return {
section: k,
name: sections[k]
};
});
console.log(newArray); | unknown | |
d7305 | train | You are resetting the top variable in each iteration of the loop, so your buttons all end up at the same top position. Move the initialization of the top variable outside the first loop:
private void button1_Click(object sender, EventArgs e)
{
int top = 60;
int n = Convert.ToInt32(textBox1.Text.ToString());
Button[,] v = new Button[n, n];
for (int i = 0; i < n; i++)
{
// int top = 60;
......
However, you are incrementing the top by only 2 pixels. This is not enough to avoid covering one row of buttons with the next row. You need to increment top by at least 25 pixels at the end of the inner loop.
....
top += 25;
left = 160;
}
A: You can also use your for loops to set the top and left property values (more than one variable can be defined and incremented in a for loop).
For each row, we increase the top value by the height + padding (padding is the gap between buttons), and for each col, we increase the left by width + padding.
We can also add each button to the controls collection as we create it rather than adding it to an array and then iterating again over that array, and we can add some input validation to the textbox.Text that we're assuming is a positive number, presenting the user with a message box if they entered something invalid.
Given this, we can simplify the code to something like:
private void button1_Click(object sender, EventArgs e)
{
int count;
// Validate input
if (!int.TryParse(textBox1.Text, out count) || count < 0)
{
MessageBox.Show("Please enter a positive whole number");
return;
}
// remove old buttons (except this one)
var otherButtons = Controls.OfType<Button>().Where(b => b != button1).ToList();
foreach (Button b in otherButtons)
{
Controls.Remove(b);
}
// Add button controls to form with these size and spacing values (modify as desired)
var width = 25;
var height = 25;
var padding = 2;
// For each row, increment the top by the height plus padding
for (int row = 0, top = padding; row < count; row++, top += height + padding)
{
// For each column, increment the left by the width plus padding
for (int col = 0, left = padding; col < count; col++, left += width + padding)
{
// Add our new button
Controls.Add(new Button
{
Top = top,
Left = left,
Width = width,
Height = height,
});
}
}
} | unknown | |
d7306 | train | I have read your question and I think you are having trouble implementing an OnClickListener on the alert dialog buttons.
If you use a custom layout for an alert dialog, you do not need a separate activity to handle that layout. According to your code snippet, you have created an alert dialog using a custom layout and used an activity to handle the layout's button click.
Try this code to create a custom alert dialog and handle its click listener:
AlertDialog.Builder alertDialogBuilder = new AlertDialog.Builder(this);
alertDialogBuilder.setTitle("Login Form");
View view= getLayoutInflater().inflate(R.layout.dialog_custom, null);
alertDialogBuilder.setCancelable(false);
alertDialogBuilder.setView(view);
alertDialogBuilder.create();
//alert dialog button
TextView ok_btn=view.findViewById(R.id.ok_btn);
//buttton click
ok_btn.setOnClickListener(new View.OnClickListener(){
@Override
public void onClick(View v){
Toast
.makeText(MainActivity.this, "Hello", Toast.LENGTH_SHORT)
.show();
}
});
alertDialogBuilder.show();
A: private void showAlertDialog(){
@SuppressLint("InflateParams") final View promptView = LayoutInflater.from(mainActivity).inflate(R.layout.enter_pin_dialog, null);
Button accept= (Button) promptView.findViewById(R.id.accept_btn);
Button deny= (Button) promptView.findViewById(R.id.deny_btn);
deny.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
// do what you want
}
});
accept.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
// do what you want
}
});
NotificationUtil.messageDialog(
mainActivity,
null,
null,
null,
false,
true,
promptView,
null,
null,
null,
null,
null,
null
);}
public class NotificationUtil {
private static ProgressDialog progressDialog;
private static AlertDialog cancelDialog;
private static AlertDialog messageDialog;
private static Vector<AlertDialog> dialogs= new Vector<AlertDialog>();
public static void messageDialog(Context context, String title, String message, Integer iconResId,
boolean cancelable, boolean openKeyboard, View view,
String positiveStr, DialogInterface.OnClickListener positiveListener,
String negativeStr, DialogInterface.OnClickListener negativeListener,
DialogInterface.OnShowListener showListener, DialogInterface.OnDismissListener dismissListener) {
AlertDialog.Builder builder = new AlertDialog.Builder(context);
builder.setTitle(title);
builder.setMessage(message);
builder.setCancelable(cancelable);
if (iconResId != null) {
builder.setIcon(iconResId);
}
if (view != null) {
builder.setView(view);
}
if (positiveStr != null) {
builder.setPositiveButton(positiveStr, positiveListener);
}
if (negativeStr != null) {
builder.setNegativeButton(negativeStr, negativeListener);
}
messageDialog = builder.create();
dialogs.add(messageDialog);
if (openKeyboard) {
Window window = messageDialog.getWindow();
if (window != null) {
window.setSoftInputMode(WindowManager.LayoutParams.SOFT_INPUT_STATE_VISIBLE);
}
}
if (showListener != null) {
messageDialog.setOnShowListener(showListener);
}
if (dismissListener != null) {
messageDialog.setOnDismissListener(dismissListener);
}
messageDialog.show();
ScrollView buttonsContainer = (ScrollView) messageDialog.findViewById(R.id.buttonPanel);
if (buttonsContainer != null) {
List<View> views = new ArrayList<>();
for (int i = 0; i < buttonsContainer.getChildCount(); i++) {
views.add(buttonsContainer.getChildAt(i));
}
buttonsContainer.removeAllViews();
for (int i = views.size() - 1; i >= 0; i--) {
buttonsContainer.addView(views.get(i));
}
}
}}
A: try this
AlertDialog.Builder alertbox = new AlertDialog.Builder(v.getRootView().getContext());
alertbox.setMessage("ddddd");
alertbox.setTitle("dddd");
alertbox.setIcon(R.drawable.ic_del);
alertbox.setPositiveButton("Yes",
new DialogInterface.OnClickListener() {
public void onClick(DialogInterface arg0,
int arg1) {
}
});
alertbox.setNegativeButton("No",new DialogInterface.OnClickListener() {
public void onClick(DialogInterface arg0,
int arg1) {
}
});
alertbox.show();
A: Open your MainActivity.java and add OnClickListener to your button to create an alert dialog :
btn.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
// Create a Custom Alert Dialog
final AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);
LayoutInflater inflater = getLayoutInflater();
View view = inflater.inflate(R.layout.list_layout,null);
TextView tv = (TextView)view.findViewById(R.id.head);
ImageView iv = (ImageView)view.findViewById(R.id.iv);
builder.setView(view);
builder.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
// Dismiss the dialog here
dialog.dismiss();
}
});
builder.setPositiveButton("Ok", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
// Add ok operation here
}
});
builder.show();
}
});
You can also add custom buttons to your alert dialog. | unknown | |
d7307 | train | My knowledge of .net is limited, but it seems like you just need to prevent the default action from the form submission event. Here's some code to get you rolling, though it may not be perfect:
@using (Html.BeginForm("Delete", "Controller", new { viewModel.Id }, FormMethod.Post, null, new { onsubmit = "return onFormSubmit(event)", @style = "text-align: center" }))
{
<input type="submit" value="X" class="form-control btn btn-danger" />
}
function onFormSubmit (e) {
    if (!confirm('Do you really want to submit the form?')) {
        e.preventDefault(); // cancel the submission when the user says no
        return false;
    }
    return true;
} | unknown | |
d7308 | train | The use statement is in the wrong place. Try something like this:
exec sp_MSforeachdb 'use ? rest of statement here '
I just executed this and it worked fine:
exec sp_MSforeachdb 'use ? select * from sys.objects;'
If your proc is named sp_xxx and is in master, it should be available in all databases. | unknown | |
d7309 | train | This is an interesting problem! I needed to implement exactly this in C# just recently for my article about grouping (because the type signature of the function is pretty similar to groupBy, so it can be used in LINQ query as the group by clause). The C# implementation was quite ugly though.
Anyway, there must be a way to express this function using some simple primitives. It just seems that the F# library doesn't provide any functions that fit for this purpose. I was able to come up with two functions that seem to be generally useful and can be combined together to solve this problem, so here they are:
// Splits a list into two lists using the specified function
// The list is split between two elements for which 'f' returns 'true'
let splitAt f list =
let rec splitAtAux acc list =
match list with
| x::y::ys when f x y -> List.rev (x::acc), y::ys
| x::xs -> splitAtAux (x::acc) xs
| [] -> (List.rev acc), []
splitAtAux [] list
val splitAt : ('a -> 'a -> bool) -> 'a list -> 'a list * 'a list
This is similar to what we want to achieve, but it splits the list only in two pieces (which is a simpler case than splitting the list multiple times). Then we'll need to repeat this operation, which can be done using this function:
// Repeatedly uses 'f' to take several elements of the input list and
// aggregate them into value of type 'b until the remaining list
// (second value returned by 'f') is empty
let foldUntilEmpty f list =
let rec foldUntilEmptyAux acc list =
match f list with
| l, [] -> l::acc |> List.rev
| l, rest -> foldUntilEmptyAux (l::acc) rest
foldUntilEmptyAux [] list
val foldUntilEmpty : ('a list -> 'b * 'a list) -> 'a list -> 'b list
Now we can repeatedly apply splitAt (with some predicate specified as the first argument) on the input list using foldUntilEmpty, which gives us the function we wanted:
let splitAtEvery f list = foldUntilEmpty (splitAt f) list
splitAtEvery (<>) [ 1; 1; 1; 2; 2; 3; 3; 3; 3 ];;
val it : int list list = [[1; 1; 1]; [2; 2]; [3; 3; 3; 3]]
I think that the last step is really nice :-). The first two functions are quite straightforward and may be useful for other things, although they are not as general as functions from the F# core library.
A: How about:
let splitOn test lst =
List.foldBack (fun el lst ->
match lst with
| [] -> [[el]]
| (x::xs)::ys when not (test el x) -> (el::(x::xs))::ys
| _ -> [el]::lst
) lst []
the foldBack removes the need to reverse the list.
A: Having thought about this a bit further, I've come up with this solution. I'm not sure that it's very readable (except for me who wrote it).
UPDATE Building on the better matching example in Tomas's answer, here's an improved version which removes the 'code smell' (see edits for previous version), and is slightly more readable (says me).
It still breaks on this (splitOn (<>) []), because of the dreaded value restriction error, but I think that might be inevitable.
(EDIT: Corrected a bug spotted by Johan Kullbom; it now works correctly for [1;1;2;3]. The problem was eating two elements directly in the first match, which meant I missed a comparison/check.)
//Function for splitting list into list of lists based on comparison of adjacent elements
let splitOn test lst =
let rec loop lst inner outer = //inner=current sublist, outer=list of sublists
match lst with
| x::y::ys when test x y -> loop (y::ys) [] (List.rev (x::inner) :: outer)
| x::xs -> loop xs (x::inner) outer
| _ -> List.rev ((List.rev inner) :: outer)
loop lst [] []
splitOn (fun a b -> b - a > 1) [1]
> val it : [[1]]
splitOn (fun a b -> b - a > 1) [1;3]
> val it : [[1]; [3]]
splitOn (fun a b -> b - a > 1) [1;2;3;4;6;7;8;9;11;12;13;14;15;16;18;19;21]
> val it : [[1; 2; 3; 4]; [6; 7; 8; 9]; [11; 12; 13; 14; 15; 16]; [18; 19]; [21]]
Any thoughts on this, or the partial solution in my question?
A: "adjacent" immediately makes me think of Seq.pairwise.
let splitAt pred xs =
if Seq.isEmpty xs then
[]
else
xs
|> Seq.pairwise
|> Seq.fold (fun (curr :: rest as lists) (i, j) -> if pred i j then [j] :: lists else (j :: curr) :: rest) [[Seq.head xs]]
|> List.rev
|> List.map List.rev
Example:
[1;1;2;3;3;3;2;1;2;2]
|> splitAt (>)
Gives:
[[1; 1; 2; 3; 3; 3]; [2]; [1; 2; 2]]
A: I would prefer using List.fold over explicit recursion.
let splitOn pred = function
| [] -> []
| hd :: tl ->
let (outer, inner, _) =
List.fold (fun (outer, inner, prev) curr ->
if pred prev curr
then (List.rev inner) :: outer, [curr], curr
else outer, curr :: inner, curr)
([], [hd], hd)
tl
List.rev ((List.rev inner) :: outer)
A: I like answers provided by @Joh and @Johan as these solutions seem to be most idiomatic and straightforward. I also like an idea suggested by @Shooton. However, each solution had their own drawbacks.
I was trying to avoid:
* Reversing lists
* Unsplitting and joining back the temporary results
* Complex match instructions
* Even Seq.pairwise appeared to be redundant
* Checking the list for emptiness (this can be removed at the cost of using Unchecked.defaultof<_> below)
Here's my version:
let splitWhen f src =
if List.isEmpty src then [] else
src
|> List.foldBack
(fun el (prev, current, rest) ->
if f el prev
then el , [el] , current :: rest
else el , el :: current , rest
)
<| (List.head src, [], []) // Initial value does not matter, dislike using Unchecked.defaultof<_>
|> fun (_, current, rest) -> current :: rest // Merge temporary lists
|> List.filter (not << List.isEmpty) // Drop tail element | unknown | |
d7310 | train | Two things are missing:
First set the initial state (to make the component stateful)
export default class App extends React.Component {
constructor(props) {
super(props);
this.state = {activeClass: 'top'};
}
...
}
and then access the state from the state object:
<div className={`box-menu ${this.state.activeClass}`}>something here</div>
A: It doesn't update because you are mixing up a locally scoped variable with state. In the scope of the event listener you set the component state, but in your render function you try to refer to a global variable which doesn't exist; in your case 'activeClass' is accessible only in the scope of the event listener.
Another thing is that you should remove the scroll event listener in the componentWillUnmount() lifecycle method to prevent memory leaks.
You should also use the className attribute instead of class - this is one of the differences between standard HTML and JSX.
If you want to access your state in a way like you did you could destructure the activeClass from the this.state object:
render() {
const { activeClass } = this.state;
return (
...
);
}
Working example:
import React from "react";
import "./styles.css";
export default class App extends React.Component {
constructor(props) {
super(props);
this.state = { activeClass: "" };
}
handleScroll = () => {
let activeClass = "";
if (window.scrollY === 0) {
activeClass = "top";
}
this.setState({ activeClass });
};
componentDidMount() {
window.addEventListener("scroll", this.handleScroll);
}
componentWillUnmount() {
window.removeEventListener("scroll", this.handleScroll);
}
render() {
return (
<>
<div className="something">
<div className="box"></div>
<div className={`box-menu ${this.state.activeClass}`}>something here</div>
<div className="content">
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br /> SCROLLING
<br />
</div>
</div>
</>
);
}
}
A: There are a couple of issues in your code.
You need to set an initial state
constructor(props) {
super(props);
this.state = {activeClass: 'top'};
}
In the JSX you need to use
<div className={`box-menu ${this.state.activeClass}`}>
Also you are missing a space in your css
.box-menu .top {
top: 0;
background: red;
} | unknown | |
d7311 | train | Django returns a different object each time you retrieve a record from the database:
>>> i1 = Invoice.objects.all()[0]
>>> i2 = Invoice.objects.get(pk=i1.pk)
>>> i1 == i2
True  # since model instances compare by their id field only
>>> id(i1) == id(i2)
False  # since they are actually separate instances
You instantiate inv = Invoice("Jack") before calling foo_1, so after foo_1 updates all invoices, you still have your old inv instance, which hasn't been updated (since foo_1 instantiates its model objects itself, which are separate instances and do not affect inv) and hasn't been reloaded from the database — nothing has modified inv.client_name, although the record in the database has been updated. foo_2, on the other hand, works on that specific instance that you pass as an argument, so you see the changed client_name; you would actually see that change even if you don't save the instance.
A: Jack isn't saved in the DB just because you instantiated it. You haven't created a record in the database for your query to iterate over. However, you do have an object you can pass around, which is why you can change the object's attributes and save them.
A: Your inv object is stored locally so has not changed in the first instance. If you wish to continue to use the object just refresh the object from the db to get the latest attribute saved.
print inv.client_name
foo_1()
inv.refresh_from_db()
print inv.client_name | unknown | |
d7312 | train | You can configure c3p0's JMX key to be something that will not change.
Please see http://www.mchange.com/projects/c3p0/#jmx_configuration_and_management
The simple story is:
* Be sure to set the c3p0 configuration property dataSourceName, which will become the value of a name attribute in the JMX key;
* Set (in a c3p0.properties file, as a system property, or in a typesafe-config file) com.mchange.v2.c3p0.management.ExcludeIdentityToken=true
If you are using a c3p0.properties file, it'd be something like
c3p0.dataSourceName=myPooledDataSource
com.mchange.v2.c3p0.management.ExcludeIdentityToken=true | unknown | |
d7313 | train | As written in the documentation of the CreateChromatinAssay function, the fragments argument should be a tabix-indexed file containing the fragments of your data (with an accompanying .tbi index), or a Fragments object in R.
Apparently, first you will need to create the Fragments object in R from your tsv data, using the function CreateFragmentObject(). Take a look at: https://satijalab.org/signac/reference/fragments .
A: Based on the advice from the Signac developer in https://github.com/timoast/signac/issues/7#issuecomment-513934624 , the folder with the fragments file should also contain the fragment index file (.tbi). For example, for the human 10X PBMC vignette data (https://satijalab.org/signac/articles/pbmc_vignette.html): atac_v1_pbmc_10k/atac_v1_pbmc_10k_fragments.tsv.gz and
atac_v1_pbmc_10k/atac_v1_pbmc_10k_fragments.tsv.gz.tbi should be in the same folder. Hope this helps. Thanks.
A: Please see this Github issue to regenerate the .tbi file.
Briefly, run the following cmds in linux/unix terminal :
1. gzip -d <fragment.tsv.gz>
2. bgzip <fragment.tsv>
3. tabix -p bed <fragment.tsv.gz>
d7314 | train | The second example demonstrates better data locality since it accesses elements in the same row. Basically it performs sequential memory reads, while the first example jumps over sizeof(float) * N bytes on each iteration, putting extra stress on the CPU cache / memory. A small sketch of the two access patterns follows.
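For illustration only (my sketch, not part of the original answer; it assumes a row-major N x N float matrix stored in a single vector):
#include <cstddef>
#include <vector>

// Both loops touch the same N*N elements; only the traversal order differs.
void sum_patterns(const std::vector<float>& m, std::size_t N,
                  float& columnWiseSum, float& rowWiseSum)
{
    columnWiseSum = 0.0f;
    rowWiseSum = 0.0f;
    // First pattern: column-wise walk; each step jumps sizeof(float) * N bytes.
    for (std::size_t col = 0; col < N; ++col)
        for (std::size_t row = 0; row < N; ++row)
            columnWiseSum += m[row * N + col];
    // Second pattern: row-wise walk; consecutive addresses, cache-friendly.
    for (std::size_t row = 0; row < N; ++row)
        for (std::size_t col = 0; col < N; ++col)
            rowWiseSum += m[row * N + col];
}
For a large N the row-wise version is typically noticeably faster, purely because of cache behaviour. | unknown | |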
d7315 | train | Use JavaScript Date object.
var date = new Date("2015-04-21T09:31:04+05:00");
var months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'];
console.log(date.getDate() + ' ' + months[date.getMonth()] + ', ' + date.getFullYear());
A: If you are open to libraries and you plan to "heavily" use Date you have MomentJS which would do it easily:
moment().format('MMMM Do YYYY'); // April 22nd 2015
//or
moment("2015-04-21T09:31:04+05:00").format('MMMM Do YYYY'); // April 21st 2015 | unknown | |
d7316 | train | Managed to work it out. Counter the scrolling that the browser does by doing document.getElementById('container').scrollTop = 0; whenever you programmatically focus on an element outside of the visible area of an overflow: hidden div AND whenever a user inputs in such an element. Demonstrative JSFiddle (without prevention of input scrolling) http://jsfiddle.net/j2zurbbv/2/. | unknown | |
d7317 | train | I had the same problem: preprocess goes file by file, so I had to actually include all my mixins and vars in every file, which is absolutely not a good solution.
So for me the first solution was to remove postcss from sveltePreprocess, not emit the css file, and to use postcss on the css bundle that you get in the css function from svelte.
You can then either (1) use postcss directly in the css function of svelte and emit the resulting css file in your dist directory, or (2) emit this file in a CSS directory and have postcss-cli watch this directory and bundle everything.
Solution 1
// rollup.config.js
import svelte from 'rollup-plugin-svelte';
import resolve from 'rollup-plugin-node-resolve';
import postcss from 'postcss';
import postcssConfig from './postcss.config.js';
const postcssPlugins = postcssConfig({});
const postcssProcessor = postcss(postcssPlugins);
export default {
input: 'src/main.js',
output: {
file: 'public/bundle.js',
format: 'iife',
},
plugins: [
svelte({
emitCss: false,
css: async (css) => {
const result = await postcssProcessor.process(css.code);
css.code = result.css;
css.write('public/bundle.css');
},
}),
resolve(),
],
};
and my postcss.config.js which returns a function that return an array of plugins:
export default (options) => {
const plugins = [
require('postcss-preset-env')()
];
if (options.isProd) {
plugins.push(require('cssnano')({ autoprefixer: false }));
}
return plugins;
};
Solution 2
// rollup.config.js
import svelte from 'rollup-plugin-svelte';
import resolve from 'rollup-plugin-node-resolve';
export default {
input: 'src/main.js',
output: {
file: 'public/bundle.js',
format: 'iife',
},
plugins: [
svelte({
emitCss: false,
css: async (css) => {
css.write('css/svelte-bundle.css');
},
}),
resolve(),
],
};
// package.json
{
//...
"scripts": {
"dev": "npm-run-all --parallel js:watch css:watch",
"js:watch": "rollup -c -w",
"css:watch": "postcss css/app.css --dir dist/ --watch",
},
}
/* css/app.css */
@import 'vars.css';
@import 'mixins.css';
/* all other code ... */
/* and svelte-bundle, which will trigger a bundling with postcss everytime it is emitted */
@import 'svelte-bundle.css';
Conclusion
All in all, I don't like these methods, for example because I can't use nesting, as svelte throws an error if the css is not valid.
I would prefer being able to use rollup-plugin-postcss after rollup-plugin-svelte, with emitCss set to false and the possibility to use rollup's this.emitFile in svelte's css function, because once the bundled file is emitted, we should be able to process it.
It seems there are some issues discussing the use of emitFile; let's hope it happens sooner rather than later: https://github.com/sveltejs/rollup-plugin-svelte/issues/71
A: Can't say for sure, but when I compare your setup with mine the most striking difference is that I have:
css: css => {
css.write('public/build/bundle.css');
}
in the svelte options additionally.
My whole svelte option looks like this:
svelte({
preprocess: sveltePreprocess({ postcss: true }),
dev: !production,
css: css => {
css.write('public/build/bundle.css');
}
})
Note, i'm using sveltePreprocess which would make your postcss superfluous, but i don't think that is causing your issue. | unknown | |
d7318 | train | I can give you one big advantage. We have an application that involves a client (WPF) and a Windows service. Normally the client calls the service (via WCF) to retrieve and/or save data, etc. But there are times when we want the service to send the client a message, to notify the client that it needs to perform a certain action (like shutting down or displaying a message to the user - it could be anything). A callback is perfect for this; a rough sketch of what that looks like is below.
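For illustration only (a minimal sketch I'm adding, not the poster's actual code; all type names are made up, and it assumes a duplex binding such as netTcpBinding or wsDualHttpBinding):
using System.ServiceModel;

// Duplex contract: the service can call back into the client.
[ServiceContract(CallbackContract = typeof(IClientCallback))]
public interface INotifyingService
{
    [OperationContract]
    void Subscribe();
}

public interface IClientCallback
{
    [OperationContract(IsOneWay = true)]
    void Notify(string message);
}

public class NotifyingService : INotifyingService
{
    private IClientCallback _client;

    public void Subscribe()
    {
        // Capture the channel back to the caller so the service can push messages later.
        _client = OperationContext.Current.GetCallbackChannel<IClientCallback>();
    }

    public void TellClientToShutDown()
    {
        _client.Notify("Please shut down");
    }
}
The client implements IClientCallback and passes it to the service via an InstanceContext and a DuplexChannelFactory (or a generated duplex proxy). | unknown | |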
d7319 | train | You could cancel the drag in the right list (sortable2) by using the receive event on sortable1 to prevent it from receiving any item from the second list.
When a grey li is dragged back to the left side, we will identify it with a helper class, e.g. s2, that marks the original sortable2 items, and cancel the drag only on those:
$("#sortable1").sortable({
receive: function(ev, ui) {
if(ui.item.hasClass("s2"))
ui.sender.sortable("cancel");
}
});
Hope this helps.
$(function() {
$( "#sortable1, #sortable2" ).sortable({
connectWith: ".connectedSortable"
}).disableSelection();
$("#sortable1").sortable({
receive: function(ev, ui) {
if(ui.item.hasClass("s2"))
ui.sender.sortable("cancel");
}
});
});
#sortable1, #sortable2 {
border: 1px solid #eee;
width: 142px;
min-height: 20px;
list-style-type: none;
margin: 0;
padding: 5px 0 0 0;
float: left;
margin-right: 10px;
}
#sortable1 li, #sortable2 li {
margin: 0 5px 5px 5px;
padding: 5px;
font-size: 1.2em;
width: 120px;
}
<html lang="en">
<head>
<meta charset="utf-8">
<title>jQuery UI Sortable - Connect lists</title>
<link rel="stylesheet" href="//code.jquery.com/ui/1.11.4/themes/smoothness/jquery-ui.css">
<script src="//code.jquery.com/jquery-1.10.2.js"></script>
<script src="//code.jquery.com/ui/1.11.4/jquery-ui.js"></script>
<link rel="stylesheet" href="/resources/demos/style.css">
<body>
<ul id="sortable1" class="connectedSortable">
<li class="ui-state-default">Item 1</li>
<li class="ui-state-default">Item 2</li>
<li class="ui-state-default">Item 3</li>
<li class="ui-state-default">Item 4</li>
<li class="ui-state-default">Item 5</li>
</ul>
<ul id="sortable2" class="connectedSortable">
<li class="ui-state-highlight s2">Item 1</li>
<li class="ui-state-highlight s2">Item 2</li>
<li class="ui-state-highlight s2">Item 3</li>
<li class="ui-state-highlight s2">Item 4</li>
<li class="ui-state-highlight s2">Item 5</li>
</ul>
</body>
</html>
A: Currently your connectWith selector matches both sortables, i.e. it's a two-way connection. If you only want a one-way connection from left to right, just connect the left sortable to the right sortable using a more specific selector (#sortable2) rather than a common one:
$(function() {
$("#sortable1").sortable({
connectWith: "#sortable2"
}).disableSelection();
$("#sortable2").sortable({}).disableSelection();
});
The demo below has the shorter code that does the same thing:
$(function() {
$(".connectedSortable").sortable({
connectWith: "#sortable2"
//----------^---------- #sortable2 connectWith #sortable2 has no effect
}).disableSelection();
});
#sortable1,
#sortable2 {
border: 1px solid #eee;
width: 142px;
min-height: 20px;
list-style-type: none;
margin: 0;
padding: 5px 0 0 0;
float: left;
margin-right: 10px;
}
#sortable1 li,
#sortable2 li {
margin: 0 5px 5px 5px;
padding: 5px;
font-size: 1.2em;
width: 120px;
}
<script src="//code.jquery.com/jquery-1.10.2.js"></script>
<script src="//code.jquery.com/ui/1.11.4/jquery-ui.js"></script>
<ul id="sortable1" class="connectedSortable">
<li class="ui-state-default">Item 1</li>
<li class="ui-state-default">Item 2</li>
<li class="ui-state-default">Item 3</li>
<li class="ui-state-default">Item 4</li>
<li class="ui-state-default">Item 5</li>
</ul>
<ul id="sortable2" class="connectedSortable">
<li class="ui-state-highlight">Item 1</li>
<li class="ui-state-highlight">Item 2</li>
<li class="ui-state-highlight">Item 3</li>
<li class="ui-state-highlight">Item 4</li>
<li class="ui-state-highlight">Item 5</li>
</ul>
A: The options you are looking for are cancel and update (the s2 class is inspired by the post above); cancel will disable the drag on matched elements.
$(function() {
$( "#sortable1, #sortable2" ).sortable({
connectWith: ".connectedSortable",
cancel: ".ui-state-highlight, .s2",
update: function( event, ui ) {ui.item.addClass("s2");}
}).disableSelection();
}); | unknown | |
d7320 | train | To answer this, you have to pass the full path to the SqlConnection constructor, like so:
var dbFile = Windows.Storage.ApplicationData.Current.LocalFolder.Path + "//Sample.db";
var sql = new SqlConnection(dbFile);
Also note that if you're trying to work with SQLite from a portable library, you wouldn't be able to call ApplicationData.Current (very inconvenient). I had to supply this parameter from within the executing app. | unknown | |
d7321 | train | x86's float and integer endianness is little-endian, so the significand (aka mantissa) is the low 64 bits of an 80-bit x87 long double.
In assembly, you just load the normal way, like mov rax, [rdi].
Unlike IEEE binary32 (float) or binary64 (double), 80-bit long double stores the leading 1 in the significand explicitly. (Or 0 for subnormal). https://en.wikipedia.org/wiki/Extended_precision#x86_extended_precision_format
So the unsigned integer value (magnitude) of the true significand is the same as what's actually stored in the object-representation.
If you want signed int, too bad; including the sign bit it would be 65 bits but int is only 32-bit on any x86 C implementation.
If you want int64_t, you could maybe right shift by 1 to discard the low bit, making room for a sign bit. Then do 2's complement negation if the sign bit was set, leaving you with a signed 2's complement representation of the significand value divided by 2. (IEEE FP uses sign/magnitude with a sign bit at the top of the bit-pattern)
In C/C++, yes you need to type-pun, e.g. with a union or memcpy. All C implementations on x86 / x86-64 that expose 80-bit floating point at all use a 12 or 16-byte type with the 10-byte value at the bottom.
Beware that MSVC uses long double = double, a 64-bit float, so check LDBL_MANT_DIG from float.h, or sizeof(long double). All 3 static_assert() statements trigger on MSVC, so they all did their job and saved us from copying a whole binary64 double (sign/exp/mantissa) into our uint64_t.
// valid C11 and C++11
#include <float.h> // float numeric-limit macros
#include <stdint.h>
#include <assert.h> // C11 static assert
#include <string.h> // memcpy
// inline
uint64_t ldbl_mant(long double x)
{
// we can assume CHAR_BIT = 8 when targeting x86, unless you care about DeathStation 9000 implementations.
static_assert( sizeof(long double) >= 10, "x87 long double must be >= 10 bytes" );
static_assert( LDBL_MANT_DIG == 64, "x87 long double significand must be 64 bits" );
uint64_t retval;
memcpy(&retval, &x, sizeof(retval));
static_assert( sizeof(retval) < sizeof(x), "uint64_t should be strictly smaller than long double" ); // sanity check for wrong types
return retval;
}
This compiles efficiently on gcc/clang/ICC (on Godbolt) to just one instruction as a stand-alone function (because the calling convention passes long double in memory). After inlining into code with a long double in an x87 register, it will presumably compile to a TBYTE x87 store and an integer reload.
## gcc/clang/ICC -O3 for x86-64
ldbl_mant:
mov rax, QWORD PTR [rsp+8]
ret
For 32-bit, gcc has a weird redundant-copy missed-optimization bug which ICC and clang don't have; they just do the 2 loads from the function arg without copying first.
# GCC -m32 -O3 copies for no reason
ldbl_mant:
sub esp, 28
fld TBYTE PTR [esp+32] # load the stack arg
fstp TBYTE PTR [esp] # store a local
mov eax, DWORD PTR [esp]
mov edx, DWORD PTR [esp+4] # return uint64_t in edx:eax
add esp, 28
ret
C99 makes union type-punning well-defined behaviour, and so does GNU C++. I think MSVC defines it too.
But memcpy is always portable so that might be an even better choice, and it's easier to read in this case where we just want one element.
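As a hedged illustration (my sketch, not part of the original answer), the signed int64_t idea described earlier could look like this, assuming the usual x86 little-endian 80-bit layout:
#include <stdint.h>
#include <string.h>

// Significand / 2 with the x87 sign applied (sign bit = bit 7 of byte 9).
int64_t ldbl_mant_signed(long double x)
{
    unsigned char bytes[10];
    memcpy(bytes, &x, 10);           // low 8 bytes: significand; bytes 8..9: sign + exponent

    uint64_t mant;
    memcpy(&mant, bytes, 8);         // 64-bit significand, explicit leading 1 included

    int64_t halved = (int64_t)(mant >> 1);   // drop the low bit to make room for the sign
    return (bytes[9] & 0x80) ? -halved : halved;
}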
If you also want the exponent and sign bit, a union between a struct and long double might be good, except that padding for alignment at the end of the struct will make it bigger. It's unlikely that there'd be padding after a uint64_t member before a uint16_t member, though. But I'd worry about :1 and :15 bitfields, because IIRC it's implementation-defined which order the members of a bitfield are stored in. | unknown | |
d7322 | train | You can dynamically add where statements to your query depending on which values are present in the request and not empty. Example:
public function multiSearch(Request $request){
$cars = Car::query();
if ( $request->filled('search') ) {
$cars->where('price', 'LIKE', "%{$request->input('search')}%");
}
if ( $request->filled('search_1') ) {
$cars->orWhere('model', 'LIKE', "%{$request->input('search_1')}%");
}
if ( $request->filled('search_2') ) {
$cars->orWhere('transmission', 'LIKE', "%{$request->input('search_2')}%");
}
return view('car.car-multisearch', ['cars'=> $cars->get() ]);
}
This solution does the job, but it doesn't look great because you will need to add another if statement for each filter. | unknown | |
d7323 | train | Answered here:
https://code.google.com/p/cookies/wiki/Documentation | unknown | |
d7324 | train | To answer your title question, you don't have to declare MyFrame and can just create and directly use wxFrame and use Bind() to connect the different event handlers. In all but the simplest situations, it's typically convenient to have a MyFrame class centralizing the data and event handlers for the window, but it is not required by any means.
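As an illustration only (a minimal sketch I'm adding, assuming wxWidgets 3.x; the MyApp name comes from the question, everything else here is made up):
#include <wx/wx.h>

class MyApp : public wxApp {
public:
    bool OnInit() override {
        // A plain wxFrame, no MyFrame subclass required.
        wxFrame* frame = new wxFrame(nullptr, wxID_ANY, "No MyFrame needed");
        wxButton* button = new wxButton(frame, wxID_ANY, "Click me");
        // Connect the handler directly with Bind().
        button->Bind(wxEVT_BUTTON, [](wxCommandEvent&) {
            wxLogMessage("Button clicked");
        });
        frame->Show(true);
        return true;
    }
};

wxIMPLEMENT_APP(MyApp);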
If you decide to keep everything in MyApp instead, you may find wxGetApp() useful for accessing it from various places in your code. | unknown | |
d7325 | train | You may use this query to find all foreign key references on a table:
SELECT
fk.owner fk_schema_owner,fk.table_name fk_table,
fk.column_name fk_column, fk.constraint_name fk_constraint_name,
pk.r_owner pk_schema_owner,
c_pk.table_name pk_table_name, c_pk.constraint_name pk_constraint_name
FROM
all_cons_columns fk
JOIN all_constraints pk
ON fk.owner = pk.owner AND fk.constraint_name = pk.constraint_name
JOIN all_constraints c_pk
ON pk.r_owner = c_pk.owner AND pk.r_constraint_name = c_pk.constraint_name
WHERE pk.constraint_type = 'R' AND fk.table_name = :InputTableName | unknown | |
d7326 | train | This JSON is basically an Array of objects.
List<object> items = _serializer.Deserialize<List<object>>(jsonString);
You could then create a new class and assign the object to the class, or simply use it as is.
A: If your structure is as simple as the one in your example, and the first 2 numbers always represent ChannelNumber and OtherNumber followed by a 1-level array, then you can do something like this:
private static PoloniexResponseDataParent Parse(JArray objects)
{
var parent = new PoloniexResponseDataParent();
var channelNumber = objects[0];
var otherNumber = objects[1];
var children = objects[2];
parent.ChannelNumber = Convert.ToInt32(channelNumber);
parent.OtherNumber = (otherNumber as JValue).Value<int?>();
parent.Children = children.Select(item => new PoloniexResponseDataChild
{
Data = item switch
{
JValue jValue => jValue.Value,
_ => throw new ArgumentOutOfRangeException(nameof(item))
}
}).ToList();
return parent;
}
var jArray = Newtonsoft.Json.JsonConvert.DeserializeObject<JArray>(jsonStr);
var parent = Parse(jArray); | unknown | |
d7327 | train | Similar to Anton's answer, but using apply
users = df.groupby('buyer_id').apply(lambda r: r['item_id'].unique().shape[0] > 1 and
r['date'].unique().shape[0] > 1 )*1
df.set_index('buyer_id', inplace=True)
df['good_user'] = users
result:
item_id order_id date good_user
buyer_id
139 57 387 2015-12-28 1
140 9 388 2015-12-28 1
140 57 389 2015-12-28 1
36 9 390 2015-12-28 0
64 49 404 2015-12-29 0
146 49 405 2015-12-29 0
81 49 406 2015-12-29 0
140 80 407 2015-12-30 1
139 81 408 2015-12-30 1
EDIT because I thought of another case: suppose the data shows a buyer buys the same two (or more) goods on two different days. Should this user be flagged as 1 or 0? Because effectively, he/she does not actually choose anything different on the second date.
So take buyer 81 in the following table. You see they only buy 49 and 50 on both dates.
buyer_id item_id order_id date
139 57 387 2015-12-28
140 9 388 2015-12-28
140 57 389 2015-12-28
36 9 390 2015-12-28
64 49 404 2015-12-29
146 49 405 2015-12-29
81 49 406 2015-12-29
140 80 407 2015-12-30
139 81 408 2015-12-30
81 50 406 2015-12-29
81 49 999 2015-12-30
81 50 999 2015-12-30
To accomodate this, here's what I came up with (kinda ugly but should work)
# this function is applied to all buyers
def find_good_buyers(buyer):
# which dates the buyer has made a purchase
buyer_dates = buyer.groupby('date')
# a string representing the unique items purchased at each date
items_on_date = buyer_dates.agg({'item_id': lambda x: '-'.join(x.unique())})
# if there is more than 1 combination of item_id, then it means that
# the buyer has purchased different things in different dates
# so this buyer must be flagged to 1
good_buyer = (len(items_on_date.groupby('item_id').groups) > 1) * 1
return good_buyer
df['item_id'] = df['item_id'].astype('S')
buyers = df.groupby('buyer_id')
good_buyer = buyers.apply(find_good_buyers)
df.set_index('buyer_id', inplace=True)
df['good_buyer'] = good_buyer
df.reset_index(inplace=True)
This works on buyer 81 setting it to 0 because once you group by date, both dates at which a purchase was made will have the same "49-50" combination of items purchased, hence the number of combinations = 1 and the buyer will be flagged 0.
A: You could group by buyer_id, then aggregate the columns with np.unique. Then you'll get np.ndarrays for rows where you have several dates and item_ids. You can find those rows with isinstance of np.ndarray, which gives you a bool series you can use on the aggregated dataframe to find the buyers of interest. By filtering the original dataframe with the obtained buyers you can fill the flag rows with loc:
df_agg = df.groupby('buyer_id')[['date', 'item_id']].agg(np.unique)
df_agg = df_agg.applymap(lambda x: isinstance(x, np.ndarray))
buyers = df_agg[(df_agg['date']) & (df_agg['item_id'])].index
mask = df['buyer_id'].isin(buyers)
df['flag'] = 0
df.loc[mask, 'flag'] = 1
In [124]: df
Out[124]:
buyer_id item_id order_id date flag
0 139 57 387 2015-12-28 1
1 140 9 388 2015-12-28 1
2 140 57 389 2015-12-28 1
3 36 9 390 2015-12-28 0
4 64 49 404 2015-12-29 0
5 146 49 405 2015-12-29 0
6 81 49 406 2015-12-29 0
7 140 80 407 2015-12-30 1
8 139 81 408 2015-12-30 1
Output from first and second steps:
In [146]: df.groupby('buyer_id')[['date', 'item_id']].agg(np.unique)
Out[146]:
date item_id
buyer_id
36 2015-12-28 9
64 2015-12-29 49
81 2015-12-29 49
139 [2015-12-28, 2015-12-30] [57, 81]
140 [2015-12-28, 2015-12-30] [9, 57, 80]
146 2015-12-29 49
In [148]: df_agg.applymap(lambda x: isinstance(x, np.ndarray))
Out[148]:
date item_id
buyer_id
36 False False
64 False False
81 False False
139 True True
140 True True
146 False False | unknown | |
d7328 | train | As you already mentioned, the Size of the form includes borders, title bars etc. So, try to set the ClientSize which defines the client area of the form:
this.ClientSize = new Size(200, 200); | unknown | |
d7329 | train | You need to get the val() and then use startsWith(). Additionally, you need to bind a proper event handler. Here I have used keyup:
$("textarea").on('keyup', function() {
if ($(this).val().startsWith("Hello")) {
$(".kuk").show();
} else {
$(".kuk").hide();
}
});
Updated Fiddle
A: Try this. You need to bind an event, and you also need to get the val to check whether it starts with "Hello" or not.
$("textarea").bind('keyup',function () {
if ($(this).val().startsWith("Hello")) {
$(".kuk").show();
}
else {
$(".kuk").hide();
}
});
Here is jsfiddle
A: I made a jsfiddle for those wondering which code I am using right now. I added a few kinds of input events and now it works in Chrome as well.
final fiddle
$("textarea").bind('change keyup paste blur input',function () {
if ($(this).val().startsWith("Hello") || $(this).val().startsWith("HELLO") || $(this).val().startsWith("hello")) {
$(".kuk").show();
}
else {
$(".kuk").hide();
}
}); | unknown | |
d7330 | train | Part of your problem lies in your logic that prints the content of the list, and part of it is in your add method. First of all, your current node is a local variable of the add method. That means the second 'if' statement:
if (current.next != null) {
current = current.next;
}
is not doing anything useful. You set current to point at the same object as current.next does but then you leave the method and your reference is destroyed. It does not make sense.
Assuming you invoked the constructor of your list and then added three elements: "a", "b", "c",
here is how your Node objects will behave on the heap.
After the constructor has finished there is one Node object on the heap, which looks like:
{ list -> {empty}, prev -> null, next -> null } This object is referenced by the head and tail reference variables. Note that if you invoke new ArrayList(bucketSize) it will create an empty list with 'bucketSize' initial capacity.
After 1st call to add("a"):
nodeObject#1 : { list -> {"a"}, prev -> null, next -> nodeObject#2 }
nodeObject#2 : { list -> {empty}, prev -> nodeObject#1, next -> null}
nodeObject#1 is accesible via head or tail.
nodeObject#2 is accesible via head.next or tail.next.
After 2nd call to add("b"):
nodeObject#1 : { list -> {"a","b"}, prev -> null, next -> nodeObject#2 }
nodeObject#2 : { list -> {empty}, prev -> nodeObject#1, next -> null}
After 3rd call to add("c"):
nodeObject#1 : { list -> {"a","b","c"}, prev -> null, next -> nodeObject#2 }
nodeObject#2 : { list -> {empty}, prev -> nodeObject#1, next -> null}
Also, having prev and next in your Node suggests that your list should be bidirectional, which means you need to implement methods like add_at_the_end and add_at_the_beginning, but that's a different story (I can show some examples too if needed).
The next question is why you use ArrayList as a Node class field. T value should be enough.
Here is my example of a simple list implementation without ArrayList. There is an iterator method that returns an instance of Iterator which can be used to display the list's elements.
package com.playground;
import java.util.ArrayList;
import java.util.Iterator;
class CustomList<T>{
private class Node{
Node prev;
Node next;
T value;
Node(T rVal, Node p, Node n){
this.value = rVal;
this.prev = p;
this.next = n;
}
void setNext(Node n){ this.next = n; }
void setPrev(Node p){ this.prev = p; }
}
private Node head;
private Node tail;
public void add(T element) {
if(tail == null && head == null){
head = new Node(element, null,null);
tail = head;
}
else{
Node tmp = new Node(element, tail, null);
tail.setNext( tmp );
tail = tmp;
}
}
public Iterator<T> iterator() {
return new Iterator<T>(){
Node current = head;
@Override
public boolean hasNext() {
// TODO Auto-generated method stub
return current != null;
}
@Override
public T next() {
Node tmp = current;
current = tmp.next;
return tmp.value;
}
@Override
public void remove() {
// TODO Auto-generated method stub
} };
}
}
public class CustomListTest {
public static void main(String [] args){
CustomList<String> list = new CustomList<String>();
list.add("my");
list.add("custom");
list.add("list");
Iterator<String> forwardIterator = list.iterator();
while( forwardIterator.hasNext()){
System.out.println( forwardIterator.next());
}
}
} | unknown | |
d7331 | train | As some people have already suggested in comments, you don't really need to convert your arrays to vectors - just work directly with the arrays. All you need to add is the size of the arrays as a third parameter to your function. For that you obviously need to know the size, but since this is running on a microcontroller, where memory budget is tight, I'm relatively certain that you have access to the size info. So it could look like:
double scalar_product(double a[], double b[], unsigned int size)
{
// compute
double product = 0;
for (unsigned int i = 0; i < size; i++)
product += (a[i])*(b[i]); // += means add to product
return product;
}
I'm assuming that their sizes are the same (they should) but even if not, you could use this to calculate a (partial) scalar product for two differently sized arrays, by supplying the shorter one's size.
A: You are not allocating your double[] arrays dynamically via new[], so they cannot be freed dynamically. They are being declared in automatic memory, so they will be freed only when they go out of scope.
Since you are concerned with limited memory usage, your best option is to simply not convert your double[] arrays to std::vector<double> at all. Change your scalar_product() function instead so it can handle the original double[] arrays as-is, eg:
double scalar_product(const double *a, size_t a_size, const double *b, size_t b_size)
{
if( a_size != b_size ) // error check
{
//puts( "Error a's size not equal to b's size" ) ;
return -1 ; // not defined
}
// compute
double product = 0;
for (size_t i = 0; i < a_size; ++i)
product += (a[i])*(b[i]); // += means add to product
return product;
}
double a[405] = ...;
double b[405] = ...;
double product = scalar_product(a, 405, b, 405);
/*
if, for some reason, you also needed to get the product of vectors,
you can do this:
double scalar_product(const std::vector<double> &a, const std::vector<double> &b)
{
return scalar_product(&a[0], a.size(), &b[0], b.size());
}
vector<double> a = ...;
vector<double> b = ...;
double product = scalar_product(a, b);
*/
Or:
double scalar_product(const double *a, const double *b, size_t n)
{
// compute
double product = 0;
for (size_t i = 0; i < n; ++i)
product += (a[i])*(b[i]); // += means add to product
return product;
}
double a[405] = ...;
double b[405] = ...;
double product = scalar_product(a, b, 405);
/*
if, for some reason, you also needed to get the product of vectors,
you can do this:
double scalar_product(const std::vector<double> &a, const std::vector<double> &b)
{
return (a.size() == b.size())
? scalar_product(&a[0], &b[0], a.size())
: -1.0;
}
vector<double> a = ...;
vector<double> b = ...;
double product = scalar_product(a, b);
*/
Alternatively, you can let the compiler deduce the array sizes for you, if you are passing in the original arrays directly, and not passing in pointers to the arrays:
template<size_t a_size, size_t b_size>
double scalar_product(const double (&a)[a_size], const double (&b)[b_size])
{
if( a_size != b_size ) // error check
{
//puts( "Error a's size not equal to b's size" ) ;
return -1 ; // not defined
}
// compute
double product = 0;
for (size_t i = 0; i < a_size; ++i)
product += (a[i])*(b[i]); // += means add to product
return product;
}
double a[405] = ...;
double b[405] = ...;
double product;
product = scalar_product(a, b); // OK
double *pa = a;
double *pb = b;
product = scalar_product(pa, pb); // COMPILER ERROR
/*
this approach doesn't work with vectors, so you will need a
separate overload of scalar_product() for that...
*/
Or:
template<size_t N>
double scalar_product(const double (&a)[N], const double (&b)[N])
{
// compute
double product = 0;
for (size_t i = 0; i < N; ++i)
product += (a[i])*(b[i]); // += means add to product
return product;
}
double a[405] = ...;
double b[405] = ...;
double product;
product = scalar_product(a, b); // OK
double c[404] = ...;
double d[405] = ...;
product = scalar_product(c, d); // COMPILER ERROR
/*
this approach doesn't work with vectors, so you will need a
separate overload of scalar_product() for that...
*/
Otherwise, if the array size is constant at compile-time, then just hard-code it:
const size_t ArrSize = 405;
double scalar_product(const double (&a)[ArrSize], const double (&b)[ArrSize])
{
// compute
double product = 0;
for (size_t i = 0; i < ArrSize; ++i)
product += (a[i])*(b[i]); // += means add to product
return product;
}
double a[ArrSize] = ...;
double b[ArrSize] = ...;
double product = scalar_product(a, b);
/*
this approach doesn't work with vectors, so you will need a
separate overload of scalar_product() for that...
*/
A: An array must be dynamically allocated (using the new[] operator) in order to use the delete[] operator to free it. As the comments have mentioned, you should post how you create your arrays so we can get a better understanding of how they are stored in memory.
double* a = new double[size];
// do stuff with a
delete[] a;
The above example is legal and will do what you asked. | unknown | |
d7332 | train | What you are really doing is 3 updates on the same record. That's why you only get the last value in the database.
You are updating record id_text=28 with values 8, 6, 10.
A: It will only end up set to the last value, as you are updating the same record once for each value in your array.
Only one field in one record is being processed; it cannot magically become multiple records this way - you would need a separate table, or use INSERT. | unknown | |
d7333 | train | I can confirm that this is still a bug in v4.0.35. This is how I got around it...
1. Create a custom registration validation class which implements AbstractValidator<Register> and add your own fluent validation rules (I copied the rules from the SS source)
public class CustomRegistrationValidator : AbstractValidator<Register>
{
public IAuthRepository UserAuthRepo { get; set; }
public CustomRegistrationValidator()
{
RuleSet(ApplyTo.Post, () =>
{
RuleFor(x => x.UserName).NotEmpty().When(x => x.Email.IsNullOrEmpty());
RuleFor(x => x.UserName)
.Must(x => UserAuthRepo.GetUserAuthByUserName(x) == null)
.WithErrorCode("AlreadyExists")
.WithMessage("UserName already exists")
.When(x => !x.UserName.IsNullOrEmpty());
RuleFor(x => x.Email)
.Must(x => x.IsNullOrEmpty() || UserAuthRepo.GetUserAuthByUserName(x) == null)
.WithErrorCode("AlreadyExists")
.WithMessage("Email already exists")
.When(x => !x.Email.IsNullOrEmpty());
RuleFor(x => x.FirstName).NotEmpty();
RuleFor(x => x.LastName).NotEmpty();
RuleFor(x => x.Email).NotEmpty();
RuleFor(x => x.Password).NotEmpty();
// add your own rules here...
});
RuleSet(
ApplyTo.Put,
() =>
{
RuleFor(x => x.UserName).NotEmpty();
RuleFor(x => x.Email).NotEmpty();
// add your own rules here...
});
}
}
2. Create a CustomRegistrationFeature class which implements IPlugin (again I just copied the SS source and changed the IoC registration to the CustomRegistrationValidator class)
public class CustomRegistrationFeature : IPlugin {
public string AtRestPath { get; set; }
public CustomRegistrationFeature()
{
this.AtRestPath = "/register";
}
public void Register(IAppHost appHost)
{
appHost.RegisterService<RegisterService>(AtRestPath);
appHost.RegisterAs<CustomRegistrationValidator, IValidator<Register>>();
}
}
3. Replace the RegistrationFeature registration in the App.Host with the new CustomRegistrationFeature we just created.
Plugins.Add(new CustomRegistrationFeature());
I don't know why this works as I am just doing the same or similar thing to what's already there, but it does. It also allows me to add in more validation rules (which is why I needed to do this).
A: I'm not quite sure that this is a bug, but it might be. Anyway, the reason that the custom validator isn't used, is that the Register() method of RegistrationFeature isn't called until after the Configure() method has been run, thus overriding the registration of CustomRegistrationValidator.
The simplest solution is registering the custom validator after the Register() method has been run:
public override void OnAfterInit()
{
base.OnAfterInit();
RegisterAs<CustomRegistrationValidator, IValidator<Register>>();
} | unknown | |
d7334 | train | The problem is that the default form control for selecting the target of that child association is not sufficient for your needs. So, you need to provide an alternative custom form control. The docs show how to do this using a very simplified example. | unknown | |
d7335 | train | Use a window function!
select r.*,
sum(case when position = 1 and country_code = 'GB' then 1 else 0 end) over
(partition by horsename, coursename
order by racedate
rows between unbounded preceding and 1 preceding
) as CourseDistanceWinners
from [dbo].[Results] r | unknown | |
d7336 | train | A different way to draw a checkerboard pattern is to make a 2x2 checkerboard in a drawing program, add that to your project as a resource, pass that image to +[NSColor colorWithPatternImage:], and then you can just fill an area with that color.
A: OK, so the problem is that a row might have an odd number of squares in it, in which case the next row shows the same pattern again. Here's how I would fix it; I also changed your use of NSBezierPath to NSRectFill() since the latter is both conceptually simpler and likely to be faster.
- (void)drawRect:(NSRect)rect {
for (int j = 0; j * 20 < self.frame.size.width; j++) {
for (int i = 0; i * 20 < self.frame.size.height; i++) {
if (((i^j) & 1) == 0)
[[NSColor whiteColor] set];
else
[[NSColor lightGrayColor] set];
NSRectFill(NSMakeRect(j*20,i*20,20,20));
}
}
}
The (i^j) & 1 uses bitwise exclusive-or (the ^ operator) and then bitwise and (the & operator) to combine the odd-even state of both the row and column indices. There would be various ways to optimize away the multiplies, but this code seems like the clearest way to do it.
A somewhat cleaner version that would run faster and would perhaps avoid some drawing artifacts would involve clearing to one color first and then drawing only squares of the opposite color:
- (void)drawRect:(NSRect)rect {
[[NSColor whiteColor] set];
NSRectFill(rect);
[[NSColor lightGrayColor] set];
for (int j = 0; j * 20 < self.frame.size.width; j++)
for (int i = 0; i * 20 < self.frame.size.height; i++)
if ((i^j) & 1)
NSRectFill(NSMakeRect(j*20,i*20,20,20));
}
A: Keep in mind that your view is not redrawn just because it is resized! So your checkerboard gets squeezed or stretched together with the view. It is up to you to call setNeedsDisplay.
A: Your method of determining square color alternates the color down a column and then continues with the next column. If you have an even number of rows each row will start with the same color, giving you stripes.
Think simple, you only need two states - black or white - so instead of integers, addition, and odd/even tests to reduce an integer to one of two values just start with a type with only two values, that is Boolean.
To deal with the issue of an odd or even number of rows use two variables, one to track the color of the first square in the column - this will flip each iteration of your outer loop, and one to track the color of the current square - for each column this starts with the value of the first variable and flips each iteration of the inner loop.
In code outline, before the outer loop:
BOOL colStartsWhite = YES; // or NO, you decide - this is the corner color
Inside the outer loop:
BOOL squareIsWhite = colStartsWhite; // inner tracker
colStartsWhite = !colStartsWhite; // flip ready for next column
Inside the inner loop:
if (squareIsWhite) ... else ... // fill the square
squareIsWhite = !squareIsWhite; // flip for next square
HTH | unknown | |
d7337 | train | It would probably be easiest to flatten each row into a normal list before writing it to the file. Something like this:
with open(filename, 'w') as file:
writer = csv.writer(file)
for row in data:
out_row = [row['value']]
for word in row['word_list']:
out_row.append(word)
writer.writerow(out_row)
# Shorter alternative to the two loops:
# writer.writerows([row['value'], *row['word_list']] for row in data)
A: I figured out a solution, but it's kind of messy (wow, I can't write a bit of code without it being in the "proper format"...how annoying):
with open('filename', 'w') as f:
for key in d.keys():
f.write("%s,"%(key))
for word in d[key]:
f.write("%s,"%(word))
f.write("\n")
A: You can loop through the dictionaries one at a time, construct the list and then use the csv module to write the data as I have shown here
import csv
d = [{'value':'foo_1', 'word_list':['blah1', 'blah2']}, {'value': 'foo_n', 'word_list':['meh1', 'meh2']}]
with open('test_file.csv', 'w') as file:
writer = csv.writer(file)
for val_dict in d:
csv_row = [val_dict['value']] + val_dict['word_list']
writer.writerow(csv_row)
It should work for word lists of arbitrary length and as many dictionaries as you want. | unknown | |
d7338 | train | You're overwriting the previous files with 'w'. Besides, opening and closing the file at every iteration is not a very good idea.
Why not read all the rows and group them with itertools.groupby, using the first item in each row (i.e. the date) as the grouping criterion? Then write each group into its own file; the file names will be the keys of the groups.
A: You're overwriting the contents of your file each time you open them with the w flag, try instead by grouping your rows with itertools.groupby:
import csv
import itertools
with open(path1 + filename) as f:
reader = csv.reader(f)
for date, rows in itertools.groupby(reader, lambda row: row[0]):
with open(path2 + date + '.csv', 'w') as csvfile:
writer = csv.writer(csvfile, delimiter=',')
writer.writerows(rows) | unknown | |
d7339 | train | Your Paint event handler has to be responsible for drawing everything each time. Your code looks like it only draws one object.
When you drag something over your box, the box becomes invalid and needs to be painted from scratch which means it erases everything and calls your Paint handler. Your Paint event handler then just draws one object.
I suspect that what you want to do is keep a data structure of each item you draw and then have a loop in your Paint event handler that will draw all the objects you add.
A: Don't use variables that you defined outside a loop, in the paint event. That might be your problem.
Try to paint ((Panel)s).Name. Does this work properly?
A: Your title has got all of us confused.. You don't draw Rectangles, you create new Panels on each ButtonClick, right?
The code for the Paint event is not quite right, though. Just like in any Paint event you should use the built-in Graphics object. And as Hans has noted, you should not destroy/dispose things you didn't create.
The main problem you describe seems to be that your boxes have to paint themselves without referring to their real numbers. You should store their numbers e.g. in their Tags..
(Or you could extract it from their Names, like you do it in the MouseDown!)
box[panelBoxNo].Name = "box" + panelBoxNo;
box[panelBoxNo].Tag = panelBoxNo; // < === !!
//..
box[panelBoxNo].Paint += new PaintEventHandler((s, m) =>
{
Graphics g = m.Graphics; // < === !!
g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.ClearTypeGridFit;
string text = box[panelBoxNo].Tag.ToString(); // < ===
SizeF textSize = g.MeasureString(text, Font);
PointF locationToDraw = new PointF();
locationToDraw.X = (pbW / 2) - (textSize.Width / 2);
locationToDraw.Y = (pbH / 2) - (textSize.Height / 2);
g.DrawString(text, Font, Brushes.Black, locationToDraw);
g.DrawRectangle(new Pen(Color.Black), 0, 0, pbW - 1, pbH - 1);
g.DrawLine(drawLine, 0, 0, 0, pbH);
// g.Dispose(); // < === !!
// m.Graphics.DrawImageUnscaled(drawBox, new Point(0, 0)); // < === !!
// m.Dispose(); // < === !!
});
And, as I have noted, you should only use arrays if you (or the code) really know the number of elements. In your case a List<Panel> will be flexible enough to hold any number of elements without changing any other part of the code, except the adding. You can access the List just like an Array. (Which it is behind the scenes..)
Update: The way I see it now, your problems all are about scope in one way or another.
Scope in its most direct meaning is the part of your code where a variable is known and accessible. In a slightly broader meaning it is also about the time when it has the value you need.
Your original problem was of the latter kind: You accessed the partNo in the Paint event of the Panels, when it had long changed to a new, probably higher value.
Your current problem is to understand the scope of the variables in the ButtonClick event.
Usually this is no problem; looking at the pairs of braces, the scope is obvious. But: Here we have a dynamically created Lambda event and this is completely out of the scope of the Click event!! Behind the scenes this Paint event is removed from the Click code, placed in a new event and replaced by a line that simply adds a delegate to the Paint handler just like any regular event.
So: nothing you declare in the ButtonClick is known in the Paint code!
To access these data you must place them in the Panel properties, in our case in the Tag and access them via casting from the Sender parameter s!
So you need to change this line in the Paint event
Product b = box.Tag as Product;
to something like this:
Product b = ( (Panel) s ).Tag as Product; | unknown | |
d7340 | train | in Base R
order.one <- 1+order(df[df$time=="one",][(2:ncol(df))],decreasing = TRUE)
output.one <- df[df$time=="one",][c(1,order.one)]
The order can be switched to ascending by removing decreasing = TRUE
A: Using pipes:
library(magrittr)
df %>%
.[.$time == "one", ] %>%
.[c(1, 1 + order(-.[-1]))] # "-" short for decreasing = TRUE
time B C D E A
1 one 5 4 3 2 1 | unknown | |
d7341 | train | If your collection name is based off <T>, why not simply have CosmosDBRepository<T> as the actual class? That way you can also get the value in the constructor.
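For example, a rough sketch of that idea (the class shape and naming are assumptions, not your actual code):
public class CosmosDBRepository<T>
{
    public CosmosDBRepository()
    {
        // derive the collection name from the generic type argument,
        // e.g. "Order" for CosmosDBRepository<Order>
        var collectionName = typeof(T).Name;
        // ... use collectionName when creating or querying the collection
    }
}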
Ideally it would also be a readonly private property that you only calculate once (on the constructor) and reuse on all operations to avoid paying the cost to construct it later on (since it doesn't change). | unknown | |
d7342 | train | The reason CSS fails to achieve the desired result is that floats only care about horizontal position, i.e. everything is laid out horizontally first and then vertically. The next float on the subsequent row will only be positioned below the tallest element of the previous row of floats, leaving unfilled vertical gaps whenever that row contains shorter containers.
I'd recommend jQuery Masonry :) http://masonry.desandro.com/
A: You can use column-count to achieve the required look. Keep in mind that the order of the items will be different from when you float.
Float:
1 2
3 4
5 6
Columns:
1 4
2 5
3 6
Simply set column-count: 2 on the parent of the items.
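A minimal sketch (the .grid and .box selectors are assumptions; use whatever wraps your items):
.grid {
  -webkit-column-count: 2;  /* older WebKit browsers */
  -moz-column-count: 2;     /* older Firefox */
  column-count: 2;
  column-gap: 16px;
}
.grid .box {
  break-inside: avoid;      /* keep each box inside a single column */
  margin-bottom: 16px;
}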
Note that it doesn't work in all browsers: http://caniuse.com/#search=column | unknown | |
d7343 | train | For this to work, you have to create a plug-in for Interface Builder that uses your custom control's class. As soon as you create and install your plug-in, you will be able to add by drag and drop, instances of your view onto another window or view in Interface Builder. To learn about creating IB Plugins, see the Interface Builder Plug-In Programming Guide and the chapter on creating your own IB Palette controls from Aaron Hillegass's book, Cocoa Programming for Mac OS X.
Here is the link to the original author of the accepted answer to a similar question.
A: This was originally a comment in Ratinho's thread, but grew too large.
Although my own experience concurs with everything mentioned here and above, there are some things that might ease your pain, or at least make things feel a little less hack-ish.
Derive all of your custom UIView classes from a common class, say EmbeddableView. Wrap all of the initWithCoder logic in this base class, using the Class identity (or an overloadable method) to determine the NIB to initialize from. This is still a hack, but you're at least formalizing the interface rules and hiding the machinery.
Additionally, you could further enhance your Interface Builder experience by using "micro controller" classes that pair with your custom views to handle their delegate/action methods and bridge the gap with the main UIViewController through it's own delegation protocol. All of this can be wired together using connectors within Interface Builder.
The underlying UIViewController only needs to implement enough functionality to satisfy the "micro controller" delegation pattern.
You already have the details for adding the custom views by changing the class name and handling the nib loading. The "micro controllers" (if used) can just be NSObject derived classes added to the NIB as suggested here.
Although I've done all of these steps in isolated cases, I've never taken it all the way to this sort of formal solution, but with some planning it should be fairly reliable and robust.
A: Maybe I didn't understand you?
You have the Library in Interface Builder; you can move every component you want and place it on your view (you can add another view by adding a UIView and changing its class name in the 4th tab).
Then you declare vars with IBOutlet and connect them from the 2nd tab of your File's Owner to their components... another question?
A: Unfortunately, you can't do what you want to do with UIKit. IB Plugins only work for OS X, and Apple explicitly doesn't allow them for use with iOS development. Something to do with them not being static libraries. Who knows, they may change this someday, but I wouldn't hold your breath. | unknown | |
d7344 | train | In your first example, you're referencing the variable klass before you've defined it. That's not the case in the second example. | unknown | |
d7345 | train | If you are certain that your time is always-increasing, then you can look for an apparent decrease (of time-of-day) and manually insert the TZ offset to the string, then parse as usual. I added some logic to look for this decrease only around 2-3am so that if you have multiple days of data spanning midnight, you would not get a false-alarm.
data <- read.csv(text = data_in)
fakedate <- as.POSIXct(gsub("^[-0-9]+ ", "2000-01-01 ", data$date))
decreases <- cumany(grepl(" 0[23]:", data$date) & c(FALSE, diff(fakedate) < 0))
data$date <- paste(data$date, ifelse(decreases, "+0100", "+0200"))
data
# date val
# 1 2018-10-28 01:30:00 +0200 25
# 2 2018-10-28 02:00:00 +0200 26
# 3 2018-10-28 02:30:00 +0200 27
# 4 2018-10-28 02:00:00 +0100 28
# 5 2018-10-28 02:30:00 +0100 29
# 6 2018-10-28 03:00:00 +0100 30
as.POSIXct(data$date, format="%Y-%m-%d %H:%M:%S %z", tz="Europe/Paris")
# [1] "2018-10-28 01:30:00 CEST" "2018-10-28 02:00:00 CEST" "2018-10-28 02:30:00 CEST"
# [4] "2018-10-28 02:00:00 CET" "2018-10-28 02:30:00 CET" "2018-10-28 03:00:00 CET"
My use of "2000-01-01" was just some non-DST day so that we can parse the timestamp into POSIXt and calculate a diff on it. (If we didn't insert a date, we could still use as.POSIXct with a format, but if you ever ran this on one of the two DST days, you might get different results since as.POSIXct("01:02:03", format="%H:%M:%S") always assumes "today".
This is obviously a bit fragile with its assumptions, but perhaps it'll be good enough for what you need. | unknown | |
d7346 | train | Based on the reference manual (https://cran.r-project.org/web/packages/deldir/index.html), the output of the deldir function is a list. One of the list elements, summary, is a data frame, which contains a column called dir.area. This is the area of the Dirichlet tile surrounding the point, which could be what you are looking for.
Below I am using the example from the reference manual. Use $ to access the summary data frame.
library(deldir)
x <- c(2.3,3.0,7.0,1.0,3.0,8.0)
y <- c(2.3,3.0,2.0,5.0,8.0,9.0)
dxy1 <- deldir(x,y)
dxy1$summary | unknown | |
d7347 | train | If you just want to get all the vertices in the path you can do:
g.V('A').repeat(out("knows")).emit().label()
example: https://gremlify.com/c533ij58a98z8 | unknown | |
d7348 | train | You can use two breakpoints. The one you're really interested in and another one in your unit test (somewhere between calling the code under test and cleaning up). Set both to only suspend the current thread. | unknown | |
d7349 | train | Since you haven't mentioned the data source of your DataGridView, I show an approach with a collection. For example with an int[], but it works with any collection:
int[] collection = { 0, 0, 0, 5, 2, 7 };
int[] ordered = collection.OrderBy(i => i == 0).ThenBy(i => i).ToArray();
This works because the first OrderBy uses a comparison which can either be true or false. Since true sorts "higher" than false, all elements which are not 0 come first. The ThenBy is for the internal ordering of the non-zero group.
If that's too abstract, maybe you find this more readable:
int[] ordered = collection.OrderBy(i => i != 0 ? 0 : 1).ThenBy(i => i).ToArray();
A: If you are not using data source for your grid, then you can use DataGridView.SortCompare event like this
void yourDataGridView_SortCompare(object sender, DataGridViewSortCompareEventArgs e)
{
if (e.Column.Name == "Id" && e.CellValue1 != null && e.CellValue2 != null)
{
var x = (int)e.CellValue1;
var y = (int)e.CellValue2;
e.SortResult = x == y ? 0 : x == 0 ? 1 : y == 0 ? -1 : x.CompareTo(y);
e.Handled = true;
}
}
Don't forget to attach the event handler to your grid view. | unknown | |
d7350 | train | As suggested by Simon Mourier in the comments above, there is a PKEY that can access that data.
You can follow along with most of this sample and just replace PKEY_Device_FriendlyName with PKEY_DeviceClass_IconPath.
In short, it's something like this:
// [...]
IMMDevice *pDevice; // the device you want to look up
HRESULT hr = S_OK;
IPropertyStore *pProps = NULL;
PROPVARIANT varName;
hr = pDevice->OpenPropertyStore(STGM_READ, &pProps);
EXIT_ON_ERROR(hr);
PropVariantInit(&varName);
hr = pProps->GetValue(PKEY_DeviceClass_IconPath, &varName);
EXIT_ON_ERROR(hr);
// [...] | unknown | |
d7351 | train | I've tried to delete all thing on Podfile archive and then install the pod again with the command "pod install" on the worksapce path.
It worked to me.
A: If you're using Cocoapods,
1. Remove the line pod 'AFNetworking' from Podfile.
2. Open terminal, go to project directory, & do pod install
That should work.
And,
If you're not using Cocoapods, just remove the AFNetworking framework from XcodeProject file. | unknown | |
d7352 | train | df1["Sports 2019/2018"] = df1["Sports 2019/2018"].str.replace(" ", "", n = 1)
n=1 is an argument that makes it replace only the first occurrence it finds. | unknown | |
d7353 | train | A solution is to read your providers before the async gap, but not use them immediately.
For example:
Future<void> logInUser(WidgetRef ref) async {
final isLoadingNotifier = ref.read(loginIsLoadingProvider.notifier);
isLoadingNotifier.state = true;
await someAsyncOperation();
isLoadingNotifier.state = false;
} | unknown | |
d7354 | train | Try to change your scope like this:
class ViewController: UIViewController, GIDSignInDelegate, GIDSignInUIDelegate
{
// If modifying these scopes, delete your previously saved credentials by
private let scopes = ["https://www.googleapis.com/auth/drive"]
...
} | unknown | |
d7355 | train | You can write your filter like this:
$scope.events = $filter('filter')($scope.events, function(e){
return e.state === 'NSW';
}); | unknown | |
d7356 | train | You can create a custom policy for the IAM user as well, where you only allow PutObject on a specific bucket.
example:
{
"Version": "2012-10-17",
"Id": "Policy1234567",
"Statement": [
{
"Sid": "Stmt1234567",
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": "arn:aws:s3:::some-bucket-name/*"
}
]
}
If the bucket and IAM user are in the same account, you don't need a bucket policy as long as the IAM user has the above policy.
You definitely need an identity-based policy, as described in the link below:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html | unknown | |
d7357 | train | From the MySQL Docs:
If you are using the mysql client with auto-reconnect enabled (which is not recommended), it is preferable to use the charset command rather than SET NAMES. For example:
mysql> charset utf8
Charset changed
The charset command issues a SET NAMES statement, and also changes the default character set that mysql uses when it reconnects after the connection has dropped. Therefore, issue this:
mysqli_query('charset utf8');
Or whatever charset you're using (if other than UTF8). This should fix the problem. | unknown | |
d7358 | train | I have progressed a little on that subject:
What's happening is that the response format sent from Apache to APRS-IS is badly interpreted by APRS. The Apache server sends a response with a header configured in apache2.conf:
ServerTokens will return HTTP/1.1 with the Apache version after it (the verbosity depends on the option set for ServerTokens).
But I can't figure out how to change the value "HTTP/1.1" to "#APRS".
I've tried to change it in mod_header.c, but the only file I have found is mod_header.so ... | unknown | |
d7359 | train | Remove the last backslash \ there is no need to escape )
System.out.println("if(line.contains(\"<string key=\"concept:name\" value=\"LCSP\"/>\"))");
DEMO1
Solution to the problem given in comment
String v11 = "John";
System.out.println("if(line.contains(\"<string key=\\\"concept:name\\\" value=\\\""+v11+"\\\"/>\"))");
OUTPUT
if(line.contains("<string key=\"concept:name\" value=\"John\"/>"))
DEMO2
A: no need to escape ')' at end of the line.
the changed statement is:
System.out.println("if(line.contains(\"<string key=\"concept:name\" value=\"LCSP\"/>\"))");
A: Original
System.out.println("if(line.contains(\"<string key=\"concept:name\" value=\"LCSP\"/>\"\))");
The \ before the closing )) is not required; I have removed it from the original.
Changed:
System.out.println("if(line.contains(\"<string key=\"concept:name\" value=\"LCSP\"/>\"))");
A: String v11 = "John1";
System.out.println("if(line.contains(\"<string key=\\\"concept:name\\\" value=\\\""+v11+"\\\"/>\"))");
Remove the last backslash and change "LCSP" to the v11 variable.
output:
if(line.contains("<string key=\"concept:name\" value=\"John1\"/>")) | unknown | |
d7360 | train | Unfortunately there's no way to get the templated message for audit logs provided by http://console.cloud.google.com/home/activity apart from the UI itself. | unknown | |
d7361 | train | This is one of many ways:
listofEmployees.stream()
.sorted((o1, o2) -> o1.getName().compareToIgnoreCase(o2.getName()))
.forEach(s -> System.out.println(s.getName()));
A: As @Tree suggested in comments, one can use the java.text.Collator for a case-insensitive and locale-sensitive String comparison. The following shows how both case and accents could be ignored for US English:
Collator collator = Collator.getInstance(Locale.US);
collator.setStrength(Collator.PRIMARY);
listOfEmployees.sort(Comparator.comparing(Employee::getName, collator.reversed()));
When collator strength is set to PRIMARY, then only PRIMARY differences are considered significant during comparison. Therefore, the following Strings are considered equivalent:
if (collator.compare("abc", "ABC") == 0) {
System.out.println("Strings are equivalent");
}
A: You can specify it as the second argument to ignore cases:
Comparator.comparing(Employee::getName, String::compareToIgnoreCase).reversed()
A: Try this
Comparator.comparing(Employee::getName, String.CASE_INSENSITIVE_ORDER)
A: It looks like there is a Comparator that orders String objects as by compareToIgnoreCase, you can read the documentation here: https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#CASE_INSENSITIVE_ORDER
A: You can lowercase or uppercase it like this:
listofEmployees.stream()
.sorted(Comparator.comparing((Employee e) -> e.getName()
.toLowerCase()) // or toUpperCase
.reversed())
.forEach(s -> System.out.println(s.getName())); | unknown | |
d7362 | train | You can use a system property to set the log4j file names with its value, and give that property a unique value for each run.
Something like this on your starter class (timeInMillis and a random to avoid name clashes):
static {
long millis = System.currentTimeMillis();
System.setProperty("log4jFileName", millis+"-"+Math.round(Math.random()*1000));
}
And then you refer to the system property on log4j conf properties:
log4j.appender.R.File=./Logs/${log4jFileName}.log
log4j.appender.HTML.File=./Logs/${log4jFileName}.log
Hope it helps!
A: You need to write (or find) a custom appender which will create the file with a timestamp in the name.
The 3 default implementations for file logging in log4j are:
*
*FileAppender : One file logging, without size limit.
*RollingFileAppender : Multiple files and rolling file when current file hits the size limit
*DailyRollingFileAppender : A file per day
The simplest way is to extend FileAppender and override the setFile and getFile methods.
A: I think this answer would be helpful for you: click here
I have run the code as the page said, and I got a new log file each time I start my application.
The result looks like this:
and all the code in my Test.java is:
private static final Logger log = Logger.getLogger(Test.class);
public static void main(String[] args) {
log.info("Hello World");
} | unknown | |
d7363 | train | But it needs to show a row even if
there is no SMS number for that
contact.
Then use a LEFT OUTER JOIN, which returns a row from the left table even if there is no corresponding row in the right table.
It's a good idea to learn to always use JOIN syntax instead of the pseudo-inner-join when you just list tables with commas.
SELECT
Contact.id,
Contact.name,
Sms.provider,
Sms.number
FROM
Contact
LEFT OUTER JOIN
Sms
ON
Sms.contactId = Contact.id | unknown | |
d7364 | train | Convert your calendar string day, month, year to Date class.
More discussion here: Java string to date conversion
e.g.
DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
Date d1, d2;
try {
d1 = dateFormat.parse(year + "-" + month + "-" + day); //as per your example
d2 = //do the same thing as above
long days = getDifferenceDays(d1, d2);
}
catch (ParseException e) {
e.printStackTrace();
}
public static long getDifferenceDays(Date d1, Date d2) {
long diff = d2.getTime() - d1.getTime();
return TimeUnit.DAYS.convert(diff, TimeUnit.MILLISECONDS);
}
A: Create a method getDates()
private static ArrayList<Date> getDates(String dateString1, String dateString2)
{
ArrayList<Date> arrayofdates = new ArrayList<Date>();
DateFormat df1 = new SimpleDateFormat("dd-MM-yyyy");
Date date1 = null;
Date date2 = null;
try {
date1 = df1 .parse(dateString1);
date2 = df1 .parse(dateString2);
} catch (ParseException e) {
e.printStackTrace();
}
Calendar calender1 = Calendar.getInstance();
Calendar calender2 = Calendar.getInstance();
calender1.setTime(date1);
calender2.setTime(date2);
while(!calender1.after(calender2))
{
arrayofdates.add(calender1.getTime());
calender1.add(Calendar.DATE, 1);
}
return arrayofdates;
}
then pass the parameter in this method to get array of dates
As you are using DatePicker then
DateFormat df1 = new SimpleDateFormat("dd-MM-yyyy");
ArrayList<Date> mBaseDateList = getDates(df1.format(cal1.time), df1.format(cal2.time))
A: Scanner in = new Scanner(System.in);
int n = in.nextInt();
Date d1, d2;
Calendar cal = Calendar.getInstance();
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
for (int i = 0; i < n; i++) {
try {
d2 = sdf.parse(in.next());
d1 = sdf.parse(in.next());
long differene = (d2.getTime() - d1.getTime()) / (1000 * 60 * 60 * 24);
System.out.println(Math.abs(differene));
} catch (Exception e) {
}
}
A: public static int getDaysBetweenDates(Date fromDate, Date toDate) {
return Math.abs((int) ((toDate.getTime() - fromDate.getTime()) / (1000 * 60 * 60 * 24)));
} | unknown | |
d7365 | train | You could load the markup for the window as a string in your bookmarklet.js file, then (later) use window.open without a URL (or with "about:blank", I forget which is more cross-browser-compatible), and use my_popup.document.write to write the markup to the new window.
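A rough sketch of that idea (the markup string and window name below are made up for illustration):
var markup = '<!DOCTYPE html><html><body><h1>Hello from the bookmarklet</h1></body></html>';
var my_popup = window.open('', 'my_popup', 'width=400,height=300');
my_popup.document.open();
my_popup.document.write(markup);
my_popup.document.close();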
You may find that you can't open the window later, even without cross-domain issues, unless you're doing so in direct response to a user action — which is probably a good thing. :-) | unknown | |
d7366 | train | You're on the right path. But as you want to get the sum() of current_sales for each month, you shouldn't use GROUP BY in your subqueries, as it would return multiple rows. Instead, just put a WHERE condition to fetch rows for the same pro_id as the one currently being processed by the outer GROUP BY.
Following query will work:
select tmp.pro_id,
tmp.product_name,
tmp.nsp,
tmp.Jan,
tmp.Feb,
tmp.Mar,
tmp.Apr,
(tmp.Jan+tmp.Feb+tmp.Mar+tmp.Apr) as Q1,
tmp.May,
tmp.Jun,
tmp.Jul,
tmp.Aug,
(tmp.May+tmp.Jun+tmp.Jul+tmp.Aug) as Q2,
tmp.Sep,
tmp.Oct,
tmp.Nov,
tmp.`Dec`,
(tmp.Sep+tmp.Oct+tmp.Nov+tmp.`Dec`) as Q3
From
(
SELECT o.pro_id,
p.product_name,
p.nsp,
(case when coalesce(sum(month(order_date) = 1),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='1')
else 0
end
) as Jan,
(case when coalesce(sum(month(order_date) = 2),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='2')
else 0
end
) as Feb,
(case when coalesce(sum(month(order_date) = 3),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='3')
else 0
end
) as Mar,
(case when coalesce(sum(month(order_date) = 4),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='4')
else 0
end
) as Apr,
(case when coalesce(sum(month(order_date) = 5),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='5')
else 0
end
) as May,
(case when coalesce(sum(month(order_date) = 6),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='6')
else 0
end
) as Jun,
(case when coalesce(sum(month(order_date) = 7),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='7')
else 0
end
) as Jul,
(case when coalesce(sum(month(order_date) = 8),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='8')
else 0
end
) as Aug,
(case when coalesce(sum(month(order_date) = 9),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='9')
else 0
end
) as Sep,
(case when coalesce(sum(month(order_date) = 10),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='10')
else 0
end
) as Oct,
(case when coalesce(sum(month(order_date) = 11),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='11')
else 0
end
) as Nov,
(case when coalesce(sum(month(order_date) = 12),0) <> 0
then
(SELECT SUM(`current_sales`) FROM `orders` WHERE pro_id = o.pro_id and YEAR(order_date) ='2017' AND MONTH(order_date) ='12')
else 0
end
) as `Dec`
from products p
inner join orders o
on p.pro_id = o.pro_id
group by o.pro_id
)tmp
group by tmp.pro_id
;
Click here for DEMO
I also have another approach for your task, which uses a huge query with many of MySQL's built-in functions like GROUP_CONCAT(), SUBSTRING_INDEX(), etc.
Have a look at another approach:
select tmp2.pro_id,
tmp2.product_name,
tmp2.nsp,
tmp2.Jan,
tmp2.Feb,
tmp2.Mar,
tmp2.Apr,
(tmp2.Jan+tmp2.Feb+tmp2.Mar+tmp2.Apr) as Q1,
tmp2.May,
tmp2.Jun,
tmp2.Jul,
tmp2.Aug,
(tmp2.May+tmp2.Jun+tmp2.Jul+tmp2.Aug) as Q2,
tmp2.Sep,
tmp2.Oct,
tmp2.Nov,
tmp2.`Dec`,
(tmp2.Sep+tmp2.Oct+tmp2.Nov+tmp2.`Dec`) as Q3
from
(
select tmp.pro_id,
tmp.product_name,
tmp.nsp,
(case when coalesce(sum(tmp.month=1),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(1,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Jan,
(case when coalesce(sum(tmp.month=2),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(2,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Feb,
(case when coalesce(sum(tmp.month=3),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(3,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Mar,
(case when coalesce(sum(tmp.month=4),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(4,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Apr,
(case when coalesce(sum(tmp.month=5),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(5,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as May,
(case when coalesce(sum(tmp.month=6),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(6,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Jun,
(case when coalesce(sum(tmp.month=7),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(7,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Jul,
(case when coalesce(sum(tmp.month=8),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(8,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Aug,
(case when coalesce(sum(tmp.month=9),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(9,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Sep,
(case when coalesce(sum(tmp.month=10),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(10,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Oct,
(case when coalesce(sum(tmp.month=11),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(11,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as Nov,
(case when coalesce(sum(tmp.month=12),0) <> 0
then
substring_index
(
substring_index
(
Group_concat(tmp.total order by tmp.month separator ','),
',',
(find_in_set
(12,
Group_concat(tmp.month order by tmp.month separator ',')
)
)
),
',',
-1
)
else 0
end
) as `Dec`
from
(
select o.pro_id,
p.product_name,
p.nsp,
sum(o.current_sales) as total,
month(order_date) as month
from
products p
inner join orders o
on p.pro_id = o.pro_id
group by o.pro_id,month(order_date)
)tmp
group by tmp.pro_id
)tmp2
group by tmp2.pro_id
;
Click here for Demo
Now, you can run both the queries against your actual data and select one with less execution time.
Hope it helps! | unknown | |
d7367 | train | You can access a child property like this:
<TextBox Text="{Binding CurrentTrack.TitleName}"/>
You must have bound your View to your ViewModel | unknown | |
d7368 | train | You should test the server and client in isolation.
The way to do this is to use mock objects to mock either the server (for testing the client) or the client (for testing the server).
A mock server would have the same methods as the real server, but you decide what they return, e.g. to simulate a connection error, a timeout, etc. Because it is a mock, you have full control over its behaviour and you don't have to worry about actual connection errors.
For Java, look at the Mockito mocking framework.
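A small illustrative sketch with Mockito (the ChatServer/ChatClient types here are assumptions, not your actual classes):
import static org.mockito.Mockito.*;
// hypothetical collaborator that the client talks to
interface ChatServer {
    String send(String message) throws java.io.IOException;
}
// inside a unit test for the client:
ChatServer server = mock(ChatServer.class);   // no real network involved
when(server.send("hello")).thenReturn("ack"); // happy path
when(server.send("down")).thenThrow(new java.io.IOException("simulated connection error"));
ChatClient client = new ChatClient(server);   // class under test (assumed constructor)
client.say("hello");
verify(server).send("hello");                 // assert the client used the server as expected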
A: Unit tests should be focused on exercising public APIs of each class you have built. However, things get a little tricky when dealing with Swing. Consider swingUnit for unit testing Swing components.
A: Chapter 7 of Beautiful Testing describes testing an XMPP chat client. I recommend reading the chapter. The conclusion is illustrative and may provide some pointers for your chat application:
In our quest to create beautiful tests for checking XMPP protocol implementations, we started out by testing simple request-response protocols at the lowest level: the data sent of the network stream. After discovering that this form of testing does not really scale well, we abstracted out the protocol to a higher level, up to the point where the tests used only high-level data structures. By testing protocol behavior on a high level, we were able to write tests for more complex protocols without compromising the clarity of the tests. For the most complex protocols, writing scenarios helped to cover all of the possible situations that can arise in a protocol session. Finally, since XMPP is an open protocol with many different implementations, it's very important to test an XMPP application on the real network, to ensure interoperability with other implementations. By running small test programs regularly, we were able to test the system in its entirety, and check whether our implementation of the protocol plays together nicely with other entities on the network. | unknown | |
d7369 | train | With the current base method signature this is impossible since generics are erased.
Take a type tag (I also renamed the method's T to U to prevent confusion because of shadowing):
abstract class abs[T] {
def getA[U: TypeTag](): U
}
class ABS[T] extends abs[T] {
override def getA[U]()(implicit tag: TypeTag[U]): U = {
if (tag == typeTag[Int]) new Integer(1)
else if (tag == typeTag[Char]) new Character('1')
else throw new Exception("bad type")
}.asInstanceOf[U]
}
A: Maybe you want something like this
abstract class Abs[T] {
def getA: T
def getA(i: T): T
}
class Absz extends Abs[Int]{
override def getA = 4
override def getA(i: Int) = i
}
(new Absz()).getA //> res0: Int = 4
(new Absz()).getA(3) //> res1: Int = 3
A: You can do it with Shapeless polymorphic functions: https://github.com/milessabin/shapeless/wiki/Feature-overview:-shapeless-2.0.0
import shapeless._
object Abs extends Poly0{
implicit def caseInt = at[Int](4)
implicit def caseChar = at[Char]('4')
}
println(Abs.apply[Int])
println(Abs.apply[Char]) | unknown | |
d7370 | train | So, packaging your choices into a dictionary, similar to that shown below, should make it slightly easier to manage the choices here, I think (there's almost certainly a better way than this). Then add to the empty string each time a choice is made and try to access the dictionary. If the choice is in the dictionary then it will recover a text string and an end-state, which will enable us to end the game when we need to.
This approach also makes testing easier by using itertools to generate all possible combinations of states so you can work out which are missing. If an end_state is found (a value of 1 in the second position of the tuple), then you get the game over message and it closes the loop. If the element isn't in the dictionary, then the last selection was removed and the invalid_input function is called.
def test():
choice_dict = {"a": (dP_lvl1.path_a, 0),
"b": (dP_lvl1.path_b, 0),
"c": (dP_lvl1.path_c, 1)
"bb": (dP_lvl2.path_bb, 0),
"aa": (dP_lvl2.path_aa, 0),
"ba": (dP_lvl2.path_ba, 0),
"ab": (dP_lvl2.path_ab, 0),
"aaa": (dP_lvl3.path_aaa, 0),
"aab": (dP_lvl3.path_aab 0),
"aba": (dP_lvl3.path_aba, 0),
"abb": (dP_lvl3.path_abb, 0),
"bab": (dP_lvl3.path_bab, 0),
"bba": (dP_lvl3.path_bba} 0),
"bbb": (dP_lvl3.path_bbb, 0),
"aaaa": (dP_lvl4.path_aaaa, 0),
"abaa": (dP_lvl4.path_abaa, 0),
"aaba": (dP_lvl4.path_aaba, 0),
"aaab": (dP_lvl4.path_aaab, 1),
"bbba": (dP_lvl4.path_bbba, 0),
"bbab": (dP_lvl4.path_bbab, 0),
"babb": (dP_lvl4.path_babb, 0),
"abbb": (dP_lvl4.path_abbb, 0),
"abba": (dP_lvl4.path_abba, 1),
"abab": (dP_lvl4.path_abab, 0),
"aabb": (dP_lvl4.path_aabb, 0),
"baab": (dP_lvl4.path_baab, 0),
"bbaa": (dP_lvl4.path_bbaa, 1),
"baba": (dP_lvl4.path_baba, 0),
"baaa": (dP_lvl4.path_baaa, 0),
"bbbb": (dP_lvl4.path_bbbb, 0),}
# etc. you get the idea
decisions = ""
playing = True
while playing:
decision = input("choose an option 'a' or 'b':")
decisions += decision
try:
data, end_state = choice_dict[decisions]
print(data)
if end_state:
playing = False
print("Game over")
else:
continue
except KeyError:
decisions = decisions[:-1]
invalid_input() | unknown | |
d7371 | train | Based on this answer you can also use Latex to create a table.
For ease of usability, you can create a function that turns your data into the corresponding text-string:
import numpy as np
import matplotlib.pyplot as plt
from math import pi
from matplotlib import rc
rc('text', usetex=True)
# function that creates latex-table
def latex_table(celldata,rowlabel,collabel):
table = r'\begin{table} \begin{tabular}{|l|'
for c in range(0,len(collabel)):
# add additional columns
table += r'l|'
table += r'} \hline'
# provide the column headers
for c in range(0,len(collabel)-1):
table += collabel[c]
table += r'&'
table += collabel[-1]
table += r'\\ \hline'
# populate the table:
# this assumes the format to be celldata[index of rows][index of columns]
for r in range(0,len(rowlabel)):
table += rowlabel[r]
table += r'&'
for c in range(0,len(collabel)-2):
if not isinstance(celldata[r][c], basestring):
table += str(celldata[r][c])
else:
table += celldata[r][c]
table += r'&'
if not isinstance(celldata[r][-1], basestring):
table += str(celldata[r][-1])
else:
table += celldata[r][-1]
table += r'\\ \hline'
table += r'\end{tabular} \end{table}'
return table
# set up your data:
celldata = [[32, r'$\alpha$', 123],[200, 321, 50]]
rowlabel = [r'1st row', r'2nd row']
collabel = [r' ', r'$\alpha$', r'$\beta$', r'$\gamma$']
table = latex_table(celldata,rowlabel,collabel)
# set up the figure and subplots
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
ax1.plot(np.arange(100))
ax2.text(.1,.5,table, size=50)
ax2.axis('off')
plt.show()
The underlying idea of this function is to create one long string, called table, which can be interpreted as a LaTeX command. It is important to import rc and set rc('text', usetex=True) to ensure that the string can be understood as LaTeX.
The string is appended to using +=; the input is given as raw strings, hence the r prefix. The example data highlights the expected data format.
Finally, with this example, your figure looks like this: | unknown | |
d7372 | train | update `blogposts`
set `Body2` = substring(`Body`,3000-instr(reverse(left(`Body`,3000)),' ')+1)
,`Body` = left(`Body`,3000-instr(reverse(left(`Body`,3000)),' '))
where char_length(Body) > 3000
;
Demo on 30 characters
set @Body = 'My name is Inigo Montoya! You''ve killed my father, prepare to die!';
select left(@Body,30-instr(reverse(left(@Body,30)),' ')) as field_1
,substring(@Body,30-instr(reverse(left(@Body,30)),' ')+1) as field_2
;
+---------------------------+------------------------------------------+
| field_1 | field_2 |
+---------------------------+------------------------------------------+
| My name is Inigo Montoya! | You've killed my father, prepare to die! |
+---------------------------+------------------------------------------+
Full Example
create table `blogposts` (`Body` varchar(3000),`Body2` varchar(3000));
insert into blogposts (`Body`) values
('Hello darkness, my old friend' )
,('I''ve come to talk with you again' )
,('Because a vision softly creeping' )
,('Left its seeds while I was sleeping' )
,('And the vision that was planted in my brain' )
,('Still remains' )
,('Within the sound of silence' )
,('In restless dreams I walked alone' )
,('Narrow streets of cobblestone' )
,('''Neath the halo of a street lamp' )
,('I turned my collar to the cold and damp' )
,('When my eyes were stabbed by the flash of a neon light' )
,('That split the night' )
,('And touched the sound of silence' )
,('And in the naked light I saw' )
,('Ten thousand people, maybe more' )
,('People talking without speaking' )
,('People hearing without listening' )
,('People writing songs that voices never share' )
,('And no one dared' )
,('Disturb the sound of silence' )
;
select left(`Body`,30-instr(reverse(left(`Body`,30)),' ')) as Body
,substring(`Body`,30-instr(reverse(left(`Body`,30)),' ')+1) as Body2
from `blogposts`
where char_length(Body) > 30
;
+------------------------------+---------------------------+
| Body | Body2 |
+------------------------------+---------------------------+
| I've come to talk with you | again |
+------------------------------+---------------------------+
| Because a vision softly | creeping |
+------------------------------+---------------------------+
| Left its seeds while I was | sleeping |
+------------------------------+---------------------------+
| And the vision that was | planted in my brain |
+------------------------------+---------------------------+
| In restless dreams I walked | alone |
+------------------------------+---------------------------+
| 'Neath the halo of a street | lamp |
+------------------------------+---------------------------+
| I turned my collar to the | cold and damp |
+------------------------------+---------------------------+
| When my eyes were stabbed by | the flash of a neon light |
+------------------------------+---------------------------+
| And touched the sound of | silence |
+------------------------------+---------------------------+
| Ten thousand people, maybe | more |
+------------------------------+---------------------------+
| People talking without | speaking |
+------------------------------+---------------------------+
| People hearing without | listening |
+------------------------------+---------------------------+
| People writing songs that | voices never share |
+------------------------------+---------------------------+
update `blogposts`
set `Body2` = substring(`Body`,30-instr(reverse(left(`Body`,30)),' ')+1)
,`Body` = left(`Body`,30-instr(reverse(left(`Body`,30)),' '))
where char_length(`Body`) > 30
;
select `Body`
,`Body2`
from `blogposts`
where `Body2` is not null
;
+------------------------------+---------------------------+
| Body | Body2 |
+------------------------------+---------------------------+
| I've come to talk with you | again |
+------------------------------+---------------------------+
| Because a vision softly | creeping |
+------------------------------+---------------------------+
| Left its seeds while I was | sleeping |
+------------------------------+---------------------------+
| And the vision that was | planted in my brain |
+------------------------------+---------------------------+
| In restless dreams I walked | alone |
+------------------------------+---------------------------+
| 'Neath the halo of a street | lamp |
+------------------------------+---------------------------+
| I turned my collar to the | cold and damp |
+------------------------------+---------------------------+
| When my eyes were stabbed by | the flash of a neon light |
+------------------------------+---------------------------+
| And touched the sound of | silence |
+------------------------------+---------------------------+
| Ten thousand people, maybe | more |
+------------------------------+---------------------------+
| People talking without | speaking |
+------------------------------+---------------------------+
| People hearing without | listening |
+------------------------------+---------------------------+
| People writing songs that | voices never share |
+------------------------------+---------------------------+
A: That code will always divide string with 3000 characters and push it to the array. You can use this code block no matter what's the character length is. Don't forget if your text have characters lower than 3000 there will be just 1 element in the $bodyParts variable.
$bodyText; // That came from SQL Ex Query : SELECT body FROM blogposts
$bodyParts = [];
$lengthOfBody = strlen($bodyText);
if($lengthOfBody > 3000){
$forLoopInt = ceil($lengthOfBody / 3000); // For example if your body text have 3500 characters it will be 2
echo $forLoopInt;
for($i = 0; $i<= $forLoopInt - 2; $i++){
$bodyParts[] = substr($bodyText, ($i) * 3000 , 3000);
}
// lets fetch the last part
$bodyParts[] = substr( $bodyText,($forLoopInt - 1) * 3000);
}else{
$bodyParts[] = $bodyText;
}
/* anyway if your body text have characters lower than 3000 , bodyParts array will contain just 1 element, if not it will have Ceil(Length of body / 3000) elements in it. */
var_dump($bodyParts); | unknown | |
d7373 | train | For a non-coding solution you could add two additional rows (and swap row 2 & 3) to allow for the automation:
Row 1: Values
Row 2: Values
Row 3: =if(and(A1="",A2=""),"SELECT FROM DROPDOWN BELOW",if(and(A1="",A2<>""),A2,if(and(A1<>"",A2=""),,A1,if(and(A1<>"",A2<>""),A1,""))))
Row 4: Dropdown list where the first option is blank ""
Row 5: =IF(A3<>"SELECT FROM DROPDOWN BELOW", A3, IF(A4<>"",A4,"SELECTION NEEDED"))
Much taller, but fits the given requirements. | unknown | |
d7374 | train | You'd hit couchjs stack size limit. If you're using CouchDB 1.4.0+ his size is limited by 64 MiB by default. You may increase it by specifying -S <number-of-bytes> option for JavaScript query server in CouchDB config. For instance, to set stack size to 128 MiB your config value will looks like:
[query_servers]
javascript = /usr/bin/couchjs -S 134217728 /usr/share/couchdb/server/main.js
Note, that /usr/bin/couchjs may be different for you depending on your OS. After adding this changes you need to restart CouchDB.
If you'll try to update JavaScript query server config though HTTP API, make sure to kill all couchjs processes from shell to let them apply changes.
If your CouchDB version is <1.4 try to upgrade first. | unknown | |
d7375 | train | There is something in Docs for automatic substitution. Under the tools, menu, click preferences. You can also add a personal dictionary.
If you could add a personal dictionary from code, that would probably work, but I don't see a way to do that.
There is no trigger or way to monitor a Google doc for every keystroke made. See the documentation:
Trigger Events
Something would need to trigger your function to run on every keystroke. In a spreadsheet, there is an onEdit() simple trigger that monitors every change to a cell. But there is nothing like that for Google Docs.
The only event type available to a Google Doc is open. | unknown | |
d7376 | train | The instantiateViewController method creates a new copy of your view controller. Your existing view controllers aren't unloaded because iOS doesn't know that you want to 'go back', so to speak. It can't unload any of your existing view controllers because they're still in the navigation hierarchy. What you really want to do is 'rewind' your storyboard in some way.
Fortunately from iOS 6 there's a much improved way to do this, through unwinding. This lets you 'backtrack' in your storyboard right back to the start, which it sounds like you want to do. The WWDC videos have some examples and walk throughs, and you might also want to look at this existing SO question:
What are Unwind segues for and how do you use them?
A: I found that it can be done easily by calling dismissViewControllerAnimated:completion: on the first view controller in the hierarchy. Fortunately that's all it is needed to accomplish what I wanted :-) | unknown | |
d7377 | train | You could add a .selected to a link on different pages and target it with CSS.
A: The method I would use is a javascript/jQuery approach.
Similar to what htmltroll said, create a class, such as .selected, that has all of the styles you'd like the active link to have. Then in javascript, add something like this:
$(your-links).click(function(){
if (!$(this).hasClass("selected"))$(this).addClass("selected");
})
Something along those lines.
A: As @htmltroll and @Joel said, you'd need to use a little bit of JS(jQuery in my case) to achieve this, as CSS doesn't handle click events.
To make it a bit more modular, you could check to see if any .site-nav li has a nested ul, and then apply the 'active' class accordingly.
// any <li> that is a direct descendant of '.site-nav'
var links = $('.site-nav').find('> li');
// bind the click event
links.on('click', function() {
var clkd = $(this);
// if the <li> has a <ul> in it
if(clkd.has('ul').length) {
// and if that <li> doesn't have the 'active' class
if(!clkd.hasClass('active')) {
// Add the active class to the <li>
clkd.addClass('active');
} else {
// if the dropdown was already open, remove class to close
clkd.removeClass('active');
}
}
})
I threw together a quick fiddle to demonstrate: http://jsfiddle.net/uXB2T/7/ | unknown | |
d7378 | train | Remove the () from your call to the function
function hello($a, $b) {
Write-host "a is $a and b is $b"
}
hello "first" "second"
or
hello -a "first" -b "second" | unknown | |
d7379 | train | You can do this with tail quite easily
tail -n+3 foo > result.data
You said the top 3 rows, but the example removes only the top 2?
tail -n+2 foo > result.data
You can find more ways here
https://unix.stackexchange.com/questions/37790/how-do-i-delete-the-first-n-lines-of-an-ascii-file-using-shell-commands
A: Just throw those lines away.
Use DictReader to parse the header
import csv
with open("filename") as fp:
fp.readline()
fp.readline()
csvreader = csv.DictReader(fp, delimiter=',')
for row in csvreader:
#your code here
A: Due to the way file systems work, you cannot simply delete the lines from the file directly. Any method to do so will necessarily involve rewriting the entire file with the offending lines removed.
To be safe, before deleting your old file, you'll want to store the new file temporarily until you are sure the new one has been successfully created. And if you want to avoid reading the entire large file into memory, you'll want to use a generator.
Here's a generator that returns every item from an iterable (such as a file-like object) after a certain number of items have already been returned:
def gen_after_x(iterable, x):
# Python 3:
yield from (item for index,item in enumerate(iterable) if index>=x)
# Python 2:
for index,item in enumerate(iterable):
if index>=x:
yield item
To make things simpler, we'll create a function to write the temporary file:
def write_file(fname, lines):
with open(fname, 'w') as f:
for line in lines:
f.write(line + '\n')
We will also need the os.remove and os.rename functions from the os module to delete the source file and rename the temp file. And we'll need copyfile from shutil to make a copy, so we can safely delete the source file.
Now to put it all together:
from os import remove, rename
from shutil import copyfile
src_file = 'big_file'
tmp_file = 'big_file_temp'
skip = 2
with open(src_file) as fin:
olines = gen_after_x(fin, skip)
write_file(tmp_file, olines)
src_file_copy = src_file + '_copy'
copyfile(src_file, src_file_copy)
try:
remove(src_file)
rename(tmp_file, src_file)
remove(src_file_copy)
except Exception:
try:
copyfile(src_file_copy, src_file)
remove(src_file_copy)
remove(tmp_file)
except Exception:
pass
raise
However, I would note that 240 MB isn't such a huge file these days; you may find it faster to do this the usual way since it cuts down on repetitive disk writes:
src_file = 'big_file'
tmp_file = 'big_file_temp'
skip = 2
with open(src_file) as f:
lines = f.readlines()
for _ in range(skip):
lines.pop(0)
with open(tmp_file, 'w') as f:
f.write('\n'.join(lines))
src_file_copy = src_file + '_copy'
copyfile(src_file, src_file_copy)
try:
remove(src_file)
rename(tmp_file, src_file)
remove(src_file_copy)
except Exception:
try:
copyfile(src_file_copy, src_file)
remove(src_file_copy)
remove(tmp_file)
except Exception:
pass
raise
...or if you prefer the more risky way:
with open(src_file) as f:
lines = f.readlines()
for _ in range(skip):
lines.pop(0)
with open(src_file, 'w') as f:
f.write('\n'.join(lines)) | unknown | |
d7380 | train | The only reason to use binding to a backing bean's UIComponent instance that I know of is the ability to manipulate that component programmatically within an action/actionlistener method, or ajax listener method, like in:
UIInput programmaticInput;//getter+setter
String value1, value2;//getter+setter
...
public void modifyInput() {
ELContext ctx = FacesContext.getCurrentInstance().getELContext();
ValueExpression ve = FacesContext.getCurrentInstance().getApplication().getExpressionFactory().createValueExpression(ctx, "#{bean.value2}", Object.class);
programmaticInput.setValueExpression("value", ve);
}
After the action method has been triggered, the value of the component <h:inputText value="#{bean.value1}" binding="#{bean.programmaticInput}" ... /> will be bound to value2 instead of value1.
I rarely use this type of binding, because facelets offer an XML-based view definition without the necessity to (regularly) mess with programmatic components.
Be aware that the abovementioned construct fails in Mojarra versions older than 2.1.18, forcing view scoped beans to be recreated on every HTTP request. For more details refer to @ViewScoped fails in tag handlers.
More typically, you'd want to use binding to the view in which you can do cross-field validation:
<h:inputText binding="#{input}" ... />
<h:inputText validator="#{bean.validate}" ... >
<f:attribute name="input" value="#{input}" />
</h:inputText>
Here, the whole first input component will be available as an attribute of the second component and therefore its value will be available in the associated validator (method). Another example is to check which of the command components has been triggered in view:
<h:commandButton binding="#{button}" ... />
<h:inputText disabled="#{not empty param[button.clientId]}" ... />
Here, the input text component will be disabled only when the button was pressed.
For more information proceed to the follwing answers by BalusC:
*
*What is component binding in JSF? When it is preferred to be used?
*How does the 'binding' attribute work in JSF? When and how should it be used?
A: The <h:form> tag can be bound to a backing bean property that has the same type as the tag, HtmlForm - just like the other usual tags.
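A minimal sketch (the bean and property names are only examples):
<h:form binding="#{loginBean.form}">
    ...
</h:form>
with, in the backing bean:
private javax.faces.component.html.HtmlForm form; // plus getter and setter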
See also: Difference between value and binding | unknown | |
d7381 | train | Here is some sample code:
$('#textboxID').on('change',function(){
if($(this).val()>=2147483647){
//put error span with nice css
}
});
A: HTML:
<input type="text" id="numberField" />
<input id="submit" type="submit" value="Submit" />
JQuery:
$('#submit').click(function(){
var numberField = $('#numberField');
var number = parseInt(numberField.val(), 10);
if(isNaN(number) || number > 2147483647){
numberField.val('');
alert('Not a number');
}
else
alert('Number is: '+ number);
});
jsFiddle http://jsfiddle.net/R3Rx2/1/
A: You could do something like this:
DEMO: http://jsfiddle.net/mSSYT/1/
$('#test').on('keyup', function (e) {
var $self = $(this),
v = $self.val(),
max = 2147483647;
//blank any input that isn't a number
if (!/^\d*$/.test(v)) {
$self.val('');
return;
}
//trim the value until it meets the condition
if (v >= max) {
while (v >= max) {
v = v.substring(0, v.length - 1);
}
$self.val(v);
}
}); | unknown | |
d7382 | train | Its last version uses .NET Framework 2.0.
Years ago I gave it a try, but it was not interesting to me in those days. :( | unknown |
d7383 | train | Sorry, got it right now using an XMLSerializer instead of the Transformer...
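For reference, a minimal Java sketch of that approach (using the same Xerces serializer classes as the Scala answer further down):
import com.sun.org.apache.xml.internal.serialize.OutputFormat;
import com.sun.org.apache.xml.internal.serialize.XMLSerializer;
import java.io.FileOutputStream;

OutputFormat format = new OutputFormat(doc);  // doc is the org.w3c.dom.Document to write
format.setIndenting(true);
XMLSerializer serializer = new XMLSerializer(new FileOutputStream("out.xml"), format);
serializer.serialize(doc);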
A: Here's how you could do it using the LSSerializer found in JDK:
private void writeDocument(Document doc, String filename)
throws IOException {
Writer writer = null;
try {
/*
* Could extract "ls" to an instance attribute, so it can be reused.
*/
DOMImplementationLS ls = (DOMImplementationLS)
DOMImplementationRegistry.newInstance().
getDOMImplementation("LS");
writer = new OutputStreamWriter(new FileOutputStream(filename));
LSOutput lsout = ls.createLSOutput();
lsout.setCharacterStream(writer);
/*
* If "doc" has been constructed by parsing an XML document, we
* should keep its encoding when serializing it; if it has been
* constructed in memory, its encoding has to be decided by the
* client code.
*/
lsout.setEncoding(doc.getXmlEncoding());
LSSerializer serializer = ls.createLSSerializer();
serializer.write(doc, lsout);
} catch (Exception e) {
throw new IOException(e);
} finally {
if (writer != null) writer.close();
}
}
Needed imports:
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import org.w3c.dom.Document;
import org.w3c.dom.bootstrap.DOMImplementationRegistry;
import org.w3c.dom.ls.DOMImplementationLS;
import org.w3c.dom.ls.LSOutput;
import org.w3c.dom.ls.LSSerializer;
I know this is an old question which has already been answered, but I think the technical details might help someone.
A: I tried using the LSSerializer library and was unable to get anywhere with it in terms of retaining the Doctype. This is the solution that Stephan probably used
Note: This is in Scala but uses a Java library, so just convert the code
import com.sun.org.apache.xml.internal.serialize.{OutputFormat, XMLSerializer}
def transformXML(root: Element, file: String): Unit = {
val doc = root.getOwnerDocument
val format = new OutputFormat(doc)
format.setIndenting(true)
val writer = new OutputStreamWriter(new FileOutputStream(new File(file)))
val serializer = new XMLSerializer(writer, format)
serializer.serialize(doc)
} | unknown | |
d7384 | train | You're encoding in UTF-8, so you can just encode the Unicode control characters like any character. But something tells me you mean something else.
A: You can also change your field packager to a BINARY type (see IF*BINARY) and use a hex representation, i.e.:
<field id="xx" value="0123456789ABCDEF" type="binary" /> | unknown | |
d7385 | train | Zebble provides you with other overloads of the Nav pop-up methods to help you achieve that.
Host page:
var result = await Nav.ShowPopup<TargetPage, SomeType>();
// Now you can use "result".
Pop-up page's close button:
...
await Nav.HidePopup(someResultValue);
Notes:
*
*"SomeType" can be a simple type such as boolean or string, or it can be a complex class.
*The type of the object returned by the pop-up must match the one expected by the host parent page.
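For instance, a confirmation pop-up returning a boolean (ConfirmDeletePage is just a made-up page name):
// Host page:
var confirmed = await Nav.ShowPopup<ConfirmDeletePage, bool>();
// Inside ConfirmDeletePage, e.g. in its OK button handler:
await Nav.HidePopup(true);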
You can check out the full spec here: http://zebble.net/docs/showing-popup-pages | unknown | |
d7386 | train | Your problem is the WHERE clause on PropertyId;
you should have that as a JOIN condition instead.
The WHERE clause and the ON conditions can be used interchangeably in an INNER JOIN, but in an OUTER JOIN they change the meaning.
demo link
SELECT [R].[PropertyId], [D].[DateName], CASE WHEN [R].[StartingDate] IS NULL THEN 1 ELSE 0 END AS [IsAvailable]
FROM @Dates AS D
LEFT JOIN @Rentals R ON [D].[DateName] >= [R].[StartingDate] AND [D].[DateName] <= [R].[EndingDate]
AND [R].[PropertyId] = 'A5B2B505-EC6F-EC11-A004-00155E014807'
WHERE [D].[DateName] BETWEEN @AvailableRentalStartingDate AND @AvailableRentalEndingDate
ORDER BY [D].[DateName] | unknown | |
d7387 | train | You can't add a constructor to a function - especially not to an arrow function. A constructor belongs to a class. | unknown |
d7388 | train | Your inner loop is pushing onto the new array for every item in the array, not just if the desired month is found.
Don't use an inner loop. Use find() to find the matching month, and push 0 if you don't find it.
for (let i = 1; i <= 5; i++) {
if (myobj.find(el => el.month == i)) {
newArray.push(i);
} else {
newArray.push(0);
}
}
If you want to push the totals instead of the months, assign the result of find() to a variable so you can get the total from it:
for (let i = 1; i <= 5; i++) {
var found = myobj.find(el => el.month == i);
newArray.push(found ? found.total : 0);
} | unknown | |
d7389 | train | chromium-browser --disable-hang-monitor | unknown | |
d7390 | train | My understanding is that a WAR will only expose the resources contained within its own root file structure. Hence the work-arounds (weblets, Maven config) mentioned in the answers to this question. You could make a custom build script that unpacked the content of logon.jar into your WAR, but I definitely wouldn't recommend that sort of hack. With a little more information concerning why you want to do this someone may be able to provide a better approach.
A: As far as I understand, html files should be placed in a WAR instead of a JAR because those are web resources. JAR file should only contain classes/resources that would be looked up by your web components (e.g. Servlet, JSP) using class loading mechanism. | unknown | |
d7391 | train | First of all the target type of CONVERT should be DATETIME...
The format code you've tried expects the month as word (mon != mm)
SELECT CONVERT(DATETIME,'18 jan 2016 11:29:27',113);
You might use one of these:
SELECT CONVERT(DATETIME,'18-01-2016 11:29:27',103)
SELECT CONVERT(DATETIME,'18-01-2016 11:29:27',104)
A: I believe it's being inserted as varchar; you can use this script:
SELECT CONVERT(varchar(24),'18-01-2016 11:29:27',120) | unknown | |
d7392 | train | Make sure you give Template Name: AWPN Featured Article a new unique name.
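For reference, the header at the top of the copied template file would look something like this (the new name is just an example; Template Post Type requires WordPress 4.7+ if you are targeting posts):
<?php
/*
Template Name: AWPN Featured Article - Artists
Template Post Type: post
*/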
Then go into an artist's post and select the new template you created from the right sidebar.
Write some content, publish, and check the page in the front-end. | unknown | |
d7393 | train | Try this :
public class DatabaseManager {
private DatabaseHelper dataHelper;
private SQLiteDatabase mDb;
private Context ctx;
private String DATABASE_PATH = "/data/data/Your_Package_Name/databases/";
private static String DATABASE_NAME = "Your_Database";
private static String TABLE_NAME = "Your_Table";
private static final int DATABASE_VERSION = 1;
String Class_Tag = "DatabaseManager";
public DatabaseManager(Context ctx) {
this.ctx = ctx;
dataHelper = new DatabaseHelper(ctx);
}
private static class DatabaseHelper extends SQLiteOpenHelper {
@SuppressWarnings("unused")
Context myContext = null;
public DatabaseHelper(Context context) {
super(context, DATABASE_NAME, null, DATABASE_VERSION);
this.myContext = context;
}
@Override
public void onCreate(SQLiteDatabase db) {
}
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
Log.w("DBHelper", "Upgrading database from version " + oldVersion
+ " to " + newVersion + ", which will destroy all old data");
onCreate(db);
}
}
public boolean checkDataBase() {
File f = null;
try {
String myPath = DATABASE_PATH + DATABASE_NAME;
f = new File(myPath);
} catch (Exception e) {
Log.e(Class_Tag, "checkDataBase()", e);
}
return f.exists();
}
public void createDataBase() {
try {
openDB();
InputStream myInput = ctx.getAssets().open(DATABASE_NAME + ".db");
OutputStream myOutput = new FileOutputStream(DATABASE_PATH
+ DATABASE_NAME);
byte[] buffer = new byte[1024];
int length;
while ((length = myInput.read(buffer)) > 0) {
myOutput.write(buffer, 0, length);
}
if (mDb.isOpen())
mDb.close();
myOutput.flush();
myOutput.close();
myInput.close();
} catch (Exception e) {
Log.e(Class_Tag, "createDataBase()", e);
}
}
public DatabaseManager openDB() throws SQLException {
mDb = dataHelper.getWritableDatabase();
return this;
}
public void closeDB() {
try {
if (mDb != null) {
mDb.close();
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
and in MainActivity
DatabaseManager dbMgr=new DatabaseManager(this);
try {
if (!dbMgr.checkDataBase()) {
dbMgr.createDataBase();
}
} catch (Exception e) {
Log.e(Class_Tag, "onCreate()", e);
} finally {
dbMgr.closeDB();
}
Hope it helps... | unknown | |
d7394 | train | Instead of
const dataset = gdal.open('raster.bip');
use
const dataset = gdal.open('raster.bip', gdal.GA_Update);
to allow writing to raster.bip | unknown | |
d7395 | train | Thanks for all the answers. I finally found the solution: there is a property for maximum sessions whose value was 0 by default. I changed it to 100 and it sent all pending emails immediately.
A: This sounds like a DNS issue. Check your /badmail directory. It will have .bad and .bdp files in there. You can open these in notepad (there will be some binary in there).
However, it may point to the possible problem.
You may also want to try and enable logging on the SMTP service. There may be something in there.
A: Possible reasons are that some SMTP servers block the outgoing messages if there domain name mismatch, possible to prevent spam mails from being sent. So for example, I will not be able to send my email with an address [email protected] from my domain yourdomain.com.
Hope that helps.
A: Ensure your sending domain is the same as the google apps domain
Ensure your sending address is a real address and not just an alias
IIRC you need to use STARTTLS (SSL) not basic authentication | unknown | |
d7396 | train | You could create a subplot with two rows and then plot that subplot.
Here is a minimal example:
import plotly.graph_objs as go
import plotly.offline as pyo
from plotly.subplots import make_subplots
# traces WithStorage
trace1 = go.Scatter({'x': [3,3.1],'y': [1,1.1], 'name': 'Coal', 'mode':'lines', 'line' : dict(width = 0.5, color = 'grey'), 'stackgroup': 'one'})
trace2 = go.Scatter({'x': [4,4.2],'y': [2,2.1], 'name': 'Nuclear', 'mode':'lines', 'line' : dict(width = 0.5, color = 'red'), 'stackgroup': 'one'})
#traces WithoutStorage
trace3 = go.Scatter({'x': [5,5.1],'y': [2,2.1], 'name': 'Coal', 'mode':'lines', 'line' : dict(width = 0.5, color = 'grey'), 'stackgroup': 'one'})
trace4 = go.Scatter({'x': [6, 6.1],'y': [3,3.1], 'name': 'Nuclear', 'mode':'lines', 'line' : dict(width = 0.5, color = 'red'), 'stackgroup': 'one'})
fig = make_subplots(rows=2, cols=1)
# we add each trace to their subplot
fig.add_trace(trace1,1,1)
fig.add_trace(trace2,1,1)
fig.add_trace(trace3,2,1)
fig.add_trace(trace4,2,1)
fig.update_layout(height=600, width=600, title_text="Stacked Subplots")
pyo.plot(fig, filename= 'testing.html') | unknown | |
d7397 | train | My idea is to add a JavaScript mocking library like Sinon and execute this JavaScript.
Especially take a look at fake XMLHttpRequest
The JavaScript code will look like this:
let sinon = require('sinon');
let xhr = sinon.useFakeXMLHttpRequest();
let requests = [];
xhr.onCreate = function (xhr) {
requests.push(xhr);
}
const url = "http://foo.bar?q=" + location.href
const method = "GET"
const isAjax = true
let xmlhttp = new XMLHttpRequest();
xmlhttp.open(method, url, isAjax);
console.log(requests[0].url) | unknown | |
d7398 | train | Problem lies in this code:
return step.forEach(doc=>{
if (!doc.exists){
console.log('Zut !')
}else{
console.log(doc.id)
return doc.id
}
step is a QuerySnapshot, so you should access the documents using step.docs. The forEach function does not return the array; it returns undefined instead.
Use map, which actually returns the transformed data:
return step.docs.map(doc => doc.id);
A: I finished my app a long time ago, but I think this might be of interest.
I finally decided to learn typescript lol.
You can't get the value out of a promise directly, since it isn't returned synchronously - that was what I was trying to do.
Just call your function and .then do what you want ^^.
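For example (getStepIds is just a placeholder for whatever your function is called):
getStepIds().then(ids => {
    console.log(ids); // the resolved array of document ids is only available in here
});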
By the way, I needed to map my data as said by Lim Shang Yi | unknown | |
d7399 | train | You could rely on WebSocket.bufferedAmount (never tried)
http://www.whatwg.org/specs/web-apps/current-work/multipage/network.html#dom-websocket-bufferedamount
var socket = new WebSocket('ws://game.example.com:12010/updates');
socket.onopen = function () {
setInterval(function() {
if (socket.bufferedAmount == 0){
// I'm not busy anymore - set a flag or something like that
}
}, 50);
};
Or implement an acknowledgement answer from the server for every client message (tried, works fine) | unknown |
d7400 | train | Use transform to get the per-group sum and divide column Total by it:
df['Average'] = df['Total'] / df.groupby('Member')['Total'].transform('sum')
print (df)
Member Category Total Average
0 1001 1 5 0.277778
1 1001 2 4 0.222222
2 1001 3 9 0.500000
3 1003 1 7 0.500000
4 1003 2 5 0.357143
5 1003 3 2 0.142857
6 1005 1 2 0.285714
7 1005 3 5 0.714286
Detail:
print (df.groupby('Member')['Total'].transform('sum'))
0 18
1 18
2 18
3 14
4 14
5 14
6 7
7 7
Name: Total, dtype: int64
Alternative solution:
df['Average'] = df['Total'] / df['Member'].map(df.groupby('Member')['Total'].sum())
Timings:
np.random.seed(123)
N = 100000
L = ['AV','DF','SD','RF','F','WW','FG','SX']
dates = pd.date_range('2015-01-01', '2015-02-20')
df = pd.DataFrame(np.random.randint(100, size=(N, 3)), columns=['Member','Category','Total'])
df = df.sort_values(['Member','Category']).reset_index(drop=True)
#Wen solution
In [395]: %timeit df.groupby('Member').Total.apply(lambda x : x/sum(x))
10 loops, best of 3: 31.2 ms per loop
In [396]: %timeit df['Total'] / df.groupby('Member')['Total'].transform('sum')
100 loops, best of 3: 5.11 ms per loop
#alternative a bit slowier solution
In [397]: %timeit df['Total'] / df['Member'].map(df.groupby('Member')['Total'].sum())
100 loops, best of 3: 9.92 ms per loop
A: df['ave']=df.groupby('Member').Total.apply(lambda x : x/sum(x))
df
Out[318]:
Member Category Total ave
0 1001 1 5 0.277778
1 1001 2 4 0.222222
2 1001 3 9 0.500000
3 1003 1 7 0.500000
4 1003 2 5 0.357143
5 1003 3 2 0.142857
6 1005 1 2 0.285714
7 1005 3 5 0.714286 | unknown |