_id | partition | text | language | title
---|---|---|---|---
d17701 | val | You can group the elements using a dict, always keeping the sublist with the smaller second element:
l = [[1, 2, 3], [1, 3, 4], [1, 4, 5], [2, 4, 3], [2, 5, 6], [2, 1, 3]]
d = {}
for sub in l:
k = sub[0]
if k not in d or sub[1] < d[k][1]:
d[k] = sub
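For the sample list above, the loop keeps one sublist per key, the one with the smallest second element:
print(list(d.values()))  # -> [[1, 2, 3], [2, 1, 3]]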
Also you can pass two keys to sorted, you don't need to call sorted twice:
In [3]: l = [[1,4,6,2],[2,2,4,6],[1,2,4,5]]
In [4]: sorted(l,key=lambda x: (-x[1],x[0]))
Out[4]: [[1, 4, 6, 2], [1, 2, 4, 5], [2, 2, 4, 6]]
If you want to maintain insertion order in the dict:
from collections import OrderedDict
l = [[1, 2, 3], [1, 3, 4], [1, 4, 5], [2, 4, 3], [2, 5, 6], [2, 1, 3]]
d = OrderedDict()
for sub in l:
k = sub[0]
if k not in d or sub[1] < d[k][1]:
d[k] = sub
But I am not sure how that fits, as you are sorting the data afterwards, so you would lose any order anyway.
What you may find very useful is a sortedcontainers.sorteddict:
A SortedDict provides the same methods as a dict. Additionally, a SortedDict efficiently maintains its keys in sorted order. Consequently, the keys method will return the keys in sorted order, the popitem method will remove the item with the highest key, etc.
An optional key argument defines a callable that, like the key argument to Python’s sorted function, extracts a comparison key from each dict key. If no function is specified, the default compares the dict keys directly. The key argument must be provided as a positional argument and must come before all other arguments.
from sortedcontainers import SortedDict
l = [[1, 2, 3], [1, 3, 4], [1, 4, 5], [2, 4, 3], [2, 5, 6], [2, 1, 3]]
d = SortedDict()
for sub in l:
k = sub[0]
if k not in d or sub[1] < d[k][1]:
d[k] = sub
print(list(d.values()))
It has all the methods you want: bisect, bisect_left, etc.
A: If I got it correctly, the solution might be like this:
mylist = [[1, 2, 3], [1, 3, 4], [1, 4, 5], [7, 3, 6], [7, 1, 8]]
ordering = []
newdata = {}
for a, b, c in mylist:
if a in newdata:
if b < newdata[a][1]:
newdata[a] = [a, b, c]
else:
newdata[a] = [a, b, c]
ordering.append(a)
newlist = [newdata[v] for v in ordering]
So in newlist we will receive the reduced list [[1, 2, 3], [7, 1, 8]]. | unknown | |
d17702 | val | You are sending a function as the data. I think you want the return value of the function, so you would need to call it by including parens after it:
var newGridDataSource = new kendo.data.DataSource({
transport: {
read: {
url: "/api/Stuff/",
dataType: "json",
data:
function(){
return { name: "Bob" };
}()
}
}
});
But that seems kind of silly when you could just do:
data: { name: "Bob" }
Update:
You can't get data FromBody with a GET request. Remove <FromBody()> and the model binder will look in the query string for data and it should work. | unknown | |
d17703 | val | You can use something like this:
$f = array_filter(array_keys($arr), function ($k){
return preg_match('~something/(?![_!])~', $k); });
The negative lookahead (?!...) checks if the slash isn't followed by a ! or a _.
Note that preg_match returns 1 if found or 0 if not. If you want to return true or false you can do it with the ternary operator:
return preg_match('~something/(?![_!])~', $k) ? true : false;
Another way:
If you use strpos, I assume that the string you describe as 'something/' is a fixed string $str; in other words, you know the length of this string. Since strpos returns the position $pos of the occurrence in the string, you can easily check if at the position $pos + strlen($str) there is either an exclamation mark or an underscore (you could, for example, extract this next character and check if it is in the array array('!', '_')).
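A minimal sketch of that strpos approach (assuming $k is the key under test):
$str = 'something/';
$pos = strpos($k, $str);
if ($pos !== false) {
    $next = substr($k, $pos + strlen($str), 1); // the character right after the prefix
    $keep = !in_array($next, array('!', '_'));  // true when not followed by ! or _
} | unknown | |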
d17704 | val | Your base case is only checking if the number is zero. Check this solution, and let me know if you have doubts!
public int evenToZero(int number){
//Base case: Only one digit
if(number % 10 == number){
if(number % 2 == 0){
return 0;
}
else{
return number;
}
}
else{ //Recursive case: Number of two or more digits
//Get last digit
int lastDigit = number % 10;
//Check if it is even, and change it to zero
if(lastDigit % 2 == 0){
lastDigit = 0;
}
//Recursive call
return evenToZero(number/10)*10 + lastDigit;
}
}
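A hypothetical quick check (assuming the method is made static, or is called on an instance):
public static void main(String[] args) {
    System.out.println(evenToZero(332));   // prints 330
    System.out.println(evenToZero(12345)); // prints 10305
}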
A: Below is a recursive method that will do what you need. Using strings is easier because it allows you to 'add' the number digit by digit: using 332 as an example, "3"+"3"+"0" becomes "330", and not 6.
Every run through, it cuts off the rightmost digit, so 332 becomes 33, then 3, and then 0.
public static void main(String[] args) {
System.out.println(Even(12345));
}
private static String Even(int n) {
if(n==0)
return "";
else if(n%2==0){
return Even(n/10) + "0";
}
else{
String s = Integer.toString(n%10);
return Even(n/10) + s;
}
} | unknown | |
d17705 | val | filename users wouldn't point to the same object as filename users/.
That is not true. In most filesystems, you cannot have a file named users and a directory named users in the same parent directory.
cd users and cd users/ have the same result.
A: Short answer: they may only identify the same resource if one redirects to the other.
URI's identify resources, but they do so differently depending on the response status code to a GET request in HTTP. If one returns a 3xx to the other, then the two URI's identify the same resource. If the two resources each return a 2xx code, then the URI's identify different resources. They may return the same response in reply to a GET request, but they are not therefore the same resource. The two resources may even map to the same handler to produce their reply, but they are not therefore the same resource. To quote Roy Fielding:
The resource is not the storage object. The resource is not a
mechanism that the server uses to handle the storage object. The
resource is a conceptual mapping -- the server receives the identifier
(which identifies the mapping) and applies it to its current mapping
implementation (usually a combination of collection-specific deep tree
traversal and/or hash tables) to find the currently responsible
handler implementation and the handler implementation then selects the
appropriate action+response based on the request content.
So, should /users and /users/ return the same response? No. If one does not redirect to the other, then they should return different responses. However, this is not itself a constraint of REST. It is a constraint, however, which makes networked systems more scalable: information that is duplicated in multiple resources can get out of sync (especially in the presence of caches, which are a constraint of REST) and lead to race conditions. See Pat Helland's Apostate's Opinion for a complete discussion.
Finally, clients may break when attempting to resolve references relative to the given URI. The URI spec makes it clear that resolving the relative reference Jerry/age against /users/ results in /users/Jerry/age, while resolving it against /users (no trailing slash) results in /Jerry/age. It's amazing how much client code has been written to detect and correct the latter to behave like the former (and not always successfully).
For any collection (which /users/ often is), I find it best to always emit /users/ in URI's, redirect /users to /users/ every time, and serve the final response from the /users/ resource: this keeps entities from getting out of sync and makes relative resolution a snap on any client.
A: There are some nuances to this: "users" represents one resource, while "users/" should represent a set of resources, or operations on all "users" resources... But there does not seem to be a "standard" for this issue.
There is another discussion on this, take a look here: https://softwareengineering.stackexchange.com/questions/186959/trailing-slash-in-restful-api
A: Technically they are not the same. But a request for /users will probably cause a redirect to /users/ which makes them semantically equal.
In terms of JAX-RS @Path, they can both be used for the same path. | unknown | |
d17706 | val | There are so many caching back-ends you can use as listed in https://docs.djangoproject.com/en/1.7/topics/cache/
I haven't tried the file system or local memory caching myself (I always needed memcached), but they look available, and the rest is a piece of cake!
from django.core import cache
cache_key = 'questions'
questions = cache.cache.get(cache_key) # to get
if questions:
# use questions you fetched from cache
else:
questions = { 'question1': 'How are you?'} #Something serializable that has your questions
cache.cache.set(cache_key, questions) | unknown | |
d17707 | val | I don't see why you have to "cluster" on the fly. Summarize at each zoom level at a resolution you're happy with.
Simply have a structure of X, Y, # of links. When someone adds a link, you insert the real locations (Zoom level max, or whatever), then start bubbling up from there.
Eventually you'll have 10 sets of distinct coordinates if you have 10 zoom levels - one for each different zoom level.
The calculation is trivial, and you only have to do it once.
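A rough sketch of that bubble-up idea (a minimal sketch assuming integer grid coordinates; the zoom count and storage are placeholders):
MAX_ZOOM = 10  # zoom levels 0..10

# counts[zoom][(x, y)] -> number of links aggregated into that grid cell
counts = [dict() for _ in range(MAX_ZOOM + 1)]

def add_link(x, y):
    """Insert at the deepest zoom, then bubble the count up to coarser levels."""
    for zoom in range(MAX_ZOOM, -1, -1):
        shift = MAX_ZOOM - zoom           # each coarser level halves the resolution
        cell = (x >> shift, y >> shift)
        counts[zoom][cell] = counts[zoom].get(cell, 0) + 1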
A: I am currently doing dynamic server-side clustering of about 2,000 markers, but it runs pretty quickly up to 20,000. You can see a discussion of my algorithm here:
Map Clustering Algorithm
Whenever the user moves the map I send a request with the zoom level and the boundaries of the view to the server, which clusters the viewable markers and sends it back to the client.
I don't cache the clusters because the markers can be dynamically filtered and searched - but if they were pre-clustered it would be super fast! | unknown | |
d17708 | val | I wouldn't call it a sum statement.
The statement
var1+1;
is equivalent of
retain var1 0;
var1 = var1 + 1;
Neither the 'long' sum statement
var1 = var1 + 1;
nor
var1 = sum(var1, 1);
would by itself give the RETAIN behavior or the initialization to zero.
So to answer the question:
the initialization to zero is part of the RETAIN behavior implicitly requested by the
a + b;
syntax for variable a.
I can't think of other exceptions. | unknown | |
d17709 | val | Are you just trying to evaluate an arbitrary expression inside a double quoted string? Then maybe you're thinking of
print "@{[$this->method]}";
There is also a trick to call the method in scalar context, but the syntax is a little less clean.
print "${\($this->method)}";
A: Well, if $this->method outputs a string or a number (like PHP, Perl can automatically convert numbers to strings when required), then you can do print $this->method . "\n";.
If $this->method outputs a data structure (eg an array reference or a hash reference), you can use Data::Dumper to look at the structure of the data. Basically, print Dumper($foo) is the Perl equivalent of PHP's var_dump($foo).
What are you trying to do, exactly?
A: If $this->method is returning a string, you can do this:
print $this->method . "\n";
without quotes. That will print your string. Sometimes, that can lead to a clumsy looking statement:
print "And we have " . $this->method . " and " . $that->method . " and " . $there->method . "\n";
In that case you can use a little programming trick of:
print "And we have @{[$this->method]} and @{[that->method]} and @{[$their->method]}\n";
Surrounding a function call with @{[]} interpolates its value: the inner [ ] builds an anonymous array reference holding the call's return value, the outer @{ } dereferences it into a list, and array dereferences are expanded inside double-quoted strings. | unknown | |
d17710 | val | Make the first column the primary key of the table.
A: Set the column as a primary key. It doesn't have to be an identity column to be a PK.
A: Create it the same way you would any other column: create table sometable (column1 varchar(10), column2 varchar(20)) or whatever.
Do you mean: How can you get the database to force it to be unique? Either declare it to be the primary key, or create a unique index on the column.
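For example (generic SQL; table, column and constraint names are placeholders):
ALTER TABLE sometable
  ADD CONSTRAINT pk_sometable PRIMARY KEY (column1);
-- or keep it a plain column and enforce uniqueness with an index:
CREATE UNIQUE INDEX ux_sometable_column1 ON sometable (column1);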
Perhaps you're thinking that a primary key must be auto-generated? There's no such rule. Whether you invent the value yourself or use an autonumber feature has nothing to do with whether a field can be a primary key.
A: Why not just put a unique constraint on it:
ALTER TABLE <table_name>
ADD CONSTRAINT <constraint_name>
UNIQUE(<column_name>) | unknown | |
d17711 | val | Replace text from txt for each line and save as: Could somebody help me with the following? I tried making it on my own, but all I could do is open a txt and replace a static word with a static word.
VBA script:
Open and Read first line of ThisVbaPath.WordsToUse.txt
Open and Find USER_INPUT in ThisVbaPath.BaseDoc.docx (or txt)
Replace all occurrence of USER_INPUT with first line from WordsToUse.txt
Save BaseDoc.docx as BaseDoc&First_Line.docx
Close all
Go next line and do same, but don't ask for user input, use previous one
If error go next
When done show if there were any errors (unlikely I guess)
I would use it about weekly for 150-ish lines.
Thanks!
A: I think something like this should work?
Sub test()
    'Dim a, b As String types only the last variable - type each one explicitly
    Dim text As String, textReplace As String, findMe As String
    Dim words As New Collection, w As Variant
    findMe = InputBox("What Text To Find?")
    'Read all replacement words up front so they can be reused for every source line
    '(nested While loops over the same open file would exhaust it after the first pass)
    Open "C:\Excel Scripts\test2.txt" For Input As #2
    While Not EOF(2)
        Line Input #2, textReplace
        words.Add textReplace
    Wend
    Close #2
    Open "C:\Excel Scripts\test.txt" For Input As #1
    Open "C:\Excel Scripts\output.txt" For Output As #3
    While Not EOF(1)
        Line Input #1, text
        For Each w In words
            Write #3, Replace(text, findMe, CStr(w))
        Next w
    Wend
    Close #1
    Close #3
End Sub
A: Sub TextFile_FindReplace()
    Dim TextFile As Integer
    Dim FilePath As String
    Dim FileContent As String
    FilePath = Application.ActiveWorkbook.Path & "\NEWTEST.txt"
    'Read the whole file in
    TextFile = FreeFile
    Open FilePath For Input As TextFile
    FileContent = Input(LOF(TextFile), TextFile)
    Close TextFile
    FileContent = Replace(FileContent, "FOO", "BAR")
    'Reopen for output - you cannot Print to a file opened For Input
    TextFile = FreeFile
    Open FilePath For Output As TextFile
    Print #TextFile, FileContent
    Close TextFile
End Sub | unknown | |
d17712 | val | You can surround the string with single quotes, since double quotes are used in the string already:
>>> print 'q0Ø:;AI"E47FRBQNBG4WNB8B4LQN8ERKC88U8GEN?T6LaNBG4GØ""N6K086HB"Ø8CRHW"+LS79Ø""N29QCLN5WNEBS8GENBG4FØ47a'
q0Ø:;AI"E47FRBQNBG4WNB8B4LQN8ERKC88U8GEN?T6LaNBG4GØ""N6K086HB"Ø8CRHW"+LS79Ø""N29QCLN5WNEBS8GENBG4FØ47a
>>> | unknown | |
d17713 | val | If I haven't misunderstood your requirements, then you can try this approach with json_normalize. I've added a demo for a single json; you can use apply or a lambda for multiple datasets.
import pandas as pd
from pandas.io.json import json_normalize
df = {":@computed_region_amqz_jbr4":"587",":@computed_region_d3gw_znnf":"18",":@computed_region_nmsq_hqvv":"55",":@computed_region_r6rf_p9et":"36",":@computed_region_rayf_jjgk":"295","arrests":"1","county_code":"44","county_code_text":"44","county_name":"Mifflin","fips_county_code":"087","fips_state_code":"42","incident_count":"1","lat_long":{"type":"Point","coordinates":[-77.620031,40.612749]}}
df = pd.io.json.json_normalize(df)
df_modified = df[['county_name', 'incident_count', 'lat_long.type']]
df_modified['lng'] = df['lat_long.coordinates'][0][0]  # GeoJSON stores [longitude, latitude]
df_modified['lat'] = df['lat_long.coordinates'][0][1]
print(df_modified)
A: Here is how you can do it as well:
df1 = pd.io.json.json_normalize(df)
pd.concat([df1, df1['lat_long.coordinates'].apply(pd.Series) \
.rename(columns={0: 'long', 1: 'lat'})], axis=1) \
.drop(columns=['lat_long.coordinates', 'lat_long.type']) | unknown | |
d17714 | val | Each click on an object adds a listener to your button, but you never remove listeners. You end up with multiple listeners; that's why more objects are moved than intended.
You could remove listener after each button click, but that seems like a total overkill.
Instead of adding multiple listeners, consider adding just one which will remember the last clicked object, and clicking your button will move only that object.
Also, if you want to move objects just by clicking on them, not on button click, you can simplify all these, and move the object directly in handleOnMouseDown.
Remove listeners variant:
void handleOnMouseDown(Collider box)
{
posXButtonPlus.GetComponent<Button>().onClick.AddListener(() => handleOnChangePosition("posx", box.gameObject));
}
void handleOnChangePosition(string pos, GameObject go)
{
// I don't have the moving code, but I imagine it looks something like this. right * 0.5f can be anything; I did it just for tests.
go.transform.position += Vector3.right * 0.5f;
posXButtonPlus.GetComponent<Button>().onClick.RemoveAllListeners();
}
Last clicked variant:
GameObject lastClicked;
void Awake()
{
posXButtonPlus.GetComponent<Button>().onClick.AddListener(() => handleOnChangePosition());
}
void handleOnMouseDown(Collider box)
{
lastClicked = box.gameObject;
}
void handleOnChangePosition()
{
lastClicked.transform.position += Vector3.right * 0.5f;
}
Without buttons and other stuff:
void handleOnMouseDown(Collider box)
{
box.transform.position += Vector3.right * 0.5f;
} | unknown | |
d17715 | val | Try GNU Obstacks.
From Wikipedia:
In the C programming language, Obstack is a memory-management GNU extension to the C standard library. An "obstack" is a "stack" of "objects" (data items) which is dynamically managed.
Code example from Wikipedia:
char *x;
void *(*funcp)();
x = (char *) obstack_alloc(obptr, size); /* Use the macro. */
x = (char *) (obstack_alloc) (obptr, size); /* Call the function. */
funcp = obstack_alloc; /* Take the address of the function. */
IMO what makes Obstacks special: It does not need malloc() nor free(), but the memory still can be allocated «dynamically». It is like alloca() on steroids. It is also available on many platforms, since it is a part of the GNU C Library. Especially on embedded systems it might make more sense to use Obstacks instead of malloc().
A: See Wikipedia's article about stacks.
A: Quick-and-dirty untested example. Uses a singly-linked list structure; elements are pushed onto and popped from the head of the list.
#include <stdlib.h>
#include <string.h>
/**
* Type for individual stack entry
*/
struct stack_entry {
char *data;
struct stack_entry *next;
};
/**
* Type for stack instance
*/
struct stack_t
{
struct stack_entry *head;
size_t stackSize; // not strictly necessary, but
// useful for logging
};
/**
* Create a new stack instance
*/
struct stack_t *newStack(void)
{
struct stack_t *stack = malloc(sizeof *stack);
if (stack)
{
stack->head = NULL;
stack->stackSize = 0;
}
return stack;
}
/**
* Make a copy of the string to be stored (assumes
* strdup() or similar functionality is not
* available
*/
char *copyString(char *str)
{
char *tmp = malloc(strlen(str) + 1);
if (tmp)
strcpy(tmp, str);
return tmp;
}
/**
* Push a value onto the stack
*/
void push(struct stack_t *theStack, char *value)
{
struct stack_entry *entry = malloc(sizeof *entry);
if (entry)
{
entry->data = copyString(value);
entry->next = theStack->head;
theStack->head = entry;
theStack->stackSize++;
}
else
{
// handle error here
}
}
/**
* Get the value at the top of the stack
*/
char *top(struct stack_t *theStack)
{
if (theStack && theStack->head)
return theStack->head->data;
else
return NULL;
}
/**
* Pop the top element from the stack; this deletes both
* the stack entry and the string it points to
*/
void pop(struct stack_t *theStack)
{
if (theStack->head != NULL)
{
struct stack_entry *tmp = theStack->head;
theStack->head = theStack->head->next;
free(tmp->data);
free(tmp);
theStack->stackSize--;
}
}
/**
* Clear all elements from the stack
*/
void clear (struct stack_t *theStack)
{
while (theStack->head != NULL)
pop(theStack);
}
/**
* Destroy a stack instance
*/
void destroyStack(struct stack_t **theStack)
{
clear(*theStack);
free(*theStack);
*theStack = NULL;
}
Edit
It would help to have an example of how to use it:
int main(void)
{
struct stack_t *theStack = newStack();
char *data;
push(theStack, "foo");
push(theStack, "bar");
...
data = top(theStack);
pop(theStack);
...
clear(theStack);
destroyStack(&theStack);
...
}
You can declare stacks as auto variables, rather than using newStack() and destroyStack(), you just need to make sure they're initialzed properly, as in
int main(void)
{
struct stack_t myStack = {NULL, 0};
push (&myStack, "this is a test");
push (&myStack, "this is another test");
...
clear(&myStack);
}
I'm just in the habit of creating pseudo constructors/destructors for everything. | unknown | |
d17716 | val | If you want to keep state after reloads you might want to take a look at HTML Web Storage.
A: In order of preference I would use:
1) If you are on react 16.3 or greater use the react context api
2) If you are not on 16.3 or greater you can use a library such as redux or flux
3) You can use HTML local storage.
Here is more info on Redux vs context api:
https://daveceddia.com/context-api-vs-redux/
Web storage is the least desirable because it doesn't enforce any rules around state management like the other options do.
A: Here, an example with local storage:
import React from 'react';
class App extends React.Component {
constructor(props) {
super(props);
this.state = { list: null };
}
onSearch = (e) => {
e.preventDefault();
const { value } = this.input;
if (value === '') {
return;
}
const cached = localStorage.getItem(value);
if (cached) {
this.setState({ list: JSON.parse(cached) });
return;
}
fetch('https://search?query=' + value)
.then(response => response.json())
.then(result => {
localStorage.setItem(value, JSON.stringify(result.list));
this.setState({ list: result.list });
});
}
render() {
return (
<div>
<form onSubmit={this.onSearch}>
<input type="text" ref={node => this.input = node} />
<button type="submit">Search</button>
</form>
{
this.state.list &&
this.state.list.map(item => <div key={item.objectID}>{item.title}</div>)
}
</div>
);
}
}
A: From your question, it sounds like you
* have some products on your page that you're representing as <div> elements
* are changing their border on click
* want them to show as selected when the user refreshes the page
React is about the presentation of some data, but doesn't decide how you get that data onto the page. It sounds like you want to store the list of the products selected somewhere, then load that list onto the page again when the user refreshes. The Web Storage api might be helpful, but cookies and sessions could do the same thing.
You need to
* choose what to store (probably a list of product ids)
* choose where to store it (localStorage, cookie, server, or maybe in the url with https://reach.tech/router)
* when your react page loads (componentDidMount for some component), read the list from localstorage into your state (see the sketch below)
* match your list of 'selected products' to your individual products in render
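A sketch of the componentDidMount step (the 'selectedProductIds' storage key is a hypothetical name):
componentDidMount() {
  const saved = localStorage.getItem('selectedProductIds');
  this.setState({ selectedProductIds: saved ? JSON.parse(saved) : [] });
}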
So, if you've loaded the list from one of the storage options into your state as selectedProductIds and your list of products is in state as products
isSelected = (product) =>
  this.state.selectedProductIds.includes(product.id)
render() {
return <section>
{ this.state.products.map((product) =>
<div key={product.id} className={this.isSelected(product) ? 'selected item' : 'item'}>
{product.name}
</div>
)}
</section>
}
Keeping React state in sync with some other storage mechanism can get pretty messy. | unknown | |
d17717 | val | Route matching is powered by https://github.com/pillarjs/path-to-regexp . You can use their documentation to look for a similar case.
My first guess would be to try escaping the space: path: '/:NAS\ ID' | unknown | |
d17718 | val | Your Date(int, int, int) constructor is assigning the variables incorrectly. What you want is month = m; day = d; year = y;
A: Change
Date:: Date(int m, int d, int y) // constructor definition
{
m = month, d = day, y = year;
checkDate();
}
To
Date:: Date(int m, int d, int y) // constructor definition
{
month = m, day = d, year = y ;
checkDate();
}
I would actually change a lot more, but this is the simplest answer I can give you besides: work, work, work. | unknown | |
d17719 | val | Let us look at the situations for an element in your list (as we iterate through them). If (for terseness) we take old to be the moving item's old position and new to be its new position, we have the following cases for an item in your list (draw them out on paper to make this clear).
* the current item is the one to be moved: move it directly
* the current item's order is < new and < old: don't move it
* the current item's order is ≥ new and < old: move it right
* the current item's order is ≤ new and > old: move it left
* the current item's order is > new and > old: don't move it
When we start enumerating, we know the where the item to be moved will end up (at new) but we do not know where it has come from (old). However, as we start our enumeration at the beginning of the list, we know at each step that it must be further down in the list until we have actually seen it! So we can use a flag (seen) to say whether we have seen it yet. So seen of false means < old whilst true means >= old.
bool seen = false;
for (int i = 0; i < items.Length; i++)
{
if (items[i].ID == inputID)
{
items[i].Order = inputNewPosition;
seen = true;
}
}
This flag tells us whether the current item is >= old. So now can start shunting stuff around based on this knowledge and the above rules. (So new in the above discussion is inputNewPosition and whether we are before or after old we represent with our seen variable.)
bool seen = false;
for (int i = 0; i < items.Count; i++)
{
if (items[i].ID == inputID) // case 1
{
items[i].Order = inputNewPosition;
seen = true;
}
else if (seen) // cases 4 & 5
{
if (items[i].Order <= inputNewPosition) // case 4
{
items[i].Order--; // move it left
}
}
else // case 2 & 3
{
if (items[i].Order >= inputNewPosition) // case 3
{
items[i].Order++; // move it right
}
}
}
Having said all of that, it is probably simpler to sort the collection on each change. The default sorting algorithm should be quite nippy with a nearly sorted collection.
A: Your question isn't very clear, but for these requirements you'd likely be best off raising an event on the object that contains Order, and possibly having a container object which can monitor that. However, I suspect you might want to rethink your algorithm if that's the case, as it seems a very awkward way to deal with the problem of displaying in order.
That said, what is the problem requirement? If I switch the order of item #2 to #5, what should happen to #3? Does it remain where it is, or should it be #6?
A: This is how I solved the problem, but I think there might be a more clever way out there.
The objects that I need to update are policy workload items. Each one has an associated priority, and no two policy workload items can have the same priority. So when a user updates a priority, the other priorities need to shift up or down accordingly.
This handler takes a request object. The request has an id and a priority.
public class UpdatePriorityCommand
{
public int PolicyWorkloadItemId { get; set; }
public int Priority { get; set; }
}
This class represents the request object in the following code.
//Get the item to change priority
PolicyWorkloadItem policyItem = await _ctx.PolicyWorkloadItems
.FindAsync(request.PolicyWorkloadItemId);
//Get that item's priority and assign it to oldPriority variable
int oldPriority = policyItem.Priority.Value;
//Get the direction of change.
//-1 == moving the item up in list
//+1 == moving the item down in list
int direction = oldPriority < request.Priority ? -1 : 1;
//Get list of items to update...
List<PolicyWorkloadItem> policyItems = await _ctx.PolicyWorkloadItems
.Where(x => x.PolicyWorkloadItemId != request.PolicyWorkloadItemId)
.ToListAsync();
//Loop through and update values
foreach(var p in policyItems)
{
//if moving item down in list (I.E. 3 to 1) then only update
//items that are less than the old priority. (I.E. 1 to 2 and 2 to 3)
//items greater than the new priority need not change (i.E. 4,5,6... etc.)
//if moving item up in list (I.E. 1 to 3)
//items less than or equal to the new value get moved down. (I.E. 2 to 1 and 3 to 2)
//items greater than the new priority need not change (i.E. 4,5,6... etc.)
if(
(direction > 0 && p.Priority < oldPriority)
|| (direction < 0 && p.Priority > oldPriority && p.Priority <= request.Priority)
)
{
p.Priority += direction;
_ctx.PolicyWorkloadItems.Update(p);
}
}
//finally update the priority of the target item directly
policyItem.Priority = request.Priority;
//track changes with EF Core
_ctx.PolicyWorkloadItems.Update(policyItem);
//Persist changes to database
await _ctx.SaveChangesAsync(cancellationToken); | unknown | |
d17720 | val | The index buffer is used to index into the colorBuffer using the same index as is used for indexing into the vertexBuffer, so the corresponding elements in each need to match. The indices in your index buffer are in the range of 0-7, so you will only ever index the first 8 entries of your colorBuffer, which are green and red.
You need to have a separate index for every unique combination of vertex position and color. For each face there are 4 unique vertex-color combinations, so you will need 6 * 4 = 24 entries in your cubeCoords array and 24 matching entries in your cubeColor array.
Like this:
float cubeCoords[] = {
// front face
-0.5f, 0.5f, 0.5f, // front top left 0
-0.5f, -0.5f, 0.5f, // front bottom left 1
0.5f, -0.5f, 0.5f, // front bottom right 2
0.5f, 0.5f, 0.5f, // front top right 3
// top face
-0.5f, 0.5f, -0.5f, // back top left 4
-0.5f, 0.5f, 0.5f, // front top left 5
0.5f, 0.5f, 0.5f, // front top right 6
0.5f, 0.5f, -0.5f, // back top right 7
// other faces...
};
final float cubeColor[] =
{
// Front face (red)
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
// Top face (green)
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
// other faces...
};
private short drawOrder[] = {
0, 1, 2, 0, 2, 3,//front
4, 5, 6, 4, 6, 7, //Top
// other faces...
}; | unknown | |
d17721 | val | There seems to be no reason to JOIN the users table. You can get all public and your own private templates with
SELECT
`templates`.`id`,
`templates`.`name`,
`templates`.`description`,
`templates`.`datetime`
FROM
`templates`
WHERE
`templates`.`user_id` = 42 OR `templates`.`private` = 0
I am assuming that the id of the current user is 42, substitute this with the real value when constructing the query. | unknown | |
d17722 | val | These usually refer to anonymous inner classes. | unknown | |
d17723 | val | Why are you trying to use "COM3" on a Linux machine? That's a Windows port name. Linux/Unix port names are of the form /dev/ttyUSB0.
But, as the docs show, you can probably just use the port number directly - they start at 0, so you can do ser = serial.Serial(2, 9600).
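For example (a minimal sketch; the device path varies by system):
import serial

# open by device name on Linux; the numeric form (e.g. serial.Serial(2, 9600))
# only works on older pyserial versions
ser = serial.Serial('/dev/ttyUSB0', 9600)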
A: If you have the Arduino IDE open, then Python may not be able to access the port. I've had this problem as well using Processing. | unknown | |
d17724 | val | You cannot do this. A similar question was asked on Kotlin's forum and yole (one of the creators of the language) said this:
this in a lambda refers to the instance of the containing class, if any. A lambda is conceptually a function, not a class, so there is no such thing as a lambda instance to which this could refer.
The fact that a lambda can be converted into an instance of a SAM interface does not change this. Having this in a lambda mean different things depending on whether the lambda gets SAM-converted would be extremely confusing.
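A small Kotlin illustration of that point (hypothetical class):
class Host {
    fun demo() {
        val f = { println(this) }  // `this` is the enclosing Host instance, not the lambda
        f()
    }
} | unknown | |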
d17725 | val | The first capture is greedy, which means that it will capture everything up to the last / (rather than the first / as you intended). You could make the capture non-greedy by using *? instead of *.
But if the captures are not intended to capture /, you should use the [^/] character class instead. For example:
rewrite ^/ap/([^/]*)/([^/]*)/?$ /ap/index.php?server=$1&guild=$2 last;
The trailing / has also been made optional. See this useful resource on regular expressions. | unknown | |
d17726 | val | I'll answer the first question only as I haven't used Realm for a while.
As you stated yourself, you cannot use Observable fields in the model that you use with Realm, and you shouldn't ever do so. The model is to be kept simple.
ViewModel is exactly where Observables belong. They should be bound to the view and only them.
Consider using the new LiveData classes instead of Observables and ViewModels from the new Architecture Components. They make things even easier and are now supported in Data Binding:
LiveData Overview
LiveData with Data Binding
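A minimal sketch of such a ViewModel (class and property names are placeholders; this uses the current androidx artifacts, so adjust the imports for the older android.arch packages if needed):
import androidx.lifecycle.LiveData;
import androidx.lifecycle.MutableLiveData;
import androidx.lifecycle.ViewModel;

public class UserViewModel extends ViewModel {
    // Observed by the view; the Realm model object itself stays plain
    private final MutableLiveData<String> userName = new MutableLiveData<>();

    public LiveData<String> getUserName() { return userName; }

    public void setUserName(String name) { userName.setValue(name); }
} | unknown | |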
d17727 | val | I ended up creating the form (document.createElement) on page load with jquery, submitting it (.trigger("click")) and then removing it (.remove()). In addition I obfuscated the jquery code with the tool found here Crazy Obfuscation as @André suggested. That way user cannot see the htaccess username and password in Page Source nor find it using "inspect element" or firebug.
A: Personally, I need a bit more information to clearly deduct a solution for your issue, I hope you can give me that.
However, have you tried simply .remove()ing the form after submission? That way it gets submitted on page load and then gets removed so by the time the page loads and the user clicks view source, he will not be able to see it. He can, of course, disable JS for example, or any other workaround, but this is a very quick fix with the amount of information we have.
A: You cannot directly hide values in 'view source'. Similarly, when the form is being submitted, the user could view the values using tools like 'fiddler'. If you want to really hide them, what you can do is never have those values show in the form. You could try techniques like encrypting those values on the server if they never need to be displayed to the user in the web page. | unknown | |
d17728 | val | You don't really need a loop construct here, you can get the desired output with Select-Object:
Get-AzureADUser -All |Select ObjectId,UserPrincipalName,@{Name='CreationDate'; Expression={(Get-AzureADUserExtension -ObjectId $_.ObjectId).Get_Item("createdDateTime")}} | unknown | |
d17729 | val | For more clarification, you may want to use this.
Old code:
$image = file_get_contents($_FILES['photo']['tmp_name']);
New code:
$imgData= file_get_contents($_FILES['photo']['tmp_name']);
$image = imagecreatefromstring( $imgData);
This is a clarification of my comment above. | unknown | |
d17730 | val | Questions regarding number generators for C have been asked before here on SO, such as in the article "Create Random Number Sequence with No Repeats".
I'd suggest looking at the above article to see if anything is suitable and provides a useful alternative to the standard rand() function in C, assuming that you've already looked at the latter and rejected it.
A: This is not quite a function in the C or C++ sense. This is a co-routine, which may be resumed.
To implement it in C, you will need to maintain external state, and provide a "next value" func.
The advantage of this function is that it guarantees unique values. Do you really need it? If not, use stdlib's rand, multiplied by proper factors.
A: Do you just want random numbers? If so, why don't you use one of the many libraries out there that can do it for you in a cross platform fashion? Here's one.
A: To write a co-routine in C, you need to maintain state. The simplest way to do this is to use a static variable. For this example, it would look something like:
int ForLargeQuantityAndRange(int init_quantity, int init_range)
{
static int n;
static int quantity, range;
if (init_quantity > 0)
{
n = 0;
quantity = init_quantity;
range = init_range;
}
if (n++ < quantity)
{
int r = Random(range);
while (!used_add(r))
r = Random(range);
return r;
}
/* Quantity exceeded */
return -1;
}
...where you would call it with (quantity, range) to initialise a new sequence, and (0, 0) to continue the previous sequence.
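A hypothetical usage sketch (printf needs <stdio.h>):
int v = ForLargeQuantityAndRange(5, 100); /* start a sequence of 5 unique values in [0, 100) */
while (v != -1)
{
    printf("%d\n", v);
    v = ForLargeQuantityAndRange(0, 0);   /* continue the same sequence */
}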
Note that you will have to supply implementations of the Random() and used_add() functions. | unknown | |
d17731 | val | Inside your Clients class you defined a member - client. It is not a collection, so when you deserialize the xml file to the Clients class it deserializes only one node (the first).
Use List or other collections inside:
[Serializable, XmlRoot("clients")]
public class Clients
{
[XmlElement("client")]
public List<Client> client { get; set; } // Collection
public Clients()
{
}
public Clients(List<Client> client)
{
this.client = client;
}
}
Also, use CamelCase naming for classes and lowerCase naming for variables - it is more readable. | unknown | |
d17732 | val | The above is the same problem as in How to get dimensions from dimens.xml
None of the LayoutParams attributes have built-in support. As answered in the linked article, data binding of LayoutParams was thought to be too easy to abuse so it was left out of the built-in BindingAdapters. You are not abusing it, so you should add your own.
@BindingAdapter("android:layout_below")
public static void setLayoutBelow(View view, int oldTargetId, int newTargetId) {
RelativeLayout.LayoutParams layoutParams = (RelativeLayout.LayoutParams)
view.getLayoutParams();
if (oldTargetId != 0) {
// remove the previous rule
layoutParams.removeRule(RelativeLayout.BELOW);
}
if (newTargetId != 0) {
// add new rule
layoutParams.addRule(RelativeLayout.BELOW, newTargetId);
}
view.setLayoutParams(layoutParams);
}
As an aside, the @+id/adone in the binding syntax will not create the id. You should create the id in the View you're binding to. | unknown | |
d17733 | val | This may not be the most elegant solution, but I would try (as a test) disabling FirePHP and using instead a logging tool such as log4php, and have it log your exceptions where and when they might be thrown.
Thus, if you're not doing so already: use try and catch blocks, and in the catch blocks log your exception to a file that you'd declare in the config/instantiation of log4php.
Just a suggestion. | unknown | |
d17734 | val | You can make a custom tab bar and add accessibility as you wish in each tab bar item like this:
<Tab.Navigator tabBar={(props) => <CustomTabBar {...props} />}>
<Tab.Screen .../>
</Tab.Navigator>
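A bare-bones sketch of such a custom tab bar (layout and styling are placeholders):
import React from 'react';
import { View, Text, TouchableOpacity } from 'react-native';

function CustomTabBar({ state, descriptors, navigation }) {
  return (
    <View style={{ flexDirection: 'row' }}>
      {state.routes.map((route) => (
        <TouchableOpacity
          key={route.key}
          accessibilityRole="button"
          accessibilityLabel={descriptors[route.key].options.tabBarAccessibilityLabel}
          onPress={() => navigation.navigate(route.name)}
        >
          <Text>{route.name}</Text>
        </TouchableOpacity>
      ))}
    </View>
  );
}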
A: Thank you guys for responding. Really appreciate it!
I hadn't done the correct research. In the documentation, it says that there is a prop which you can add to the whole tab bar.
https://reactnavigation.org/docs/bottom-tab-navigator/#tabbaraccessibilitylabel
After adding this, the container (both the tab icon and text) was selectable as one container. It is just strange that this behaviour worked before without adding that prop. | unknown | |
d17735 | val | getWcmMode() is a final method in WCMUsePojo, and Mockito does not support mocking final methods by default.
You will have to enable it by creating a file named org.mockito.plugins.MockMaker on the classpath (put it in the test resources/mockito-extensions folder) and putting the following single line in it:
mock-maker-inline
Then you can use when to specify function return values as usual:
@Test
public void testSomeComponetnInNOTEDITMode() {
//setup wcmmode
SightlyWCMMode fakeDisabledMode = mock(SightlyWCMMode.class);
when(fakeDisabledMode.isEdit()).thenReturn(false);
//ComponentUseClass extends WCMUsePojo
ComponentUseClass fakeComponent = mock(ComponentUseClass.class);
when(fakeComponent.getWcmMode()).thenReturn(fakeDisabledMode);
assertFalse(fakeComponent.getWcmMode().isEdit());
//do some more not Edit mode testing on fakeComponent.
} | unknown | |
d17736 | val | Based on this, you may have to change the way you are configuring your application:
var webHost = new WebHostBuilder()
.UseKestrel()
.UseContentRoot(Directory.GetCurrentDirectory())
.ConfigureAppConfiguration((hostingContext, config) =>
{
var env = hostingContext.HostingEnvironment;
config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
.AddJsonFile($"appsettings.{env.EnvironmentName}.json",
optional: true, reloadOnChange: true);
config.AddEnvironmentVariables();
})
.ConfigureLogging((hostingContext, logging) =>
{
logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
logging.AddConsole();
logging.AddDebug();
logging.AddEventSourceLogger();
})
.UseStartup<Startup>()
.Build();
webHost.Run();
1 Microsoft ASP.NET Core 2.2 Logging documentation | unknown | |
d17737 | val | A few things wrong here. The definitive SPF checker is Scott Kitterman's. It finds this error:
PermError SPF Permanent Error: Unknown mechanism found: postbox.pidatacenters.com
It's not clear why this is presented as this particular error because the syntax itself is valid, but you have a recursive definition - your SPF includes postbox.pidatacenters.com, but the SPF for that domain includes itself, which makes no sense. It also contains the google SPF, so you don't need to include that again.
I suggest you set your SPF records to these. For pidatacenters.com:
v=spf1 ip4:192.186.236.104 mx include:bmsend.com include:postbox.pidatacenters.com ~all
you don't need the a clause in there because it resolves to the same IP as you already listed. It's polite to put ip clauses first as they are fastest to resolve for receivers, as they do not require DNS lookups.
For postbox.pidatacenters.com:
v=spf1 include:_spf.google.com ~all
A: The reason why you're getting the syntax error with that test is that any valid syntax checker authenticates the entire SPF statement, which means it has to test the SPF records of every included statement.
When it checks the include for "postbox.pidatacenters.com" in the SPF syntax for pidatacenters.com, it will see this.
v=spf1 include:_spf.google.com postbox.pidatacenters.com ~all
Which is invalid.
Anyway, you should follow Synchro's advice and change the records to what he stated.
Also testing with the site Synchro recommended is fine, but it relies on a lot of expert knowledge you might not have. You might think you're emailing one way, but you're really not.
It's better to get a real live example using a reflector, just send an email to each of these and you'll get results back telling you if the SPF is correct, I always use multiple reflectors, to ensure things are accurate.
[email protected]
[email protected] | unknown | |
d17738 | val | You can use Google Stackdriver to set up alerts and have an email sent.
However, disk percentage busy is not an available metric. You can choose from Disk Read I/O and Disk Write I/O bytes per second and set a threshold for the metric.
* Go to the Google Console Stackdriver section. Click on Monitoring.
* Select Alerting -> Create Policy in the left panel.
* Create your alerting policy based upon Conditions and Notifications.
You can create custom metrics. This link describes how.
Creating Custom Metrics | unknown | |
d17739 | val | This should work
Private Sub txtbarcode_KeyPress(KeyAscii As Integer)
If KeyAscii = 32 Then
'replace space with underscore
KeyAscii = 95
End If
End Sub | unknown | |
d17740 | val | It depends (like often).
The JDK is a development kit for Java SE, including FX. So you can develop desktop applications, but also web applications, depending on the type of integration you prefer. The Java EE SDK also contains the Glassfish server, examples and tutorials, but they are not really needed. ME is a special minimized edition for embedded device development, including special tools for that.
I have been developing web applications for years with a Java SE JDK only. As I normally use Spring Boot with an embedded container or install a Tomcat on demand, this works perfectly and the Java EE SDK is not needed. | unknown | |
d17741 | val | In fact, it was quite easy: if you put the absolute path to an executable file in the browser option, it takes it smoothly.
So options should be something like :
{ port: 9000,
server: {
baseDir: [...
],
routes: {
'/bower_components': 'bower_components',
'/node_modules': 'node_modules',
'/temp': 'temp'
}
},
logLevel: 'info',
notify: true,
browser: 'C:\\Program Files (x86)\\Firefox Developer Edition\\firefox.exe',
reloadDelay: 0 //1000
} | unknown | |
d17742 | val | CouchDB, out of the box, does not provide you with any options to control the order of replication. I'm guessing you could piece something together if you keep documents with different priorities in different databases on the master, though. Then, you could replicate the high-priority master database into the slave database first, replicate lower-priority databases after that, etc.
A: The short answer is no.
The long answer is that CouchDB provides ACID guarantees at the individual document level only, by design. The replicator will update each document atomically when it replicates (as can anyone, the replicator is just using the public API) but does not guarantee ordering; this is mostly because it uses multiple http connections to improve throughput. You can configure that down to 1 if you like and you'll get better ordering, but it's not a panacea.
After the bigcouch merge, all bets are off: there will be multiple sources and multiple targets with no imposed total order.
A: You could set up filtered replication or named document replication:
* http://wiki.apache.org/couchdb/Replication#Filtered_Replication
* http://wiki.apache.org/couchdb/Replication#Named_Document_Replication
Both of these are alternatives to replicating an entire database. You could do the replication in smaller batch sizes, and order the batches to match your priorities. | unknown | |
d17743 | val | Based on discussions at the Apple dev forums (https://devforums.apple.com/message/749949) it looks like this is a bug affecting a lot of people, probably due to a change in Apple's validation servers.
I was able to work around it by changing the build architecture in Build Settings from Standard(armv7,armv7s) to armv7 and rebuilding. This should only have the effect that the compiled code is not optimized for iPhone 5. It will still run, but may not be quite as fast as if it were compiled for armv7s. I suspect the performance difference would be negligible in most cases.
A: This helped me:
Project -> Build Settings -> remove the architecture from "valid
architectures" as well as setting the "Build Active Architecture Only"
to Yes in the Project
A: I had the same problem today. My app has no third-party libraries.
12 days ago I submitted a build from Xcode 4.5.1 that was subsequently reviewed and released to the App Store. Today I tried to submit a new build and suddenly received this error.
I then tried to validate the same executable (not a rebuild) from within Xcode that I had submitted 12 days ago and that had passed validation and is now available for download in the App Store, but this time it failed validation with the above error.
Performing step 4 above allowed me to submit the new build. But the executable is smaller even though I have added a small amount of code and three small png/jpegs. This makes me think that armv7s code is missing from the archive.
What is happening? Why should step 4 above 'work'? Why does an executable that previously submitted OK and was released suddenly no longer pass validation?
Note: this is not a duplicate of any previous post that I was able to find 15 hours ago. This is the first time I have seen any mention made of seeing this error when submitting to iTunes Connect rather than receiving a compiler warning. So please do not mark this as a duplicate. It is not.
A: Most of the answers here are ones that I did not find ideal, mainly because they essentially suggest that you remove armv7s support from your app. While that will make you app pass validation, that could potentially make your app run slower on the iPhone 5.
Here is the workaround I'm using (though, I must say that I wouldn't call this a solution).
Instead of using XCode Organizer, I'm uploading the binary using Application Loader.
To upload the binary using Application Loader
Open Organizer > Right Click on Archive > Reveal in Finder.
Right Click the Archive file > Show Archive Content
Go to Products > Application > YourAPP.app
Compress YourAPP.app and upload using Application Loader.
A: My problem was the fact I was using an old version of Application Loader.
The solution for me was to download the latest version of Application Loader iTunes Connect > Manage Your Applications > Download Application Loader and try again.
A: Try this:
1. Select your project in Xcode (with the blue icon)
2. Select Build Settings
3. Set the view to All/Combined
4. Set "Build Active Architecture Only" to Yes | unknown | |
d17744 | val | In your activity, replace the following
PendingIntent pi = PendingIntent.getService(this, 0, notificationmassage, PendingIntent.FLAG_UPDATE_CURRENT);
with
PendingIntent pi = PendingIntent.getBroadcast(this, 0, notificationmassage, PendingIntent.FLAG_UPDATE_CURRENT);
A: public void getNotification(Context context,String Message){
int icon = R.drawable.appicon;
long when = System.currentTimeMillis();
NotificationManager notificationManager = (NotificationManager)
context.getSystemService(Context.NOTIFICATION_SERVICE);
Notification notification = new Notification(icon, Message, when);
String title = context.getString(R.string.app_name);
Intent notificationIntent = new Intent(context, MainActivity.class);
// set intent so it does not start a new activity
notificationIntent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP |Intent.FLAG_ACTIVITY_CLEAR_TASK);
PendingIntent intent =PendingIntent.getActivity(context, 0, notificationIntent, 0);
notification.setLatestEventInfo(context, title,Message, intent);
notification.flags |= Notification.FLAG_AUTO_CANCEL;
// Play default notification sound
notification.defaults |= Notification.DEFAULT_SOUND;
// Vibrate if vibrate is enabled
notification.defaults |= Notification.DEFAULT_VIBRATE;
notificationManager.notify(0, notification);
} | unknown | |
d17745 | val | How this assignment can be done is very dependent on the pixel-format you specified when acquiring ddsd. See the field ddpfPixelFormat and also specifically in there: dwRGBBitCount.
Maybe you can provide this pixel-format information so that I can improve my answer. However, I can easily give you an example of how you do this pixel-color assignment if e.g. the pixel-format is:
[1 byte red] [1 byte green] [1 byte blue] [1 byte unused]
Here's the example:
*(pDisplayMemOffset+0) = 0x10;// asigning 0x10 to the red-value of first pixel
*(pDisplayMemOffset+1) = 123; // asigning 123 to green-value of first pixel
// (no need for hex)
*(pDisplayMemOffset+4) = 200; // asigning 200 to red-value of second pixel
// (BYTE is unsigned)
If you have to extract the color values from an integer it largely depends on which byte-ordering and color-ordering that integer was given in, but you can try it out easily.
First I would try this:
*(((unsigned int*)pDisplayMemOffset)+0) = 0x1A2A3A4A;
*(((unsigned int*)pDisplayMemOffset)+1) = 0x1B2B3B4B;
If this works, then the pixel-format had either an unused 4th byte (like my example above) or an alpha-value that is now set to one of the values. Again: aside from the pixel-format, the ordering of the bytes in your integer also decides whether this works directly or whether you have to do some byte-swapping. | unknown | |
d17746 | val | The model should be placed in a new class library project. My preference would be to recreate the model at this point based on the existing model. For the namespace I like to use {CompanyName}.DataAccess. Remove the old model from your web site project, add a reference to the new class library project and build the web site project. The website project will break in many places, but it should be a simple matter of changing the namespace to the new data access assembly. I prefer this method as to cut/paste because now you have nice clean namespaces. Be careful of any places you may have strings with entity names in them, like if you were using Include (if you are using EF 4 and lazy loading, this should not be a problem). Leave the connection string in web.config for both of the web site projects. When you create the model in the class library, it will add a connection string in app.config. That is OK, it is just there so the model knows how to connect to the database when you refresh it. | unknown | |
d17747 | val | You had a couple of errors in your XSD. And one in your XML.
To remove this one error in your XML, change the namespace in the <catalog...> element from xmlns:xsi="http://www.w3.org/2001/XMLSchema-Instance" to xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance". A simple typo.
And your XSD should look like this to validate your XML:
<?xml version="1.0" encoding="UTF-8" ?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="catalog">
<xs:complexType>
<xs:sequence>
<xs:element name="photo" maxOccurs="unbounded">
<xs:complexType>
<xs:sequence>
<xs:element name="name">
<xs:complexType>
<xs:simpleContent>
<xs:extension base="xs:string">
<xs:attribute name="metadata" type="xs:string"/>
</xs:extension>
</xs:simpleContent>
</xs:complexType>
</xs:element>
<xs:element name="description" type="xs:string"/>
<xs:element name="date" type="xs:string"/>
<xs:element name="images" minOccurs="0">
<xs:complexType>
<xs:sequence>
<xs:element name="img"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
<xs:attribute name="cid" type="xs:string"/>
<xs:attribute name="donatedBy" type="xs:string"/>
<xs:attribute name="metadata" type="xs:string"/>
<xs:attribute name="src" type="xs:string"/>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
What I changed was:
* The element photo occurs more than one time, so I added maxOccurs="unbounded"
* The element name can have an attribute, so I changed its definition to a complexType with simpleContent.
* The element images does not have to occur, hence I added minOccurs="0"
* I moved the attributes out of the xs:sequence into the xs:complexType
Now the XSD should validate your XML. | unknown | |
d17748 | val | Check out itertools.product(*iterables, repeat=1) here.
Use like:
> product([-1, 0, 1], repeat=6)
[...]
(1, 1, 1, 0, -1, -1),
(1, 1, 1, 0, -1, 0),
(1, 1, 1, 0, -1, 1),
(1, 1, 1, 0, 0, -1),
(1, 1, 1, 0, 0, 0),
(1, 1, 1, 0, 0, 1),
(1, 1, 1, 0, 1, -1),
(1, 1, 1, 0, 1, 0),
(1, 1, 1, 0, 1, 1),
(1, 1, 1, 1, -1, -1),
(1, 1, 1, 1, -1, 0),
(1, 1, 1, 1, -1, 1),
(1, 1, 1, 1, 0, -1),
(1, 1, 1, 1, 0, 0),
(1, 1, 1, 1, 0, 1),
(1, 1, 1, 1, 1, -1),
(1, 1, 1, 1, 1, 0),
(1, 1, 1, 1, 1, 1)]
Gets you len(list(product([-1, 0, 1], repeat=6))) = 729
A: Use itertools.product
import itertools
assert(sum(1 for _ in itertools.product([-1, 0, 1], repeat=6)) == 3**6)
If you know itertools.permutations I think no more explanation is required. | unknown | |
d17749 | val | Use Bootstrap 3, which can do exactly what you wish using its CSS media queries.
Alternatively, you can simply write your own CSS to be responsive using media queries.
A: As mccainz mentioned, you have several options. All of those examples are under an umberalla term called "responsive design".
According to Wikipedia, "Responsive Web design (RWD) is a Web design approach aimed at crafting sites to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices (from mobile phones to desktop computer monitors)".
Now, there are many options when it comes to responsive design. Either use ready-made libraries or create it yourself.
Check the links below and see when resizing the browser how the layout changes so that you get an optimal view experience.
* Bootstrap
* Skeleton
* Foundation
* HTML Kickstart
All of them share the same idea: Use the great and holy CSS to decouple the layout from the real code, so that people like you who want to have the same codebase and different view experience do not have to re-write the code. Beauty of the CSS lies here.
Also, if you think above libraries do not do what you want write your own responsive CSS. For that, I highly recommend to check the following two books:
* HTML & CSS by Jon Duckett
* The Modern Web: Multi-Device Web Development with HTML5, CSS3, and JavaScript by Peter Gasston
UPDATE: Since you commented that you want to avoid responsive layouts: there are a few people who think that responsive layout does not work and who use adaptive approaches instead. Check this out: Why responsive layout does not worth it. The author explains a few minimal optimization points and believes that responsive layouts are overkill in many circumstances. He offers techniques such as lazy loading (under section 4).
Other resource: Alternatives to responsive design: part 1 (mobile) discusses the issues again with responsive layout.
With adaptive approaches, it is you who decides what is a good experience on each device, and you do part of the optimization yourself with pre-coded layouts for each case, but generally there are no rules of thumb for that; only the books above come to mind.
Also try liquid layouts as well. | unknown | |
d17750 | val | Since new_list is a dict you should be able to just extract that with a simple
instanceid = new_list['facter']['instanceid']
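If the keys might be missing, a hedged variant using dict.get avoids a KeyError (a minimal sketch; 'facter' and 'instanceid' are the keys from your data):
instanceid = new_list.get('facter', {}).get('instanceid') # None when either key is absent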
The u you see before your strings just tells you that the strings are unicode strings, not byte strings.
In your case it doesn't matter, since there are no unicode characters in any of the dictionary keys. | unknown | |
d17751 | val | You simply (I say simply, but it can be one of the most aggravating parts of iOS development) need to make sure you are providing the developers with the private key for the certificate, the certificate, and the provisioning profile for development. If your project settings are correct, you should not get the team prefix problem. Also, I would encourage the other team members to delete all their other certificates and profiles before installing yours.
The other option would be to add their Apple IDs to your account as a team member. They can have their own credentials, but still access the account through xCode. | unknown | |
d17752 | val | Problem solved. I didn't discretize the instance that was to be tested, so Weka didn't know the format of my instance. Add the following code:
discretize.input(instance); // discretize is a filter
instance = discretize.output(); | unknown | |
d17753 | val | new HttpPost("localhost:3000/api/send"); will not work: every HTTP request needs to be fully qualified. Also, do not use localhost; Android won't recognize your endpoint that way. I always use ngrok to broadcast to the internet. Switch to
new HttpPost("http://your_domain");
Also consider switching to Retrofit2; it uses Java annotations to compose its requests. Highly recommended. Here is a great link to start: https://futurestud.io/tutorials/retrofit-getting-started-and-android-client | unknown |
d17754 | val | Deleting a certain number of rows one by one can be slow. You can try the following method, which should be faster and achieve the desired outcome.
Private Sub Select_Button_Click()
Application.DisplayAlerts = False: Application.ScreenUpdating = False: Application.EnableEvents = False
On Error GoTo Cleanup
With Worksheets("Sheet1").ListObjects("Table2")
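' Show only the rows whose column 7 differs from the value in F2, delete them, then clear the filter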
.DataBodyRange.AutoFilter 7, "<>" & Sheet1.Range("$F$2").Value2
.Range.Offset(1).Delete
.AutoFilter.ShowAllData
With .DataBodyRange
Dim i As Long
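' Duplicate the remaining rows 12 times; each pass stamps a month-offset formula into column 10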
For i = 1 To 12
.Copy
.Insert Shift:=xlDown
.Columns(10).Formula = "=Month($A$2)-" & i
Next
End With
End With
Cleanup:
With Application
.DisplayAlerts = True: .ScreenUpdating = True: .EnableEvents = True: .CutCopyMode = False
End With
End Sub | unknown | |
d17755 | val | There is no direct conversion between these two entities out of the box - the Outlook message file and the MailMessage class from the .NET framework. You can automate Outlook to get an instance of the MSG file using the NameSpace.OpenSharedItem method, which is used to open iCalendar appointment (.ics) files, vCard (.vcf) files, and Outlook message (.msg) files.
You can also use Redemption for that, look for RDOSession.GetMessageFromMsgFile.
A: Thanks for the replies but I found the answer on this existing post: C# Outlook interop and OpenSharedItem for opening MSG files
Like Eugene said, I used the Microsoft.Office.Interop.Outlook.Application to open the .msg file via Outlook. | unknown | |
d17756 | val | It's not really clear from the question exactly what you are trying to achieve. By reading through the code it seems you are wrapping each word in a span, and then using that span's location to work out whether or not it is on a new line; this then leads to each word on the same line being merged together inside a new span, each with class="line".
Whilst reading the offset position of a newly formed span — formed in the same block of code — could be causing your IE7 issue, because its location information may not be recalculated yet... It really does beg the question as to why you are doing this? Especially if you take the name of your function padSubsequentLines into account.
If all you are doing is padding between lines of words you should use line-height: in your style/css.
update
I'd recommend — assuming you don't have direct access to the markup, which would have to be very likely — that you just stick with your first part of the code. This would be the part that wraps each word with a span. On these spans I'd then apply the class that applies the background colour you want, there should be no need to combine them into their constituent lines. This will remove the need to calculate offsetTop, and should even work for IE7. As I stated before, if you require padding between lines, use line-height.
function padSubsequentLines(element) {
var
words = element.innerHTML.split(' '),
count = words.length,
html = '',
i
;
for (i=0; i<count; i++) {
html += '<span class="background">' + words[i] + ' </span>';
}
element.innerHTML = html;
}
Note: You will need to make sure the spaces between words are kept within the spans, so that the background colour appears seemless.
After the above, if you still have problems with regard to the background colour stretching to fill the space in IE7, I'd look to your CSS definition for your span elements and make sure they aren't being overflow:hidden, zoom:1 or display:block. | unknown | |
d17757 | val | Yes. You should create the emr_default connection with the right type for the operator (you have to pick the right one from the list).
Here are detailed instructions on what to do. This is the "1.10.11" Airflow documentation, and if you need any other Airflow resources and docs you can always go there and use the "Search" functionality. I got to that page by choosing the Airflow 1.10.11 version and searching for "connection".
https://airflow.apache.org/docs/apache-airflow/1.10.11/howto/connection/index.html?highlight=connections
A: Try the following connection of emr_default that works for me.
Connection Id: emr_default
Connection Type: Amazon Elastic MapReduce
Login: access_key
Password: secret_key
Extra: {"region_name": "eu-west-3"}
Replace "eu-west-3" with your region | unknown | |
d17758 | val | Only Collections are synced across browsers with publish/subscribe in Meteor.
Maybe you can have something like a Users collection with an is_typing field and create a reactive template helper with it?
Very basic example:
Template.messages.is_typing = function() {
return Users.find({is_typing:true}).count() > 0
};
and in the template
<template name="messages">
{{#if is_typing}}
typing
{{else}}
idle
{{/if}}
</template> | unknown | |
d17759 | val | The better option for transferring a large number of files is to use the FTP protocol. For that you need an FTP server. Then, using org.apache.commons.net.ftp.FTPClient, you may upload files to the server (and also perform various file-management operations).
The storeFile(String, InputStream) method of org.apache.commons.net.ftp.FTPClient can give the file upload status, i.e. it returns true if the file is uploaded successfully, otherwise false.
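As an aside, if you happen to be doing this from Python rather than Java, the standard library's ftplib follows the same pattern (a minimal sketch; host, credentials and file name are placeholders):
from ftplib import FTP
ftp = FTP('ftp.example.com') # placeholder host
ftp.login('user', 'password') # placeholder credentials
with open('report.csv', 'rb') as f:
    ftp.storbinary('STOR report.csv', f) # raises ftplib.error_perm on failure
ftp.quit()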
Please refer to this link for a sample program. | unknown |
d17760 | val | Instead of return 0 just do return -1 and you'll get the desired height, smaller by 1. The corrected code is below:
def maxDepth(self, node):
if node is None:
return -1
else:
# Compute the depth of each subtree
lDepth = self.maxDepth(node.left)
rDepth = self.maxDepth(node.right)
# Use the larger one
if (lDepth > rDepth):
return lDepth + 1
else:
return rDepth + 1
Also you can use built-in max() function to make your code shorter:
def maxDepth(self, node):
if node is None:
return -1
return max(self.maxDepth(node.left), self.maxDepth(node.right)) + 1
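For instance, a minimal runnable sketch (the Node class here is an assumption, since only the method is shown in the question):
class Node:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

def max_depth(node):
    if node is None:
        return -1
    return max(max_depth(node.left), max_depth(node.right)) + 1

print(max_depth(None)) # -1: empty tree
print(max_depth(Node())) # 0: a single node has height 0
print(max_depth(Node(Node()))) # 1: one edge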
Note: OP is correct, height should be edge-based, i.e. tree with one node 5 should have height of 0. And empty tree (None-tree) has height -1. There are two proofs of this:
One proof, in the Wikipedia Tree article, says that height is edge-based and that "Conventionally, an empty tree (tree with no nodes, if such are allowed) has height −1."
And another proof is in the famous book Cormen T.H. - Introduction to Algorithms: | unknown |
d17761 | val | Thanks to all that responded for the clues that helped me solve this. I looked in the console and saw the error message "No 'Access-Control-Allow-Origin' header is present on the requested resource".
What wasn't clear in my question was that the url I was trying to reach was on a different server and I was encountering this problem.
Thanks for the help. | unknown | |
d17762 | val | Pentaho files are found at:
https://sourceforge.net/projects/pentaho/files
The server is in the 'Business Intelligence Server' folder.
Download the zip file of the latest version, and unzip it. Open a terminal at the location of the unzipped directory, and then cd into the directory. Now run ./start-pentaho.sh. It is a Java application, so make sure you have Java installed.
The server is now running, and the 'user console' can be accessed through a web browser at localhost:8080 by default. It was very slow for me, just fyi. The default login is 'admin' and 'password'. | unknown | |
d17763 | val | To get the current date, you need to specify which time zone you're in. So given a clock and a time zone, you'd use:
LocalDate today = clock.Now.InZone(zone).Date;
While you can use SystemClock.Instance, it's generally better to inject an IClock into your code, so you can test it easily.
Note that in Noda Time 2.0 this will be simpler, using ZonedClock, where it will just be:
LocalDate today = zonedClock.GetCurrentDate();
... but of course you'll need to create a ZonedClock by combining an IClock and a DateTimeZone. The fundamentals are still the same, it's just a bit more convenient if you're using the same zone in multiple places. For example:
// These are IClock extension methods...
ZonedClock zonedClock = SystemClock.Instance.InTzdbSystemDefaultZone();
// Or...
ZonedClock zonedClock = SystemClock.Instance.InZone(specificZone); | unknown | |
d17764 | val | int lena = a.length();
int lenb = b.length();
int inta[] = new int[lena];
int intb[] = new int[lenb];
int maxLen = Math.max(lena, lenb) + 1;
int result[] = new int[maxLen];
int carry = 0, tempResult;
// Convert each character to its digit value (charAt alone gives the char code, not the digit)
for(int i = 0; i < lena; i++) {
    inta[i] = a.charAt(i) - '0';
}
for(int i = 0; i < lenb; i++) {
    intb[i] = b.charAt(i) - '0';
}
// Add column by column from the right, propagating the carry
for(int i = 1; i < maxLen; i++) {
    tempResult = carry;
    if(lena - i >= 0)
        tempResult += inta[lena - i];
    if(lenb - i >= 0)
        tempResult += intb[lenb - i];
    result[maxLen - i] = tempResult % 10;
    carry = tempResult / 10;
}
result[0] = carry;
StringBuilder res = new StringBuilder();
for(int i = 0; i < maxLen; i++) {
    res.append(result[i]);
}
String sum = res.toString();
// Drop the leading zero when the final carry was never needed
if(sum.length() > 1 && sum.charAt(0) == '0')
    sum = sum.substring(1);
System.out.println(sum);
I think that would do what you want and will avoid most simple mistakes (couldn't try it, no access to an IDE).
I actually split the two strings into their digit arrays and then add them into the result array.
Then I put the result array into a string that I print. | unknown |
d17765 | val | Easy steps to create a CocoaPod from an existing Xcode project
*
*Create a repository on your git account (Repo name, check README,
choose MIT under license).
*Copy the URL of your repository. Open a terminal and run the following command:
git clone <your copied repository url>
*Now copy your Xcode project inside the cloned repository folder on
your Mac. Now run the following commands:
git add -u to add all files (if not added use: git add filepath/folder)
git commit -m "your custom message"
git push origin master
*Create a new release on your git repository, or run the following commands:
git tag 1.0.0
git push --tags
*First, we need to make sure that you have CocoaPods installed and
ready to use in your Terminal. Run the following command:
sudo gem install cocoapods --pre
Creating a Podspec
*
*All Pods have a podspec file. A podspec, as its name suggests,
defines the specifications of the Pod! Now let's make one; run the
following command in the terminal:
touch PodName.podspec
*After adding and modifying your .podspec file, validate your .podspec
file by running the following command in the terminal:
pod lib lint
*Once it validates successfully without errors, run the following
commands to register yourself and publish the pod, respectively:
pod trunk register
pod trunk push PodName.podspec
If all goes well, you will get this in the terminal:
PodName (1.0.0) successfully published
February 5th, 02:32
https://cocoapods.org/pods/PodName
Tell your friends!
Yeah!!!!! Congrats, you have got your pod link. Use it wherever you want.
A: Here are some useful links you can follow.
https://www.appcoda.com/cocoapods-making-guide/
https://www.raywenderlich.com/5823-how-to-create-a-cocoapod-in-swift | unknown | |
d17766 | val | You should have a field in the db such as isDeleted or isRecycled; when it is set to 1, show the record in the recycle bin. If the user finally deletes the record, delete it from the db.
A better choice is to have a status field with an enum Active/Archived/Deleted or Active/Archived/Deleted/Purged, as sketched below.
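A minimal Python sketch of such a status enum (the member names are just an illustration):
from enum import Enum

class RecordStatus(Enum):
    ACTIVE = 0
    ARCHIVED = 1
    DELETED = 2 # shown in the recycle bin
    PURGED = 3 # eligible for real deletion by the clean-up job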
When the record is in the last state of this enum, delete it in real time, or via a SQL clean-up job or something like this. | unknown |
d17767 | val | There doesn't appear to be a way to disable zoom completely, or specifically the slider, after looking around. If your main mission is to avoid someone clicking on the zoom slider, I would probably go with hiding the status bar altogether.
Application.DisplayStatusBar = False
A: To hide the zoom slider alone, you can edit the registry key
HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Excel\StatusBar
by setting ZoomSlider value to 0
Anyway, I don't think this can be achieved using VBA, even with SaveSetting.
You can try writing a .reg file to change the target key and load it from VBA. I'm not sure if this can be done, but even if it works the user will still have to click Yes in a system prompt to allow the key to be loaded into the registry.
And even when the user clicks Yes to allow the .reg file to load and change the registry key, the Excel status bar doesn't refresh to show/hide the ZoomSlider until Excel is restarted.
In short, hiding the zoom slider alone using VBA doesn't seem to be achievable. | unknown |
d17768 | val | Assuming you are trying to do a SignalR client in a Windows Forms application, check this post (http://mscodingblog.blogspot.com/2012/12/testing-signalr-in-wpf-console-and.html) on how to do client-side SignalR in a WPF application in VB. With a similar approach I guess you could get a SignalR client working in a Windows Forms application.
d17769 | val | If you use rake to run rspec tests then you can edit spec/spec.opts
http://rspec.info/rails/runners.html
A: As you can see in the docs here, the intended use is creating ~/.rspec and in it putting your options, such as --color.
To quickly create an ~/.rspec file with the --color option, just run:
echo '--color' >> ~/.rspec
A: Or simply add alias spec='spec --color --format specdoc' to your ~/.bashrc file like me.
A: One can also use a spec_helper.rb file in all projects. The file should include the following:
RSpec.configure do |config|
# Use color in STDOUT
config.color = true
# Use color not only in STDOUT but also in pagers and files
config.tty = true
# Use the specified formatter
config.formatter = :documentation # :progress, :html,
# :json, CustomFormatterClass
end
Any example file must require the helper to be able to use those options.
A: In your spec_helper.rb file, include the following option:
RSpec.configure do |config|
config.color_enabled = true
end
You must then require it in each *_spec.rb file that should use that option.
A: One thing to be aware of is the impact of the different ways of running RSpec.
I was trying to turn on the option with the following code in spec/spec_helper.rb -
RSpec.configure do |config|
config.tty = $stdout.tty?
end
*
*calling the 'rspec' binary directly - or as 'bundle exec rspec' and checking $stdout.tty? will return true.
*invoking the 'rake spec' task - or as 'bundle exec rake spec' - Rake will invoke rspec in a separate process, and $stdout.tty? will return false.
In the end I used the ~/.rspec option, with just --tty as its contents. Works well for me and keeps our CI server output clean. | unknown | |
d17770 | val | Your module has a minus symbol in its name. That's the reason. Sometimes (though rarely) the underscore symbol may be at fault. If you name your module something like "myModules" or "modules", there will be no error.
WRONG:
>>> import my-modules
File "<stdin>", line 1
import my-modules
^
SyntaxError: invalid syntax
VALID:
>>> import myModules
#No error | unknown | |
d17771 | val | The results of expressions in a Python script are not normally printed - this is a feature of the interpreter and the notebook. In a script it would not make much sense to compute x * y and do nothing with it.
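A minimal sketch of the difference, run as a script:
3j * 9 # evaluated, but the result is silently discarded in a script
print(3j * 9) # explicitly prints 27j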
Try this instead: print(3j * 9) | unknown | |
d17772 | val | Yes you can add an action hook to wp_head like this:
add_action('wp_head', 'myCallbackToAddMeta');
function myCallbackToAddMeta(){
echo "\t<meta name='keywords' content='$contents' />\n";
} | unknown | |
d17773 | val | Create the ZIP file locally and use either commons-net FTP or SFTP to move the ZIP file across to the remote location, assuming that by "remote location" you mean some FTP server, or possibly a blade on your network.
If you are using the renameTo method on java.io.File, note that this doesn't work on some operating systems (e.g. Solaris) where the locations are on different shares. You would have to do a manual copy of the file data from one location to another. This is pretty simple using standard Java I/O. | unknown | |
d17774 | val | It seems you forgot the return in the if clause. There's one in the else but none in the if.
A: @furas' code made iterative instead of recursive:
def radiationExposure2(start, stop, step):
totalExposure = 0
time = stop - start
newStart = start + step
oldStart = start
while time > 0:
totalExposure += f(oldStart)*step
time = stop - newStart
oldStart = newStart
newStart += step
return totalExposure
Converted to a for-loop:
def radiationExposure3(start, stop, step):
totalExposure = 0
for time in range(start, stop, step):
totalExposure += f(time) * step
return totalExposure
Using a generator expression:
def radiationExposure4(start, stop, step):
return sum(f(time) * step for time in range(start, stop, step))
A: As Paulo mentioned, your if statement had no return. Plus, you were referencing the variable radiation before it was assigned. A few tweaks and I am able to get it working.
global totalExposure
totalExposure = 0
def f(x):
import math
return 10 * math.e**(math.log(0.5)/5.27 * x)
def radiationExposure(start, stop, step):
time = (stop-start)
newStart = start+step
if(time!=0):
radiationExposure(newStart, stop, step)
global totalExposure
radiation = f(start) * step
totalExposure += radiation
return totalExposure
else:
return totalExposure
rad = radiationExposure(0, 5, 1)
# rad = 39.1031878433
A: Cleaner version without global
import math
def f(x):
return 10*math.e**(math.log(0.5)/5.27 * x)
def radiationExposure(start, stop, step):
totalExposure = 0
time = stop - start
newStart = start + step
if time > 0:
totalExposure = radiationExposure(newStart, stop, step)
totalExposure += f(start)*step
return totalExposure
rad = radiationExposure(0, 5, 1)
# rad = 39.1031878432624
A: As others mentioned, your if statement had no return: there's one in the else but none in the if. | unknown |
d17775 | val | For setting up environment variables:
1) Right-click the My Computer icon on your desktop and select Properties.
2) Click the Advanced tab.
3) Click the Environment Variables button.
4) Under System Variables, click New.
5) Enter the variable name as JAVA_HOME.
6) Enter the variable value as the installation path for the Java Development Kit.
If your Java installation directory has a space in its path name, you should use the shortened path name (e.g. C:\Progra~1\Java\jre6) in the environment variable instead.
Note for Windows users on 64-bit systems
Progra~1 = 'Program Files'
Progra~2 = 'Program Files(x86)'
7 )Click OK.
8) Click Apply Changes.
9) Close any command window which was open before you made these changes, and open a new command window. There is no way to reload environment variables from an active command prompt. If the changes do not take effect even after reopening the command window, restart Windows.
10) If you are running the Confluence EAR/WAR distribution, rather than the regular Confluence distribution, you may need to restart your application server.
Does one need to install ant if the ant libraries are already present in NetBeans?
No. You don't need to install it again.
Is there a better way to import the sphinx jars into my .java project in NetBeans than through using Cygwin?
Using Cygwin (a Linux environment on Windows) definitely works, but I am unsure about any other method. | unknown |
d17776 | val | The following works with a CSV file. You may need to do this before proceeding.
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>StackOverflow</title>
</head>
<body>
<input type="text" id="searchvalue"><button onclick="search()">Search</button>
<table id="userslist"></table>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script>
var data = [],keys,values=[],delimeter=";";
function getData(onsuccess)
{
$.ajax({
url:"/Users/Default/Downloads/test.csv",
type:"GET",
dataType: "text", // jQuery has no "csv" dataType; treat the response as plain text
success:onsuccess
});
}
function parseData()
{
keys = data.splice(0,1)[0].split(delimeter);
values = [];
for(var i=0,n=data.length;i<n;i++)
{
values.push(data[i].split(delimeter));
}
}
function getTableRow(data,isheading)
{
var rowtype = isheading ? 'th' : 'td';
var rowhtml = '<tr>';
for(var key in data)
{
rowhtml+='<'+rowtype+'>'+data[key]+'</'+rowtype+'>';
}
return (rowhtml+'</tr>');
}
function populateData()
{
var htmldata = "";
htmldata = getTableRow(keys,true);
for(var index in values)
{
htmldata+=getTableRow(values[index]);
}
document.getElementById("userslist").innerHTML = htmldata;
}
function search()
{
var data = [], keys = [], values = [];
function addRecord()
{
for(var i=0,n=data.length;i<n;i++)
{
var value = data[i].split(delimeter);
if(value[0].toLowerCase().includes(searchvalue.toLowerCase()))
{
values.push(value);
}
}
if(values.length)
{
for(var index in values)
{
table.insertAdjacentHTML("beforeend",getTableRow(values[index]));
}
}
else
{
alert("No records found.")
}
}
var searchvalue = document.getElementById("searchvalue").value;
var table = document.getElementById("userslist");
getData(function(response){
data = response.split(/\n/);
addRecord();
});
}
getData(function(response){ data = response.split(/\n/); parseData(); populateData();});
</script>
</body>
</html> | unknown | |
d17777 | val | For JavaFX there is a library with write-back support: DataFX 2.0.
Sample Examples can be found here
If you need any further help on DataFX you can post in the DataFX Google group: Link
d17778 | val | Seems like you want to query your DOM by a specific tag, similar to jQuery selectors. Take a look at the project below; it might be what you are looking for.
https://github.com/jamietre/csquery
A: Load the HTML into an HtmlDocument object, then select the first node where the text input appears. The node has everything you might need:
var doc = new HtmlDocument();
string input = "Product 1";
doc.LoadHtml("Your HTML here"); //Or doc.Load(), depends on how you're getting your HTML
HtmlNode selectedNode = doc.DocumentNode.SelectSingleNode(string.Format("//*[contains(text(),'{0}')]", input));
var tagName = selectedNode.Name;
var tagClass = selectedNode.Attributes["class"].Value;
//etc
Of course this all depends on the actual page structure, whether "Product 1" is shown anywhere else, whether other elements in the page also use the same node that contains "Product 1", etc. | unknown | |
d17779 | val | Google has fixed the issue: https://issuetracker.google.com/issues/112692348
I was able to run queries this morning using ordinal and offset with no issues. | unknown | |
d17780 | val | You need to add a GROUP BY clause:
SELECT CHINFO.CHILDID
, COUNT(1)
FROM BKA.CHILDEVENTS CHE
JOIN BKA.CHILDEVENTPROPERITIES CHEP ON CHEP.EVENTID = CHE.EVENTID
JOIN BKA.CHILDINFORMATION CHINFO ON CHE.CHILDID = CHINFO.CHILDID
WHERE ( CHE.TYPE = 'ACCIDENT'
OR ( CHE.TYPE = 'BREAK'
AND CHEP.PROPERTY = 'SUCCESS'
AND CHEP.PROPERTYVALUE = 'FALSE'
)
)
AND CHE.ADDDATE BETWEEN DATEADD(DD,
-( DATEPART(DW, @DATETIMENOW - 7) - 1 ),
@DATETIMENOW - 7)
AND DATEADD(DD,
7 - ( DATEPART(DW, @DATETIMENOW - 7) ),
@DATETIMENOW - 7)
GROUP BY CHINFO.CHILDID
A: A filter value in the WHERE clause will invalidate an outer join, so the conditions are moved into the ON clause here:
SELECT CHE.CHILDID
, COUNT(1)
FROM BKA.CHILDEVENTS CHE
LEFT JOIN BKA.CHILDEVENTPROPERITIES CHEP
ON CHEP.EVENTID = CHE.EVENTID
AND ( CHE.TYPE = 'ACCIDENT'
OR ( CHE.TYPE = 'BREAK'
AND CHEP.PROPERTY = 'SUCCESS'
AND CHEP.PROPERTYVALUE = 'FALSE'
)
)
AND CHE.ADDDATE BETWEEN DATEADD(DD,
-( DATEPART(DW, @DATETIMENOW - 7) - 1 ),
@DATETIMENOW - 7)
AND DATEADD(DD,
7 - ( DATEPART(DW, @DATETIMENOW - 7) ),
@DATETIMENOW - 7)
GROUP BY CHE.CHILDID
A: DECLARE @DATETIMENOW DATETIME
SET @DATETIMENOW = GETDATE()
SELECT B.WEEK FROM BKA.CHILDINFORMATION CI LEFT OUTER JOIN
(SELECT Distinct CHINFO.CHILDID,COUNT(*) as week FROM BKA.CHILDINFORMATION CHINFO
JOIN BKA.CHILDEVENTS CHE
ON CHE.CHILDID = CHINFO.CHILDID
JOIN BKA.CHILDEVENTPROPERITIES CHEP
ON CHE.EVENTID = CHEP.EVENTID
WHERE
(CHE.TYPE = 'ACCIDENT' OR (CHE.TYPE = 'POTTYBREAK' AND CHEP.PROPERTY = 'SUCCESS'
AND CHEP.PROPERTYVALUE = 'FALSE'))
AND
CHE.ADDDATE
BETWEEN DATEADD(DD, -(DATEPART(DW, @DATETIMENOW-14)-1), @DATETIMENOW-14) AND
DATEADD(DD, 7-(DATEPART(DW, @DATETIMENOW-14)), @DATETIMENOW-14) group by CHINFO.CHILDID) b
on CI.ChildID = b.ChildID | unknown | |
d17781 | val | This is a duplicate of SLComposeViewController setInitialText not showing up in View.
This behaviour is by design; prefilling was not allowed by policy, and now it's also enforced.
About the cancel button; this is a known issue and will be fixed. See bug report: https://developers.facebook.com/bugs/962985360399542/ | unknown | |
d17782 | val | The most efficient way might be to import the OSM data of the specific area into a local PostGIS database using osm2pgsql or Imposm and do your analytics there. | unknown |
d17783 | val | Crap4j is one fairly good metric that I'm aware of...
It's a Java implementation of the Change Risk Analysis and Predictions software metric, which combines cyclomatic complexity and code coverage from automated tests.
A: If you are looking for some useful metrics that tell you about the quality (or lack thereof) of your code, you should look into the following metrics:
*
*Cyclomatic Complexity
*
*This is a measure of how complex a method is.
*Usually 10 and lower is good, 11-25 is poor, higher is terrible.
*Nesting Depth
*
*This is a measure of how many nested scopes are in a method.
*Usually 4 and lower is good, 5-8 is poor, higher is terrible.
*Relational Cohesion
*
*This is a measure of how well related the types in a package or assembly are.
*Relational cohesion is somewhat of a relative metric, but useful none the less.
*Acceptable levels depend on the formula. Given the following:
*
*R: number of relationships in package/assembly
*N: number of types in package/assembly
*H: Cohesion of relationship between types
*Formula: H = (R+1)/N
*Given the above formula, acceptable range is 1.5 - 4.0
*Lack of Cohesion of Methods (LCOM)
*
*This is a measure of how cohesive a class is.
*Cohesion of a class is a measure of how many fields each method references.
*Good indication of whether your class meets the Principle of Single Responsibility.
*Formula: LCOM = 1 - sum(MF)/(M*F)
*
*M: number of methods in class
*F: number of instance fields in class
*MF: number of methods in class accessing a particular instance field
*sum(MF): the sum of MF over all instance fields
*A class that is totally cohesive will have an LCOM of 0.
*A class that is completely non-cohesive will have an LCOM of 1.
*The closer to 0 you approach, the more cohesive, and maintainable, your class.
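As an illustration, a minimal Python sketch of the LCOM formula above (the representation of the class model is an assumption):
def lcom(field_refs_per_method, num_fields):
    # field_refs_per_method: for each method, the set of instance fields it reads or writes
    m = len(field_refs_per_method)
    if m == 0 or num_fields == 0:
        return 0.0
    sum_mf = sum(len(fields) for fields in field_refs_per_method) # sum(MF)
    return 1 - sum_mf / (m * num_fields)

# Two methods, two fields, each method touching one field: LCOM = 1 - 2/(2*2) = 0.5
print(lcom([{'a'}, {'b'}], 2))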
These are just some of the key metrics that NDepend, a .NET metrics and dependency mapping utility, can provide for you. I recently did a lot of work with code metrics, and these 4 metrics are the core key metrics that we have found to be most useful. NDepend offers several other useful metrics, however, including Efferent & Afferent coupling and Abstractness & Instability, which combined provide a good measure of how maintainable your code will be (and whether or not you're in what NDepend calls the Zone of Pain or the Zone of Uselessness).
Even if you are not working with the .NET platform, I recommend taking a look at the NDepend metrics page. There is a lot of useful information there that you might be able to use to calculate these metrics on whatever platform you develop on.
A: Bug metrics are also important:
*
*Number of bugs coming in
*Number of bugs resolved
To detect for instance if bugs are not resolved as fast as new come in.
A: Code Coverage is just an indicator and helps pointing out lines which are not executed at all in your tests, which is quite interesting. If you reach 80% code coverage or so, it starts making sense to look at the remaining 20% of lines to identify if you are missing some use case. If you see "aha, this line gets executed if I pass an empty vector" then you can actually write a test which passes an empty vector.
As an alternative I can think of, if you have a specs document with Use Cases and Functional Requirements, you should map the unit tests to them and see how many UC are covered by FR (of course it should be 100%) and how many FR are covered by UT (again, it should be 100%).
If you don't have specs, who cares? Anything that happens will be ok :)
A: What about watching the trend of code coverage during your project?
As it is the case with many other metrics a single number does not say very much.
For example it is hard to tell whether there is a problem if "we have a Checkstyle rules compliance of 78.765432%". If yesterday's compliance was 100%, we are definitely in trouble. If it was 50% yesterday, we are probably doing a good job.
I always get nervous when code coverage has gotten lower and lower over time. There are cases when this is okay, so you cannot turn off your head when looking at charts and numbers.
BTW, sonar (http://sonar.codehaus.org/) is a great tool for watching trends.
A: Using code coverage on its own is mostly pointless; it only gives you insight if you are looking for unnecessary code.
Using it together with unit-tests and aiming for 100% coverage will tell you that all the 'tested' parts (assumed it was all successfully too) work as specified in the unit-test.
Writing unit-tests from a technical design/functional design, having 100% coverage and 100% successful tests will tell you that the program is working like described in the documentation.
Now the only thing you need is good documentation, especially the functional design, a programmer should not write that unless (s)he is an expert of that specific field.
A: Scenario coverage.
I don't think you really want to have 100% code coverage. Testing say, simple getters and setters looks like a waste of time.
The code always runs in some context, so you may list as many scenarios as you can (depending on the problem complexity sometimes even all of them) and test them.
Example:
// parses a line from .ini configuration file
// e.g. in the form of name=value1,value2
List parseConfig(string setting)
{
(name, values) = split_string_to_name_and_values(setting, '=')
values_list = split_values(values, ',')
return values_list
}
Now, you have many scenarios to test. Some of them:
*
*Passing correct value
*List item
*Passing null
*Passing empty string
*Passing ill-formated parameter
*Passing string with with leading or ending comma e.g. name=value1, or name=,value2
Running just first test may give you (depending on the code) 100% code coverage. But you haven't considered all the posibilities, so that metric by itself doesn't tell you much.
A: How about (lines of code)/(number of test cases)? Not extremely meaningful (since it depends on LOC), but at least it's easy to calculate.
Another one could be (number of test cases)/(number of methods).
A: As a rule of thumb, defect injection rates proportionally trail code yield and they both typically follow a Rayleigh distribution curve.
At some point your defect detection rate will peak and then start to diminish.
This apex represents 40% of discovered defects.
Moving forward with simple regression analysis you can estimate how many defects remain in your product at any point following the peak.
This is one component of Lawrence Putnam's model.
A: I wrote a blog post about why High Test Coverage Ratio is a Good Thing Anyway.
I agree that: when a portion of code is executed by tests, it doesn’t mean that the validity of the results produced by this portion of code is verified by tests.
But still, if you are heavily using contracts to check states validity during tests execution, high test coverage will mean a lot of verification anyway.
A: The value in code coverage is it gives you some idea of what has been exercised by tests.
The phrase "code coverage" is often used to mean statement coverage, e.g., "how much of my code (in lines) has been executed", but in fact there are over a hundred varieties of "coverage". These other versions of coverage try to provide a more sophisticated view what it means to exercise code.
For example, condition coverage measures how many of the separate elements of conditional expressions have been exercised. This is different than statement coverage. MC/DC
"modified condition/decision coverage" determines whether the elements of all conditional expressions have all been demonstrated to control the outcome of the conditional, and is required by the FAA for aircraft software. Path coverage meaures how many of the possible execution paths through your code have been exercised. This is a better measure than statement coverage, in that paths essentially represent different cases in the code. Which of these measures is best to use depends on how concerned you are about the effectiveness of your tests.
Wikipedia discusses many variations of test coverage reasonably well.
http://en.wikipedia.org/wiki/Code_coverage
A: This hasn't been mentioned, but the amount of change in a given file of code or method (by looking at version control history) is interesting particularly when you're building up a test suite for poorly tested code. Focus your testing on the parts of the code you change a lot. Leave the ones you don't for later.
Watch out for a reversal of cause and effect. You might avoid changing untested code and you might tend to change tested code more.
A: SQLite is an extremely well-tested library, and you can extract all kinds of metrics from it.
As of version 3.6.14 (all statistics in the report are against that release of SQLite), the SQLite library consists of approximately 63.2 KSLOC of C code. (KSLOC means thousands of "Source Lines Of Code" or, in other words, lines of code excluding blank lines and comments.) By comparison, the project has 715 times as much test code and test scripts - 45261.5 KSLOC.
In the end, what always strikes me as the most significant is none of those possible metrics seem to be as important as the simple statement, "it meets all the requirements." (So don't lose sight of that goal in the process of achieving it.)
If you want something to judge a team's progress, then you have to lay down individual requirements. This gives you something to point to and say "this one's done, this one isn't". It's not linear (solving each requirement will require varying work), and the only way you can linearize it is if the problem has already been solved elsewhere (and thus you can quantize work per requirement).
A: I like revenue, sales numbers, profit. They are pretty good metrics of a code base.
A: Probably not only measuring the code covered (touched) by the unit tests but how good the assertions are.
One metric easy to implement is to measure the size of the Assert.AreEqual
You can create your own Assert implementation calling Assert.AreEqual and measuring the size of the object passed as second parameter. | unknown | |
d17784 | val | The problem is that your package was encrypted by a user. Could it be that you are logging in to the PC with a different login, or from a different machine? You're not going to be able to open it until you figure out who encrypted it or from what account it was encrypted. | unknown |
d17785 | val | It is a theme issue on your end; probably textColor is set to white in your theme. Change these:
<item name="android:textColorPrimary">@color/white</item>
<item name="android:textColorSecondary">@color/white</item>
Change them to black.
A: In your row_layout.xml file, set textColor on the TextView:
<TextView
android:layout_width="123dp"
android:layout_height="wrap_content"
android:id="@+id/resultTeamNumber"
android:text="Here Number"
android:textSize="18dp"
android:layout_alignParentTop="true"
android:layout_alignParentStart="true"
android:textColor="@android:color/black"
/>
A: Set your adapter on the ListView last. | unknown |
d17786 | val | With Gradle 3, implementation was introduced. Replace compile with implementation.
Use this instead.
pom.withXml {
def dependenciesNode = asNode().appendNode('dependencies')
configurations.implementation.allDependencies.each {
def dependencyNode = dependenciesNode.appendNode('dependency')
dependencyNode.appendNode('groupId', it.group)
dependencyNode.appendNode('artifactId', it.name)
dependencyNode.appendNode('version', it.version)
}
}
A: I was able to work around this by having the script add the dependencies to the pom directly using pom.withXml.
//The publication doesn't know about our dependencies, so we have to manually add them to the pom
pom.withXml {
def dependenciesNode = asNode().appendNode('dependencies')
//Iterate over the compile dependencies (we don't want the test ones), adding a <dependency> node for each
configurations.compile.allDependencies.each {
def dependencyNode = dependenciesNode.appendNode('dependency')
dependencyNode.appendNode('groupId', it.group)
dependencyNode.appendNode('artifactId', it.name)
dependencyNode.appendNode('version', it.version)
}
}
This works for my project, it may have unforeseen consequences in others.
A: Kotlin DSL version of the accepted answer:
create<MavenPublication>("maven") {
groupId = "com.example"
artifactId = "sdk"
version = Versions.sdkVersionName
artifact("$buildDir/outputs/aar/Example-release.aar")
pom.withXml {
val dependenciesNode = asNode().appendNode("dependencies")
val configurationNames = arrayOf("implementation", "api")
configurationNames.forEach { configurationName ->
configurations[configurationName].allDependencies.forEach {
if (it.group != null) {
val dependencyNode = dependenciesNode.appendNode("dependency")
dependencyNode.appendNode("groupId", it.group)
dependencyNode.appendNode("artifactId", it.name)
dependencyNode.appendNode("version", it.version)
}
}
}
}
}
A: I upgraded C.Ross's solution. This example will generate pom.xml with dependencies from the compile configuration and also with build-type-specific dependencies, for example if you use different dependencies for release or debug versions (debugCompile and releaseCompile). It also adds exclusions:
publishing {
publications {
// Create different publications for every build types (debug and release)
android.buildTypes.all { variant ->
// Dynamically creating publications name
"${variant.name}Aar"(MavenPublication) {
def manifest = new XmlSlurper().parse(project.android.sourceSets.main.manifest.srcFile);
def libVersion = manifest['@android:versionName'].text()
def artifactName = project.getName()
// Artifact properties
groupId GROUP_ID
version = libVersion
artifactId variant.name == 'debug' ? artifactName + '-dev' : artifactName
// Tell maven to prepare the generated "*.aar" file for publishing
artifact("$buildDir/outputs/aar/${project.getName()}-${variant.name}.aar")
pom.withXml {
//Creating additional node for dependencies
def dependenciesNode = asNode().appendNode('dependencies')
//Defining configuration names from which dependencies will be taken (debugCompile or releaseCompile and compile)
def configurationNames = ["${variant.name}Compile", 'compile']
configurationNames.each { configurationName ->
configurations[configurationName].allDependencies.each {
if (it.group != null && it.name != null) {
def dependencyNode = dependenciesNode.appendNode('dependency')
dependencyNode.appendNode('groupId', it.group)
dependencyNode.appendNode('artifactId', it.name)
dependencyNode.appendNode('version', it.version)
//If there are any exclusions in dependency
if (it.excludeRules.size() > 0) {
def exclusionsNode = dependencyNode.appendNode('exclusions')
it.excludeRules.each { rule ->
def exclusionNode = exclusionsNode.appendNode('exclusion')
exclusionNode.appendNode('groupId', rule.group)
exclusionNode.appendNode('artifactId', rule.module)
}
}
}
}
}
}
}
}
}
}
A: I guess it has something to do with the from components.java directive, as seen in the guide. I had a similar setup and it made the difference to add the line into the publication block:
publications {
mavenJar(MavenPublication) {
artifactId 'rest-security'
artifact jar
from components.java
}
}
A: I was using the maven-publish plugin for publishing my aar dependency and actually I could not use the maven task in my case. So I used the mavenJava task provided by the maven-publish plugin and used that as follows.
apply plugin 'maven-publish'
publications {
mavenAar(MavenPublication) {
from components.android
}
mavenJava(MavenPublication) {
pom.withXml {
def dependenciesNode = asNode().appendNode('dependencies')
// Iterate over the api dependencies (we don't want the test ones), adding a <dependency> node for each
configurations.api.allDependencies.each {
def dependencyNode = dependenciesNode.appendNode('dependency')
dependencyNode.appendNode('groupId', it.group)
dependencyNode.appendNode('artifactId', it.name)
dependencyNode.appendNode('version', it.version)
}
}
}
}
I hope that it helps someone who is looking for help on how to publish the aar along with pom file using the maven-publish plugin.
A: now that compile is deprecated we have to use implementation.
pom.withXml {
def dependenciesNode = asNode().appendNode('dependencies')
configurations.implementation.allDependencies.each {
def dependencyNode = dependenciesNode.appendNode('dependency')
dependencyNode.appendNode('groupId', it.group)
dependencyNode.appendNode('artifactId', it.name)
dependencyNode.appendNode('version', it.version)
}
} | unknown |
d17787 | val | ('0' to 'z').filter(_.isLetterOrDigit).toSet
A: A more functional version of your code is this:
scala> Traversable(('A' to 'Z'), ('a' to 'z'), ('0' to '9')) map (_ toSet) reduce (_ ++ _)
Combining it with the above solutions, one gets:
scala> Seq[Seq[Char]](('A' to 'Z'), ('a' to 'z'), ('0' to '9')) reduce (_ ++ _) toSet
If you have just three sets, the other solutions are simpler, but this structure also works nicely if you have more ranges or they are given at runtime.
A: How about this:
scala> ('a' to 'z').toSet ++ ('A' to 'Z') ++ ('0' to '9')
res0: scala.collection.immutable.Set[Char] = Set(E, e, X, s, x, 8, 4, n, 9, N, j, y, T, Y, t, J, u, U, f, F, A, a, 5, m, M, I, i, v, G, 6, 1, V, q, Q, L, b, g, B, l, P, p, 0, 2, C, H, c, W, h, 7, r, K, w, R, 3, k, O, D, Z, o, z, S, d)
Or, alternatively:
scala> (('a' to 'z') ++ ('A' to 'Z') ++ ('0' to '9')).toSet
res0: scala.collection.immutable.Set[Char] = Set(E, e, X, s, x, 8, 4, n, 9, N, j, y, T, Y, t, J, u, U, f, F, A, a, 5, m, M, I, i, v, G, 6, 1, V, q, Q, L, b, g, B, l, P, p, 0, 2, C, H, c, W, h, 7, r, K, w, R, 3, k, O, D, Z, o, z, S, d)
A: I guess it can't be simpler than this:
('a' to 'z') ++ ('A' to 'Z') ++ ('0' to '9')
You might guess that ('A' to 'z') will include both, but it also adds some extra undesirable characters, namely:
([, \, ], ^, _, `)
Note:
This will not return a Set but an IndexedSeq. I assumed you don't mind the implementation, but if you do, and do want a Set, just call toSet on the result.
A: If you want to generate all the possible characters, doing this should generate all the values a char can take:
(' ' to '~').toSet | unknown | |
d17788 | val | Most Heroku CLI commands support the -a parameter to specify the application, in this case:
heroku buildpacks:set heroku/nodejs -a <app name> | unknown | |
d17789 | val | I agree with @Bickknght that the unpacking is unnecessary. Don't use unpacking when dealing with an unknown or variable number of elements.
In [57]: alist = [np.arange(10), np.arange(10,20), np.arange(20,30)]
Making a list of arrays where the we don't need the ravel.
In [58]: for arr in np.nditer(alist):
...: print(arr)
...:
(array(0), array(10), array(20))
(array(1), array(11), array(21))
(array(2), array(12), array(22))
(array(3), array(13), array(23))
(array(4), array(14), array(24))
(array(5), array(15), array(25))
(array(6), array(16), array(26))
(array(7), array(17), array(27))
(array(8), array(18), array(28))
(array(9), array(19), array(29))
Compare this with a straightforward list zip iteration:
In [59]: for arr in zip(*alist):
...: print(arr)
...:
(0, 10, 20)
(1, 11, 21)
(2, 12, 22)
(3, 13, 23)
(4, 14, 24)
(5, 15, 25)
(6, 16, 26)
(7, 17, 27)
(8, 18, 28)
(9, 19, 29)
The difference is that nditer makes 0d arrays rather than scalars, so the elements have a shape of () and a dtype. That matters in some cases, e.g. where you want to modify the arrays (but then they have to be defined as read/write). Otherwise nditer does not offer any real advantages.
In [62]: %%timeit
...: ll = []
...: for arr in np.nditer(alist):
...: ll.append(np.var(arr))
...:
539 µs ± 17.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [63]: %%timeit
...: ll = []
...: for arr in zip(*alist):
...: ll.append(np.var(arr))
...:
524 µs ± 3.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
If you can avoid the Python level loops, things will be lot faster:
In [65]: np.stack(alist,1)
Out[65]:
array([[ 0, 10, 20],
[ 1, 11, 21],
[ 2, 12, 22],
[ 3, 13, 23],
[ 4, 14, 24],
[ 5, 15, 25],
[ 6, 16, 26],
[ 7, 17, 27],
[ 8, 18, 28],
[ 9, 19, 29]])
In [66]: np.var(np.stack(alist,1),axis=1)
Out[66]:
array([66.66666667, 66.66666667, 66.66666667, 66.66666667, 66.66666667,
66.66666667, 66.66666667, 66.66666667, 66.66666667, 66.66666667])
In [67]: timeit np.var(np.stack(alist,1),axis=1)
66.7 µs ± 1.47 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
I've not attempted to test for -inf.
===
Another important difference with nditer: it iterates over all elements in a flat sense - in effect it does the ravel:
Make a list of 2d arrays.
In [81]: alist = [np.arange(10.).reshape(2,5), np.arange(10,20.).reshape(2,5), np.arange(20,30.).reshape(2,5)]
Plain iteration operates on the first dimension - in this case the 2, so zipped elements are arrays:
In [82]: for arr in zip(*alist):
...: print(arr)
...:
(array([0., 1., 2., 3., 4.]), array([10., 11., 12., 13., 14.]), array([20., 21., 22., 23., 24.]))
(array([5., 6., 7., 8., 9.]), array([15., 16., 17., 18., 19.]), array([25., 26., 27., 28., 29.]))
nditer generates the same tuples as in the 1d array case. In some cases that's fine, but it's hard to avoid if you don't want it.
In [83]: for arr in np.nditer(alist):
...: print(arr)
...:
(array(0.), array(10.), array(20.))
(array(1.), array(11.), array(21.))
(array(2.), array(12.), array(22.))
(array(3.), array(13.), array(23.))
(array(4.), array(14.), array(24.))
(array(5.), array(15.), array(25.))
(array(6.), array(16.), array(26.))
(array(7.), array(17.), array(27.))
(array(8.), array(18.), array(28.))
(array(9.), array(19.), array(29.))
A: The zip function is a solution here, as explained by @hpaulj. Working with 2d arrays instead of 1d simply requires using this function twice, as the following code shows:
variances = []
for arr in zip(*cost_surfaceS):
for element in zip(*arr):
if(float("-inf") not in element):
variance = np.var(element, dtype=np.float32)
variances.append(variance)
else:
variances.append(float("-inf"))
The -inf values are handled by the if condition that avoids computing the variance of arrays containing at least one infinity value. | unknown | |
d17790 | val | I'm reasonably confident it is to do with the order of your .antMatchers() statements.
You currently have .antMatchers("/secured/**").fullyAuthenticated() before .antMatchers("/secured/admin/**").hasRole("ADMIN"). Spring Security is probably matching against this first matcher and applying the fullyAuthenticated() check, which will mean that authorisation is given if you only have role USER.
I would suggest re-ordering things so that your .antMatchers() statements are like this:
.antMatchers("/public/login.jsp").permitAll()
.antMatchers("/public/home.jsp").permitAll()
.antMatchers("/public/**").permitAll()
.antMatchers("/resources/clients/**").fullyAuthenticated()
.antMatchers("/secured/user/**").hasRole("USER")
.antMatchers("/secured/admin/**").hasRole("ADMIN")
.antMatchers("/secured/**").fullyAuthenticated()
In this scenario Spring will match earlier rules for specific access to the /secured/admin/** and the /secured/user/** resources before falling back to the /secured/** statement. | unknown | |
d17791 | val | In your code
printf ( "%d\n", a[0] );
printf ( "%d\n", a[1] );
printf ( "%d\n", a[10] );
printf ( "%d\n", a[100] );
produces undefined behaviour by accessing out-of-bounds memory. | unknown |
d17792 | val | What kind of change detection do you use? Is it OnPush?
https://angular.io/api/core/ChangeDetectionStrategy
enum ChangeDetectionStrategy {
OnPush: 0
Default: 1
}
OnPush: 0
Use the CheckOnce strategy, meaning that automatic change detection is deactivated until reactivated by setting the strategy to Default (CheckAlways). Change detection can still be explicitly invoked. This strategy applies to all child directives and cannot be overridden.
If you using OnPush you should start change detection manually.
https://angular.io/api/core/ChangeDetectorRef detectChanges() or markForCheck()
Example:
import { Component, ChangeDetectionStrategy, ChangeDetectorRef } from '@angular/core';
@Component({
selector: 'alert',
template: `
<div *ngIf="isScreenError" class="alert alert-danger alert-dismissible">
<button class="close" type="button" data-dismiss="alert" (click)='closeAlert()'>×</button>
ERROR: {{errorMessage.error.message}}
</div>
`,
changeDetection: ChangeDetectionStrategy.OnPush,
})
export class AlertComponent {
public errorMessage = {
error: {
message: 'Some message'
}
};
public isScreenError = true;
constructor(
private cd: ChangeDetectorRef,
) { }
public closeAlert(): void {
this.isScreenError = false;
this.cd.markForCheck();
}
} | unknown | |
d17793 | val | You are missing the new when creating your viewmodel.
Your code should look like this:
ko.applyBindings(new ViewModel());
Without the new the this refers to the global window object so your remove function is declared globally, that is why the $parent is not working.
Demo JsFiddle. | unknown | |
d17794 | val | Although this is a very old question, I was also looking but couldn't find the answer until I found out what the problem is.
The EasySMPP library uses asynchronous calls to connect to the SMSC. That is why, when you include the ReadLine() call, you are asked to type your text, and during that delay the bind to the SMSC has already completed. So it works with the Console.ReadLine() in place.
When you run without the ReadLine() the code executes super fast, and by that time your application has not yet bound to the SMSC, so it fails.
client.Connect();
System.Threading.Thread.Sleep(5000);
if (client.SendSms("MyNumber", "XXXXXXXXX", "Hi"))
Console.WriteLine("Message sent");
else
Console.WriteLine("Error");
client.Disconnect();
Console.ReadLine();
A: Try surrounding it with try and catch: put the if in the try block and return the error from the catch via exception.Message. | unknown |
d17795 | val | You can use a list comprehension.
x = [[el[1]] for el in filtered]
or:
x = [[y] for x,y in filtered]
You can also use map with itemgetter. To print it, iterate over the iterable object returned by map. You can use list for instance.
from operator import itemgetter
x = map(itemgetter(1), filtered)
print(list(x))
A: Trying to pass a key to map won't get you closer to a solution. map only takes a function and an iterable (or multiple iterables). Key functions are for ordering-related functions (sorted, max, etc.)
But you were actually pretty close to a solution in the start:
a = map(itemgetter(0), filtered)
The first problem is that you want the second item (item 1), but you're passing 0 instead of 1 to itemgetter. That obviously won't work.
The second problem is that a is a map object—a lazily iterable. It does in fact have the information you want:
>>> a = map(itemgetter(1), filtered)
>>> for val in a: print(val, sep=' ')
3.0 70.0 3.0 50.0 5.0 21.0
… but not as a list. If you want a list, you have to call list on it:
>>> a = list(map(itemgetter(1), filtered))
>>> print(a)
[3.0, 70.0, 3.0, 50.0, 5.0, 21.0]
Finally, you wanted a list of single-element lists, not a list of elements. In other words, you want the equivalent of item[1:] or [item[1]], not just item[1]. You can do that with itemgetter, but it's a pretty ugly, because you can't use slice syntax like [1:] directly, you have to manually construct the slice object:
>>> a = list(map(itemgetter(slice(1, None)), filtered))
>>> print(a)
[[3.0], [70.0], [3.0], [50.0], [5.0], [21.0]]
You could write this a lot more nicely by using a lambda function:
>>> a = list(map(lambda item: item[1:], filtered))
>>> print(a)
[[3.0], [70.0], [3.0], [50.0], [5.0], [21.0]]
But at this point, it's worth taking a step back: map does the same thing as a generator expression, but map takes a function, while a genexpr takes an expression. We already know exactly what expression we want here; the hard part was turning it into a function:
>>> a = list(item[1:] for item in filtered)
>>> print(a)
[[3.0], [70.0], [3.0], [50.0], [5.0], [21.0]]
Plus, you don't need that extra step to turn it into a list with a genexpr; just swap the parentheses with brackets and you've got a list comprehension:
>>> a = [item[1:] for item in filtered]
>>> print(a)
[[3.0], [70.0], [3.0], [50.0], [5.0], [21.0]] | unknown |
d17796 | val | Can you add width and height to the map div? If you have a blank page instead of the map, it's probably missing CSS. | unknown |
d17797 | val | Your counter is indeed stopping, but you then reassign mytimeout after the if statement so the timer starts again. I'm guessing the $state.go() still runs but the counter continues in the console.
Instead, call the timer if less than 10, otherwise call the resolving function.
$scope.startTimer = function() {
$scope.counter = 0;
$scope.onTimeout = function() {
$log.info($scope.counter);
if($scope.counter < 10){
mytimeout = $timeout($scope.onTimeout, 1000)
}else{
$scope.stop();
$state.go($state.current.name, {}, {
reload: true
})
}
$scope.counter++;
}
mytimeout = $timeout($scope.onTimeout, 1000);
} | unknown | |
d17798 | val | I finally found the reason why the implicit style didn't work.
I'm using ModernUI with WPF4.0 and I deleted the
<Style TargetType="{x:Type Rectangle}"/>
in app.xaml the other day.
Well, it's that simple and everything looks fine now. Except that I still don't know how the empty style works. | unknown | |
d17799 | val | Just reduce the problem dimensionality from 3 to 2 (I know the puzzle is called "9x9" rather than "3x3", but the important dimensional number for the puzzle is N=3):
% SHIDOKU Solve Shidoku using recursive backtracking.
% shidoku(X), expects a 4-by-4 array X.
function X = shidoku(X)
[C,s,e] = candidates(X);
while ~isempty(s) && isempty(e)
X(s) = C{s};
[C,s,e] = candidates(X);
end;
if ~isempty(e)
return
end;
if any(X(:) == 0)
Y = X;
z = find(X(:) == 0,1);
for r = [C{z}]
X = Y;
X(z) = r;
X = shidoku(X);
if all(X(:) > 0)
return;
end;
end;
end;
% ------------------------------
function [C,s,e] = candidates(X)
C = cell(4,4);
bi = @(k) 2*ceil(k/2-1) + (1:2);
for j = 1:4
for i = 1:4
if X(i,j)==0
z = 1:4;
z(nonzeros(X(i,:))) = 0;
z(nonzeros(X(:,j))) = 0;
z(nonzeros(X(bi(i),bi(j)))) = 0;
C{i,j} = transpose(nonzeros(z));
end;
end;
end;
L = cellfun(@length,C); % Number of candidates.
s = find(X==0 & L==1,1);
e = find(X==0 & L==0,1);
end % candidates
end % shidoku | unknown | |
d17800 | val | This complete example based on the tensorflow GitHub worked for me:
(I modified a few lines of code by removing the name scope for x and keep_prob, and changing to tf.placeholder_with_default. There's probably a better way to do this somewhere.)
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import pandas as pd
import argparse
import sys
import tempfile
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
FLAGS = None
def deepnn(x):
"""deepnn builds the graph for a deep net for classifying digits.
Args:
x: an input tensor with the dimensions (N_examples, 784), where 784 is the
number of pixels in a standard MNIST image.
Returns:
A tuple (y, keep_prob). y is a tensor of shape (N_examples, 10), with values
equal to the logits of classifying the digit into one of 10 classes (the
digits 0-9). keep_prob is a scalar placeholder for the probability of
dropout.
"""
# Reshape to use within a convolutional neural net.
# Last dimension is for "features" - there is only one here, since images are
# grayscale -- it would be 3 for an RGB image, 4 for RGBA, etc.
with tf.name_scope('reshape'):
x_image = tf.reshape(x, [-1, 28, 28, 1])
# First convolutional layer - maps one grayscale image to 32 feature maps.
with tf.name_scope('conv1'):
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
# Pooling layer - downsamples by 2X.
with tf.name_scope('pool1'):
h_pool1 = max_pool_2x2(h_conv1)
# Second convolutional layer -- maps 32 feature maps to 64.
with tf.name_scope('conv2'):
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
# Second pooling layer.
with tf.name_scope('pool2'):
h_pool2 = max_pool_2x2(h_conv2)
# Fully connected layer 1 -- after 2 rounds of downsampling, our 28x28 image
# is down to 7x7x64 feature maps -- maps this to 1024 features.
with tf.name_scope('fc1'):
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# Dropout - controls the complexity of the model, prevents co-adaptation of
# features.
keep_prob = tf.placeholder_with_default(1.0,())
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
# Map the 1024 features to 10 classes, one for each digit
with tf.name_scope('fc2'):
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
return y_conv, keep_prob
def conv2d(x, W):
"""conv2d returns a 2d convolution layer with full stride."""
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
"""max_pool_2x2 downsamples a feature map by 2X."""
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
def weight_variable(shape):
"""weight_variable generates a weight variable of a given shape."""
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
"""bias_variable generates a bias variable of a given shape."""
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
# Import data
mnist = input_data.read_data_sets("/tmp")
# Create the model
x = tf.placeholder(tf.float32, [None, 784],name='x')
# Define loss and optimizer
y_ = tf.placeholder(tf.int64, [None])
# Build the graph for the deep net
y_conv, keep_prob = deepnn(x)
with tf.name_scope('loss'):
cross_entropy = tf.losses.sparse_softmax_cross_entropy(
labels=y_, logits=y_conv)
cross_entropy = tf.reduce_mean(cross_entropy)
with tf.name_scope('adam_optimizer'):
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
with tf.name_scope('accuracy'):
correct_prediction = tf.equal(tf.argmax(y_conv, 1), y_)
correct_prediction = tf.cast(correct_prediction, tf.float32)
accuracy = tf.reduce_mean(correct_prediction)
graph_location = tempfile.mkdtemp()
print('Saving graph to: %s' % graph_location)
train_writer = tf.summary.FileWriter(graph_location)
train_writer.add_graph(tf.get_default_graph())
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(1000):
batch = mnist.train.next_batch(50)
if i % 100 == 0:
train_accuracy = accuracy.eval(feed_dict={
x: batch[0], y_: batch[1], keep_prob: 1.0})
print('step %d, training accuracy %g' % (i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print('test accuracy %g' % accuracy.eval(feed_dict={
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
sing = np.reshape(mnist.test.images[0],(-1,784))
output = sess.run(y_conv,feed_dict={x:sing,keep_prob:1.0})
print(tf.argmax(output,1).eval())
saver = tf.train.Saver()
saver.save(sess,"/tmp/network")
Extracting /tmp/train-images-idx3-ubyte.gz
Extracting /tmp/train-labels-idx1-ubyte.gz
Extracting /tmp/t10k-images-idx3-ubyte.gz
Extracting /tmp/t10k-labels-idx1-ubyte.gz
Saving graph to: /tmp/tmp17hf_6c7
step 0, training accuracy 0.2
step 100, training accuracy 0.86
step 200, training accuracy 0.8
step 300, training accuracy 0.94
step 400, training accuracy 0.94
step 500, training accuracy 0.96
step 600, training accuracy 0.88
step 700, training accuracy 0.98
step 800, training accuracy 0.98
step 900, training accuracy 0.98
test accuracy 0.9663
[7]
If you want to restore from a new Python run:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
sess = tf.Session()
saver = tf.train.import_meta_graph('/tmp/network.meta')
saver.restore(sess,tf.train.latest_checkpoint('/tmp'))
graph = tf.get_default_graph()
mnist = input_data.read_data_sets("/tmp")
simg = np.reshape(mnist.test.images[0],(-1,784))
op_to_restore = graph.get_tensor_by_name("fc2/add:0")  # the logits; "fc2/MatMul:0" would omit the bias b_fc2
x = graph.get_tensor_by_name("x:0")
output = sess.run(op_to_restore,feed_dict= {x:simg})
print("Result = ", np.argmax(output))
A: The losses oscillate like this, but the predictions don't seem to be bad; it works. It also extracts the MNIST archive repeatedly on every run. Accuracy can also reach 0.98 with a simpler network.
Epoch 1 Completed Total Loss: 47.47844
Accuracy on val_set: 0.8685
Epoch 2 Completed Total Loss: 10.217445
Accuracy on val_set: 0.9
Epoch 3 Completed Total Loss: 14.013474
Accuracy on val_set: 0.9104
[2]
import tensorflow as tf
import tensorflow.examples.tutorials.mnist.input_data as input_data
import numpy as np
import matplotlib.pyplot as plt
n_classes = 10
batch_size = 100
x = tf.placeholder(tf.float32, [None, 784],name='Xx')
y = tf.placeholder(tf.float32,[None,10],name='Yy')
n_input = 784  # renamed from "input" to avoid shadowing the builtin
n_nodes_1 = 300
n_nodes_2 = 300
mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)
def neural_network_model(data):
variables = {'w1':tf.Variable(tf.random_normal([n_input,n_nodes_1])),
'w2':tf.Variable(tf.random_normal([n_nodes_1,n_nodes_2])),
'w3':tf.Variable(tf.random_normal([n_nodes_2,n_classes])),
'b1':tf.Variable(tf.random_normal([n_nodes_1])),
'b2':tf.Variable(tf.random_normal([n_nodes_2])),
'b3':tf.Variable(tf.random_normal([n_classes]))}
output1 = tf.add(tf.matmul(data,variables['w1']),variables['b1'])
output2 = tf.nn.relu(output1)
output3 = tf.add(tf.matmul(output2, variables['w2']), variables['b2'])
output4 = tf.nn.relu(output3)
output5 = tf.add(tf.matmul(output4, variables['w3']), variables['b3'],name='last')
return output5
def train_neural_network(x):
prediction = neural_network_model(x)
name_of_final_layer = 'fin'
final = tf.nn.softmax_cross_entropy_with_logits_v2(logits=prediction,
labels=y,name=name_of_final_layer)
cost = tf.reduce_mean(final)
optimizer = tf.train.AdamOptimizer().minimize(cost)
hm_epochs = 3
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(hm_epochs):
for _ in range(int(mnist.train.num_examples/batch_size)):
epoch_x, epoch_y = mnist.train.next_batch(batch_size)
_,c=sess.run([optimizer,cost],feed_dict={x:epoch_x,y:epoch_y})
print("Epoch",epoch+1,"Completed Total Loss:",c)
correct = tf.equal(tf.argmax(prediction,1),tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct,'float'))
print('Accuracy on val_set:',accuracy.eval({x:mnist.test.images,y:mnist.test.labels}))
path = saver.save(sess,"net/network")  # save the checkpoint so eval_neural_network can restore it
print("Saved to",path)
return prediction
def eval_neural_network(prediction):
with tf.Session() as sess:
# the model is already in the default graph, so restore the existing variables
# directly; import_meta_graph here would load a second copy of the graph and
# leave the original variables (the ones `prediction` uses) uninitialized
new_saver = tf.train.Saver()
new_saver.restore(sess, "net/network")
singleprediction = tf.argmax(prediction, 1)
sing = np.reshape(mnist.test.images[1], (-1, 784))
output = singleprediction.eval(feed_dict={x:sing},session=sess)
digit = mnist.test.images[1].reshape((28, 28))
plt.imshow(digit, cmap='gray')
plt.show()
print(output)
prediction = train_neural_network(x)
eval_neural_network(prediction) | unknown |