_id | partition | text | language | title |
---|---|---|---|---|
d8801 | train | You have way too many errors in your code. The following is the solution to your problem:
<?php
if(isset($_GET['result1']))
{
$result1 = $_GET['result1'];
echo $result1;
}
elseif(isset($_GET['result2']))
{
$result2 = $_GET['result2'];
echo $result2;
}
else
{
echo "Error";
exit();
}
?>
For the future, I would recommend learning PHP and becoming familiar with at least the basic syntax before posting questions about it here. | unknown | |
d8802 | train | You may use this awk:
awk -F, -v n=85 'n>=$1 && (max=="" || $1>max){max=$1; rec=$0} END{print rec}' file
75,Line8
Run this again with a different value:
awk -F, -v n=55 'n>=$1 && (max=="" || $1>max){max=$1; rec=$0} END{print rec}' file
55,Line6
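For comparison, the same "largest key at or below n" lookup can be sketched in Python (the data and function name here are illustrative, not from the question):

```python
def nearest_at_or_below(pairs, n):
    # among rows whose key is <= n, keep the one with the largest key
    candidates = [(k, v) for k, v in pairs if k <= n]
    return max(candidates) if candidates else None

data = [(10, "Line1"), (55, "Line6"), (75, "Line8"), (90, "Line9")]
print(nearest_at_or_below(data, 85))  # (75, 'Line8')
print(nearest_at_or_below(data, 55))  # (55, 'Line6')
```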
A: With Perl
perl -0777 -wnE' $in = shift // 85;
%h = split /(?:\s*,\s*|\s*\n\s*)/;
END { --$in while not exists $h{$in}; say "$in, $h{$in}" }
' data.txt 57
Notes
*read the whole file into a string ("slurp"), by -0777
*populate a hash with the file data; I strip surrounding spaces in the process
*count down from the input value and check for such a key, until we get to one that exists
*input is presumed to be an integer and in range
The nearest key is the first one that exists as input "clicks" down toward it an integer at a time.
The invocation above (for 57) prints the line: 55, Line6.
A version that does check the range of input and allows non-integer input
perl -MList::Util=min -0777 -wnE' $in = int shift // 85;
%h = split /(?:\s*,\s*|\s*\n\s*)/;
die "Input $in out of range\n" if $in < min keys %h;
END { --$in while not exists $h{$in}; say "$in, $h{$in}" }
' data.txt 57
A: The following code complies with your requirement:
use strict;
use warnings;
my $target = shift
or die "Please enter a value";
my $line;
while( <DATA> ) {
my @data = split ',';
last if $data[0] > $target;
$line = $_;
}
print $line;
__DATA__
10,Line1
20,Line2
30,Line3
40,Line4
50,Line5
55,Line6
70,Line7
75,Line8
90,Line9
95,Line10
99,Line11
A: Code for unsorted lines
use strict;
use warnings;
my $target = shift
or die "Please enter a value";
my @lines = <DATA>;
my $line;
my %data;
map { my @array = split ',', $_; $data{$array[0]} = $_ } @lines;
foreach my $key ( sort keys %data ) {
last if $key > $target;
$line = $data{$key};
}
print $line;
__DATA__
10,Line1
20,Line2
30,Line3
40,Line4
50,Line5
55,Line6
70,Line7
75,Line8
90,Line9
95,Line10
99,Line11 | unknown | |
d8803 | train | Turns out my message is stored in tokenized form instead of raw form, and : is one of the default delimiters of the tokenizer in Elasticsearch. As a result, I can't use a regexp query on the whole message, because it matches against each token individually. | unknown | |
d8804 | train | The first part of the question is quite simple: the iterator references the value at a given location, which is of type std::pair<int const, Object*>. You can't delete such an object. You'll get the pointer using the second part of the object, e.g.:
delete found_it->second;
However, it is actually quite uncommon and error-prone to take care of this kind of maintenance. You are much better off not storing an Object* but rather a std::unique_ptr<Object>: the std::unique_ptr<Object> will automatically release the object upon destruction. That would also simplify removing an element:
int DataSet::deletObject(int id) {
return objects.erase(id);
}
When storing std::unique_ptr<Object> you'll probably need to be a bit more verbose when inserting elements into the container, though, e.g.:
objects.insert(std::make_pair(key, std::unique_ptr<Object>(ptr)));
Given that your Object class doesn't have any virtual functions you could probably just store an Object directly, though:
std::unordered_map<int, Object> objects;
A: Dereferencing an unordered_map::iterator produces a unordered_map::value_type&, i.e. pair<const Key, Value>&.
You want
delete found_it->second;
objects.erase(found_it);
Stepping back a bit, are you sure you even need std::unordered_map<int,Object*> instead of std::unordered_map<int,Object>? If you use the latter you wouldn't have to worry about deleteing anything.
If you do need the mapped_type be an Object* you should use std::unordered_map<int,std::unique_ptr<Object>>. Again, this makes it unnecessary to manually delete the values before erasing an entry from the map.
Also, if Object is the base class for whatever types you're going to be adding to the map, don't forget that it needs a virtual destructor. | unknown | |
d8805 | train | You are facing this issue because the default value of the max_input_length parameter is 50.
Below is the description given for this parameter in the documentation:
Limits the length of a single input, defaults to 50 UTF-16 code
points. This limit is only used at index time to reduce the total
number of characters per input string in order to prevent massive
inputs from bloating the underlying datastructure. Most use cases
won’t be influenced by the default value since prefix completions
seldom grow beyond prefixes longer than a handful of characters.
If you enter the below string, which is exactly 50 characters, then you will get a response:
PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE
Now if you add one or two more characters to the above string, it will not return the result:
PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE L
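You can verify the character counts yourself; here is a quick Python check (purely illustrative):

```python
s = "PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE"
print(len(s))         # exactly 50, right at the default max_input_length
print(len(s + " L"))  # 52, past the limit, so the full input is not indexed
```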
You can keep this default behaviour, or you can update your index mapping with an increased value for the max_input_length parameter and reindex your data.
{
"mappings": {
"dynamic": "false",
"properties": {
"namesuggest": {
"type": "completion",
"analyzer": "keyword_lowercase_analyzer",
"preserve_separators": true,
"preserve_position_increments": true,
"max_input_length": 100,
"contexts": [
{
"name": "searchable",
"type": "CATEGORY"
}
]
}
}
},
"settings": {
"index": {
"mapping": {
"ignore_malformed": "true"
},
"refresh_interval": "5s",
"analysis": {
"analyzer": {
"keyword_lowercase_analyzer": {
"filter": [
"lowercase"
],
"type": "custom",
"tokenizer": "keyword"
}
}
},
"number_of_replicas": "0",
"number_of_shards": "1"
}
}
}
You will get a response like the one below after updating the index:
"suggest": {
"legalEntity": [
{
"text": "PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE LIMITED",
"offset": 0,
"length": 58,
"options": [
{
"text": "PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE LIMITED.",
"_index": "74071871",
"_id": "123",
"_score": 1,
"_source": {
"namesuggest": {
"input": [
"PRAXIS CONSULTING AND INFORMATION SERVICES PRIVATE LIMITED."
],
"contexts": {
"searchable": [
"*"
]
}
}
},
"contexts": {
"searchable": [
"*"
]
}
}
]
}
]
} | unknown | |
d8806 | train | You're trying to connect to a Web service. Web services may return either XML or JSON. The actual Socket library isn't needed nor used for this. This specific web service returns JSON, so what you need is a library that is capable of making a GET request to this URL and parsing the response. For Android, I recommend using Volley. You can include Volley in your project by adding this line to your build.gradle.
implementation 'com.android.volley:volley:1.0.0'
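Whichever client library you pick, the essential step is parsing the JSON body the service returns. A minimal sketch of that step in Python, with a made-up payload, just to illustrate the idea:

```python
import json

# hypothetical response body from the web service
body = '{"id": 42, "title": "Example Movie", "genres": ["drama"]}'

data = json.loads(body)     # parse the JSON text into a dict
print(data["title"])        # access individual fields
```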
This question might help you get started. | unknown | |
d8807 | train | Read the documentation:
xavier_normal_
Fills the input Tensor with values according to the method described in “Understanding the difficulty of training deep feedforward neural networks” - Glorot, X. & Bengio, Y. (2010), using a normal distribution.
kaiming_normal_
Fills the input Tensor with values according to the method described in “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification” - He, K. et al. (2015), using a normal distribution.
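As a rough numeric illustration of how the two differ (standard deviations per the cited papers; fan_in mode and ReLU gain assumed for Kaiming; the layer sizes are made up):

```python
import math

fan_in, fan_out = 256, 128
# Glorot & Bengio (2010): std depends on both fan_in and fan_out
xavier_std = math.sqrt(2.0 / (fan_in + fan_out))
# He et al. (2015): std depends only on fan_in (ReLU gain of sqrt(2))
kaiming_std = math.sqrt(2.0 / fan_in)
print(round(xavier_std, 4), round(kaiming_std, 4))
```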
The given equations are completely different. For more details you'll have to read those papers. | unknown | |
d8808 | train | The crypto module is based on OpenSSL that is not available in the browser
The crypto module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign and verify functions.
I suggest to use WebCryptographyApi that is available in all modern browsers. See an example here Angular JS Cryptography. pbkdf2 and iteration | unknown | |
d8809 | train | This is probably because when you're doing the splitting there is no :, so the function returns just one element, not 2. The likely cause is the last line: your last line has nothing but empty spaces. Like so:
>>> a = ' '
>>> a = a.strip()
>>> a
''
>>> a.split(':')
['']
As you can see, the list returned from .split is just a single empty string. So, just to show you a demo, this is a sample file:
a: b
c: d
e: f
g: h
We try to use the following script (val.txt is the name of the above file):
with open('val.txt', 'r') as v:
for line in v:
a, b = line.split(':')
print a, b
And this gives us:
Traceback (most recent call last):
a b
c d
File "C:/Nafiul Stuff/Python/testingZone/28_11_13/val.py", line 3, in <module>
a, b = line.split(':')
e f
ValueError: need more than 1 value to unpack
When trying to look at this through a debugger, the variable line becomes \n, and you can't split that.
However, a simple logical amendment would correct this problem:
with open('val.txt', 'r') as v:
for line in v:
if ':' in line:
a, b = line.strip().split(':')
print a, b
A: line.split(':') apparently returns a list with one element, not two.
Hence that's why it can't unpack the result into questions and answers. Example:
>>> line = 'this-line-does-not-contain-a-colon'
>>> question, answers = line.split(':')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: need more than 1 value to unpack
A: Try:
question, answers = line.split(':', maxsplit=1)
question, __, answers = line.partition(':')
Also in Python 3 you can do something else:
question, *many_answers = line.split(':')
which looks like:
temp = line.split(':')
question = temp[0]
many_answers = tuple(temp[1:])
A: There are a few possible reasons why this happens, as already covered in the other answers: an empty line, or maybe a line that only has a question and no colon. If you want to parse the lines even when they don't have a colon (for example, if some lines only have the question), you can change your split to the following:
questions, answers, garbage = (line+'::').split(':', maxsplit=2)
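To see how this padded split behaves, here is a quick check with two illustrative lines, one with a colon and one without:

```python
for line in ["what is 2+2?: 4", "question with no answer"]:
    # pad with '::' so unpacking never fails, even without a colon in the line
    questions, answers, garbage = (line + '::').split(':', maxsplit=2)
    print(repr(questions), repr(answers))
```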
This way, the values for questions and answers will be filled if they are there, and will be empty it the original file doesn't have them. For all intents and purposes, ignore the variable garbage. | unknown | |
d8810 | train | To solve your immediate issue you need to access the object's properties in the loop, not push the string into the array:
let variantOptions = [];
for (i = 1; i < 4; i++) {
var key = 'option' + i;
if (data.hasOwnProperty(key)) {
variantOptions.push(data[key]);
}
}
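The same pattern, building dynamic keys and collecting only the ones actually present, looks like this in Python, purely for illustration (the data is made up):

```python
data = {"option1": "Bronze", "option2": "Satin Gold",
        "option3": "600mm", "price": "550.00"}

variant_options = []
for i in range(1, 4):
    key = "option" + str(i)     # build the dynamic property name
    if key in data:             # only collect keys that exist
        variant_options.append(data[key])
print(variant_options)  # ['Bronze', 'Satin Gold', '600mm']
```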
That being said, if the options are going to be of a dynamic length it makes far more sense to return them as an array instead of individual numbered properties, assuming you have control over the response format.
You could change the response to this:
{
  inventory_management: null,
  inventory_policy: "deny",
  inventory_quantity: 0,
  old_inventory_quantity: 0,
  options: ['Bronze', 'Satin Gold', '600mm'],
  position: 1,
  price: "550.00"
}
Then the JS to access the options becomes a simple property accessor:
let variantOptions = data.options; | unknown | |
d8811 | train | Seems this was changed after the blog post was created.
NgForm is now exported as ngForm instead of form.
[ngFormModel]="form" #f="ngForm">
It's correct in the GitHub source but not in the blog post.
Full component according to the example in the blog post in Dart
@Component(selector: 'form-element')
@View(template: '''
<h1>form-element</h1>
<form (ngSubmit)="onSubmit()" [ngFormModel]="form" #f="ngForm">
<div>
<div class="formHeading">First Name</div>
<input type="text" id="firstName" ngControl="firstName">
<div class="errorMessage" *ngIf="f.form.controls['firstName'].touched && !f.form.controls['firstName'].valid">First Name is required</div>
</div>
<div>
<div class="formHeading">Street Address</div>
<input type="text" id="firstName" ngControl="streetAddress">
<div class="errorMessage" *ngIf="f.form.controls['streetAddress'].touched && !f.form.controls['streetAddress'].valid">Street Address is required</div>
</div>
<div>
<div class="formHeading">Zip Code</div>
<input type="text" id="zip" ngControl="zip">
<div class="errorMessage" *ngIf="f.form.controls['zip'].touched && !f.form.controls['zip'].valid">Zip code has to be 5 digits long</div>
</div>
<div>
<div class="formHeading">Address Type</div>
<select id="type" ngControl="type">
<option [value]="'home'">Home Address</option>
<option [value]="'billing'">Billing Address</option>
</select>
</div>
<button type="submit" [disabled]="!f.form.valid">Save</button>
<div>
<div>The form contains the following values</div>
<div>
{{payLoad}}
</div>
</div>
</form>
''')
class FormElement {
ControlGroup form;
String payLoad;
FormElement(FormBuilder fb) {
form = fb.group({
"firstName": ['', Validators.required],
"streetAddress": ['', Validators.required],
"zip": [
'',
Validators.compose([ZipValidator.validate])
],
"type": ['home']
});
}
void onSubmit() {
payLoad = JSON.encode(this.form.value);
}
} | unknown | |
d8812 | train | I figured out the answer. It does both, changing the target to ARC as well as the code to ARC, but it does not convert the -fobjc-arc flag in the build phase. | unknown | |
d8813 | train | You should use the CAST() or CONVERT() functions when you want to convert a varchar value to datetime.
https://msdn.microsoft.com/ru-ru/library/ms187928.aspx
A: I had some bad data in my input files for the DOB column. For example, DOB: 100-01-01, 29999-05-24; I tried to convert these dates using the CONVERT function, and it was returning an error.
So I first identified the records with invalid DOB dates using the query below, and after cleaning those up I could convert the DOB without any error:
SELECT * FROM MyTableTest WHERE ISDATE(dob) <> 1 | unknown | |
d8814 | train | This is the same homework question that I am trying to answer. It appears that your understanding of the question is somewhat incorrect. The type line for models should be:
models :: Prop -> [Valuation]
You are trying to return only the valuations that give an overall evaluation of True.
example = And p (Or q (Not q))
When 'example' is passed to the models function it produces the following list:
[[("p",True),("q",True)],[("p",True),("q",False)]]
This is because an And statement can only ever evaluate to True if both inputs are True. The second input of the And statement, (Or q (Not q)) always evaluates to True as an Or statement always evaluates to True if one of the inputs is True. This is always the case in our example as our Or can be either (Or True (Not True)) or (Or False (Not False)) and both will always evaluate to True. So as long as the first input evaluates to True then the overall And statement will evaluate to True.
Valuations has to be passed a list of Variables and you have a function that does this. If you pass it just the Var "p" then you get [[("p",True)],[("p",False)]] as those are the only two possible combinations.
What you (we) are wanting is to pass the Valuations list and only return those that evaluate to True. For example if your list was just Var "p" and your formula was Not p then only the second list, [("p",False)], would be returned as p only evaluates to True when you assign False to Not p as Not False = True.
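The filtering you describe, enumerate every valuation and keep only those where the formula evaluates to True, can be sketched in Python as an illustration of the idea (not the Haskell you'd hand in):

```python
from itertools import product

def models(formula, names):
    # every True/False assignment to the variables, kept only if it satisfies the formula
    valuations = [dict(zip(names, values))
                  for values in product([True, False], repeat=len(names))]
    return [v for v in valuations if formula(v)]

# And p (Or q (Not q)) expressed as a Python predicate
example = lambda v: v["p"] and (v["q"] or not v["q"])
print(models(example, ["p", "q"]))
```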
Almost forgot: all the possible combinations for And p (Or q (Not q)) -- remember (Or q (Not q)) is always True:
And False (Or False (Not False)) = And False True = False -- [("p",False),("q",False)]
And False (Or True (Not True)) = And False True = False -- [("p",False),("q",True)]
And True (Or False (Not False)) = And True True = True -- [("p",True),("q",False)]
And True (Or True (Not True)) = And True True = True -- [("p",True),("q",True)]
As you can see the bottom two lines are the ones the homework question says will be returned. I just haven't worked out how to finish it myself yet. I can get the full list of [Valuations] I just can't work out how to get only the True ones and discard the ones that evaluate False. | unknown | |
d8815 | train | You have a one-to-many relationship from type to language. So set the many parameter for the nested languages attribute within the TypeSchema to True, and it should work.
from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy
from flask_marshmallow import Marshmallow
app = Flask(__name__)
db = SQLAlchemy(app)
ma = Marshmallow(app)
class Language(db.Model):
id = db.Column(db.Integer, primary_key=True)
code = db.Column(db.String(2), nullable=False)
type_id = db.Column(db.Integer, db.ForeignKey('type.id'))
type = db.relationship('Type', backref='languages')
class Type(db.Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String, nullable=False)
class LanguageSchema(ma.SQLAlchemyAutoSchema):
class Meta:
model = Language
class TypeSchema(ma.SQLAlchemyAutoSchema):
class Meta:
model = Type
languages = ma.Nested(LanguageSchema, only=('code',), many=True)
with app.app_context():
db.drop_all()
db.create_all()
l = Language(code='en')
t = Type(
name='Grocery',
languages=[l]
)
db.session.add(t)
db.session.commit()
@app.route('/')
def index():
types = Type.query.all()
types_schema = TypeSchema(many=True)
types_data = types_schema.dump(types)
return jsonify(data=types_data)
{
"data": [
{
"id": 1,
"languages": [
{
"code": "en"
}
],
"name": "Grocery"
}
]
} | unknown | |
d8816 | train | Data types are extremely important - without them, there's no way to be sure what data is actually stored in the column. For example, a VARCHAR(255) could be anything - string, numeric, date/time... It makes things extremely difficult to find particular data if you can't guarantee what the data is, or any form of consistency.
Which means data typing also provides basic validation - can't stick letters into a column with a numeric or date/time data type...
The speed of data retrieval is also improved by using indexes on a column, but an index can not be used if you have to change the data type of the data to something else before the data can be compared.
It's a good thing to realize the difference between data and presentation. Take date field formatting: that's presentation, which can easily be taken care of if the data type is DATETIME (or converted to DATETIME).
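In code terms, the data/presentation distinction looks like this (Python used purely for illustration; the date is made up):

```python
from datetime import datetime

stored = datetime(2024, 3, 1, 14, 30)      # the data: an actual date/time value
print(stored.strftime("%Y-%m-%d"))         # one presentation of it
print(stored.strftime("%d/%m/%Y %H:%M"))   # another presentation of the same data
```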
A: Short answer: Speed, optimal data storage, Index, Search ability.
Your database can make better use of its internal workings when everything has the correct datatype. You can also make better use of special database functions.
If you want to retrieve all data from a certain month, you can use special database functions to group by month when using a datetime datatype, for instance. When using a varchar to store a date, you lose all of that functionality.
Most databases are able to search or retrieve data faster from a table when the index has been created on an INT datatype. So if you had created that storage field as a varchar, you would have lost performance.
A: Performance is one reason, but so far no one has talked about running these types of queries (which wouldn't be possible if all types were VARCHAR):
SELECT SUM(total) total FROM purchases WHERE status = 'PAID';
SELECT * FROM users WHERE DATE_ADD(last_login, INTERVAL 3 MONTH) < NOW(); | unknown | |
d8817 | train | Iterating through a collection of list items is not a good solution.
If you're interested in the graph API, then you can just return specific fields when getting a collection of list items.
GraphServiceClient graphClient = new GraphServiceClient( authProvider );
var queryOptions = new List<QueryOption>()
{
new QueryOption("expand", "fields(select=Name,Color,Quantity)")
};
var items = await graphClient.Sites["{site-id}"].Lists["{list-id}"].Items
.Request( queryOptions )
.GetAsync();
A: Resolved the issue by adding the <ViewFields> clause to the xml.
CamlQuery query = new CamlQuery();
query.ViewXml =
    "<View><Query><Where><Contains>" +
    "<FieldRef Name='Name' /><Value Type='Text'>House</Value>" +
    "</Contains></Where></Query>" +
    "<ViewFields>" +
    "<FieldRef Name='Ref' />" +
    "<FieldRef Name='Name' />" +
    "</ViewFields></View>"; | unknown | |
d8818 | train | You are missing the jquery-ui CSS file in your code
<link rel="stylesheet" type="text/css" href="http://code.jquery.com/ui/1.9.2/themes/base/jquery-ui.css"/>
Demo: Plunker
A: Complete working code would be.
</head>
<body>
<div id="draggableHelper" style="display:inline-block">
<div id="image"><img src="http://www.google.com.br/images/srpr/logo3w.png" /></div>
</div>
<script type="text/javascript">
$(document).ready(function(){
$('#image').resizable();
$('#image').draggable();
$('#image').resize(function(){
$(this).find("img").css("width","100%");
$(this).find("img").css("height","100%");
});
});
</script>
</body>
</html>
A: This will work for you.
<div id="draggableHelper" style="display:inline-block">
<div id="image"><img src="http://www.google.com.br/images/srpr/logo3w.png" /></div>
</div>
A: As the resizable handles only work on the right side and the bottom side, find the image borders by selecting the image in CSS and applying a border to it; then check the output and it will work perfectly. | unknown | |
d8819 | train | This should work
<TextView
android:layout_width="match_parent"
android:layout_height="60dp"
android:background="#008080"
android:layout_weight="4"
android:gravity="center"
android:layout_gravity="center" <--------
android:text="IS"
android:textColor="#FFFFFF"
android:textSize="22dp" /> | unknown | |
d8820 | train | Are you trying to get your App into Facebook App center? In your case, it seems that you don't have enough users to do so. Facebook review process states that "Once your app has enough high user ratings and engagement, you can be submit your App Details Page for App Center listing...", it means that you need to run your App for a while.
Per your question (but I don't think that that's your real issue), the eligibility requirements are (see the same link above):
Eligibility Requirements
These apps are eligible for the App Center when they reach high user engagement and user ratings:
A canvas page app with Login Dialog.
A mobile apps built for iOS or Android and uses Facebook Login SDK for iOS or Facebook Login SDK for Android (More)
A app that is a website and uses Facebook Login
These apps are not currently eligible for App Center:
A Page Tab app
A Desktop web game off of Facebook.com | unknown | |
d8821 | train | Try retrieving the date difference from the database as a number - like so:
select A, B, C, D,
((date1-date2) day to second) as differencedate,
(date1-date2) * 86400 as differenceseconds
from problem | unknown | |
d8822 | train | setState is async, so if you console.log right after your click event, the same (old) state will still be there.
If you want to check the updated state, you can console.log it above render.
render() {
console.log(this.state)
return (
<div className='App'>
<h1>Hello world!!!!</h1>
<Person
name={this.state.persons[0].name}
age={this.state.persons[0].age}
/>
<Person
name={this.state.persons[1].name}
age={this.state.persons[1].age}
>
i'm going to college
</Person>
<Person
name={this.state.persons[2].name}
age={this.state.persons[2].age}
/>
<button onClick={this.changeNameHandler}>Switch Names</button>
</div>
);
// return React.createElement(
// 'div',
// { className: 'App' },
// React.createElement('h1', null, 'hope this works!!')
// );
}
A: try this:
changeNameHandler = () => {
this.setState({
persons: [
{ name: 'sagar', age: 22 },
{ name: 'nitin', age: 18 },
{ name: 'ankita-nanda', age: 91 }
]
});
setTimeout(() => {
console.log(this.state);
}, 0);
};
setState() is an async function, so it takes some fraction of time to run while the next line of code is executed.
As the state is not updated yet, you will get the previous state.
setTimeout() does the trick and runs after the state is updated. | unknown | |
d8823 | train | One possible way of accomplishing this would be to use ExpressJS with NodeJS to handle routing your endpoints. By doing this you can set up routes such as GET /media/movies/, fire an AJAX request to this route, then have it return JSON data that you can handle on your app.
Here's an example of ExpressJS routing in an app that I made:
var groups = require('../app/controllers/groups');
// find a group by email
app.post('/groups/find', groups.searchGroupsByEmail);
Here is a great tutorial that I used when getting started with Express. It walks through the entire process from installing Node.JS (which you've already done) to setting up the ExpressJS routing to handle incoming HTTP requests.
Creating a REST API using Node.js, Express, and MongoDB | unknown | |
d8824 | train | Use a shared AssemblyInfo file as described here: http://blogs.msdn.com/b/jjameson/archive/2009/04/03/shared-assembly-info-in-visual-studio-projects.aspx | unknown | |
d8825 | train | 1) dplyr Use mutate/across like this
library(dplyr)
so %>%
mutate(across(starts_with("inst"), ~ replace(., !grepl("FC", .), NA)))
giving:
inst_1 inst_2 inst_3 inst_4 inst_5
1 FC1 FC1 FC2 FC3 FC1
2 FC2 FC5 FC2 FC6 FC2
3 <NA> <NA> <NA> <NA> <NA>
4 <NA> <NA> <NA> <NA> <NA>
5 <NA> <NA> <NA> <NA> <NA>
2) Base R or using only base R:
ok <- startsWith(names(so), "inst")
repl <- function(x) replace(x, !grepl("FC", x), NA)
replace(so, ok, lapply(so[ok], repl))
3) collapse or using the collapse package with repl from (2)
library(collapse)
tfmv(so, startsWith(names(so), "inst"), repl)
4) data.table With data.table we define a vector of inst names and use it with repl from (2).
library(data.table)
DT <- as.data.table(so)
inst <- grep("^inst", names(so))
DT[, (inst) := lapply(.SD, repl), .SDcols = inst] | unknown | |
d8826 | train | Thank you for your replies.
A short-term solution was found in copying the HibernateUtil.class and pasting this file into the source, and setting the dialect in that file.
However, when trying to solve an unrelated problem, I found that there were several more classes and files I needed but couldn't find. At last I was able to find these missing files in a totally different directory (which was still in the repository and not checked out), which also included a HibernateUtil.class and hibernate.cfg.xml.
This problem seemed to be quite application-specific, but at last it is solved :) | unknown | |
d8827 | train | You need to use border-collapse: collapse on the table:
th {
border-bottom: 1px solid black;
}
td+td {
border-left: 1px solid black;
border-right: 1px solid black;
}
table {
border-collapse: collapse;
}
<table>
<tr>
<th>Datum</th>
<th>Entschuldigt?</th>
<th>Vermerkt von</th>
</tr>
<tr>
<td>item 1</td>
<td>item 2</td>
<td>item 3</td>
</tr>
<tr>
<td>item 1</td>
<td>item 2</td>
<td>item 3</td>
</tr>
</table>
A: Another Approach:
<table cellspacing="0">
A:
table {
border-collapse: collapse;
}
th, td {
padding: 5px;
text-align: center;
font-size: 20px;
border-left: 1px solid black;
}
th:nth-child(1), td:nth-child(1) {
border-left: none;
}
tr:nth-child(2){
border-top: 1px solid black;
}
<table>
<tr>
<th>Datum</th>
<th>Entschuldigt?</th>
<th>Vermerkt von</th>
</tr>
<tr>
<td>item 1</td>
<td>item 2</td>
<td>item 3</td>
</tr>
<tr>
<td>item 1</td>
<td>item 2</td>
<td>item 3</td>
</tr>
</table> | unknown | |
d8828 | train | I'm guessing you come from a JavaScript background.
In Dart, object keys cannot be accessed through the . dot notation.
Rather, they are accessed like arrays, with ['key_name'].
so that's why this line doesn't work
print(newValue.last_supported)
and this one does
print(newValue["last_supported"]);
Dot notation in Dart only works on class instances, not on a Map (which is similar to a JavaScript object).
Look at the following:
class User {
final String name;
final int age;
// other props;
User(this.name, this.age);
}
Now when you create a new user object you can access its public attributes with the dot notation
final user = new User("john doe", 20); // the new keyword is optional since Dart v2
// this works
print(user.name);
print(user.age);
// this doesn't work because user is an instance of User class and not a Map.
print(user['name']);
print(user['age']);
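Python draws the same line, as a rough analogy (this example is made up, not from the original answer):

```python
data = {"last_supported": "1.0.3"}   # a mapping: bracket access only
print(data["last_supported"])        # works
# print(data.last_supported)         # AttributeError: dicts have no dot access

class User:
    def __init__(self, name, age):
        self.name = name
        self.age = age

user = User("john doe", 20)          # a class instance: dot access works
print(user.name, user.age)
```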
For more about the new keyword you can read the v2 release notes here. | unknown | |
d8829 | train | From your question...
that the method paginate() works on BaseQuery which is also Query
I think this is where you're being confused. "Query" refers to the SQLAlchemy Query object. "BaseQuery" refers to the Flask-SQLALchemy BaseQuery object, which is a subclass of Query. This subclass includes helpers such as first_or_404() and paginate(). However, this means that a Query object does NOT have the paginate() function. How you actually build the object you are calling your "Query" object depends on whether you are dealing with a Query or BaseQuery object.
In this code, you are getting the SQLAlchemy Query object, which results in an error:
db.session.query(models.Post).paginate(...)
If you use the following code, you get the pagination you're looking for, because you are dealing with a BaseQuery object (from Flask-SQLAlchemy) rather than a Query object (from SQLAlchemy).
models.Post.query.paginate(...) | unknown | |
d8830 | train | You have a typo in the function imgw($divsize, $imgw):
list($width, $height) = getimagesize($imgw);
The correct code:
list($width, $height) = getimagesize($imgurl); | unknown | |
d8831 | train | The plots will close when the script is finished, even if you use block=False
Similar to the answer here, when running from the terminal you have to call plt.show at the end of the script to keep these plots open after you've finished. I made code similar to yours which works as intended; the plot simply refreshes at the end of the script. (I added a for loop with a 5-second delay just so you can see it's running.)
import numpy as np
import matplotlib.pyplot as plt
import time
if __name__ == '__main__':
plt.figure(figsize=(10, 10))
plt.plot(range(5), lw=2, label='Real')
plt.title('Prediction')
plt.legend(loc="best")
plt.show(block=False)
print("---Plot graph finish---")
for i in range(5):
print('waiting...{}'.format(i))
time.sleep(1)
print('code is done')
plt.show() | unknown | |
d8832 | train | There is a property called SelectionStart, representing the position of the caret in the textbox. Its value is valid even when nothing is selected.
Loop left from there until you encounter a word boundary or zero. Then loop right until you encounter a word boundary or Len-1. Use the two offsets you got this way to extract the word around the caret with the Substring function.
A: Using this code will solve your problem. If it works for you, then please mark it as solved so that future readers get the confirmation. Ask me in the comments below if you have any questions.
Private Sub Textbox1_MouseDown(sender As Object, e As MouseEventArgs) Handles Textbox1.MouseDown
If e.Button = MouseButtons.Right Then
If Textbox1.TextLength - Textbox1.Text.IndexOf(" ", 0) > Textbox1.SelectionStart Then
Textbox1.Select((Textbox1.Text.LastIndexOf(" ", Textbox1.SelectionStart) + 1), (Textbox1.Text.IndexOf(" ", Textbox1.SelectionStart) - (Textbox1.Text.LastIndexOf(" ", Textbox1.SelectionStart) + 1)))
Else
Textbox1.Select(Textbox1.Text.LastIndexOf(" ", Textbox1.TextLength) + 1, Textbox1.TextLength - Textbox1.Text.LastIndexOf(" ", Textbox1.TextLength))
End If
End If
End Sub
Best of luck! | unknown | |
d8833 | train | You would need to restart MongoDB and/or data containers only if they are linked to container A. So in your case, the steps I would follow:
*Start your 'Data Volume Container' with a volume mounted for the data.
*Start your MongoDB container(s), using the docker option --volumes-from to have access to the data of the first container.
*Start your application container A, linking to the MongoDB container(s).
There is no need of restarting the data or MongoDB containers because they are not linked directly to your container application. | unknown | |
d8834 | train | I fixed this on my own; it's not a general issue, but I'll explain here. I first needed to define an EncryptionProvider, which is my class that implements PHPSecLib\AES. So this is correct:
public function boot()
{
new EncryptionInit();
EncryptionProvider::setCrypter(EncryptionProvider::AES, $this->container->getParameter('uapi_core.security_key'), $x = array());
$connection = $this->container->get('doctrine')->getConnection();
if(!Type::hasType('encrypted'))
{
Type::addType('encrypted', 'Uapi\CoreBundle\System\DBALType\EncryptedType');
$connection->getDatabasePlatform()->registerDoctrineTypeMapping('encrypted', 'encrypted');
}
} | unknown | |
d8835 | train | Short answer: I couldn't.
I had to do something similar, and ended up having to extend various commands and set the "current" command as part of my "_execute" call (so I would now call _execute(selection, pipeline, originalCommand) for my command).
A: You cannot find out what the original command is. The assumption is that an extending command is specific to the command it extends and so would know which one it is extending. When creating generic extensions that work on different commands, I can see how it might be useful to know what the configuration would be.
Maybe you could add this as an Enhancement Request?
To work around it for now, you could create a base command with your logic - which takes the name of the command that it extends as a parameter. And then create specific classes for each command you wish to extend, which just call the base command and pass in the name.
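The shape of that workaround, sketched in Python (the class and command names here are purely illustrative):

```python
class BaseExtendingCommand:
    """Carries the shared logic; told at construction which command it extends."""
    def __init__(self, extended_command):
        self.extended_command = extended_command

    def execute(self):
        # shared logic can now branch on (or log) the extended command's name
        return "extending " + self.extended_command

class TextUnderlineExtendingCommand(BaseExtendingCommand):
    def __init__(self):
        super().__init__("TextUnderline")

class TextStrikethroughExtendingCommand(BaseExtendingCommand):
    def __init__(self):
        super().__init__("TextStrikethrough")

print(TextUnderlineExtendingCommand().execute())   # extending TextUnderline
```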
To put it differently, create a BaseExtendingCommand with all of the required methods - and then both a TextUnderlineExtendingCommand and TextStrikethroughExtendingCommand which call the methods on BaseExtendingCommand (and pass in "TextUnderline" and "TextStrikethrough", respectively, as arguments) | unknown | |
d8836 | train | I don't think you should be unit testing at that level. I'm all in for unit tests, but there is a certain point where you need to stop.
Let's say that code is part of a ClientsLinqRepository, which in turn implements IClientsLinqRepository. You mock IClientsLinqRepository when you implement code that depends on it.
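The mocking side of that advice, sketched with Python's unittest.mock rather than a .NET mocking library (the consumer function is hypothetical):

```python
from unittest.mock import Mock

# hypothetical consumer that depends only on the repository interface
def count_active(repo):
    return sum(1 for c in repo.get_clients() if c["active"])

repo = Mock()                      # stands in for IClientsLinqRepository
repo.get_clients.return_value = [{"active": True}, {"active": False}]

print(count_active(repo))          # the unit test touches no real database
```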
While the above is perfectly valid, ClientsLinqRepository is an integration implementation. Pretty much like if you had IMessageSender and you implemented MailSender. Here you have code whose main responsibility is integrating with a separate system; for you, that's the database.
Based on the above scenario, I suggest you do some focused integration tests on that class. So in that case you do want to hit the external system (database), and make sure that integration code is working appropriately with the external system. It'll allow you to quickly identify if anything in the code vs. the database is broken without dealing with the complexity of the rest of the system (which is a pain when trying to do integration tests at other levels).
Keep the focused integration tests separate from the unit tests, so you can run the amazingly fast unit tests as much as you want, and run integration test when changes are made to any of the integration pieces and every now and then. | unknown | |
d8837 | train | You should be able to do
redirection = @get('model')
@store.deleteRecord(redirection) | unknown | |
d8838 | train | This is the one-line method which is not very efficient for large matrices
reshape(diag(A(ix(:),iy(:))),[ny nx])
A clearer and more efficient method would be to use sub2ind. I've incorporated yuk's comment for situations (like yours) when ix and iy have the same number of elements:
newA = A(sub2ind(size(A),ix,iy));
Also, don't confuse x and y for i and j in notation - j and x generally represent columns and i and y represent rows.
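As a cross-check outside MATLAB, the same pairwise pick (and the column-major linear indices that sub2ind computes under the hood) can be reproduced in NumPy with Fortran order:

```python
import numpy as np

A = np.arange(1, 13).reshape(4, 3, order="F")   # 4x3, column-major like MATLAB
ix = np.array([1, 3])                           # 1-based row subscripts
iy = np.array([2, 3])                           # 1-based column subscripts

# sub2ind(size(A), ix, iy) would produce these column-major linear indices:
lin = (iy - 1) * A.shape[0] + ix                # -> [5, 11]
picked = A.flatten(order="F")[lin - 1]

# identical to pairing the subscripts directly (0-based in NumPy)
assert (picked == A[ix - 1, iy - 1]).all()
print(picked)
```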
A: A faster way is to use linear indexing directly without calling SUB2IND:
output = A( size(A,1)*(iy-1) + ix )
... think of the matrix A as a 1D array (column-wise order) | unknown | |
d8839 | train | It’s not possible to trap such calls with a single method, in CPython at least. While you can define special methods like __getattr__, even defining the low-level __getattribute__ on a metaclass doesn’t work because the interpreter scans for special method names to build a fast lookup table of the sort used by types defined in C. Anything that doesn’t actually possess such individual methods will not respond to the interpreter’s internal calls (e.g., from repr).
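That lookup rule is easy to demonstrate with a generic stand-in class (not the asker's myList):

```python
class Wrapper:
    def __init__(self, data):
        self._data = data

    def __getattr__(self, name):        # consulted only when normal lookup fails
        print("trapped:", name)
        return getattr(self._data, name)

w = Wrapper([1, 2, 3])
w.append(4)      # prints "trapped: append" -- ordinary attributes are forwarded
text = repr(w)   # no "trapped" message: repr() goes straight to type(w).__repr__
```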
What you can do is dynamically generate (or augment, as below) a class that has whatever special methods you want. The readability of the result might suffer too much for this approach to be worthwhile:
def mkfwd(n):
def fwd(self,*a): return getattr(self.toList(),n)(*a)
return fwd
for m in "repr","str":
m="__%s__"%m
setattr(myList,m,mkfwd(m)) | unknown | |
d8840 | train | This works quite well with a gesture on the checkbox image. Two Buttons didn't work since every tap went to both buttons.
struct TodayTodoView2: View {
@ObservedObject var todayDietModel = TodayDietModel()
func image(for state: Bool) -> Image {
return state ? Image(systemName: "checkmark.circle") : Image(systemName: "circle")
}
var body: some View {
NavigationView {
VStack {
List {
ForEach($todayDietModel.todayDietItems) { (item: Binding<TodayDietItem>) in
HStack{
self.image(for: item.value.isFinished).onTapGesture {
item.value.isFinished.toggle()
}
NavigationLink(destination: DietItemDetailView2(item: item)) {
Text("DietNamee \(item.value.dietName)")
.font(.system(size:13))
.fontWeight(.medium)
.foregroundColor(Color.black)
}
}
}
}
.background(Color.white)
}
}
}
} | unknown | |
d8841 | train | Have a look at NerdDinner.
http://nerddinnerbook.s3.amazonaws.com/Part11.htm
(use the DistanceBetween function)
Create the relevant functions in your db.
then call them in your c#
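For reference, the calculation such a DistanceBetween function typically performs is the haversine great-circle distance; here is a Python sketch with made-up sample points (not the NerdDinner SQL itself):

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))          # mean Earth radius in meters

# keep only the venues within 500 m of the user (hypothetical coordinates)
venues = [("A", 51.5007, -0.1246), ("B", 51.5101, -0.1340)]
user_lat, user_lon = 51.5033, -0.1195
nearby = [name for name, lat, lon in venues
          if distance_m(user_lat, user_lon, lat, lon) <= 500]
print(nearby)
```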
A: There is an answer already, although it uses a mile (1609.344 meters) instead of 500 meters, but the calculation used is likely to fit your bill. It is using SQL, not LINQ, but you can easily convert it to a LINQ expression, especially if you have a tool like LINQPad. | unknown | |
d8842 | train | Try pd.ExcelFile:
xls = pd.ExcelFile('path_to_file.xls')
df1 = pd.read_excel(xls, 'Sheet1')
df2 = pd.read_excel(xls, 'Sheet2')
As noted by @HaPsantran, the entire Excel file is read in during the ExcelFile() call (there doesn't appear to be a way around this). This merely saves you from having to read the same file in each time you want to access a new sheet.
Note that the sheet_name argument to pd.read_excel() can be the name of the sheet (as above), an integer specifying the sheet number (eg 0, 1, etc), a list of sheet names or indices, or None. If a list is provided, it returns a dictionary where the keys are the sheet names/indices and the values are the data frames. The default is to simply return the first sheet (ie, sheet_name=0).
If None is specified, all sheets are returned, as a {sheet_name:dataframe} dictionary.
A: If:
*
*you want multiple, but not all, worksheets, and
*you want a single df as an output
Then, you can pass a list of worksheet names, which you could populate manually:
import pandas as pd
path = "C:\\Path\\To\\Your\\Data\\"
file = "data.xlsx"
sheet_lst_wanted = ["01_SomeName","05_SomeName","12_SomeName"] # tab names from Excel
### import and compile data ###
# read all sheets from list into an ordered dictionary
dict_temp = pd.read_excel(path+file, sheet_name= sheet_lst_wanted)
# concatenate the ordered dict items into a dataframe
df = pd.concat(dict_temp, axis=0, ignore_index=True)
OR
A bit of automation is possible if your desired worksheets have a common naming convention that also allows you to differentiate from unwanted sheets:
# substitute following block for the sheet_lst_wanted line in above block
import xlrd
# string common to only worksheets you want
str_like = "SomeName"
### create list of sheet names in Excel file ###
xls = xlrd.open_workbook(path+file, on_demand=True)
sheet_lst = xls.sheet_names()
### create list of sheets meeting criteria ###
sheet_lst_wanted = []
for s in sheet_lst:
# note: the following conditional is based on my sheets ending with the string defined in str_like
if s[-len(str_like):] == str_like:
sheet_lst_wanted.append(s)
else:
pass
A: You can read all the sheets using the following lines
import pandas as pd
file_instance = pd.ExcelFile('your_file.xlsx')
main_df = pd.concat([pd.read_excel('your_file.xlsx', sheet_name=name) for name in file_instance.sheet_names] , axis=0)
A: You can also use the index for the sheet:
xls = pd.ExcelFile('path_to_file.xls')
sheet1 = xls.parse(0)
will give the first worksheet. For the second worksheet:
sheet2 = xls.parse(1)
A: You could also specify the sheet name as a parameter:
data_file = pd.read_excel('path_to_file.xls', sheet_name="sheet_name")
will upload only the sheet "sheet_name".
A: df = pd.read_excel('FileName.xlsx', 'SheetName')
This will read sheet SheetName from file FileName.xlsx
A: There are various options depending on the use case:
*
*If one doesn't know the sheet names.
*If the sheet name is not relevant.
*If one knows the name of the sheets.
Below we will look closely at each of the options.
See the Notes section for information such as finding out the sheet names.
Option 1
If one doesn't know the sheets names
# Read all sheets in your File
df = pd.read_excel('FILENAME.xlsx', sheet_name=None)
# Prints all the sheets name in an ordered dictionary
print(df.keys())
Then, depending on the sheet one wants to read, one can pass each of them to a specific dataframe, such as
sheet1_df = pd.read_excel('FILENAME.xlsx', sheet_name=SHEET1NAME)
sheet2_df = pd.read_excel('FILENAME.xlsx', sheet_name=SHEET2NAME)
Option 2
If the name is not relevant and all one cares about is the position of the sheet. Let's say one wants only the first sheet
# Read all sheets in your File
df = pd.read_excel('FILENAME.xlsx', sheet_name=None)
sheet1 = list(df.keys())[0]
Then, depending on the sheet name, one can pass it to a specific dataframe, such as
sheet1_df = pd.read_excel('FILENAME.xlsx', sheet_name=SHEET1NAME)
Option 3
Here we will consider the case where one knows the name of the sheets.
For the examples, one will consider that there are three sheets named Sheet1, Sheet2, and Sheet3. The content in each is the same, and looks like this
0 1 2
0 85 January 2000
1 95 February 2001
2 105 March 2002
3 115 April 2003
4 125 May 2004
5 135 June 2005
With this, depending on one's goals, there are multiple approaches:
*
*Store everything in the same dataframe. One approach would be to concat the sheets as follows
sheets = ['Sheet1', 'Sheet2', 'Sheet3']
df = pd.concat([pd.read_excel('FILENAME.xlsx', sheet_name = sheet) for sheet in sheets], ignore_index = True)
[Out]:
0 1 2
0 85 January 2000
1 95 February 2001
2 105 March 2002
3 115 April 2003
4 125 May 2004
5 135 June 2005
6 85 January 2000
7 95 February 2001
8 105 March 2002
9 115 April 2003
10 125 May 2004
11 135 June 2005
12 85 January 2000
13 95 February 2001
14 105 March 2002
15 115 April 2003
16 125 May 2004
17 135 June 2005
Basically, this is how pandas.concat works.
*Store each sheet in a different dataframe (let's say, df1, df2, ...)
sheets = ['Sheet1', 'Sheet2', 'Sheet3']
for i, sheet in enumerate(sheets):
globals()['df' + str(i + 1)] = pd.read_excel('FILENAME.xlsx', sheet_name = sheet)
[Out]:
# df1
0 1 2
0 85 January 2000
1 95 February 2001
2 105 March 2002
3 115 April 2003
4 125 May 2004
5 135 June 2005
# df2
0 1 2
0 85 January 2000
1 95 February 2001
2 105 March 2002
3 115 April 2003
4 125 May 2004
5 135 June 2005
# df3
0 1 2
0 85 January 2000
1 95 February 2001
2 105 March 2002
3 115 April 2003
4 125 May 2004
5 135 June 2005
Notes:
*
*If one wants to know the sheets names, one can use the ExcelFile class as follows
sheets = pd.ExcelFile('FILENAME.xlsx').sheet_names
[Out]: ['Sheet1', 'Sheet2', 'Sheet3']
*In this case one is assuming that the file FILENAME.xlsx is on the same directory as the script one is running.
*
*If the file is in a folder of the current directory called Data, one way would be to use r'./Data/FILENAME.xlsx' create a variable, such as path as follows
path = r'./Data/Test.xlsx'
df = pd.read_excel(r'./Data/FILENAME.xlsx', sheet_name=None)
*This might be a relevant read.
A: There are a few options:
Read all sheets directly into an ordered dictionary.
import pandas as pd
# for pandas version >= 0.21.0
sheet_to_df_map = pd.read_excel(file_name, sheet_name=None)
# for pandas version < 0.21.0
sheet_to_df_map = pd.read_excel(file_name, sheetname=None)
Read the first sheet directly into dataframe
df = pd.read_excel('excel_file_path.xls')
# this will read the first sheet into df
Read the excel file and get a list of sheets. Then chose and load the sheets.
xls = pd.ExcelFile('excel_file_path.xls')
# Now you can list all sheets in the file
xls.sheet_names
# ['house', 'house_extra', ...]
# to read just one sheet to dataframe:
df = pd.read_excel(file_name, sheet_name="house")
Read all sheets and store it in a dictionary. Same as first but more explicit.
# to read all sheets to a map
sheet_to_df_map = {}
for sheet_name in xls.sheet_names:
sheet_to_df_map[sheet_name] = xls.parse(sheet_name)
# you can also use sheet_index [0,1,2..] instead of sheet name.
Thanks @ihightower for pointing it out way to read all sheets and @toto_tico,@red-headphone for pointing out the version issue.
sheetname : string, int, mixed list of strings/ints, or None, default 0
Deprecated since version 0.21.0: Use sheet_name instead Source Link
A: Yes unfortunately it will always load the full file. If you're doing this repeatedly probably best to extract the sheets to separate CSVs and then load separately. You can automate that process with d6tstack which also adds additional features like checking if all the columns are equal across all sheets or multiple Excel files.
import d6tstack
c = d6tstack.convert_xls.XLStoCSVMultiSheet('multisheet.xlsx')
c.convert_all() # ['multisheet-Sheet1.csv','multisheet-Sheet2.csv']
See d6tstack Excel examples
A: If you have saved the excel file in the same folder as your python program (relative paths) then you just need to mention sheet number along with file name.
Example:
data = pd.read_excel("wt_vs_ht.xlsx", "Sheet2")
print(data)
x = data.Height
y = data.Weight
plt.plot(x,y,'x')
plt.show()
A: pd.read_excel('filename.xlsx')
by default read the first sheet of workbook.
pd.read_excel('filename.xlsx', sheet_name = 'sheetname')
read the specific sheet of workbook and
pd.read_excel('filename.xlsx', sheet_name = None)
read all the worksheets from excel to pandas dataframe as a type of OrderedDict means nested dataframes, all the worksheets as dataframes collected inside dataframe and it's type is OrderedDict.
A: If you are interested in reading all sheets and merging them together. The best and fastest way to do it
sheet_to_df_map = pd.read_excel('path_to_file.xls', sheet_name=None)
mdf = pd.concat(sheet_to_df_map, axis=0, ignore_index=True)
This will convert all the sheet into a single data frame m_df | unknown | |
d8843 | train | With create-react-app you don't have to use the dotenv library or anything like that. To read from a .env file, all you have to do is prefix the variable names with REACT_APP_. For example:
REACT_APP_PORT=5432
REACT_APP_TEST=911
REACT_APP_WEATHER=12345678
create-react-app will automatically make it available to you during the build time. On your react app, from anywhere, you can console.log this:
console.log(process.env.REACT_APP_PORT); //5432 | unknown | |
d8844 | train | Most probably you are missing the package name when running the class.
You should be executing the main class from the src directory as:
D:\sudhanshu\documents\netbeansprojects\Firstapp\src> java firstapp.First
A: If your package is : somepackage,
And you have saved your .java file at: d:\sudhanshu\documents\netbeansprojects\Firstapp\src\firstapp
then in command prompt, go to the above directory and compile using: -
javac -d . First.java // Don't use javac First.java
Then your class file will be created inside the folder somepackage inside the above path..
i.e. your class file will be at d:\sudhanshu\documents\netbeansprojects\Firstapp\src\firstapp\somepackage\First.class
Then you should go to the directory where your .java file is saved, and run:
java somepackage.First
from the d:\sudhanshu\documents\netbeansprojects\Firstapp\src\firstapp directory.
If you want to run this class file from anywhere: -
Add d:\sudhanshu\documents\netbeansprojects\Firstapp\src\firstapp directory in your classpath..
Note: - You need to add the directory containing your package directory to classpath..
A: - If you are able to create a .class file then I think you have successfully installed your Java.
Tip:
Always do a java -version on the command prompt, and if it returns with the version of JDK installed on your system, then you have successfully installed your Java.
- Now I would advise you to execute the file from the same directory where your class file is located.
java First | unknown | |
d8845 | train | You can place this rule in /website/js/.htaccess:
RewriteEngine On
RewriteBase /website/js/
RewriteRule ^(.*)$ /js/$1 [L]
A: You can change the current directory to something like this:
<?php
// Get current directory
$cur_dir = getcwd();
//change the dir
chdir("../js");
//do something here
//back to your main current dir if you have still something to do
chdir($cur_dir);
?> | unknown | |
d8846 | train | The error is because ASP.NET Core did not find any cookie that it could convert into a ClaimsPrincipal user.
As you mention, requests to "/.well-known/openid-configuration/jwks" are never made by the browser; instead they are made by the client and APIs on the backend to retrieve the signing keys. And in these requests, there is no cookie to authenticate with. | unknown | |
d8847 | train | This comment answers the question:
You have to close the GUI instance and start it again. | unknown | |
d8848 | train | A better approach would be to use context managers:
with open(inputfile, 'r') as inf:
with open(outputfile, 'w') as outf:
outf.write(inf.read())
close will then be called implicitly on leaving the with block even if an exception is encountered.
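Both files can also share a single with statement; a self-contained sketch (the paths here are placeholders created in a temp directory for the demo):

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
inputfile = os.path.join(workdir, "in.txt")     # placeholder paths
outputfile = os.path.join(workdir, "out.txt")

with open(inputfile, "w") as f:                 # create some sample input
    f.write("hello")

# one `with`, two context managers; both are closed on exit, even on error
with open(inputfile) as inf, open(outputfile, "w") as outf:
    outf.write(inf.read())

print(open(outputfile).read())                  # hello
```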
However, for that specific example, why not just copy the file? | unknown | |
d8849 | train | The accepted answer will do the job when the text is in a TextView. This is a more general answer, applicable both to the basic/happy scenario and to other, more complicated use cases.
There are situations when mixed-language text is to be used someplace other than inside a TextView. For instance, the text may be passed in a share Intent to Gmail or WhatsApp and so on. In such cases, you must use a combination of the following classes:
*
*BidiFormatter
*TextDirectionHeuristics
As quoted in the documentation, these are ...
Utility class[es] for formatting text for display in a potentially opposite-directionality context without garbling. The directionality of the context is set at formatter creation and the directionality of the text can be either estimated or passed in when known.
So for example, say you have a String that has a combination of English & Arabic, and you need the text to be
*
*right-to-left (RTL).
*always right-aligned, even if the sentence begins with English.
*English & Arabic words in the correct sequence and without garbling.
then you could achieve this using the unicodeWrap() method as follows:
String mixedLanguageText = ... // mixed-language text
if(BidiFormatter.getInstance().isRtlContext()){
Locale rtlLocale = ... // RTL locale
mixedLanguageText = BidiFormatter.getInstance(rtlLocale).unicodeWrap(mixedLanguageText, TextDirectionHeuristics.ANYRTL_LTR);
}
This would convert the string into RTL and align it to the right, if even one RTL-language character was in the string, and fall back to LTR otherwise. If you want the string to be RTL even if it is completely in, say, English (an LTR language), then you could use TextDirectionHeuristics.RTL instead of TextDirectionHeuristics.ANYRTL_LTR.
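For intuition, the "any RTL" part of that heuristic boils down to scanning for a strongly right-to-left character; sketched here with Python's unicodedata module (not the Android API):

```python
import unicodedata

def has_strong_rtl(text):
    # bidi classes R (e.g. Hebrew) and AL (Arabic letters) are strongly RTL,
    # which is what the ANYRTL_* heuristics look for
    return any(unicodedata.bidirectional(ch) in ("R", "AL") for ch in text)

print(has_strong_rtl("hello"))        # False
print(has_strong_rtl("hello مرحبا"))  # True
```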
This is the proper way of handling mixed-direction text in the absence of a TextView. Interestingly, as the documentation states,
Also notice that these direction heuristics correspond to the same types of constants provided in the View class for setTextDirection(), such as TEXT_DIRECTION_RTL.
Update:
I just found the Bidi class in Java which seems to do something similar. Look it up!
Further references:
1. Write text file mix between arabic and english.
2. Unicode Bidirectional Algorithm.
A: Try adding to your TextView:
android:textDirection="anyRtl"
For more reading:
http://developer.android.com/reference/android/view/View.html#attr_android:textDirection
A: I had the same issue and I am targeting API 16
My solution was very simple, I added to the beginning of the String "\u200F"
String mixedLanguageText = ... // mixed-language text
newText = "\u200F" + mixedLanguageText;
"\u200F" = Unicode Character 'RIGHT-TO-LEFT MARK' (U+200F)
A: The following code snippet demonstrates how to use unicodeWrap():
String mySuggestion = "15 Bay Street, Laurel, CA";
BidiFormatter myBidiFormatter = BidiFormatter.getInstance();
// The "did_you_mean" localized string resource includes
// a "%s" placeholder for the suggestion.
String.format(R.string.did_you_mean,
myBidiFormatter.unicodeWrap(mySuggestion)); | unknown | |
d8850 | train | It's highly discouraged to modify jars you depend on, simply because if you ever want to upgrade versions you'd have to modify the new jar as well.
In those situations you have these options:
*
*if it is an open source project, contribute to the project and correct the URL
*try and set the property from your code (this may not be possible in certain situations)
*try and extend the class you're trying to use and set the URL on the property you need (like the previous one, it may not be possible to do this)
*this should be your last resort: create your own project (from the original jar), make the changes you require, package it up and add it to your app. | unknown | |
d8851 | train | Use jQuery's .not()
$('#navigation li').not("#search").hover(function () {
// Show the sub menu
$('ul', this).stop(true,true).slideDown(300);
},
function () {
//hide its submenu
$('ul', this).stop(true,true).slideUp(200);
}); | unknown | |
d8852 | train | Create loadingview in viewdidload
override func viewDidLoad() {
super.viewDidLoad()
let spinner = UIActivityIndicatorView(activityIndicatorStyle: .gray)
spinner.activityIndicatorViewStyle = .whiteLarge
spinner.color = .red
spinner.startAnimating()
spinner.frame = CGRect(x: CGFloat(0), y: CGFloat(5), width: tableView.bounds.width, height: CGFloat(44))
self.tableView.tableFooterView = spinner
self.tableView.tableFooterView?.isHidden = true
}
and update willDisplay cell
func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) {
let lastSectionIndex = tableView.numberOfSections - 1
let lastRowIndex = tableView.numberOfRows(inSection: lastSectionIndex) - 1
if indexPath.row == self.sneakers.count - 1 && !didPageEnd {
pageNo += 1
getSneakerSearch()
}
}
Update the getSneakerSearch func:
...
if self.pageNo == 0{
commonClass.sharedInstance.startLoading()
}else{
self.tableView.tableFooterView?.isHidden = false
}
Alamofire.request(Constants.API.url("search"), method: .post, parameters: param, encoding: URLEncoding.httpBody, headers: nil).responseJSON {
(response:DataResponse<Any>) in
if self.pageNo == 0{
commonClass.sharedInstance.stopLoading()
}else{
self.tableView.tableFooterView?.isHidden = true
}
}
...
A: For center of you spinner to tableview you can follow these steps to do it. You don't need to provide height and width for spinner.
let spinner = UIActivityIndicatorView(activityIndicatorStyle: .gray)
spinner.activityIndicatorViewStyle = .whiteLarge
spinner.color = .red
spinner.startAnimating()
let topBarHeight = UIApplication.shared.statusBarFrame.size.height +
(self.navigationController?.navigationBar.frame.height ?? 0.0)
spinner.center=CGPointMake(self.tableView.center.x, self.tableView.center.y- topBarHeight); | unknown | |
d8853 | train | If <br> or <br /> don't work, then you probably can't.
Unfortunately, Bitbucket's documentation regarding table support is rather sparse and consists solely of the following example:
| Day | Meal | Price |
| --------|---------|-------|
| Monday | pasta | $6 |
| Tuesday | chicken | $8 |
However, that syntax looks like the rather common table syntax first introduced by PHP Markdown Extra and later popularized by GitHub, MultiMarkdown, and others.
PHP Markdown Extra's rules state:
You can apply span-level formatting to the content of each cell using regular Markdown syntax:
| Function name | Description |
| ------------- | ------------------------------ |
| `help()` | Display the help window. |
| `destroy()` | **Destroy your computer!** |
GitHub's spec plainly states:
Block-level elements cannot be inserted in a table.
And MultiMarkdown's rules state:
Cell content must be on one line only
In fact, notice that the syntax does not offer any way to define when one row ends and another row begins (unlike the header row, which includes a line to divide it from the body of the table). As you cannot define the division between rows, then the only way it can work is if each line is its own row. For that reason, a row cannot contain multiple lines of text.
Therefore, any text within a table cell must be inline text only, which can be represented on a single line (therefore the Markdown standard of two spaces followed by a newline won't work; neither will bullets as they would be block level list elements). Of course, a raw HTML <br> tag is inline text and would qualify as a way to insert line breaks within table cells. However, some Markdown implementations disallow all raw HTML as a security measure. If you are using such an implementation (which Bitbucket appears to be using), then it is simply not possible to include a line break in a table cell.
I realize some find the above frustrating and limiting. However, it is interesting to note the following advice in MultiMarkdown's documentation regarding tables:
MultiMarkdown table support is designed to handle most tables for most people; it doesn’t cover all tables for all people. If you need complex tables you will need to create them by hand or with a tool specifically designed for your output format. At some point, however, you should consider whether a table is really the best approach if you find MultiMarkdown tables too limiting.
The interesting suggestion is to consider using something other than a table if you can't do what you want with the rather limited table syntax. However, if a table really is the right tool, then you may need to create it by hand. As a reminder, the original rules state:
Markdown is not a replacement for HTML, or even close to it. Its
syntax is very small, corresponding only to a very small subset of
HTML tags. The idea is not to create a syntax that makes it easier
to insert HTML tags. In my opinion, HTML tags are already easy to
insert. The idea for Markdown is to make it easy to read, write, and
edit prose. HTML is a publishing format; Markdown is a writing
format. Thus, Markdown's formatting syntax only addresses issues that
can be conveyed in plain text.
For any markup that is not covered by Markdown's syntax, you simply
use HTML itself. There's no need to preface it or delimit it to
indicate that you're switching from Markdown to HTML; you just use the
tags.
Therefore, creating a table by hand requires using raw HTML for the entire table. Of course, for those implementations which disallow all raw HTML you are without that option. Thus, you are back to considering either non-table solutions or a way to format a table without any line breaks within cells. | unknown | |
d8854 | train | *
*You've initialized your app at /www/mysite.com but pointed your DocumentRoot at a different directory, /www/htdocs/mysitecom (and I'm assuming you meant rails new mysite.com).
*DocumentRoot should point to your app's public dir.
Change DocumentRoot to /www/mysite.com/public or wherever your app's public folder actually lives.
Make sure passenger is enabled (and quit using root):
hack $ sudo a2enmod passenger
hack $ sudo /etc/init.d/apache2 restart | unknown | |
d8855 | train | It seems you are using AlexNet, but on small images.
If you are using the AlexNet architecture on smaller images, you need to resize the images or change the Pool and Stride hyperparameters, because with the current shape something funky happens in conv5, affecting the architecture. | unknown | |
d8856 | train | The correct way, not violating the strict aliasing rule, would be to alias the float with a char array, which is allowed by the C standard.
union U {
float y;
char c[sizeof (float)];
};
This way you will be able to access the individual bytes of float y using the c array, and convert them into binary/hex using the very simple algorithm that can easily be found around (I will leave it up to you, as it is your assignment). | unknown | |
d8857 | train | I'm not 100% sure about what you are trying to achieve, but you can pass data between forms. So, for example, you can do something like:
Public Class Form1
Private Sub Button1_click(...) Handles Button1.Click
Dim newForm2 as New Form2()
newForm2.stringText = ""
If newForm2.ShowDialog() = DialogResult.OK Then
Button1.Text = newForm2.stringText
End If
End Sub
End Class
And in Form2 you have
Public Class Form2
Public stringText As String 'must be Public so Form1 can access it
Private Sub changeStringText()
'your method to change your data
Me.DialogResult = DialogResult.OK 'this will close form2
End Sub
End Class
I hope this is what you need, if not let me know
A: Thanks for your answer and comment. So I declared the wrong class for the parentform, meaning in Form2 it needs to be "parentform As Form1":
Public Sub New(parentform As Form1)
InitializeComponent()
MessageBox.Show(parentform.Button1.Text)
End Sub
and yes, I need to skip the "shared" in the ChangeText:
Public Sub ChangeText(newtext As String)
Me.Button1.Text=newtext
End Sub
This way it worked for me. | unknown | |
d8858 | train | try np.where with str.contains and case=False to ignore the case.
and $ to only match pgk at the end of a string.
df['check'] = np.where(df['condition'].str.contains('pgk$',case=False), True,False)
print(df)
condition count check
0 merged_pgk 10 True
1 merged_Pgk 3 True
2 merged_pgk_stim 12 False
3 merged_Scp1 5 False
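Since Series.str.contains already returns booleans, the np.where wrapper is optional; a runnable check of the anchored pattern:

```python
import pandas as pd

df = pd.DataFrame({"condition": ["merged_pgk", "merged_Pgk",
                                 "merged_pgk_stim", "merged_Scp1"]})
# `$` anchors the match to the end of the string; case=False ignores capitalization
df["check"] = df["condition"].str.contains("pgk$", case=False)
print(df["check"].tolist())   # [True, True, False, False]
```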
A: You can use the endswith function:
df[df['condition'].str.lower().str.endswith('pgk')] | unknown | |
d8859 | train | Remove whitespace:
for (int j = 0; j < hex1.Length; j++)
{
string fieldString = hex1[j].Trim();
if(string.IsNullOrWhiteSpace(fieldString)) throw ... // or other error handling
bytes1[j] = Convert.ToByte(fieldString, 16);
Should help...
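The same clean-then-convert idea, shown in Python terms for illustration only (the original is C#, and the sample input is made up):

```python
raw = "1A, 2B,\r\n3C, , 4D"

# normalize line breaks to separators, drop empty/whitespace-only fields,
# then parse each remaining token as base-16
bytes_out = [int(tok, 16)
             for tok in raw.replace("\r\n", ",").split(",")
             if tok.strip()]
print(bytes_out)   # [26, 43, 60, 77]
```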
A: change hex.Split(','); to
hex.Split(",\r\n".ToCharArray(), StringSplitOptions.RemoveEmptyEntries); | unknown | |
d8860 | train | React updates a component whenever any state or props variable changes.
So, to prevent a re-render conditionally, you can use this lifecycle function (returning false means avoiding a re-render):
class component
shouldComponentUpdate(nextProps, nextState) {
if (this.state.someVariable !== nextState.someVariable ||
this.props.someVariable !== nextProps.someVariable) {
return false;
}
return true;
}
functional component
React.memo(MyComponent, (prevProps, nextProps) => {
// return true to skip the re-render (the props count as equal),
// return false to let the component update
return prevProps.someVariable === nextProps.someVariable;
}) | unknown | |
d8861 | train | I'm experiencing the same issue since the Chrome 20 update. This happens on Windows XP and Mac OS X 10.6.8.
Safari and Mobile Safari (which share the WebKit engine with Chrome) work perfectly, like Firefox and IE.
My css code is exactly like yours.
Looking in the inspector it seems that the font doesn't get downloaded.
Sometimes, while navigating different pages (that share the same external CSS file) in my website, the font loads and gets displayed properly.
Still trying to solve this...
EDIT:
I managed to solve this issue.
I don't know why, but using this worked:
http://www.fontsquirrel.com/fontface/generator/
I uploaded my font, got the css and converted files, uploaded them to my server and replaced font-face declaration...bling! It works! Hope that works for you too!
A: It's working now, I think Google has updated the browser.
A: Since Chrome was updated about a week ago, you may try using an older version to find out whether it's a bug (I myself didn't notice any problems). Get one at http://www.oldapps.com/google_chrome.php.
Also check if you're using this font in addition to other font-related CSS values (if so, deactivate them). There were some problems in the past which have actually been solved, but you never know...
A: First: convert your font using this service, as Mr Stefano suggests:
Then use this CSS code to use your font in your project:
@font-face {
font-family: 'aljazeeraregular';
src: url('aljazeera-webfont.eot');
src: url('aljazeera-webfont.eot?#iefix') format('embedded-opentype'),
url('aljazeera-webfont.woff') format('woff'),
url('aljazeera-webfont.ttf') format('truetype'),
url('aljazeera-webfont.svg') format('svg');
font-weight: normal;
font-style: normal;
}
body {
background-color: #eaeaea;
font-family: 'Aljazeera';
font-size: 14px;
}
Note that when you call font-family in your site you have to use its original name, not the one you declared in @font-face. | unknown | |
d8862 | train | I don't have an exact solution to your problem but in terms of measuring the width of text, if you have a reference to the Font object for the font being used in your field, you can call Font.getAdvance() to get the width of the specified text in pixels. This might help if you have to insert manual breaks in your text. | unknown | |
d8863 | train | *
*Set the emulator to be charging.
*Make sure Settings - Developer options - Stay awake when charging is enabled (default).
PS: Click Settings - System - About - Build number quickly to enable Developer options. | unknown | |
d8864 | train | David Gross's answer above worked for me, although I am using the option of colour. Here's what my code looks like and the steps I took to get it to work.
1) Copy an unedited version of product_filters.rb to lib/product_filters.rb
2) Initialise it: in initializers/spree.rb, add in:
require 'product_filters'
# Spree.config do |config| etc........
3) Add this code to product_filters.rb:
def ProductFilters.option_with_values(option_scope, option, values)
# get values IDs for Option with name {@option} and value-names in {@values} for use in SQL below
option_values = Spree::OptionValue.where(:presentation => [values].flatten).joins(:option_type).where(OptionType.table_name => {:name => option}).pluck("#{OptionValue.table_name}.id")
return option_scope if option_values.empty?
option_scope = option_scope.where("#{Product.table_name}.id in (select product_id from #{Variant.table_name} v left join spree_option_values_variants ov on ov.variant_id = v.id where ov.option_value_id in (?))", option_values)
option_scope
end
# option scope
Spree::Product.add_search_scope :option_any do |*opts|
option_scope = Spree::Product.includes(:variants_including_master)
option_type = ProductFilters.colour_filter[:option]
opts.map { |opt|
# opt is an array => ['option-name', [value1, value2, value3, ...]]
option_scope = ProductFilters.option_with_values(option_scope, option_type, *opt)
}
option_scope
end
# colour option - object that describes the filter.
def ProductFilters.colour_filter
# Get an array of possible colours (option type of 'colour')
# e.g. returns ["Gold", "Black", "White", "Silver", "Purple", "Multicoloured"]
colours = Spree::OptionValue.where(:option_type_id => Spree::OptionType.find_by_name("colour")).order("position").map(&:presentation).compact.uniq
{
:name => "Colour",
:scope => :option_any,
:conds => nil,
:option => 'colour', # this is MANDATORY
:class => "colour",
:labels => colours.map { |k| [k, k] }
}
end
4) Add your new filter to app/models/spree/taxons.rb so it appears on the front end:
def applicable_filters
fs = []
# fs << ProductFilters.taxons_below(self)
## unless it's a root taxon? left open for demo purposes
fs << Spree::Core::ProductFilters.price_filter if Spree::Core::ProductFilters.respond_to?(:price_filter)
fs << Spree::Core::ProductFilters.brand_filter if Spree::Core::ProductFilters.respond_to?(:brand_filter)
fs << Spree::Core::ProductFilters.colour_filter if Spree::Core::ProductFilters.respond_to?(:colour_filter)
fs
end
That should be it. I hope that helps - let me know if I can help further. Unfortunately the Spree filtering docs are nonexistent so we have to make do.
A: I don't know exactly what Spree::Product.scope is doing, but try changing it to Spree::Product.add_search_scope. You are also missing an OptionType argument in option_with_values; you can use ProductFilters.metal_filter[:option].
Spree::Product.add_search_scope :option_any do |*opts|
option_scope = Spree::Product.includes(:variants_including_master)
option_type = ProductFilters.metal_filter[:option]
opts.map { |opt|
# opt is an array => ['option-name', [value1, value2, value3, ...]]
option_scope = ProductFilters.option_with_values(option_scope, option_type, *opt)
}
option_scope
end | unknown | |
d8865 | train | public class AuthorsAdapter extends ArrayAdapter<Entity> {
static class ViewHolder {
TextView textViewTitle;
TextView textViewEntitySummary;
ImageView imageViewPicture;
}
public AuthorsAdapter(Context context, LinkedList<Entity> entity) {
super(context, 0, entity);
}
@Override
public View getView( final int position, View convertView, final ViewGroup parent) {
final ViewHolder holder;
if (convertView == null) {
convertView = LayoutInflater.from(getContext()).inflate(R.layout.listview_search_author_template, parent, false);
holder = new ViewHolder();
holder.textViewTitle = (TextView) convertView.findViewById(R.id.textView_TitleTopLeft);
holder.textViewEntitySummary = (TextView) convertView.findViewById(R.id.textView_EntitySummary);
holder.imageViewPicture = (ImageView) convertView.findViewById(R.id.imageView_author);
convertView.setTag(holder);
} else{
holder = (ViewHolder) convertView.getTag();
}
if (!getItem(position).isImageResized()) {
holder.imageViewPicture.setLayoutParams(new LinearLayout.LayoutParams(
LinearLayout.LayoutParams.WRAP_CONTENT,
LinearLayout.LayoutParams.WRAP_CONTENT));
getItem(position).setIsImageResized(true);
} else if (getItem(position).isImageResized()) {
holder.imageViewPicture.setLayoutParams(new LinearLayout.LayoutParams(
LinearLayout.LayoutParams.WRAP_CONTENT,
LinearLayout.LayoutParams.MATCH_PARENT));
getItem(position).setIsImageResized(false);
} else {
holder.imageViewPicture.setLayoutParams(new LinearLayout.LayoutParams(
LinearLayout.LayoutParams.WRAP_CONTENT,
LinearLayout.LayoutParams.MATCH_PARENT));
}
holder.textViewTitle.setText(getItem(position).getTitle());
if (!getItem(position).getPictureURL().isEmpty())
Picasso.with(getContext()).load(getItem(position).getPictureURL()).into(holder.imageViewPicture);
holder.textViewEntitySummary.setText(getItem(position).getBiography());
holder.imageViewPicture.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if (!getItem(position).isImageResized()) {
holder.imageViewPicture.setLayoutParams(new LinearLayout.LayoutParams(
LinearLayout.LayoutParams.WRAP_CONTENT,
LinearLayout.LayoutParams.WRAP_CONTENT));
getItem(position).setIsImageResized(true);
} else if (getItem(position).isImageResized()) {
holder.imageViewPicture.setLayoutParams(new LinearLayout.LayoutParams(
LinearLayout.LayoutParams.WRAP_CONTENT,
LinearLayout.LayoutParams.MATCH_PARENT));
getItem(position).setIsImageResized(false);
} else {
holder.imageViewPicture.setLayoutParams(new LinearLayout.LayoutParams(
LinearLayout.LayoutParams.WRAP_CONTENT,
LinearLayout.LayoutParams.MATCH_PARENT));
}
}
});
return convertView;
}
A: OK, I found a workaround by referencing the
private LinkedList<Entity> entity;
public AuthorsAdapter(Context context, LinkedList<Entity> entity) {
super(context, 0, entity);
this.entity = entity;
}
in the constructor.
Then, instead of doing it like you said:
.... else{
holder = (ViewHolder) convertView.getTag();
}
if (!getItem(position).isImageResized()) {
holder.imageViewPicture.setLayoutParams(new LinearLayout.LayoutParams(
LinearLayout.LayoutParams.WRAP_CONTENT,
LinearLayout.LayoutParams.WRAP_CONTENT));
getItem(position).setIsImageResized(true);
} else if (getItem(position).isImageResized()) {
holder.imageViewPicture.setLayoutParams(new LinearLayout.LayoutParams(
LinearLayout.LayoutParams.WRAP_CONTENT,
LinearLayout.LayoutParams.MATCH_PARENT));
getItem(position).setIsImageResized(false);
} else {
holder.imageViewPicture.setLayoutParams(new LinearLayout.LayoutParams(
LinearLayout.LayoutParams.WRAP_CONTENT,
LinearLayout.LayoutParams.MATCH_PARENT));
} .....
I've made an enhanced for loop like this:
.... else{
holder = (ViewHolder) convertView.getTag();
}
for (Entity foo : entity) {
if (!foo.isImageResized()) {
holder.imageViewPicture.setLayoutParams(new LinearLayout.LayoutParams(
25,
25));
} else if (foo.isImageResized())
holder.imageViewPicture.setLayoutParams(new LinearLayout.LayoutParams(
LinearLayout.LayoutParams.MATCH_PARENT,
LinearLayout.LayoutParams.MATCH_PARENT));
} ....
Note that I made some changes in my XML layout, so now the image is right under the title :) and I made it match_parent to get a bigger image...
BTW, thanks a lot for the hint @Ammar aly, it really helped me. | unknown | |
d8866 | train | The property name is actually askedQuestion, and not askQuestion. | unknown | |
d8867 | train | To compare a given time to the current time:
if (strtotime($given_time) >= time() - 300) echo "You are online";
300 is the difference in seconds that you want to check. In this case, 5 minutes times 60 seconds.
If you want to compare two arbitrary times, use:
if (strtotime($timeA) >= strtotime($timeB) - 300) echo "You are online";
Be aware: this will fail if the times are on different dates, such as 23:58 Friday and 00:03 Saturday, since you're only passing the time as a variable. You'd be better off storing and comparing the Unix timestamps to begin with.
A: $difference = strtotime( $current_time ) - strtotime( $passed_time );
Now $difference holds the difference in time in seconds, so just divide by 60 to get the difference in minutes.
A: Use the DateTime class:
//use new DateTime('now') for current
$current_time = new DateTime('2013-10-11 21:07:35');
$passed_time = new DateTime('2013-10-11 21:02:37');
$interval = $current_time->diff($passed_time);
$diff = $interval->format("%i");
if($diff < 5){
echo "online";
}
A: $my_time = "3:25:00";
$time_diff = strtotime(strftime("%F") . ' ' .$my_time) - time();
if($time_diff < 0)
printf('Time exceeded by %d seconds', -$time_diff);
else
printf('Another %d seconds to go', $time_diff); | unknown | |
d8868 | train | Set CSS for your footer element:
.footer{
position: fixed;
bottom: 0;
left: 0;
width: 100%;
}
A: .footer{
position: fixed;
bottom: 0;
left: 0;
width: 100%;
}
works mostly fine! BUT, if you really want it to be sticky, you should use, for example, iScroll (http://iscrolljs.com). With iScroll you have only one area to scroll, and headers and footers can't be scrolled!
A: You have a viewport in which you can draw (layout) your page, and you can't draw outside of it. The scroll bar is a control/decoration on the window itself, and you can't cover it.
What you can do is avoid having a scrollbar on the window, and have one on your main content instead.
Set both the footer and the main content positions, and make the main content scrollable with overflow: scroll — that way the scroll bar is attached to the content div instead of the browser window.
The footer won't have the scroll bar next to it then, but there may be reserved space on the right. That is out of your control — it is up to the browser vendor.
It will look like this (I'm using IDs in place of Classes):
#content {
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 90%;
overflow: scroll;
}
#footer {
background: white;
position: fixed;
bottom: 0;
left: 0;
width: 100%;
height: 10%;
}
<div id="content">
This is the content area. It will have lots of vertical space so that it can scroll.<br>
<br>
a<br><br>
b<br><br>
c<br><br>
d<br><br>
e<br><br>
f<br><br>
g<br><br>
h<br><br>
i<br><br>
j<br><br>
k<br><br>
l<br><br>
m<br><br>
n<br><br>
o<br><br>
p<br><br>
</div>
<div id="footer">
This is the footer part and may have <em>the fine print</em> and/or navigation links; whatever you like.
</div>
... or see this fiddle demonstrating it. | unknown | |
d8869 | train | You seem to think that images are downloaded as part of the HTTP response containing the HTML page containing the <img> elements. This is not true. The browser obtains the HTML page, parses the HTML structure, encounters the <img> element and fires separate HTTP requests for each of them. It is always GET. The same applies to CSS/JS and other resources. Install a HTTP request debugger tool like Firebug and check the Net panel.
The actual problem is likely that the URL as defined in the src attribute is wrong. You're using a context-relative path (i.e. no leading slash and no domain in the src attribute, so it's fully dependent on the current request URL, the one in the browser address bar). Probably the HTML page got posted to a different context/folder, which caused the image to become unreachable from that point. Try using an absolute path or a domain-relative path.
<h:graphicImage value="#{request.contextPath}/image?fileId=#{bean.currentUser.photo.id}" />
If still in vain, then have a closer look at Firebug's analysis. | unknown | |
d8870 | train | The easiest way to hide the text part is probably to set the Width of the ComboBox:
<ComboBox Width="20" />
Otherwise, you could always define your own custom ControlTemplate and set the Template property to this one but this requires some more effort.
A: EDIT - To use the code below you will need to apply ComboBoxStyleWithoutText as the style on your ComboBox.
To achieve this I had to re-create a combo-box and edit the template itself whilst tweaking the individual styles to suit the colour scheme of our app.
This is the result
In the application
Please feel free to use the code below; it likely has some custom colours I've used from other dictionaries (change these to whatever suits your app).
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary Source="Shared.xaml" />
<ResourceDictionary Source="Colours.xaml" />
<ResourceDictionary Source="Brushes.xaml" />
</ResourceDictionary.MergedDictionaries>
<!-- SimpleStyles: ComboBox -->
<ControlTemplate x:Key="ComboBoxToggleButton"
TargetType="ToggleButton">
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition />
<ColumnDefinition Width="20" />
</Grid.ColumnDefinitions>
<Border x:Name="Border"
Grid.ColumnSpan="2"
CornerRadius="2"
Background="{StaticResource ScreenBackgroundBrush}"
BorderBrush="{StaticResource GridBorderBrush}"
BorderThickness="2" />
<Border Grid.Column="0"
CornerRadius="2,0,0,2"
Margin="1"
Background="{StaticResource ScreenBackgroundBrush}"
BorderBrush="{StaticResource GridBorderBrush}"
BorderThickness="1,1,1,1" />
<Path x:Name="Arrow"
Grid.Column="1"
Fill="{StaticResource SecondaryControlBrush}"
HorizontalAlignment="Center"
VerticalAlignment="Center"
Data="M 0 0 L 4 4 L 8 0 Z" />
</Grid>
<ControlTemplate.Triggers>
<Trigger Property="ToggleButton.IsMouseOver"
Value="true">
<Setter TargetName="Border"
Property="Background"
Value="{StaticResource DataGridHeaderRowBackgroundBrush}" />
</Trigger>
<Trigger Property="ToggleButton.IsChecked"
Value="true">
<Setter TargetName="Border"
Property="Background"
Value="{StaticResource DataGridHeaderRowBackgroundBrush}" />
</Trigger>
<Trigger Property="IsEnabled"
Value="False">
<Setter Property="Opacity"
Value="0.25" />
<Setter TargetName="Border"
Property="Background"
Value="{StaticResource DisabledBackgroundBrush}" />
<Setter TargetName="Border"
Property="BorderBrush"
Value="{StaticResource DisabledBorderBrush}" />
<Setter Property="Foreground"
Value="{StaticResource DisabledForegroundBrush}" />
<Setter TargetName="Arrow"
Property="Fill"
Value="{StaticResource DisabledForegroundBrush}" />
</Trigger>
</ControlTemplate.Triggers>
</ControlTemplate>
<ControlTemplate x:Key="ComboBoxTextBox"
TargetType="TextBox">
<Border x:Name="PART_ContentHost"
Focusable="False"
Background="{TemplateBinding Background}" />
</ControlTemplate>
<Style x:Key="ComboBoxStyleWithoutText" TargetType="{x:Type ComboBox}">
<Setter Property="SnapsToDevicePixels" Value="true" />
<Setter Property="OverridesDefaultStyle" Value="true" />
<Setter Property="ScrollViewer.HorizontalScrollBarVisibility" Value="Auto" />
<Setter Property="ScrollViewer.VerticalScrollBarVisibility" Value="Auto" />
<Setter Property="ScrollViewer.CanContentScroll" Value="true" />
<Setter Property="MinWidth" Value="0" />
<Setter Property="MinHeight" Value="20" />
<Setter Property="Background" Value="Green" />
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="ComboBox">
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="0"></ColumnDefinition>
<ColumnDefinition MinWidth="{DynamicResource {x:Static SystemParameters.VerticalScrollBarWidthKey}}" Width="0"/>
</Grid.ColumnDefinitions>
<ToggleButton Name="ToggleButton"
Template="{StaticResource ComboBoxToggleButton}"
Grid.Column="2"
Focusable="false"
IsChecked="{Binding Path=IsDropDownOpen,Mode=TwoWay,RelativeSource={RelativeSource TemplatedParent}}"
ClickMode="Press">
</ToggleButton>
<ContentPresenter Name="ContentSite"
IsHitTestVisible="False"
Content="{TemplateBinding SelectionBoxItem}"
ContentTemplate="{TemplateBinding SelectionBoxItemTemplate}"
ContentTemplateSelector="{TemplateBinding ItemTemplateSelector}"
Margin="3,3,3,3"
VerticalAlignment="Center"
HorizontalAlignment="Left" />
<TextBox x:Name="PART_EditableTextBox"
Style="{x:Null}"
Template="{StaticResource ComboBoxTextBox}"
HorizontalAlignment="Left"
VerticalAlignment="Center"
Margin="3,3,3,3"
Focusable="True"
Visibility="Hidden"
IsReadOnly="{TemplateBinding IsReadOnly}" />
<Popup Name="Popup"
Placement="Bottom"
IsOpen="{TemplateBinding IsDropDownOpen}"
AllowsTransparency="True"
Focusable="False"
PopupAnimation="Slide">
<Grid Name="DropDown"
SnapsToDevicePixels="True"
MinWidth="{TemplateBinding ActualWidth}"
MaxHeight="{TemplateBinding MaxDropDownHeight}">
<Border x:Name="DropDownBorder"
Background="{StaticResource TextBoxBrush}"
BorderThickness="2"
BorderBrush="{StaticResource SolidBorderBrush}" />
<ScrollViewer Margin="4,6,2,2"
SnapsToDevicePixels="True">
<StackPanel IsItemsHost="True"
KeyboardNavigation.DirectionalNavigation="Contained" />
</ScrollViewer>
</Grid>
</Popup>
</Grid>
<ControlTemplate.Triggers>
<Trigger Property="HasItems"
Value="false">
<Setter TargetName="DropDownBorder"
Property="MinHeight"
Value="95" />
</Trigger>
<Trigger Property="IsEnabled"
Value="false">
<Setter Property="Foreground"
Value="{StaticResource DisabledForegroundBrush}" />
</Trigger>
<Trigger Property="IsGrouping"
Value="true">
<Setter Property="ScrollViewer.CanContentScroll"
Value="false" />
</Trigger>
<Trigger SourceName="Popup"
Property="Popup.AllowsTransparency"
Value="true">
<Setter TargetName="DropDownBorder"
Property="CornerRadius"
Value="4" />
<Setter TargetName="DropDownBorder"
Property="Margin"
Value="0,2,0,0" />
</Trigger>
<Trigger Property="IsEditable"
Value="true">
<Setter Property="IsTabStop"
Value="false" />
<Setter TargetName="PART_EditableTextBox"
Property="Visibility"
Value="Visible" />
<Setter TargetName="ContentSite"
Property="Visibility"
Value="Hidden" />
</Trigger>
</ControlTemplate.Triggers>
</ControlTemplate>
</Setter.Value>
</Setter>
<Style.Triggers>
</Style.Triggers>
</Style>
<Style x:Key="{x:Type ComboBox}"
TargetType="ComboBox">
<Setter Property="SnapsToDevicePixels"
Value="true" />
<Setter Property="OverridesDefaultStyle"
Value="true" />
<Setter Property="ScrollViewer.HorizontalScrollBarVisibility"
Value="Auto" />
<Setter Property="ScrollViewer.VerticalScrollBarVisibility"
Value="Auto" />
<Setter Property="ScrollViewer.CanContentScroll"
Value="true" />
<Setter Property="MinWidth"
Value="120" />
<Setter Property="MinHeight"
Value="20" />
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="ComboBox">
<Grid>
<ToggleButton Name="ToggleButton"
Template="{StaticResource ComboBoxToggleButton}"
Grid.Column="2"
Focusable="false"
IsChecked="{Binding Path=IsDropDownOpen,Mode=TwoWay,RelativeSource={RelativeSource TemplatedParent}}"
ClickMode="Press">
</ToggleButton>
<ContentPresenter Name="ContentSite"
IsHitTestVisible="False"
Content="{TemplateBinding SelectionBoxItem}"
ContentTemplate="{TemplateBinding SelectionBoxItemTemplate}"
ContentTemplateSelector="{TemplateBinding ItemTemplateSelector}"
Margin="3,3,23,3"
VerticalAlignment="Center"
HorizontalAlignment="Left" />
<TextBox x:Name="PART_EditableTextBox"
Style="{x:Null}"
Template="{StaticResource ComboBoxTextBox}"
HorizontalAlignment="Left"
VerticalAlignment="Center"
Margin="3,3,23,3"
Focusable="True"
Visibility="Hidden"
IsReadOnly="{TemplateBinding IsReadOnly}" />
<Popup Name="Popup"
Placement="Bottom"
IsOpen="{TemplateBinding IsDropDownOpen}"
AllowsTransparency="True"
Focusable="False"
PopupAnimation="Slide">
<Grid Name="DropDown"
SnapsToDevicePixels="True"
MinWidth="{TemplateBinding ActualWidth}"
MaxHeight="{TemplateBinding MaxDropDownHeight}">
<Border x:Name="DropDownBorder"
Background="{StaticResource TextBoxBrush}"
BorderThickness="2"
BorderBrush="{StaticResource SolidBorderBrush}" />
<ScrollViewer Margin="4,6,4,6"
SnapsToDevicePixels="True">
<StackPanel IsItemsHost="True"
KeyboardNavigation.DirectionalNavigation="Contained" />
</ScrollViewer>
</Grid>
</Popup>
</Grid>
<ControlTemplate.Triggers>
<Trigger Property="HasItems"
Value="false">
<Setter TargetName="DropDownBorder"
Property="MinHeight"
Value="95" />
</Trigger>
<Trigger Property="IsEnabled"
Value="false">
<Setter Property="Foreground"
Value="{StaticResource DisabledForegroundBrush}" />
</Trigger>
<Trigger Property="IsGrouping"
Value="true">
<Setter Property="ScrollViewer.CanContentScroll"
Value="false" />
</Trigger>
<Trigger SourceName="Popup"
Property="Popup.AllowsTransparency"
Value="true">
<Setter TargetName="DropDownBorder"
Property="CornerRadius"
Value="4" />
<Setter TargetName="DropDownBorder"
Property="Margin"
Value="0,2,0,0" />
</Trigger>
<Trigger Property="IsEditable"
Value="true">
<Setter Property="IsTabStop"
Value="false" />
<Setter TargetName="PART_EditableTextBox"
Property="Visibility"
Value="Visible" />
<Setter TargetName="ContentSite"
Property="Visibility"
Value="Hidden" />
</Trigger>
</ControlTemplate.Triggers>
</ControlTemplate>
</Setter.Value>
</Setter>
<Style.Triggers>
</Style.Triggers>
</Style>
<!-- SimpleStyles: ComboBoxItem -->
<Style x:Key="{x:Type ComboBoxItem}"
TargetType="ComboBoxItem">
<Setter Property="SnapsToDevicePixels"
Value="true" />
<Setter Property="OverridesDefaultStyle"
Value="true" />
<Setter Property="Background"
Value="{StaticResource ScreenBackgroundBrush}"/>
<Setter Property="Foreground"
Value="{StaticResource PrimaryTextBrush}" />
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="ComboBoxItem">
<Border Name="Border"
Padding="2"
SnapsToDevicePixels="true">
<ContentPresenter />
</Border>
<ControlTemplate.Triggers>
<Trigger Property="IsHighlighted"
Value="true">
<Setter TargetName="Border"
Property="Background"
Value="{StaticResource DataGridRowSelectedBrush}" />
<Setter TargetName="Border"
Property="CornerRadius"
Value="2" />
</Trigger>
<Trigger Property="IsEnabled"
Value="false">
<Setter Property="Foreground"
Value="{StaticResource DisabledForegroundBrush}" />
</Trigger>
</ControlTemplate.Triggers>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style> | unknown | |
d8871 | train | Upping the timeout is fine where it's needed. You should use cfsetting requestTimeOut="xxx" to up the timeout just where it's needed, not upping it in the administrator, as that will affect all templates.
The disadvantages are that there's a pool of threads to handle requests and whilst one is handling your long-running request, it's not available to do other things. This is fine as long as you're confident that the long-running request will only be run by one or two people at a time, but problematic if lots of people could be running it. If CF will run 8 requests concurrently and all 8 are hadnling your long-running request, then your site is effectively offline. CF will queue requests up to a certain point, but you don't want to get to this state in the first place.
I've worked on apps where there are background admin-only tasks which can take up to an hour, but we were confident that no more that 2 people would ever run them.
You could also look at using cfthread in order to run your query without blocking the page, but then it's harder to provide feedback to the user.
A: I think it would help if we understood the situation a little better. Is this something like a nightly job that is run? Or is this a page that a user might navigate to which generates a report while they wait for it? Give us some basic information about what your page is doing.
Also, there are different ways to increase the request timeout; system wide via the administrator or page specific via the <cfsetting requestTimeOut=""> tag. Which one are you increasing? I would suggest that you not increase the system wide setting but it is okay to increase the page level setting when needed.
There is also a timeout attribute for the <cfquery> tag. Are you using that?
Can you tell us if the <cfquery> is timing out or if the timeout is happening after the cfquery and during the data output?
I would also suggest attempting to optimize your query as much as possible. Can an index be setup (if one is not already)? Do you really only need a subset of the records being returned?
Perhaps you could split this one page up into multiple pages that would each run faster.
A: Rather than just increase your timeout limit, IF the user who initiates the request doesn't need to wait for the results (i.e an end of day or overnight report) then I would use a cfthread with its timeout set to 0 (never times-out) and leave it running in the background until it has finished, or set an extremly long timeout, like 1 hour. I would also use cflock to prevent the thread being able to proccess more than once otherwise you could grind your system to a halt with many heavy threads.
read this -docs- and see what you think | unknown | |
d8872 | train | The good answer is, it depends. It's in general unsafe to do:
__weak MyObject *safeSelf = self;
[self doWithCompletionBlock:^{
[safeSelf doAnotherThingWithCompletionBlock:^{
[safeSelf doSomethingElse];
}];
}];
The reason for that, is that when you call -doAnotherThingWithCompletionBlock if that method thinks it holds a reference on self, which a selector usually assumes, and it dereferences self to access ivars, then you will crash, because you actually don't hold a reference.
So whether you need to take a new weak reference or not depends on the lifetime semantics you need/want.
EDIT:
By the way, clang even has a warning about that issue that you can enable in Xcode with the CLANG_WARN_OBJC_RECEIVER_WEAK setting (CLANG_WARN_OBJC_REPEATED_USE_OF_WEAK is also useful while we're at it)
A: __block MyObject *safeSelf = self;
[self doWithCompletionBlock:^{
[safeSelf doAnotherThingWithCompletionBlock:^{
[safeSelf doSomethingElse];
}];
}];
I hope it does what it normally should: tell the compiler not to retain the __weak MyObject* no matter which block scope it is in. The fact is, I didn't test it. Meanwhile, I'll be surprised if it actually retains MyObject * | unknown | |
d8873 | train | Use TabLayout from the Material Design support library.
See the "Tabs" paragraph in the blog post that introduces the library:
However, if you are using a ViewPager for horizontal paging between
tabs, you can create tabs directly from your PagerAdapter’s
getPageTitle() and then connect the two together using
setupWithViewPager(). This ensures that tab selection events update
the ViewPager and page changes update the selected tab. | unknown | |
d8874 | train | As you realise, the problem with your code is that the 2008 and 2015 values will be non-missing only in those years respectively, and hence the two variables are never both non-missing in the same observation. Here is one way to spread values to all years for each industry:
by industry: egen tot_2008 = total(revenue / (year == 2008))
by industry: egen tot_2015 = total(revenue / (year == 2015))
gen change = (tot_2015-tot_2008)/tot_2008
That hinges on expressions such as year == 2008 being evaluated as 1 if true and 0 if false. If you divide by 0, the result is a missing value, which Stata ignores, which is exactly what you want. Taking totals over all observations in an industry ensures that the same value is recorded for each industry.
Here is another way that some find more explicit:
by industry: egen tot_2008 = total(cond(year == 2008, revenue, .))
by industry: egen tot_2015 = total(cond(year == 2015, revenue, .))
gen change = (tot_2015-tot_2008)/tot_2008
which hinges on the same principle, that missings will be ignored.
Note the use of the egen function total() here. The egen function sum() still works, and is the same function, but that name is undocumented as of Stata 9, in an attempt to avoid confusion with the Stata function sum().
To avoid double (indeed multiple) counting, use
egen tag = tag(industry)
to tag just one observation for each industry, to be used in graphs and tables for which you want that.
For discussion, see here, sections 9 and 10. | unknown | |
d8875 | train | You can either:
<style name="noActionBar">
<item name="windowActionBar">false</item>
<item name="windowNoTitle">true</item>
</style>
and in your layout(at the root layout)
android:theme="@style/noActionBar"
Or
requestWindowFeature(Window.FEATURE_NO_TITLE);
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
WindowManager.LayoutParams.FLAG_FULLSCREEN);
However, the last one also hides the status bar. You can also set the theme to a default theme (look through the themes and find those with NoActionBar). The last one is useful if you have no plans to style the activity (for instance, if you use SurfaceView) with custom colors, etc.
For instance, you can change AppBaseTheme to have the parent Holo.Light.NoActionBar or any similar theme, but the name has to contain NoActionBar, otherwise it has an action bar.
A: Add the below theme to style.xml:
<style name="AppTheme.NoActionBar">
<item name="windowActionBar">false</item>
<item name="windowNoTitle">true</item>
</style>
Then set this theme to your Activity in manifest:
<activity
android:name=".MainActivity"
android:theme="@style/AppTheme.NoActionBar" /> | unknown | |
d8876 | train | Looks like you're missing case in your method call. You have to use TakeOffline according to msdn
static void TakeOffLine(ManagementObject resourceGroup)
{
ManagementBaseObject outParams =
resourceGroup.InvokeMethod("TakeOffline", null, null);
} | unknown | |
d8877 | train | ImageMagick -chop does just that
Man page on -chop:
The -chop option removes entire rows and columns, and moves the remaining corner blocks leftward and upward to close the gaps.
Also handy is the counterpart function -splice:
This will add rows and columns of the current -background color into the given image
Chop out the undesired red row
convert in.png -chop x92+0+50 out-chopped.png
*
*in.png is the original image
*out-chopped.png is the desired outcome
*-chop x92+0+50: From the default reference point top left (could be changed with -gravity) at x +0px and y +50px (after the top blue part we want to keep) chop out the red segment at full width (because no number is specified before the x and hence it assumes full canvas width) and at a height of 92px (had some seam, hence I added 2px to cut clean)
Chop out the undesired red part + insert a thin separator row
→
→
If you want to insert some separator where you chopped out, you can achieve that with -splice.
convert in.png -chop x92+0+50 -background black -splice x2+0+50 out-chopped-spliced-separator.png
*
*-chop already explained above
*as the next processing step we change -background to black which applies to all later command in the queue.
*-splice x2+0+50: From the default reference point top-left at X 0px and Y 50px, splice in a row of full width (nothing specified in front of the x) and of 2px height. Because we set the background color to black in the previous step, that 2px row is filled black.
Batch processing
mogrify -path ../batch-done -chop x92+0+50 -background black -splice x2+0+50 *
*
*mogrify keeps the same filename of each input file for the corresponding output file. Normally it overwrites in place. But we use:
*-path to write the out files to target directory ../batch-done
** to consider all files of your current directory via shell globbing as the input files of your batch.
Sources
*
*v7 man page on -chop
*v6 legacy manpage with sample images on Chop, removing rows, columns and edges | unknown | |
d8878 | train | Use
PROCEDURE details
instead of
CREATE OR REPLACE PROCEDURE details
https://docs.oracle.com/cd/E11882_01/appdev.112/e25519/packages.htm#LNPLS00905
On the other hand, the SELECT statement needs an INTO clause. So the code could be:
CREATE OR REPLACE PACKAGE pack1
AS
PROCEDURE details;
END pack1;
/
CREATE OR REPLACE PACKAGE BODY pack1
AS
PROCEDURE details
IS
l_rec table1%ROWTYPE;
BEGIN
SELECT *
INTO l_rec
FROM table1;
END details;
END pack1; | unknown | |
d8879 | train | I couldn't come up with a formula, but here is a pretty brute force way of doing it:
Sub test()
Dim i As Long, _
lr As Long, _
sumValue As Long, _
counter As Long
lr = Range("A" & Rows.Count).End(xlUp).Row
counter = 1
For i = 1 To lr
sumValue = sumValue + Range("A" & i)
If sumValue <= 20 Then
Range("B" & i).Value = counter
Else
sumValue = 0
counter = counter + 1
i = i - 1
End If
Next i
End Sub
For what it's worth, my results were a tad bit different as I calculated 5+10+12 to be greater than 20...
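For comparison, the greedy rule that the VBA loop above implements - start a new set as soon as adding the next value would push the running total past 20 - can be sketched compactly in Python (the function name is mine; note that a single value larger than the limit simply gets a set of its own here):

```python
def assign_sets(values, limit=20):
    """Label each value with a set number so each set's sum stays <= limit."""
    labels = []
    running = 0
    set_no = 1
    for v in values:
        if running + v > limit:  # this value would overflow the current set
            set_no += 1
            running = 0
        running += v
        labels.append(set_no)
    return labels

sets = assign_sets([5, 10, 12, 4, 4, 9])  # -> [1, 1, 2, 2, 2, 3]
```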
A: I found the answer by browsing through the related section.
=IF(A2="a",ROUNDDOWN((B2-1)/20,0)+1,IF(SUMIF($C1:C$2,C1,$B1:B$2)+B2>20,C1+1,C1))
Here is a link to that answer.
I'm going to accept @sous2817's answer because it was the most helpful and would've been the path I took if I went for the VBA approach.
Thanks again guys!
A: =ROUNDUP(SUM(R5C[-1]:RC[-1])/20,0)
or
=ROUNDUP(SUM(C$5:C5)/20,0)
where the formula starts at D5 and is pulled down.
A: The described problem can also be solved with a dynamic array solution in newer versions of excel (Office365).
An example for a possible solution could look like this:
which uses the following formula for the sets 2-5:
=LET(
rows,$B$5:$B$17,
vals,$C$5:$C$17,
excl,$F$5:F$15,
rows_filt,OFFSET(rows,COUNT(excl),0,COUNT(rows)-COUNT(excl)),
vals_filt,OFFSET(vals,COUNT(excl),0,COUNT(rows)-COUNT(excl)),
cumsum,MMULT(N(ROW(vals_filt)>=TRANSPOSE(ROW(vals_filt))),vals_filt),
FILTER(rows_filt,cumsum<=20)
)
but to avoid reference to itself, the formula for set 1 is:
=LET(
rows,$B$5:$B$17,
vals,$C$5:$C$17,
cumsum,MMULT(N(ROW(vals)>=TRANSPOSE(ROW(vals))),vals),
FILTER(rows,cumsum<=20)
)
(also shown in the screenshot here) | unknown | |
d8880 | train | POST the received JSON on to the server, and deserialize the JSON into your typed object. You can consider keeping the object in your session and accessing it when required.
UPDATE
put a hidden field in your aspx/ascx page
Once you receive your JSON data from your service, just put the response into the hidden field (use jQuery):
$("input[id$=jsonResponse]").val(responseFromService);
On your Page_Load method on your home.aspx
Use JavaScriptSerializer to deserialize your JSON data
JavaScriptSerializer serializer = new JavaScriptSerializer();
LoginData loginDataObject = serializer.Deserialize<LoginData>(jsonResponse.Value);
Now you can consider putting your loginDataObject into the session and accessing it throughout your application scope:
// to store in session
Request.Session["loginData"] = loginDataObject;
// to retrieve from session
LoginData loginDataObject = (LoginData) Request.Session["loginData"];
A: You could store the object in a cookie and change the url with location.href or something similar? | unknown | |
d8881 | train | I think what you're looking for are those products that have all three of those features and don't have any others. If so, something like this should work:
SELECT P.Id_Product, P.Name
FROM Products P
JOIN Features F ON P.id_product = F.id_product
JOIN Features F2 ON P.id_product = F2.id_product
WHERE F.id_feature IN (SELECT id_feature FROM Features WHERE id_product = 5)
GROUP BY P.Id_Product, P.Name
HAVING COUNT(DISTINCT F2.id_Feature) = 3
And here is a condensed Fiddle to demonstrate.
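If you don't have a fiddle handy, the query can also be sanity-checked with Python's built-in sqlite3 module. The sample rows below are invented, and the table layout follows the question (the reference product is id 5 with three features):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Products (id_product INTEGER, Name TEXT);
CREATE TABLE Features (id_product INTEGER, id_feature INTEGER);
-- product 5 is the reference; product 1 has exactly the same features,
-- product 2 has one extra feature, product 3 only a subset.
INSERT INTO Products VALUES (5,'ref'), (1,'exact'), (2,'superset'), (3,'subset');
INSERT INTO Features VALUES
  (5,10),(5,11),(5,12),
  (1,10),(1,11),(1,12),
  (2,10),(2,11),(2,12),(2,13),
  (3,10),(3,11);
""")
cur.execute("""
SELECT P.id_product
FROM Products P
JOIN Features F  ON P.id_product = F.id_product
JOIN Features F2 ON P.id_product = F2.id_product
WHERE F.id_feature IN (SELECT id_feature FROM Features WHERE id_product = 5)
GROUP BY P.id_product
HAVING COUNT(DISTINCT F2.id_feature) = 3
""")
matches = sorted(row[0] for row in cur.fetchall())  # -> [1, 5]
```

The superset product (4 distinct features) and the subset product (2) are filtered out by the HAVING clause; only the reference product and the exact match survive.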
A: I think you want to use a join statement. I didn't check my query for exact syntax, but you get the idea:
SELECT * FROM `mydb`.`products`
JOIN mydb.features ON mydb.features.id_product = mydb.products.id_product
WHERE
`mydb`.`features`.`description` = 'strong' OR
`mydb`.`features`.`description` = 'tall' OR
`mydb`.`features`.`description` = 'expensive'; | unknown | |
d8882 | train | How can I solve this issue?
By ignoring it. No issue here.
It looks to me like a permanent roundtrip from the Blazor app back to the IdentityServer4
Not at all. A message such as this:
info: Microsoft.AspNetCore.Authorization.DefaultAuthorizationService[1]
Authorization was successful.
is issued by the authorization service when you try to access a protected resource ( annotated by the Authorize attribute), and it is called to check if you are authorized to access the protected resource. In your Blazor client, annotate the Counter page with the Authorize attribute, run your app, and alternately navigate from the Index page to the Counter page (after being authenticated), you'll notice that each time you try to navigate to the Counter page the above message will increase by one (to left of the message). Once again, this is because the authorization service is being called to check if you are authorized to access the protected resource... This is by design. You authenticate only once, but being spied on again and again, not without reason of course, and even that not always succeed. | unknown | |
d8883 | train | A quick fix is to use .*? instead of just .*. The ? changes the * into a non-greedy repetition, which will match up until the nearest #user code, instead of the furthest. | unknown | |
d8884 | train | If you take a look at the component you are rendering, and at the renderNavigationView prop:
renderNavigationView={this.renderNavigationView}
It seems fine, but since the this context in functions is window by default, this refers to window in renderNavigationView. Consider your onPress event handler:
onPress={this.nextScene()}
Since you use this.nextScene() and this refers to window in a function, you're effectively trying to do window.nextScene which does not exist, thus throwing the error. (Also note that that is an invocation - not a reference. Remove the parentheses).
So if I try this.nextScene.bind(this), I get a cannot read property 'bind' of undefined
This is because the function is undefined because window.nextScene doesn't exist. To fix this, use Function.prototype.bind to bind the this correctly on both renderNavigationView and nextScene:
renderNavigationView={this.renderNavigationView.bind(this)}
What bind does in this situation is set the this context in the function. Since this here refers to the class, the class will be used to execute the nextScene method which should work correctly. You must also use bind on nextScene because inside nextScene we want this to refer to the class, not window:
onPress={this.nextScene.bind(this)}
A: Another alternative to using the bind method that winter pointed out in his answer is to use arrow functions which automatically bind this to the parent context for you.
class MyComponent extends React.Component {
clickHandler = (e) => {
// do stuff here
}
render() {
return (
<button onClick={this.clickHandler}></button>
)
}
} | unknown | |
d8885 | train | You're probably misunderstanding what ,omitempty means. It takes effect when marshalling data, only. If you unmarshal <billing/> onto a pointer field with ,omitempty, it will still initialize the field. Then, since the XML element is empty, the fields of Billing itself won't be set. In practice, if you assume that customer.Billing != nil means customer.Billing.Address != nil, you'll get the observed panic.
Note: http://play.golang.org/p/dClkfOVLXh | unknown | |
d8886 | train | Rather than making a derived property for discrete, you could make it a stored property which is publicly readonly but privately readwrite. Implement the setter for continuous yourself, like so:
- (void) setContinuous:(float)newValue
{
_continuous = newValue;
NSInteger newDiscrete = floor(_continuous);
if (newDiscrete != _discrete)
self.discrete = newDiscrete;
} | unknown | |
d8887 | train | The reason your chart is not displaying data when you converted to the time scale is because you are passing string values for your labels that the Date constructor is not able to parse.
The time scale expects either an integer (number of milliseconds since epoch), a Date object, a moment.js object, or a string representation of a date that is in a format that Date.parse() can understand.
So in your case, when the chart is rendered it can not create Date objects for the X axis (because of your label values), so it creates 2 date objects using new Date() (this is why the X axis still displays some date data...notice that it is 2017 dates because new Date() returns a Date initialized to right now).
To correct this, just convert the label values to integers (e.g. remove the quotes) and the chart will then display your data points. The chart still looks funny however because the X scale is configured in units of month but your data values are only 1 day apart (1485903600 = Jan 17, 1970 10:45 PM and 1490738400 = Jan 18, 1970 12:05 AM).
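You can convince yourself of the seconds-vs-milliseconds mixup outside the browser too. JavaScript's Date counts milliseconds since the epoch, so a Unix timestamp given in seconds collapses into January 1970 when misread as milliseconds. A quick sketch in Python (datetime stands in here for what new Date(n) does):

```python
from datetime import datetime, timezone

ts = 1485903600  # a Unix timestamp in *seconds* (early 2017)

# Interpreted correctly (as seconds) vs misread as milliseconds:
as_seconds = datetime.fromtimestamp(ts, tz=timezone.utc)         # a 2017 date
as_millis = datetime.fromtimestamp(ts / 1000, tz=timezone.utc)   # mid-January 1970

print(as_seconds.year)                 # 2017
print(as_millis.year, as_millis.month) # 1970 1
```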
Here is a codepen example that shows your original chart and the correct chart so you can see the difference. | unknown | |
d8888 | train | From mysql v5.7.6 you can create generated columns. If you set its type to stored, then you can create secondary indexes on it. From v5.7.8 you can even create indexes on virtual generated columns, see mysql's documentation on create table:
As of MySQL 5.7.8, InnoDB supports secondary indexes on virtual
generated columns. Other index types are not supported.
A secondary index may be created on one or more virtual columns or on
a combination of virtual columns and non-virtual generated columns.
Secondary indexes on virtual columns may be defined as UNIQUE. | unknown | |
d8889 | train | Well it looks like your httppost is sending it to "http://", which is not exactly a valid address, which is part of your problem.
Also: Is that code verbatim? There is no way it should run. At the very beginning you have this line:
new HttpPost("http://"");
You have two quotes at the end, which effectively makes everything after the second quote a string until it runs into a closing quote. | unknown | |
d8890 | train | For keeping track of unique objects, I like using set. A set behaves like a mathematical set in that it can have at most one copy of any given thing in it.
from collections import namedtuple
# by convention, instances of `namedtuple` should be in UpperCamelCase
Paper = namedtuple('Paper', ['title', 'authors', 'year', 'doi'])
papers = [
Paper('On Unicorns', ['J. Atwood', 'J. Spolsky'], 2008, 'foo'),
Paper('Discourse', ['J. Atwood', 'R. Ward', 'S. Saffron'], 2012, 'bar'),
Paper('Joel On Software', ['J. Spolsky'], 2000, 'baz')
]
authors = set()
for paper in papers:
authors.update(paper.authors) # "authors = union(authors, paper.authors)"
print(authors)
print(len(authors))
Output:
{'J. Spolsky', 'R. Ward', 'J. Atwood', 'S. Saffron'}
4
More compactly (but also perhaps less readably), you could construct the authors set by doing:
authors = set([author for paper in papers for author in paper.authors])
This may be faster if you have a large volume of data (I haven't checked), since it requires fewer update operations on the set.
A: If you don't want to use the built-in type set() and want to understand the logic, use a list and an if check.
If we don't use set() in senshin's code:
# authors = set()
# for paper in papers:
# authors.update(paper.authors) # "authors = union(authors, paper.authors)"
authors = []
for paper in papers:
for author in paper.authors:
if not author in authors:
authors.append(author)
You can get similar result as senshin's. I hope it helps. | unknown | |
d8891 | train | With a sample dataframe
In [138]: df
Out[138]:
col1 col2 col3 newcol
0 a 1 x Wow
1 b 2 y Dud
2 c 1 z Wow
In [139]: df['newcol']
Out[139]:
0 Wow
1 Dud
2 Wow
Name: newcol, dtype: object
In [140]: type(_)
Out[140]: pandas.core.series.Series
Selecting a column gives me a Series; no need for another Series wrapper
In [141]: pd.Series(df['newcol'])
Out[141]:
0 Wow
1 Dud
2 Wow
Name: newcol, dtype: object
We can put it in a list, but that doesn't do any good:
In [142]: [pd.Series(df['newcol'])]
Out[142]:
[0 Wow
1 Dud
2 Wow
Name: newcol, dtype: object]
In [143]: len(_)
Out[143]: 1
We can extract the values as a numpy array:
In [144]: pd.Series(df['newcol']).values
Out[144]: array(['Wow', 'Dud', 'Wow'], dtype=object)
We can apply a string slicing to each element of either the array or series - with a list comprehension:
In [145]: [astr[:2] for astr in _144]
Out[145]: ['Wo', 'Du', 'Wo']
In [146]: [astr[:2] for astr in _141]
Out[146]: ['Wo', 'Du', 'Wo']
The list comprehension isn't necessarily the most 'advanced' way, but it's a good start. Actually it is close to the best, since slicing a string has to use string methods; no one else implements string slicing.
pandas has a str method for applying string methods to a series:
In [147]: ds = df['newcol']
In [151]: ds.str.slice(0,2) # or ds.str[:2]
Out[151]:
0 Wo
1 Du
2 Wo
Name: newcol, dtype: object
This is cleaner and prettier than the list comprehensions, but actually slower.
A: I might be missing the gist of the question, but here's a regular expression implementation.
import re
# Sample data
disc = [' Disc - Standard Removal & Herbicide ',
' Disc - Standard Removal & Herbicide ',
' Standard Trim ',
' Disc - Hazard Tree',
' Disc - Hazard Tree ',]
# Regular Expression pattern
# We have Disc in parenthesis because that's what we want to capture.
# Using re.search(<pattern>, <string>).group(1) returns the first matching group. Using just
# re.search(<pattern>, <string>).group() would return the entire row.
disc_pattern = r"\s+?(Disc)\s+?"
# List comprehension that skips rows without 'Disc'
[re.search(disc_pattern, i).group(1) for i in disc if re.match(disc_pattern, i)]
Output:
['Disc', 'Disc', 'Disc', 'Disc'] | unknown | |
d8892 | train | You can use a helper template to wrap your integer and turn it into a type. This is the approach used, for instance, by Boost.MPL.
#include <iostream>
template <int N>
struct int_ { }; // Wrapper
template <class> // General template for types
struct Foo { static constexpr char const *str = "Foo<T>"; };
template <int N> // Specialization for wrapped ints
struct Foo<int_<N>> { static constexpr char const *str = "Foo<N>"; };
template <int N> // Type alias to make the int version easier to use
using FooI = Foo<int_<N>>;
struct Bar { };
int main() {
std::cout << Foo<Bar>::str << '\n' << FooI<42>::str << '\n';
}
Output:
Foo<T>
Foo<N>
Live on Coliru | unknown | |
d8893 | train | An <svg> element in HTML is set to display: inline by default. This can cause it to be affected by line-wrapping when space is constrained; the icon will wrap to the next line in the same way as a word in a paragraph.
Easiest fix, if you are positioning the SVG precisely, is therefore to set display: block.
https://jsfiddle.net/rpk6c6r0/6/ | unknown | |
d8894 | train | I am afraid there is no short answer. The best storage scheme depends on the problem you are trying to solve. The things to consider are not only the storage size, but also how efficient, from a computational and hardware perspective, access and operations on this storage format are.
For sparse matrix vector multiplication CSR is a good format as it allows linear access to the elements of a matrix row which is good for memory and cache performance. However CSR induces a more irregular access pattern into the multiplicand: fetch elements at different positions, depending on what index you retrieve from the row; this is bad for cache performance. A CSC matrix vector multiplication can remove the irregular access on the multiplicand, at the cost of more irregular access in the solution vector. Depending on your matrix structure you may choose one or another. For example a matrix with a few, long rows, with a similar nonzero distribution may be more efficient to handle in a CSC format.
Some examples in the well known software packages/tools:
*
*To the best of my knowledge Matlab uses a column storage by default.
*Scientific codes (and BLAS) based on Fortran also use a column storage by default. This is due mostly to historical reasons since Fortran arrays were AFAIK column oriented to begin with and a large number of Dense/Sparse BLAS codes were originally written in Fortran.
*Eigen also uses a column storage by default, but this can be customised.
*Intel MKL requires you to choose IIRC.
*Boost ublas uses a row based storage format by default.
*PetSC, which is a widely used tool in larger scale scientific computing, uses a row based format (SequentialAIJ stands for CSR), but also allows you to choose from a wide variety of storage formats (see the MatCreate* functions on their documentation)
And the list could go on. As you can see there is some spread between the various tools, and I doubt the criteria was the performance of the SpMV operation :) Probably aspects such as the common storage formats in the target problem domains, typical expectations of programmers in the target problem domain, integration with other library aspects and already existing codes have been the prime reason behind using CSR / CSC. These differ on a per tool basis, obviously.
Anyhow, a short overview on sparse storage formats can be found here but many more storage formats were/are being proposed in sparse matrix research:
*
*There are also block storage formats, which attempt to leverage locally dense substructures of the matrix. See for example "Fast Sparse Matrix-Vector Multiplication by Exploiting Variable Block Structure" by Richard W. Vuduc, Hyun-Jin Moon.
*A very brief but useful overview of some common storage formats can be found on the Python scipy documentation on sparse formats http://docs.scipy.org/doc/scipy/reference/sparse.html.
*Further information of the advantages of various formats can be found in the following texts (and many others):
*
*Iterative methods for sparse linear systems, Yousef Saad
*SPARSKIT: A basic tool kit for sparse matrix computation, Tech. Rep. CSRD TR 1029, CSRD, University of Illinois, Urbana, IL, 1990.
*LAPACK working note 50: Distributed sparse data structures for linear algebra operations, Tech. Rep. CS 92-169, Computer Science Department, University of Tennessee, Knoxville, TN, 1992.
I have been doing research in the sparse matrix area on creating custom hardware architectures for sparse matrix algorithms (such as SpMV). From experience, some sparse matrix benchmarks tend to ignore the overhead of conversion between various formats. This is because, in principle, it can be assumed that you could just adapt the storage format of your entire algorithm. SpMV itself is hardly used in isolation, and generally a part of some larger iterative algorithm (e.g. a linear or nonlinear solver). In this case, the cost of converting between formats can be amortised across the many iterations and total runtime of the entire algorithm. Of course you would have to justify that your assumption holds in this situation.
As a disclaimer, in my area we are particularly inclined to make as many assumptions as possible, since the cost and time to implement a hardware architecture for a linear solver to benchmark a new SpMV storage format is usually substantial (on the order of months). When working in software, it is a lot easier to test, qualify and quantify your assumptions by running as many benchmarks as possible, which would probably take less than a week to set up :D
A: This is not a complete answer, but I can't write in the comments. The best representation format depends on the underlying implementation. For example,
let
M = [m_11 m_12 m_13; == [r1; == [c1 c2 c3]
m_21 m_22 m_23] r2]
where r1,2 are the rows and c1,2,3 are columns
and
v = [v1;
v2;
v3]
You can implement M*v as
M*v = [r1.v;
r2.v]
as dot product of vectors, or
M*v = v1*c1 + v2*c2 + v3*c3
where * is the scalar vector multiplication.
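To make the storage layout concrete, here is a minimal pure-Python sketch of the CSR (compressed sparse row) format and its matrix-vector product. The three arrays (values, column indices, row pointers) and the linear walk over each row are exactly what gives CSR its cache-friendly row access, while the indexed reads from x are the irregular accesses into the multiplicand mentioned earlier:

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for A stored in CSR form.

    values  : nonzero entries, stored row by row
    col_idx : column index of each entry in `values`
    row_ptr : row i occupies values[row_ptr[i]:row_ptr[i+1]]
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):   # linear walk over row i
            acc += values[k] * x[col_idx[k]]          # irregular access into x
        y[i] = acc
    return y

# The 2x3 matrix M = [[1, 0, 2],
#                     [0, 3, 0]] in CSR form:
values = [1.0, 2.0, 3.0]
col_idx = [0, 2, 1]
row_ptr = [0, 2, 3]
y = csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0])  # -> [3.0, 3.0]
```

In practice you would of course use a library implementation (e.g. scipy's csr_matrix) rather than this sketch; the point is only to show the data layout.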
You can minimize the number of operations by choosing the format depending on the sparsity of the matrix. Usually the fewer the rows/columns the better. | unknown | |
d8895 | train | If you're configuring iPhones in a commercial environment, you should look at the Enterprise Deployment Guide. Specifically, you should look at using the iPhone Configuration Utility to create a *.mobileconfig configuration file that can be distributed to all the phones in your network. The *.mobileconfig plist supports changing the following proxy configuration settings on the phone:
PropNetProxiesHTTPEnable (Integer, 1 = Proxy enabled)
PropNetProxiesHTTPProxy (String, Proxy server address)
PropNetProxiesHTTPPort (Integer, Proxy port number)
HTTPProxyUsername (String, optional username)
HTTPProxyPassword (String, optional password)
PropNetProxiesProxyAutoConfigEnable (Integer, 1 = Auto proxy enabled)
PropNetProxiesProxyAutoConfigURLString (String, URL that points to a PAC file where the configuration information is stored)
The iPhone Configuration Utility does not currently support adding or editing those settings, so you may need to get your hands dirty with the Property List Editor application. Also, it looks like the latest version of the Enterprise Deployment Guide does not include the settings I've included above, but you should be able to find it in the previous version of the document.
A: Pretty sure this is outside the Apple provided SDK sandbox. Probably possible with a jailbreak though. | unknown | |
d8896 | train | Finally found the solution - $ brew install vim --with-python3
A: I thought I had the same issue but realised I needed to re-start the shell.
If the problem still persists, it may be that you have older versions that homebrew is still trying to install. brew cleanup would remove older bottles and perhaps allow you to install the latest.
If this is still giving you trouble, I found removing vim with brew uninstall --force vim and then reinstalling with brew install vim --override-system-vim --with-python3 worked for me.
EDIT 2018-08-22
Python 3 is now default when compiling vim. Therefore the command below should integrate vim with Python 3 automatically.
brew install vim --override-system-vim
A: This worked for me with the latest OS for mac at this date.
Hope it works for you.
brew install vim python3 | unknown | |
d8897 | train | First of all, you need to find the correct file to modify. For your example, you can modify the mkdir command by changing/adding code in the usr/src/servers/vfs/open.c file. If you look at the open.c file, you'll see that there is a do_mkdir function there. You can use:
printf("New dir -> %s",fullpath);
do_mkdir actually has the name of the new directory in the fullpath array, so you don't have to make a variable yourself. As for the access rights, you can use S_IRWXU/S_IRWXG/S_IRWXO to check the access rights (for more information visit http://pubs.opengroup.org/onlinepubs/7908799/xsh/sysstat.h.html). For example, you can store the access rights in integer variables:
if(bits & S_IRUSR) x = x + 4; /* read bit for the owner */
if(bits & S_IWUSR) x = x + 2; /* write bit for the owner */
if(bits & S_IXUSR) x = x + 1; /* execute bit: note & (bitwise AND), not % */
Just do the same for the group and others bits and there you go.
Keep in mind that you'll need to compile the file in order to aply the changes. Go to usr/src/realeasetools directory and use the make hdbootcommand in the terminal. Restart and you'll see the changes. | unknown | |
d8898 | train | Did you synchronize the NSUserDefaults after changing the value of the switch? Try calling
[[NSUserDefaults standardUserDefaults] synchronize];
after you set the new value and see if that helps. Also try moving your if/else statement in the - (void)viewWillAppear:(BOOL)animated method because - (void)viewDidLoad is only called once when the view is loaded but not when the modal view is closed. | unknown | |
d8899 | train | The term business logic is in my opinion not a precise definition. Evans talks in his book, Domain Driven Design, about two types of business logic:
*
*Domain logic.
*Application logic.
This separation is in my opinion a lot clearer. And with the realization that there are different types of business rules also comes the realization that they don't all necessarily go the same place.
Domain logic is logic that corresponds to the actual domain. So if you are creating an accounting application, then domain rules would be rules regarding accounts, postings, taxation, etc. In an agile software planning tool, the rules would be stuff like calculating release dates based on velocity and story points in the backlog, etc.
For both these types of application, CSV import/export could be relevant, but the rules of CSV import/export has nothing to do with the actual domain. This kind of logic is application logic.
Domain logic most certainly goes into the model layer. The model would also correspond to the domain layer in DDD.
Application logic however does not necessarily have to be placed in the model layer. That could be placed in the controllers directly, or you could create a separate application layer hosting those rules. What is most logical in this case would depend on the actual application.
A: It does not make sense to put your business layer in the Model for an MVC project.
Say that your boss decides to change the presentation layer to something else - you would be screwed! The business layer should be a separate assembly. A Model contains the data that comes from the business layer and passes it to the view to display. Then on post, for example, the model binds to a Person class that resides in the business layer and calls PersonBusiness.SavePerson(p); where p is the Person class. Here's what I do (the BusinessError class is missing but would go in the business layer too).
A: A1: Business logic goes in the Model part of MVC. The role of the Model is to contain data and business logic. The Controller, on the other hand, is responsible for receiving user input and deciding what to do.
A2: A Business Rule is part of Business Logic. They have a has-a relationship: Business Logic has Business Rules.
Take a look at Wikipedia entry for MVC. Go to Overview where it mentions the flow of MVC pattern.
Also look at Wikipedia entry for Business Logic. It is mentioned that Business Logic is comprised of Business Rules and Workflow.
A: As a couple of answers have pointed out, I believe there is some misunderstanding of multi-tier vs MVC architecture.
Multi tier architecture involves breaking your application into tiers/layers (e.g. presentation, business logic, data access)
MVC is an architectural style for the presentation layer of an application. For non trivial applications, business logic/business rules/data access should not be placed directly into Models, Views, or Controllers. To do so would be placing business logic in your presentation layer and thus reducing reuse and maintainability of your code.
The model is a very reasonable choice for placing business logic, but a better/more maintainable approach is to separate your presentation layer from your business logic layer: create a business logic layer and simply call into it from your models when needed. The business logic layer will in turn call into the data access layer.
I would like to point out that it is not uncommon to find code that mixes business logic and data access in one of the MVC components, especially if the application was not architected using multiple tiers. However, in most enterprise applications, you will commonly find multi tier architectures with an MVC architecture in place within the presentation layer.
A: First of all:
I believe that you are mixing up the MVC pattern and n-tier-based design principles.
Using an MVC approach does not mean that you shouldn't layer your application.
It might help if you see MVC more like an extension of the presentation layer.
If you put non-presentation code inside the MVC pattern you might very soon end up in a complicated design.
Therefore I would suggest that you put your business logic into a separate business layer.
Just have a look at this: Wikipedia article about multitier architecture
It says:
Today, MVC and similar model-view-presenter (MVP) are Separation of Concerns design patterns that apply exclusively to the presentation layer of a larger system.
Anyway ... when talking about an enterprise web application the calls from the UI to the business logic layer should be placed inside the (presentation) controller.
That is because the controller actually handles the calls to a specific resource, queries the data by making calls to the business logic and links the data (model) to the appropriate view.
Mud told you that the business rules go into the model.
That is also true, but he mixed up the (presentation) model (the 'M' in MVC) and the data layer model of a tier-based application design.
So it is valid to place your database related business rules in the model (data layer) of your application.
But you should not place them in the model of your MVC-structured presentation layer as this only applies to a specific UI.
This technique is independent of whether you use a domain driven design or a transaction script based approach.
Let me visualize that for you:
Presentation layer: Model - View - Controller
Business layer: Domain logic - Application logic
Data layer: Data repositories - Data access layer
The model that you see above means that you have an application that uses MVC, DDD and a database-independent data layer.
This is a common approach to design a larger enterprise web application.
But you can also shrink it down to use a simple non-DDD business layer (a business layer without domain logic) and a simple data layer that writes directly to a specific database.
You could even drop the whole data-layer and access the database directly from the business layer, though I do not recommend it.
[Note:]
You should also be aware of the fact that nowadays there is more than just one "model" in an application.
Commonly, each layer of an application has its own model.
The model of the presentation layer is view specific but often independent of the used controls.
The business layer can also have a model, called the "domain-model". This is typically the case when you decide to take a domain-driven approach.
This "domain-model" contains of data as well as business logic (the main logic of your program) and is usually independent of the presentation layer.
The presentation layer usually calls the business layer on a certain "event" (button pressed etc.) to read data from or write data to the data layer.
The data layer might also have its own model, which is typically database related. It often contains a set of entity classes as well as data-access objects (DAOs).
The question is: how does this fit into the MVC concept?
Answer -> It doesn't!
Well - it kinda does, but not completely. This is because MVC is an approach that was developed in the late 1970s for the Smalltalk-80 programming language. At that time GUIs and personal computers were quite uncommon and the world wide web was not even invented!
Most of today's programming languages and IDEs were developed in the 1990s.
At that time computers and user interfaces were completely different from those in the 1970s.
You should keep that in mind when you talk about MVC.
Martin Fowler has written a very good article about MVC, MVP and today's GUIs.
A: Q1:
Business logics can be considered in two categories:
*
*Domain logics like controls on an email address (uniqueness, constraints, etc.), obtaining the price of a product for invoice, or, calculating the shoppingCart's total price based of its product objects.
*More broad and complicated workflows which are called business processes, like controlling the registration process for the student (which usually includes several steps and needs different checks and has more complicated constraints).
The first category goes into the model and the second one belongs to the controller. This is because the cases in the second category are broad application logics and putting them in the model may muddy the model's abstraction (for example, it is not clear whether we need to put those decisions in one model class or another, since they are related to both!).
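A short illustration of the first category, using the shopping-cart example from the list above (the class names are my own, chosen for the sketch): this is domain logic, so it sits on the model itself.

```python
# Domain logic from category 1 lives on the model:
# the cart knows how to compute its own total.
class Product:
    def __init__(self, name, price):
        self.name = name
        self.price = price

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, product, quantity=1):
        self.items.append((product, quantity))

    def total(self):
        # Total price based on the cart's product objects.
        return sum(p.price * qty for p, qty in self.items)

cart = ShoppingCart()
cart.add(Product("book", 12.50), 2)
cart.add(Product("pen", 1.25))
print(cart.total())  # 26.25
```

A multi-step registration workflow (category 2), by contrast, would coordinate several such models and therefore belongs in a controller or a dedicated process class.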
See this answer for a specific distinction between model and controller, this link for very exact definitions and also this link for a nice Android example.
The point is that the notes mentioned by "Mud" and "Frank" above, as well as "Pete"'s, can all be true (business logic can be put in the model or the controller, depending on the type of business logic).
Finally, note that MVC differs from context to context. For example, in Android applications, some alternative definitions are suggested that differs from web-based ones (see this post for example).
Q2:
Business logic is more general and (as "decyclone" mentioned above) we have the following relation between them:
business rules ⊂ business logics
A: Why don't you introduce a service layer? Then your controller will be lean and more readable, and all your controller functions will be pure actions. You can decompose business logic as much as you need within the service layer. Code reusability is high, and there is no impact on models and repositories.
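A sketch of that idea (names are illustrative): the controller only translates the request and the result, while the business rule lives in the service, where it can be reused and unit-tested on its own.

```python
# Service layer: holds the business logic, knows nothing about HTTP.
class RegistrationService:
    def register(self, email):
        if "@" not in email:          # business rule lives here
            raise ValueError("invalid email")
        return {"email": email, "status": "registered"}

# Controller: a pure action that delegates and maps outcomes to responses.
class RegistrationController:
    def __init__(self, service):
        self._service = service

    def post_register(self, request):
        try:
            result = self._service.register(request["email"])
            return 201, result
        except ValueError as exc:
            return 400, {"error": str(exc)}

controller = RegistrationController(RegistrationService())
print(controller.post_register({"email": "a@b.com"}))
```

Because the service has no framework dependency, another controller (CLI, message queue, different web framework) can call the same `register` method unchanged.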
A: Business rules go in the model.
Say you were displaying emails for a mailing list. The user clicks the "delete" button next to one of the emails, the controller notifies the model to delete entry N, then notifies the view the model has changed.
Perhaps the admin's email should never be removed from the list. That's a business rule, that knowledge belongs in the model. The view may ultimately represent this rule somehow -- perhaps the model exposes an "IsDeletable" property which is a function of the business rule, so that the delete button in the view is disabled for certain entries - but the rule itself isn't contained in the view.
The model is ultimately gatekeeper for your data. You should be able to test your business logic without touching the UI at all.
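The mailing-list example above can be sketched like this (class and method names are mine, for illustration): the "admin is never deletable" rule lives in the model, the view merely reads it, and the rule is testable with no UI at all.

```python
# The rule "the admin's address can never be removed" belongs to the model.
class MailingList:
    ADMIN = "admin@example.com"

    def __init__(self, emails):
        self.emails = list(emails)

    def is_deletable(self, email):
        # The business rule, stated in exactly one place.
        # A view may use this to disable its delete button.
        return email != self.ADMIN

    def delete(self, email):
        if not self.is_deletable(email):
            raise PermissionError("admin address cannot be removed")
        self.emails.remove(email)

# The rule is exercised without touching any UI:
ml = MailingList(["admin@example.com", "user@example.com"])
assert not ml.is_deletable("admin@example.com")
ml.delete("user@example.com")
print(ml.emails)  # ['admin@example.com']
```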
A: This is an answered question, but I'll give my "one cent":
Business rules belong in the model.
The "model" always consists of (logically or physically separated):
*
*presentation model - a set of classes that is well suited for use in the view (it's tailored toward specific UI/presentation),
*domain model - the UI-independent portion of the model, and
*repository - the storage-aware portion of the "model".
Business rules live in the domain model, are exposed in a presentation-suitable form to the "presentation" model and are sometimes duplicated (or also enforced) in the "data layer".
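A small sketch of that split (types and field names are invented for the example): the rule lives once in the domain model, and the presentation model exposes it in a display-ready form rather than duplicating it.

```python
from dataclasses import dataclass

# Domain model: UI-independent data plus the business rule.
@dataclass
class Account:
    owner: str
    balance_cents: int

    def is_overdrawn(self):
        return self.balance_cents < 0

# Presentation model: the same information shaped for one specific view.
@dataclass
class AccountViewModel:
    title: str
    balance_text: str

def present(account):
    # The rule is *consumed* here, not re-implemented.
    flag = " (overdrawn)" if account.is_overdrawn() else ""
    return AccountViewModel(
        title=f"Account of {account.owner}",
        balance_text=f"{account.balance_cents / 100:.2f}{flag}",
    )

vm = present(Account("Alice", -1250))
print(vm.balance_text)  # -12.50 (overdrawn)
```

When a rule must also be enforced in the data layer (a database constraint, say), that is the duplication mentioned above, and it is deliberate: the domain model remains the authoritative statement of the rule.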
A: Model = code for CRUD database operations.
Controller = responds to user actions, and passes the user requests for data retrieval or delete/update to the model, subject to the business rules specific to an organization. These business rules could be implemented in helper classes, or if they are not too complex, just directly in the controller actions. The controller finally asks the view to update itself so as to give feedback to the user in the form of a new display, or a message like 'updated, thanks', etc.,
View = UI that is generated based on a query on the model.
There are no hard and fast rules regarding where business rules should go. In some designs they go into model, whereas in others they are included with the controller. But I think it is better to keep them with the controller. Let the model worry only about database connectivity. | unknown | |
d8900 | train | It looks like the userAgent is being checked by the @angular/forms lib and has been deleted or not provided at all for some reason.
If it's not your code that's changing the userAgent then it's probably some third-party script.
If I were you I'd start with the latest additions in terms of libraries and dependencies and peel them back until I get a unit test running. That will give the clue. | unknown |