qid (int64, 1-74.6M) | question (stringlengths 45-24.2k) | date (stringlengths 10-10) | metadata (stringlengths 101-178) | response_j (stringlengths 32-23.2k) | response_k (stringlengths 21-13.2k)
---|---|---|---|---|---|
51,943,181 |
Hey guys, so I've got this dummy data:
```
115,IROM,1
125,FOLCOM,1
135,SE,1
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
144,BLIZZARC,1
166,STEAD,3
166,STEAD,3
166,STEAD,3
168,BANDOI,1
179,FOX,1
199,C4,2
199,C4,2
```
Desired output:
```
IROM,1
FOLCOM,1
SE,1
ATLUZ,3
BLIZZARC,1
STEAD,1
BANDOI,1
FOX,1
C4,1
```
which comes from counting the distinct game IDs (the 115, 125, etc.). So, for example,
```
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
```
will become
```
ATLUZ,3
```
since it has 3 distinct game IDs.
I tried using
```
cut -d',' -f 2 game.csv|uniq -c
```
where I got the following output:
```
1 IROM
1 FOLCOM
1 SE
5 ATLUZ
1 BLIZZARC COMP
3 STEAD
1 BANDOI
1 FOX
2 C4
```
How do I fix this using bash?
|
2018/08/21
|
['https://Stackoverflow.com/questions/51943181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3646742/']
|
Before executing the `cut` command, do a `uniq`. This will remove the redundant lines; then follow your original command, i.e. apply `cut` to extract the 2nd field and `uniq -c` to count occurrences:
```
uniq game.csv | cut -d',' -f 2 | uniq -c
```
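If you're not tied to shell tools, the same distinct-count logic can be sketched in Python. This is only an illustration (variable names are made up); the inline data mirrors the question's `game.csv`:

```python
import csv
import io

# Inline copy of the dummy data from the question: game_id,publisher,extra
data = """115,IROM,1
125,FOLCOM,1
135,SE,1
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
144,BLIZZARC,1
166,STEAD,3
166,STEAD,3
166,STEAD,3
168,BANDOI,1
179,FOX,1
199,C4,2
199,C4,2"""

# Collect the set of distinct game IDs seen for each publisher.
ids_per_publisher = {}
for game_id, publisher, _ in csv.reader(io.StringIO(data)):
    ids_per_publisher.setdefault(publisher, set()).add(game_id)

# Dicts preserve insertion order, so this matches the desired output order.
for publisher, ids in ids_per_publisher.items():
    print(f"{publisher},{len(ids)}")
```

Because a `set` deduplicates, the repeated `121,ATLUZ,2` and `142,ATLUZ,2` rows still contribute only one ID each, so ATLUZ comes out as 3.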
|
Using [GNU `datamash`](https://www.gnu.org/software/datamash/manual/datamash.html):
```
datamash -t, --sort --group 2 countunique 1 < input
```
Using [`awk`](https://www.gnu.org/software/gawk/manual/gawk.html):
```
awk -F, '!a[$1,$2]++{b[$2]++}END{for(i in b)print i FS b[i]}' input
```
Using [`sort`](https://www.gnu.org/software/coreutils/manual/html_node/sort-invocation.html), [`cut`](https://www.gnu.org/software/coreutils/manual/html_node/cut-invocation.html), [`uniq`](https://www.gnu.org/software/coreutils/manual/html_node/uniq-invocation.html):
```
sort -u -t, -k2,2 -k1,1 input | cut -d, -f2 | uniq -c
```
---
**Test run:**
```
$ cat input
111,ATLUZ,1
121,ATLUZ,1
121,ATLUZ,2
142,ATLUZ,2
115,IROM,1
142,ATLUZ,2
$ datamash -t, --sort --group 2 countunique 1 < input
ATLUZ,3
IROM,1
```
As you can see, `121,ATLUZ,1` and `121,ATLUZ,2` are correctly considered to be just one `game ID`.
|
51,943,181 |
Hey guys, so I've got this dummy data:
```
115,IROM,1
125,FOLCOM,1
135,SE,1
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
144,BLIZZARC,1
166,STEAD,3
166,STEAD,3
166,STEAD,3
168,BANDOI,1
179,FOX,1
199,C4,2
199,C4,2
```
Desired output:
```
IROM,1
FOLCOM,1
SE,1
ATLUZ,3
BLIZZARC,1
STEAD,1
BANDOI,1
FOX,1
C4,1
```
which comes from counting the distinct game IDs (the 115, 125, etc.). So, for example,
```
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
```
will become
```
ATLUZ,3
```
since it has 3 distinct game IDs.
I tried using
```
cut -d',' -f 2 game.csv|uniq -c
```
where I got the following output:
```
1 IROM
1 FOLCOM
1 SE
5 ATLUZ
1 BLIZZARC COMP
3 STEAD
1 BANDOI
1 FOX
2 C4
```
How do I fix this using bash?
|
2018/08/21
|
['https://Stackoverflow.com/questions/51943181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3646742/']
|
Before executing the `cut` command, do a `uniq`. This will remove the redundant lines; then follow your original command, i.e. apply `cut` to extract the 2nd field and `uniq -c` to count occurrences:
```
uniq game.csv | cut -d',' -f 2 | uniq -c
```
|
This also does the trick. The only thing is that your output is not sorted.
```
awk 'BEGIN{ FS = OFS = "," }{ a[$2 FS $1] }END{ for ( i in a ){ split(i, b, "," ); c[b[1]]++ } for ( i in c ) print i, c[i] }' yourfile
```
Output:
```
BANDOI,1
C4,1
STEAD,1
BLIZZARC,1
FOLCOM,1
ATLUZ,3
SE,1
IROM,1
FOX,1
```
|
51,943,181 |
Hey guys, so I've got this dummy data:
```
115,IROM,1
125,FOLCOM,1
135,SE,1
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
144,BLIZZARC,1
166,STEAD,3
166,STEAD,3
166,STEAD,3
168,BANDOI,1
179,FOX,1
199,C4,2
199,C4,2
```
Desired output:
```
IROM,1
FOLCOM,1
SE,1
ATLUZ,3
BLIZZARC,1
STEAD,1
BANDOI,1
FOX,1
C4,1
```
which comes from counting the distinct game IDs (the 115, 125, etc.). So, for example,
```
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
```
will become
```
ATLUZ,3
```
since it has 3 distinct game IDs.
I tried using
```
cut -d',' -f 2 game.csv|uniq -c
```
where I got the following output:
```
1 IROM
1 FOLCOM
1 SE
5 ATLUZ
1 BLIZZARC COMP
3 STEAD
1 BANDOI
1 FOX
2 C4
```
How do I fix this using bash?
|
2018/08/21
|
['https://Stackoverflow.com/questions/51943181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3646742/']
|
You could also try the following, in a single `awk`:
```
awk -F, '
!a[$1,$2,$3]++{
b[$1,$2,$3]++
}
!f[$2]++{
g[++count]=$2
}
END{
for(i in b){
split(i,array,",")
c[array[2]]++
}
for(q=1;q<=count;q++){
print c[g[q]],g[q]
}
}' SUBSEP="," Input_file
```
It prints the output in the same order as the 2nd field's first occurrence in Input\_file, as follows:
```
1 IROM
1 FOLCOM
1 SE
3 ATLUZ
1 BLIZZARC
1 STEAD
1 BANDOI
1 FOX
1 C4
```
|
Less elegant, but you may use awk as well. If it is not guaranteed that the same ID+NAME combos will always come consecutively, you have to read the whole file and count before producing output:
```
awk -F, '{c[$1,$2]+=1}END{for (ck in c){split(ck,ca,SUBSEP); g[ca[2]]+=1}for(gk in g){print gk,g[gk]}}' game.csv
```
This first counts every [COL1,COL2] pair, then for each COL2 counts how many distinct [COL1,COL2] pairs occurred.
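The same two-pass idea (collect the distinct pairs first, then aggregate per name) can be illustrated in Python; the rows below are the ATLUZ subset from the question, and all names are illustrative:

```python
# Rows of (game_id, name, extra), including duplicates, as in the question.
rows = [
    ("111", "ATLUZ", "1"),
    ("121", "ATLUZ", "2"),
    ("121", "ATLUZ", "2"),
    ("142", "ATLUZ", "2"),
    ("142", "ATLUZ", "2"),
]

# Pass 1: a set keeps only the distinct (game_id, name) pairs.
distinct_pairs = {(game_id, name) for game_id, name, _ in rows}

# Pass 2: count how many distinct pairs each name has.
counts = {}
for _, name in distinct_pairs:
    counts[name] = counts.get(name, 0) + 1

print(counts)  # {'ATLUZ': 3}
```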
|
51,943,181 |
Hey guys, so I've got this dummy data:
```
115,IROM,1
125,FOLCOM,1
135,SE,1
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
144,BLIZZARC,1
166,STEAD,3
166,STEAD,3
166,STEAD,3
168,BANDOI,1
179,FOX,1
199,C4,2
199,C4,2
```
Desired output:
```
IROM,1
FOLCOM,1
SE,1
ATLUZ,3
BLIZZARC,1
STEAD,1
BANDOI,1
FOX,1
C4,1
```
which comes from counting the distinct game IDs (the 115, 125, etc.). So, for example,
```
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
```
will become
```
ATLUZ,3
```
since it has 3 distinct game IDs.
I tried using
```
cut -d',' -f 2 game.csv|uniq -c
```
where I got the following output:
```
1 IROM
1 FOLCOM
1 SE
5 ATLUZ
1 BLIZZARC COMP
3 STEAD
1 BANDOI
1 FOX
2 C4
```
How do I fix this using bash?
|
2018/08/21
|
['https://Stackoverflow.com/questions/51943181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3646742/']
|
You could also try the following, in a single `awk`:
```
awk -F, '
!a[$1,$2,$3]++{
b[$1,$2,$3]++
}
!f[$2]++{
g[++count]=$2
}
END{
for(i in b){
split(i,array,",")
c[array[2]]++
}
for(q=1;q<=count;q++){
print c[g[q]],g[q]
}
}' SUBSEP="," Input_file
```
It prints the output in the same order as the 2nd field's first occurrence in Input\_file, as follows:
```
1 IROM
1 FOLCOM
1 SE
3 ATLUZ
1 BLIZZARC
1 STEAD
1 BANDOI
1 FOX
1 C4
```
|
Using [GNU `datamash`](https://www.gnu.org/software/datamash/manual/datamash.html):
```
datamash -t, --sort --group 2 countunique 1 < input
```
Using [`awk`](https://www.gnu.org/software/gawk/manual/gawk.html):
```
awk -F, '!a[$1,$2]++{b[$2]++}END{for(i in b)print i FS b[i]}' input
```
Using [`sort`](https://www.gnu.org/software/coreutils/manual/html_node/sort-invocation.html), [`cut`](https://www.gnu.org/software/coreutils/manual/html_node/cut-invocation.html), [`uniq`](https://www.gnu.org/software/coreutils/manual/html_node/uniq-invocation.html):
```
sort -u -t, -k2,2 -k1,1 input | cut -d, -f2 | uniq -c
```
---
**Test run:**
```
$ cat input
111,ATLUZ,1
121,ATLUZ,1
121,ATLUZ,2
142,ATLUZ,2
115,IROM,1
142,ATLUZ,2
$ datamash -t, --sort --group 2 countunique 1 < input
ATLUZ,3
IROM,1
```
As you can see, `121,ATLUZ,1` and `121,ATLUZ,2` are correctly considered to be just one `game ID`.
|
51,943,181 |
Hey guys, so I've got this dummy data:
```
115,IROM,1
125,FOLCOM,1
135,SE,1
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
144,BLIZZARC,1
166,STEAD,3
166,STEAD,3
166,STEAD,3
168,BANDOI,1
179,FOX,1
199,C4,2
199,C4,2
```
Desired output:
```
IROM,1
FOLCOM,1
SE,1
ATLUZ,3
BLIZZARC,1
STEAD,1
BANDOI,1
FOX,1
C4,1
```
which comes from counting the distinct game IDs (the 115, 125, etc.). So, for example,
```
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
```
will become
```
ATLUZ,3
```
since it has 3 distinct game IDs.
I tried using
```
cut -d',' -f 2 game.csv|uniq -c
```
where I got the following output:
```
1 IROM
1 FOLCOM
1 SE
5 ATLUZ
1 BLIZZARC COMP
3 STEAD
1 BANDOI
1 FOX
2 C4
```
How do I fix this using bash?
|
2018/08/21
|
['https://Stackoverflow.com/questions/51943181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3646742/']
|
You could also try the following, in a single `awk`:
```
awk -F, '
!a[$1,$2,$3]++{
b[$1,$2,$3]++
}
!f[$2]++{
g[++count]=$2
}
END{
for(i in b){
split(i,array,",")
c[array[2]]++
}
for(q=1;q<=count;q++){
print c[g[q]],g[q]
}
}' SUBSEP="," Input_file
```
It prints the output in the same order as the 2nd field's first occurrence in Input\_file, as follows:
```
1 IROM
1 FOLCOM
1 SE
3 ATLUZ
1 BLIZZARC
1 STEAD
1 BANDOI
1 FOX
1 C4
```
|
This also does the trick. The only thing is that your output is not sorted.
```
awk 'BEGIN{ FS = OFS = "," }{ a[$2 FS $1] }END{ for ( i in a ){ split(i, b, "," ); c[b[1]]++ } for ( i in c ) print i, c[i] }' yourfile
```
Output:
```
BANDOI,1
C4,1
STEAD,1
BLIZZARC,1
FOLCOM,1
ATLUZ,3
SE,1
IROM,1
FOX,1
```
|
51,943,181 |
Hey guys, so I've got this dummy data:
```
115,IROM,1
125,FOLCOM,1
135,SE,1
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
144,BLIZZARC,1
166,STEAD,3
166,STEAD,3
166,STEAD,3
168,BANDOI,1
179,FOX,1
199,C4,2
199,C4,2
```
Desired output:
```
IROM,1
FOLCOM,1
SE,1
ATLUZ,3
BLIZZARC,1
STEAD,1
BANDOI,1
FOX,1
C4,1
```
which comes from counting the distinct game IDs (the 115, 125, etc.). So, for example,
```
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
```
will become
```
ATLUZ,3
```
since it has 3 distinct game IDs.
I tried using
```
cut -d',' -f 2 game.csv|uniq -c
```
where I got the following output:
```
1 IROM
1 FOLCOM
1 SE
5 ATLUZ
1 BLIZZARC COMP
3 STEAD
1 BANDOI
1 FOX
2 C4
```
How do I fix this using bash?
|
2018/08/21
|
['https://Stackoverflow.com/questions/51943181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3646742/']
|
Using [GNU `datamash`](https://www.gnu.org/software/datamash/manual/datamash.html):
```
datamash -t, --sort --group 2 countunique 1 < input
```
Using [`awk`](https://www.gnu.org/software/gawk/manual/gawk.html):
```
awk -F, '!a[$1,$2]++{b[$2]++}END{for(i in b)print i FS b[i]}' input
```
Using [`sort`](https://www.gnu.org/software/coreutils/manual/html_node/sort-invocation.html), [`cut`](https://www.gnu.org/software/coreutils/manual/html_node/cut-invocation.html), [`uniq`](https://www.gnu.org/software/coreutils/manual/html_node/uniq-invocation.html):
```
sort -u -t, -k2,2 -k1,1 input | cut -d, -f2 | uniq -c
```
---
**Test run:**
```
$ cat input
111,ATLUZ,1
121,ATLUZ,1
121,ATLUZ,2
142,ATLUZ,2
115,IROM,1
142,ATLUZ,2
$ datamash -t, --sort --group 2 countunique 1 < input
ATLUZ,3
IROM,1
```
As you can see, `121,ATLUZ,1` and `121,ATLUZ,2` are correctly considered to be just one `game ID`.
|
Less elegant, but you may use awk as well. If it is not guaranteed that the same ID+NAME combos will always come consecutively, you have to read the whole file and count before producing output:
```
awk -F, '{c[$1,$2]+=1}END{for (ck in c){split(ck,ca,SUBSEP); g[ca[2]]+=1}for(gk in g){print gk,g[gk]}}' game.csv
```
This first counts every [COL1,COL2] pair, then for each COL2 counts how many distinct [COL1,COL2] pairs occurred.
|
51,943,181 |
Hey guys, so I've got this dummy data:
```
115,IROM,1
125,FOLCOM,1
135,SE,1
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
144,BLIZZARC,1
166,STEAD,3
166,STEAD,3
166,STEAD,3
168,BANDOI,1
179,FOX,1
199,C4,2
199,C4,2
```
Desired output:
```
IROM,1
FOLCOM,1
SE,1
ATLUZ,3
BLIZZARC,1
STEAD,1
BANDOI,1
FOX,1
C4,1
```
which comes from counting the distinct game IDs (the 115, 125, etc.). So, for example,
```
111,ATLUZ,1
121,ATLUZ,2
121,ATLUZ,2
142,ATLUZ,2
142,ATLUZ,2
```
will become
```
ATLUZ,3
```
since it has 3 distinct game IDs.
I tried using
```
cut -d',' -f 2 game.csv|uniq -c
```
where I got the following output:
```
1 IROM
1 FOLCOM
1 SE
5 ATLUZ
1 BLIZZARC COMP
3 STEAD
1 BANDOI
1 FOX
2 C4
```
How do I fix this using bash?
|
2018/08/21
|
['https://Stackoverflow.com/questions/51943181', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3646742/']
|
Using [GNU `datamash`](https://www.gnu.org/software/datamash/manual/datamash.html):
```
datamash -t, --sort --group 2 countunique 1 < input
```
Using [`awk`](https://www.gnu.org/software/gawk/manual/gawk.html):
```
awk -F, '!a[$1,$2]++{b[$2]++}END{for(i in b)print i FS b[i]}' input
```
Using [`sort`](https://www.gnu.org/software/coreutils/manual/html_node/sort-invocation.html), [`cut`](https://www.gnu.org/software/coreutils/manual/html_node/cut-invocation.html), [`uniq`](https://www.gnu.org/software/coreutils/manual/html_node/uniq-invocation.html):
```
sort -u -t, -k2,2 -k1,1 input | cut -d, -f2 | uniq -c
```
---
**Test run:**
```
$ cat input
111,ATLUZ,1
121,ATLUZ,1
121,ATLUZ,2
142,ATLUZ,2
115,IROM,1
142,ATLUZ,2
$ datamash -t, --sort --group 2 countunique 1 < input
ATLUZ,3
IROM,1
```
As you can see, `121,ATLUZ,1` and `121,ATLUZ,2` are correctly considered to be just one `game ID`.
|
This also does the trick. The only thing is that your output is not sorted.
```
awk 'BEGIN{ FS = OFS = "," }{ a[$2 FS $1] }END{ for ( i in a ){ split(i, b, "," ); c[b[1]]++ } for ( i in c ) print i, c[i] }' yourfile
```
Output:
```
BANDOI,1
C4,1
STEAD,1
BLIZZARC,1
FOLCOM,1
ATLUZ,3
SE,1
IROM,1
FOX,1
```
|
20,473,565 |
We have an automated batch script which takes care of the merge and outputs all the logs (conflicts) to a text file so developers get proper visibility.
Now the problem is that sometimes it stops partway through and gives the error below:
**svn: E155015: One or more conflicts were produced while merging. Resolve all conflicts and rerun the merge to apply the remaining** (this happens in the automated merge script)
Below is the command:
```
svn merge http://xyzBranch.local.com C:\WORKSPACE\Trunk\ --username=xyz --password=zyz --non-interactive >> C:\mergelogs\logs
```
Any help would be appreciated. I have tried a lot of ways to fix this, but with no success.
Regards
Pravin
|
2013/12/09
|
['https://Stackoverflow.com/questions/20473565', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2971999/']
|
This happens when some of the commits were already "cherry-picked", i.e. merged using the `-r x:y` flag. In such a case Subversion first merges everything up to `x` and then everything above `y`. If merging `x` fails, it gives this error.
I don't think you should be working around it. If you want to do the merge, just do it manually. If you don't, just tell the script to stop trying.
|
I would try adding the parameters:
```
--accept=postpone
```
I use this when running `svn merge`; what it does is add conflict markers to the files, but it should always return. I'm surprised that the `--non-interactive` flag doesn't do this automatically, though. The other thing to try is amending the redirection to include `2>&1`, which will also redirect standard error, in case there's a warning that isn't being caught in the log file. So your new command becomes:
```
svn merge http://xyzBranch.local.com C:\WORKSPACE\Trunk\ --username=xyz --password=zyz --non-interactive --accept=postpone >> C:\mergelogs\logs 2>&1
```
|
1,765,441 |
I am updating a piece of legacy code in one of our web apps. The app allows the user to upload a spreadsheet, which we will process as a background job.
Each of these user uploads creates a new table to store the spreadsheet data, so the number of tables in my SQL Server 2000 database will grow quickly - thousands of tables in the near term. I'm worried that this might not be something that SQL Server is optimized for.
It would be easiest to leave this mechanism as-is, but I don't want to leave a time-bomb that is going to blow up later. Better to fix it now if it needs fixing (the obvious alternative is one large table with a key associating records with user batches).
Is this architecture likely to create a performance problem as the number of tables grows? And if so, could the problem be mitigated by upgrading to a later version of SQL Server?
**Edit**: Some more information in response to questions:
* Each of these tables has the same schema. There is no reason that it couldn't have been implemented as one large table; it just wasn't.
* Deleting old tables is also an option. They might be needed for a month or two, no longer than that.
|
2009/11/19
|
['https://Stackoverflow.com/questions/1765441', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13356/']
|
Having many tables is not an issue for the engine. The catalog metadata is optimized for very large sizes. There are also some advantages to having each user own their own table, like the ability to have separate security ACLs per table, separate table statistics for each user's content, and, not least, improved query performance for the 'accidental' table scan.
What is a problem, though, is maintenance. If you leave this in place you must absolutely set up a task for automated maintenance; you cannot leave this as a manual task for your admins.
|
I think this is definitely a problem that will be a pain later. Why would you need to create a new table every time? Unless there is a really good reason to do so, I would not do it.
The best way would be to simply create an ID and associate all uploaded data with an ID, all in the same table. This will require some work on your part, but it's much safer and more manageable to boot.
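A minimal sketch of that single-table design, using Python's built-in SQLite for illustration (the table and column names here are made up, not from the original app):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE upload_rows (
        batch_id   INTEGER NOT NULL,  -- identifies one user upload
        row_number INTEGER NOT NULL,
        payload    TEXT
    )
""")

# Two different uploads share the one table, distinguished only by batch_id.
conn.executemany(
    "INSERT INTO upload_rows VALUES (?, ?, ?)",
    [(1, 1, "alpha"), (1, 2, "beta"), (2, 1, "gamma")],
)

# Fetching one batch is a simple filtered query instead of a per-upload table.
rows = conn.execute(
    "SELECT payload FROM upload_rows WHERE batch_id = ? ORDER BY row_number",
    (1,),
).fetchall()
print(rows)  # [('alpha',), ('beta',)]
```

An index on `batch_id` would keep per-batch lookups fast as the table grows, and deleting an old batch becomes a single `DELETE ... WHERE batch_id = ?`.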
|
1,765,441 |
I am updating a piece of legacy code in one of our web apps. The app allows the user to upload a spreadsheet, which we will process as a background job.
Each of these user uploads creates a new table to store the spreadsheet data, so the number of tables in my SQL Server 2000 database will grow quickly - thousands of tables in the near term. I'm worried that this might not be something that SQL Server is optimized for.
It would be easiest to leave this mechanism as-is, but I don't want to leave a time-bomb that is going to blow up later. Better to fix it now if it needs fixing (the obvious alternative is one large table with a key associating records with user batches).
Is this architecture likely to create a performance problem as the number of tables grows? And if so, could the problem be mitigated by upgrading to a later version of SQL Server?
**Edit**: Some more information in response to questions:
* Each of these tables has the same schema. There is no reason that it couldn't have been implemented as one large table; it just wasn't.
* Deleting old tables is also an option. They might be needed for a month or two, no longer than that.
|
2009/11/19
|
['https://Stackoverflow.com/questions/1765441', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13356/']
|
I think this is definitely a problem that will be a pain later. Why would you need to create a new table every time? Unless there is a really good reason to do so, I would not do it.
The best way would be to simply create an ID and associate all uploaded data with an ID, all in the same table. This will require some work on your part, but it's much safer and more manageable to boot.
|
I suggest you store this data in a single table. On the server side you can create a console from which the user/operator could manually start the task of freeing up table entries. You can ask them for a range of dates whose data is no longer needed, and those records will be deleted from the db.
You can go a step further and set a database trigger to wipe the entries/records after a specified time period. You can also add a UI from which the User/Operator/Admin could set these data-validity limits.
Thus you could build the system so that junk data is auto-deleted after a specified time (again configurable by the Admin), while also providing a console they can use to manually delete additional unwanted data.
|
1,765,441 |
I am updating a piece of legacy code in one of our web apps. The app allows the user to upload a spreadsheet, which we will process as a background job.
Each of these user uploads creates a new table to store the spreadsheet data, so the number of tables in my SQL Server 2000 database will grow quickly - thousands of tables in the near term. I'm worried that this might not be something that SQL Server is optimized for.
It would be easiest to leave this mechanism as-is, but I don't want to leave a time-bomb that is going to blow up later. Better to fix it now if it needs fixing (the obvious alternative is one large table with a key associating records with user batches).
Is this architecture likely to create a performance problem as the number of tables grows? And if so, could the problem be mitigated by upgrading to a later version of SQL Server?
**Edit**: Some more information in response to questions:
* Each of these tables has the same schema. There is no reason that it couldn't have been implemented as one large table; it just wasn't.
* Deleting old tables is also an option. They might be needed for a month or two, no longer than that.
|
2009/11/19
|
['https://Stackoverflow.com/questions/1765441', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13356/']
|
Having many tables is not an issue for the engine. The catalog metadata is optimized for very large sizes. There are also some advantages to having each user own their own table, like the ability to have separate security ACLs per table, separate table statistics for each user's content, and, not least, improved query performance for the 'accidental' table scan.
What is a problem, though, is maintenance. If you leave this in place you must absolutely set up a task for automated maintenance; you cannot leave this as a manual task for your admins.
|
Having all of these tables isn't ideal for any database. After the upload, does the web app use the newly created table? Maybe it gives some feedback to the user on what was uploaded?
Does your application use all of these tables for any reporting, etc.? You mentioned keeping them around for a few months; I'm not sure why. If they aren't needed, move the contents to a central table and drop the individual tables.
Once the backend is taken care of, recode the website to save uploads to a central table. You may need two tables: an UploadHeader table to track the upload batch (who uploaded, when, etc.), linked to a detail table with the individual records from the Excel upload.
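A sketch of that header/detail layout, using Python's built-in SQLite for illustration (the `UploadHeader` name follows the answer; `UploadDetail` and the column names are invented for this example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One row per upload batch: who uploaded and when.
    CREATE TABLE UploadHeader (
        upload_id   INTEGER PRIMARY KEY,
        uploaded_by TEXT,
        uploaded_at TEXT
    );
    -- One row per spreadsheet record, keyed back to its batch.
    CREATE TABLE UploadDetail (
        upload_id INTEGER REFERENCES UploadHeader(upload_id),
        row_data  TEXT
    );
""")

conn.execute("INSERT INTO UploadHeader VALUES (1, 'alice', '2009-11-19')")
conn.executemany(
    "INSERT INTO UploadDetail VALUES (?, ?)",
    [(1, "row one"), (1, "row two")],
)

# All of one user's uploaded records, via a join instead of a per-upload table.
n = conn.execute(
    "SELECT COUNT(*) FROM UploadDetail d "
    "JOIN UploadHeader h ON h.upload_id = d.upload_id "
    "WHERE h.uploaded_by = 'alice'"
).fetchone()[0]
print(n)  # 2
```

Purging a month-old batch then touches two tables at most, rather than requiring a `DROP TABLE` per upload.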
|
1,765,441 |
I am updating a piece of legacy code in one of our web apps. The app allows the user to upload a spreadsheet, which we will process as a background job.
Each of these user uploads creates a new table to store the spreadsheet data, so the number of tables in my SQL Server 2000 database will grow quickly - thousands of tables in the near term. I'm worried that this might not be something that SQL Server is optimized for.
It would be easiest to leave this mechanism as-is, but I don't want to leave a time-bomb that is going to blow up later. Better to fix it now if it needs fixing (the obvious alternative is one large table with a key associating records with user batches).
Is this architecture likely to create a performance problem as the number of tables grows? And if so, could the problem be mitigated by upgrading to a later version of SQL Server?
**Edit**: Some more information in response to questions:
* Each of these tables has the same schema. There is no reason that it couldn't have been implemented as one large table; it just wasn't.
* Deleting old tables is also an option. They might be needed for a month or two, no longer than that.
|
2009/11/19
|
['https://Stackoverflow.com/questions/1765441', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13356/']
|
Having many tables is not an issue for the engine. The catalog metadata is optimized for very large sizes. There are also some advantages to having each user own their own table, like the ability to have separate security ACLs per table, separate table statistics for each user's content, and, not least, improved query performance for the 'accidental' table scan.
What is a problem, though, is maintenance. If you leave this in place you must absolutely set up a task for automated maintenance; you cannot leave this as a manual task for your admins.
|
I suggest you store this data in a single table. On the server side you can create a console from which the user/operator could manually start the task of freeing up table entries. You can ask them for a range of dates whose data is no longer needed, and those records will be deleted from the db.
You can go a step further and set a database trigger to wipe the entries/records after a specified time period. You can also add a UI from which the User/Operator/Admin could set these data-validity limits.
Thus you could build the system so that junk data is auto-deleted after a specified time (again configurable by the Admin), while also providing a console they can use to manually delete additional unwanted data.
|
1,765,441 |
I am updating a piece of legacy code in one of our web apps. The app allows the user to upload a spreadsheet, which we will process as a background job.
Each of these user uploads creates a new table to store the spreadsheet data, so the number of tables in my SQL Server 2000 database will grow quickly - thousands of tables in the near term. I'm worried that this might not be something that SQL Server is optimized for.
It would be easiest to leave this mechanism as-is, but I don't want to leave a time-bomb that is going to blow up later. Better to fix it now if it needs fixing (the obvious alternative is one large table with a key associating records with user batches).
Is this architecture likely to create a performance problem as the number of tables grows? And if so, could the problem be mitigated by upgrading to a later version of SQL Server?
**Edit**: Some more information in response to questions:
* Each of these tables has the same schema. There is no reason that it couldn't have been implemented as one large table; it just wasn't.
* Deleting old tables is also an option. They might be needed for a month or two, no longer than that.
|
2009/11/19
|
['https://Stackoverflow.com/questions/1765441', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13356/']
|
Having all of these tables isn't ideal for any database. After the upload, does the web app use the newly created table? Maybe it gives some feedback to the user on what was uploaded?
Does your application use all of these tables for any reporting, etc.? You mentioned keeping them around for a few months; I'm not sure why. If they aren't needed, move the contents to a central table and drop the individual tables.
Once the backend is taken care of, recode the website to save uploads to a central table. You may need two tables: an UploadHeader table to track the upload batch (who uploaded, when, etc.), linked to a detail table with the individual records from the Excel upload.
|
I suggest you store this data in a single table. On the server side you can create a console from which the user/operator could manually start the task of freeing up table entries. You can ask them for a range of dates whose data is no longer needed, and those records will be deleted from the db.
You can go a step further and set a database trigger to wipe the entries/records after a specified time period. You can also add a UI from which the User/Operator/Admin could set these data-validity limits.
Thus you could build the system so that junk data is auto-deleted after a specified time (again configurable by the Admin), while also providing a console they can use to manually delete additional unwanted data.
|
14,541,090 |
So far I have this working properly for the error message only. However, I would like this to work for success message as well. This should happen when the submit button is pressed in the contact form. Click contact at the top right of the page to scroll to it.
You can test it [here](http://new.syntheticmedia.net).
Here is the jQuery I'm using for the error message:
```
$(document).ready(function() {
$(".error:first").attr("id","errors");
$("#errors").each(function (){
$("html,body").animate({scrollTop:$('#errors').offset().top-175}, 1000);
});
});
```
Is there any way to modify it to scroll to #success as well as #errors, with the same offset().top-175?
Thanks in advance!
|
2013/01/26
|
['https://Stackoverflow.com/questions/14541090', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1254063/']
|
You could do:
```
$(document).ready(function() {
var pos = null;
if($("#contact-form #errors.visible").length > 0)
pos = $('#errors').offset().top;
if($("#contact-form #success.visible").length > 0)
pos = $('#success').offset().top;
if(pos != null)
$("html,body").animate({scrollTop:pos-175}, 1000);
});
```
**Also, fix the fact that your script "js/contact\_script.js" must be declared after the jQuery lib.**
|
```
$(document).ready(function () {
var $elementToScrollTo;
var $firstError = $(".error:first");
if ($firstError.length > 0) {
$firstError.attr("id", "errors");
$elementToScrollTo = $firstError;
}
else {
$elementToScrollTo = $("#success");
}
$("html,body").animate({
scrollTop: $elementToScrollTo.offset().top - 175
}, 1000);
});
```
|
14,541,090 |
So far I have this working properly for the error message only. However, I would like this to work for success message as well. This should happen when the submit button is pressed in the contact form. Click contact at the top right of the page to scroll to it.
You can test it [here](http://new.syntheticmedia.net).
Here is the jQuery I'm using for the error message:
```
$(document).ready(function() {
$(".error:first").attr("id","errors");
$("#errors").each(function (){
$("html,body").animate({scrollTop:$('#errors').offset().top-175}, 1000);
});
});
```
Is there any way to modify it to scroll to #success as well as #errors, with the same offset().top-175?
Thanks in advance!
|
2013/01/26
|
['https://Stackoverflow.com/questions/14541090', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1254063/']
|
This solution does the same job for Contact Form 7 (a popular form plugin for WordPress). I found this page while Googling my problem, so I've added the solution below to help others who also end up here.
```
jQuery(function ($) {
$(document).ready(function ()
{
var wpcf7Elm = document.querySelector( '.wpcf7' );
wpcf7Elm.addEventListener( 'wpcf7submit', function( event ) {
setTimeout(function() {
$([document.documentElement, document.body]).animate({
scrollTop: $(".wpcf7-response-output").offset().top - 100
}, 500);
}, 500);
//console.log("Submited");
}, false );
});
});
```
|
```
$(document).ready(function () {
var $elementToScrollTo;
var $firstError = $(".error:first");
if ($firstError.length > 0) {
$firstError.attr("id", "errors");
$elementToScrollTo = $firstError;
}
else {
$elementToScrollTo = $("#success");
}
$("html,body").animate({
scrollTop: $elementToScrollTo.offset().top - 175
}, 1000);
});
```
|
14,541,090 |
So far I have this working properly for the error message only. However, I would like this to work for success message as well. This should happen when the submit button is pressed in the contact form. Click contact at the top right of the page to scroll to it.
You can test it [here](http://new.syntheticmedia.net).
Here is the jQuery I'm using for the error message:
```
$(document).ready(function() {
$(".error:first").attr("id","errors");
$("#errors").each(function (){
$("html,body").animate({scrollTop:$('#errors').offset().top-175}, 1000);
});
});
```
Is there any way to modify it to scroll to #success as well as #errors, with the same offset().top-175?
Thanks in advance!
|
2013/01/26
|
['https://Stackoverflow.com/questions/14541090', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1254063/']
|
You could do:
```
$(document).ready(function() {
var pos = null;
if($("#contact-form #errors.visible").length > 0)
pos = $('#errors').offset().top;
if($("#contact-form #success.visible").length > 0)
pos = $('#success').offset().top;
if(pos != null)
$("html,body").animate({scrollTop:pos-175}, 1000);
});
```
**Also make sure that your script "js/contact\_script.js" is declared after the jQuery library.**
|
This solution does the same job for Contact Form 7 (a popular form plugin for WordPress). I found this page through a Google search for my problem, so I added the solution below to help others who also end up at this page.
```
jQuery(function ($) {
$(document).ready(function ()
{
var wpcf7Elm = document.querySelector( '.wpcf7' );
wpcf7Elm.addEventListener( 'wpcf7submit', function( event ) {
setTimeout(function() {
$([document.documentElement, document.body]).animate({
scrollTop: $(".wpcf7-response-output").offset().top - 100
}, 500);
}, 500);
//console.log("Submited");
}, false );
});
});
```
|
19,313 |
What is the difference between the words "inquiry" and "query"? I tend to associate the latter with technology (e.g., search engine queries), but I'm not sure what the actual meaning is.
|
2011/04/03
|
['https://english.stackexchange.com/questions/19313', 'https://english.stackexchange.com', 'https://english.stackexchange.com/users/2852/']
|
>
> **inquiry** describes an act of asking for information or an official investigation
>
>
> **query** is simply a question, especially one addressed to an official or an organization. In writing or speaking it is used to question the accuracy of a following statement or to introduce a question.
>
>
>
[NOAD]
|
Query is asking a simple question that does not require more than basic knowledge.
Inquiry is asking a question that requires further research or an investigation.
|
18,042,485 |
Via GDI/GDI+ I can get the text pixels or glyphs; how can I convert them to a 3D mesh? Does any library or source code exist that can be used?
PS: I know D3DXCreateText, but I'm using OpenGL...
|
2013/08/04
|
['https://Stackoverflow.com/questions/18042485', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1716020/']
|
If you work with OpenGL, you can try FTGL; it allows you to generate different polygon meshes from fonts, including extruded meshes, as well as render them:
<http://ftgl.sourceforge.net/docs/html/ftgl-tutorial.html>
but I am not sure how portable this library is, especially for OpenGL ES...
|
Using GDI is definitely not among the best ways to go if you need to obtain glyphs for the text; you could use the FreeType library instead (<http://www.freetype.org>), which is open-source and portable. It can produce both bitmaps and vectorized representations of the glyphs. You will have to initialize a single instance of the class FT\_Library in your program, which is later used to work with multiple fonts. After loading a font from a file (TrueType, OpenType, PostScript and some other formats) you'll be able to obtain the geometrical parameters of specific characters and use them to create textures or build primitives with your preferred rendering API, be it OpenGL or otherwise.
|
501 |
An extreme form of constructivism is called *finitism*. In this form, unlike the standard axiom system, infinite sets are not allowed. There are important mathematicians, such as Kronecker, who supported such a system. I can see that the natural numbers and rational numbers can easily be defined in a finitist system, by easy adaptations of the standard definitions. But in order to do any significant mathematics, we need definitions for the irrational numbers that one is likely to encounter in practice, such as $e$ or $\sqrt{2}$. In the standard constructions, real numbers are defined as Dedekind cuts or Cauchy sequences, which are actually sets of infinite cardinality, so they are of no use here. My question is: how would a real number like these be defined in a finitist axiom system? (Of course, we have no hope of constructing the entire set of real numbers, since that set is uncountably infinite.)
After doing a little research I found a constructivist definition on Wikipedia <http://en.wikipedia.org/wiki/Constructivism_(mathematics)#Example_from_real_analysis>, but we need a finitist definition of a function for this definition to work (because in the standard system, a function over the set of natural numbers is actually an infinite set).
So my question boils down to this: How can we define a function f over the natural numbers in a finitist axiom system?
*Original version of this question, [which had been closed during private beta](http://meta.math.stackexchange.com/questions/172/why-did-you-close-my-question-if-all-sets-were-finite), is as follows:*
>
> **If all sets were finite, how would mathematics be like?**
>
>
> If we replace the axiom that 'there
> exists an infinite set' with 'all sets
> are finite', how would mathematics be
> like? My guess is that, all the theory
> that has practical importance would
> still show up, but everything would be
> very very unreadable for humans. Is
> that true?
>
>
> We would have the natural numbers,
> although the class of all natural
> numbers would not be a set. In the
> same sense, we could have the rational
> numbers. But could we have the real
> numbers? Can the standard
> constructions be adapted to this
> setting?
>
>
>
|
2010/07/22
|
['https://math.stackexchange.com/questions/501', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/145/']
|
Disclaimer: I am not a finitist --- but as a theoretical computer scientist, I have a certain sympathy for finitism. The following is the result of me openly speculating what an "official" finitist response would be, based on grounds of computability.
The short version is this: **(a)** It depends on what you mean by a 'number', but there's a reasonable approach under which it makes sense to talk about finitistic treatments of real numbers; **(b)** What you can do finitistically with numbers, real, rational, or otherwise, depends on how you represent those numbers.
1. **What is a number?** Is −1 a number? Is sqrt(2) a number? Is *i* = sqrt(−1) a number? What about quaternions? --- I'm going to completely ignore this question and suggest a pragmatic, formalist approach: a "number" is an element of a "number system"; and a "number system" is a collection of expressions which you can transform or describe properties of in some given ways (*i.e.* certain given arithmetic operations) and test certain properties (*e.g.* tests for equality, ordering, *etc.*) These expressions don't have to have a meaningful interpretation in terms of quantities or magnitudes as far as I'm concerned; *you* get to choose which operations/tests you care about.
A finitist would demand that any operation or property be described by an algorithm which provably terminates. That is, it isn't sufficient to prove existence or universality *a la* classical logic; existence proofs must be finite constructions --- of a "number", that is a representation in some "number system" --- and universality must be shown by a computable test.
2. **Representation of numbers:** How we represent the numbers matters. A finitist should have no qualms about rational numbers: ratios which ultimately boil down to ordered pairs. Despite this, the decimal expansions of these numbers may be infinitely long: 1/3 = 0.33333... what's going on here?
Well, the issue is that we have two representations for the same number, one of which is finite in length (and allows us to perform computations) and another which is not finite in length. However, the decimal expansion can be easily expressed as a function: for all *k*, the *k*th decimal place after the point is '3'; so you can still characterize it precisely in terms of a finite rule.
What's important is that there exists **some** finite way to express the number. But the way in which we choose to *define* the number (as a part of system or numbers, using some way of expressing numbers) will affect what we can do with it...
there is now a question about what operations we can perform.
--- For rationals-as-ratios, we can add/subtract, multiply/divide, and test order/equality. So this representation is a very good one for rationals.
--- For rationals-as-decimal-expansions, we can still add/subtract and multiply/divide, by defining a new digit-function which describes how to compute the result from the decimal expansions; these will be messier than the representations as ratios. Order comparisons are still possible for *distinct* rationals; but you cannot test equality for arbitrary decimal-expansion representations, because you cannot necessarily verify that all decimal places of the difference |*a*−*b*| are 0. The best you can do in general is testing "equality up to precision ε", wherein you show that |*a*−*b*| < ε, for some desired precision ε. This is a number system which informally we may say has a certain amount of "vagueness"; but it is in principle completely specified --- there's nothing wrong with this in principle. It's just a matter of how you wish to define your system of arithmetic.
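This digit-function picture can be made concrete with a short sketch (Python here, purely illustrative and not part of the original answer): a rational in [0, 1) is represented by the finite rule giving its *k*th decimal digit, and the only comparison available is "equality up to precision 10^−k".

```python
from fractions import Fraction

def digits_of(q: Fraction):
    """Return a function k -> k-th decimal digit of q (k >= 1), for q in [0, 1).
    This finite rule stands in for the infinite decimal expansion."""
    def digit(k: int) -> int:
        # k-th digit = floor(q * 10^k) mod 10, computed with exact integers
        return (q.numerator * 10**k // q.denominator) % 10
    return digit

def partial(digit, k: int) -> Fraction:
    """Truncation of the expansion after k digits."""
    return Fraction(sum(digit(i) * 10**(k - i) for i in range(1, k + 1)), 10**k)

def equal_up_to(dx, dy, k: int) -> bool:
    """'Equality up to precision 10^-k': the best test available when
    numbers are given only by their digit functions."""
    return abs(partial(dx, k) - partial(dy, k)) <= Fraction(1, 10**k)

third = digits_of(Fraction(1, 3))
near = digits_of(Fraction(3333, 10000))
assert [third(k) for k in range(1, 5)] == [3, 3, 3, 3]
assert equal_up_to(third, near, 4)       # indistinguishable to 4 places
assert not equal_up_to(third, near, 5)   # distinguishable at the 5th place
```

Note that `equal_up_to` can refute equality but never certify it outright, which is exactly the "vagueness" described above.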
3. **What representation of reals?** Obviously, because there are uncountably many real numbers, you cannot represent all real numbers even if you *aren't* a finitist. But we can still express some of them. The same is true if you're a finitist: you just don't have access to as many, and/or you're restricted in what you can do with them, according to what your representation can handle.
--- Algebraic irrational numbers such as sqrt(2) can be expressed simply like that: "sqrt(2)". There's nothing wrong with the expressions "sqrt(2) − 1" or "[1 + sqrt(5)]/2" --- they express quantities perfectly well. You can perform arithmetic operations on them perfectly well; and you can also perform ordering/equality tests by transforming them into a normal form of the type "[sum of integers and roots of integers]/[positive integer]"; if the difference of two quantities is zero, the normal form of the difference will just end up being '0'. For order comparisons, we can compute enough decimal places of each term in the sum to determine whether the result is positive or negative, a process which is guaranteed to terminate.
--- Numbers such as π and e can be represented by decimal expansions, and computed with in this form, as with the rational numbers. The decimal expansions can be gotten from classical equalities (*e.g.* "infinite" series, except computing only *partial* sums; a number such as e may be expressed by some finite representation of such an 'exact' formula, together with a computable function which describes how many terms of the series are required to get a correct evaluation of the first *k* decimal places.) Of course, what you can do finitistically with these representations is limited in the same way as described above with the rationals; specifically, you cannot always test equality.
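As a concrete sketch of that last point (Python, my illustration rather than the answer's): the finite data for e is the series Σ 1/n! together with the computable tail bound 2/(N+1)!, which lets a provably terminating algorithm certify each decimal digit.

```python
from fractions import Fraction
from math import factorial

def e_digit(k: int) -> int:
    """k-th decimal digit of e (k >= 1), computed finitistically:
    sum terms of e = sum 1/n! until the tail bound 2/(N+1)! pins the
    digit down, then read it off with exact integer arithmetic."""
    N = 1
    while True:
        s = sum(Fraction(1, factorial(n)) for n in range(N + 1))
        err = Fraction(2, factorial(N + 1))          # bound on the omitted tail
        lo = (s.numerator * 10**k) // s.denominator  # floor(s * 10^k)
        hi_frac = s + err
        hi = (hi_frac.numerator * 10**k) // hi_frac.denominator
        if lo == hi:                                 # digit is certified
            return lo % 10
        N += 1                                       # need more terms

# e = 2.718281828...
assert [e_digit(k) for k in range(1, 7)] == [7, 1, 8, 2, 8, 1]
```

The loop terminates because e is irrational, so e·10^k is never an integer and a small enough tail bound eventually forces `lo == hi`.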
|
There is a fragment of mathematics that is given by a set of axioms known as the [Peano axioms](http://en.wikipedia.org/wiki/Peano_axioms). Using these rules you can carry out a vast amount of mathematics relating to natural numbers. For example you can prove lots of theorems in number theory using these axioms. The Peano axioms make no reference to sets at all, whether finite or infinite. The only things that exist in this theory are naturals. You can't even form the set of all integers. You can only talk about the naturals themselves. So a vast amount of mathematics would work absolutely fine.
Even though Peano's axioms are about naturals, you can already use them to talk about finite sets. The idea is that any finite set could be encoded as a finite sequence of symbols which in turn could be represented as naturals using [Godel numbering](http://en.wikipedia.org/wiki/G%C3%B6del_numbering). So questions like "is this set a subset of that one?" could be turned into purely arithmetical statements about Godel numbers.
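The encoding idea can be made concrete with one standard choice (the Ackermann coding of hereditarily finite sets; a sketch of mine, not part of the original answer), under which set-theoretic questions literally become arithmetic on naturals:

```python
def encode(s) -> int:
    """Ackermann coding of a hereditarily finite set as a natural number:
    {a, b, ...} maps to 2^encode(a) + 2^encode(b) + ...
    so {} -> 0, {{}} -> 1, {{}, {{}}} -> 3, and so on."""
    return sum(2 ** encode(x) for x in s)

def is_subset(m: int, n: int) -> bool:
    """'Is the set coded by m a subset of the set coded by n?'
    becomes a purely arithmetical (here: bitwise) question."""
    return m & n == m

empty = frozenset()
one = frozenset({empty})        # {{}}, the von Neumann ordinal 1
two = frozenset({empty, one})   # {{}, {{}}}, the ordinal 2

assert encode(empty) == 0
assert encode(one) == 1
assert encode(two) == 3
assert is_subset(encode(one), encode(two))
```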
So I'm pretty sure that declaring that there is no infinite set would make little difference to people working within the system defined by Peano's axioms. We'd still have all of the natural numbers to work with, we just wouldn't be able to assemble them into a single entity, the set of all natural numbers.
On the other hand, there are theorems that make essential use of an infinite set. Like [Goodstein's theorem](http://en.wikipedia.org/wiki/Goodstein%27s_theorem). Without infinite sets (or a substitute of some sort) it would be impossible to prove this result.
So the overall result would be, I think, that you could still do lots of mathematics fine. The mathematics you could do wouldn't be all that weird. And you'd simply be depriving yourself of a useful proof technique.
By the way, you'd still be able to say many things about real numbers. A real number can be thought of as a Cauchy sequence. A [Cauchy sequence](http://en.wikipedia.org/wiki/Cauchy_sequence) is a certain type of sequence of rational numbers. So many statements about real numbers, when unpacked, are really statements about rationals, and hence naturals, but in disguise.
Update: Uncovering precisely what parts of mathematics you need in order to prove things is a field known as [reverse mathematics](http://en.wikipedia.org/wiki/Reverse_mathematics). Hilbert and other mathematicians were interested in trying to prove as much mathematics as possible using finite methods. Although it was ultimately shown that you can't carry out all mathematics using finite methods, it's surprising how much you can. [Here](http://www.andrew.cmu.edu/user/avigad/Papers/elementary.pdf)'s a paper that talks about a system called EA which has no infinite sets. Amazingly we can use results from [analytic number theory](http://en.wikipedia.org/wiki/Analytic_number_theory) in EA. This is because propositions about analytic functions can be interpreted as statements about natural numbers.
|
501 |
An extreme form of constructivism is called *finitism*. In this form, unlike the standard axiom system, infinite sets are not allowed. There are important mathematicians, such as Kronecker, who supported such a system. I can see that the natural numbers and rational numbers can easily be defined in a finitist system, by easy adaptations of the standard definitions. But in order to do any significant mathematics, we need definitions for the irrational numbers that one is likely to encounter in practice, such as $e$ or $\sqrt{2}$. In the standard constructions, real numbers are defined as Dedekind cuts or Cauchy sequences, which are actually sets of infinite cardinality, so they are of no use here. My question is: how would a real number like these be defined in a finitist axiom system? (Of course, we have no hope of constructing the entire set of real numbers, since that set is uncountably infinite.)
After doing a little research I found a constructivist definition on Wikipedia <http://en.wikipedia.org/wiki/Constructivism_(mathematics)#Example_from_real_analysis>, but we need a finitist definition of a function for this definition to work (because in the standard system, a function over the set of natural numbers is actually an infinite set).
So my question boils down to this: How can we define a function f over the natural numbers in a finitist axiom system?
*Original version of this question, [which had been closed during private beta](http://meta.math.stackexchange.com/questions/172/why-did-you-close-my-question-if-all-sets-were-finite), is as follows:*
>
> **If all sets were finite, how would mathematics be like?**
>
>
> If we replace the axiom that 'there
> exists an infinite set' with 'all sets
> are finite', how would mathematics be
> like? My guess is that, all the theory
> that has practical importance would
> still show up, but everything would be
> very very unreadable for humans. Is
> that true?
>
>
> We would have the natural numbers,
> although the class of all natural
> numbers would not be a set. In the
> same sense, we could have the rational
> numbers. But could we have the real
> numbers? Can the standard
> constructions be adapted to this
> setting?
>
>
>
|
2010/07/22
|
['https://math.stackexchange.com/questions/501', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/145/']
|
Set theory with all sets finite has been studied, is a familiar theory in disguise, and is enough for most/all concrete real analysis.
Specifically, Zermelo-Fraenkel set theory with the Axiom of Infinity replaced by its negation (informally, "there is no infinite set") is equivalent to first-order Peano Arithmetic. Call this system *finite ZF*, the theory of hereditarily finite sets. Then under the Goedel arithmetic encoding of finite sets, Peano Arithmetic can prove all the theorems of Finite ZF, and under any of the standard constructions of integers from finite sets, Finite ZF proves all the theorems of Peano Arithmetic.
The implication is that theorems unprovable in PA involve intrinsically infinitary reasoning. Notably, finite ZF was used as an equivalent of PA in the Paris-Harrington paper "A Mathematical Incompleteness in Peano Arithmetic" which proved that their modification of the finite Ramsey theorem can't be proved in PA.
Real numbers and infinite sequences are not directly objects of the finite ZF universe, but there is a clear sense in which real (and complex, and functional) analysis can be performed in finite ZF or in PA. One can make statements about $\pi$ or any other explicitly defined real number, as theorems about a specific sequence of rational approximations ($\forall n P(n)$) and these can be formulated and proved using a theory of finite sets. PA can perform very complicated induction proofs, i.e., transfinite induction below $\epsilon\_0$. In practice this means any concrete real number calculation in ordinary mathematics. For the example of the prime number theorem, using complex analysis and the Riemann zeta function, see Gaisi Takeuti's *Two Applications of Logic to Mathematics*. More discussion of this in a MO thread and my posting there:
<https://mathoverflow.net/questions/31846/is-the-riemann-hypothesis-equivalent-to-a-pi-1-sentence>
<https://mathoverflow.net/questions/31846/is-the-riemann-hypothesis-equivalent-to-a-pi-1-sentence/31942#31942>
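To illustrate what a "$\forall n P(n)$" statement about an explicitly defined real looks like (a small Python sketch of mine, not from Takeuti): each instance $P(n)$ below says that the $n$-digit truncation of $\sqrt{2}$ brackets 2 correctly, and each instance is a finite check on integers; the real number never appears as a completed object.

```python
from math import isqrt

def P(n: int) -> bool:
    """One instance of a Pi_1-style statement about sqrt(2): the n-digit
    truncation a_n = floor(sqrt(2) * 10^n) / 10^n satisfies
    a_n^2 <= 2 < (a_n + 10^-n)^2, verified with integer arithmetic only."""
    a = isqrt(2 * 10 ** (2 * n))      # floor(sqrt(2) * 10^n) as an integer
    m = 2 * 10 ** (2 * n)
    return a * a <= m < (a + 1) * (a + 1)

# every instance is finitely checkable; the universal claim is the theorem
assert all(P(n) for n in range(50))
```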
Proof theory in general and reverse mathematics in particular contain analyses of the logical strength of various theorems in mathematics (when made suitably concrete as statements about sequences of integers), and from this point of view PA, and its avatar finite set theory, are very powerful systems.
|
There is a fragment of mathematics that is given by a set of axioms known as the [Peano axioms](http://en.wikipedia.org/wiki/Peano_axioms). Using these rules you can carry out a vast amount of mathematics relating to natural numbers. For example you can prove lots of theorems in number theory using these axioms. The Peano axioms make no reference to sets at all, whether finite or infinite. The only things that exist in this theory are naturals. You can't even form the set of all integers. You can only talk about the naturals themselves. So a vast amount of mathematics would work absolutely fine.
Even though Peano's axioms are about naturals, you can already use them to talk about finite sets. The idea is that any finite set could be encoded as a finite sequence of symbols which in turn could be represented as naturals using [Godel numbering](http://en.wikipedia.org/wiki/G%C3%B6del_numbering). So questions like "is this set a subset of that one?" could be turned into purely arithmetical statements about Godel numbers.
So I'm pretty sure that declaring that there is no infinite set would make little difference to people working within the system defined by Peano's axioms. We'd still have all of the natural numbers to work with, we just wouldn't be able to assemble them into a single entity, the set of all natural numbers.
On the other hand, there are theorems that make essential use of an infinite set. Like [Goodstein's theorem](http://en.wikipedia.org/wiki/Goodstein%27s_theorem). Without infinite sets (or a substitute of some sort) it would be impossible to prove this result.
So the overall result would be, I think, that you could still do lots of mathematics fine. The mathematics you could do wouldn't be all that weird. And you'd simply be depriving yourself of a useful proof technique.
By the way, you'd still be able to say many things about real numbers. A real number can be thought of as a Cauchy sequence. A [Cauchy sequence](http://en.wikipedia.org/wiki/Cauchy_sequence) is a certain type of sequence of rational numbers. So many statements about real numbers, when unpacked, are really statements about rationals, and hence naturals, but in disguise.
Update: Uncovering precisely what parts of mathematics you need in order to prove things is a field known as [reverse mathematics](http://en.wikipedia.org/wiki/Reverse_mathematics). Hilbert and other mathematicians were interested in trying to prove as much mathematics as possible using finite methods. Although it was ultimately shown that you can't carry out all mathematics using finite methods, it's surprising how much you can. [Here](http://www.andrew.cmu.edu/user/avigad/Papers/elementary.pdf)'s a paper that talks about a system called EA which has no infinite sets. Amazingly we can use results from [analytic number theory](http://en.wikipedia.org/wiki/Analytic_number_theory) in EA. This is because propositions about analytic functions can be interpreted as statements about natural numbers.
|
501 |
An extreme form of constructivism is called *finitism*. In this form, unlike the standard axiom system, infinite sets are not allowed. There are important mathematicians, such as Kronecker, who supported such a system. I can see that the natural numbers and rational numbers can easily be defined in a finitist system, by easy adaptations of the standard definitions. But in order to do any significant mathematics, we need definitions for the irrational numbers that one is likely to encounter in practice, such as $e$ or $\sqrt{2}$. In the standard constructions, real numbers are defined as Dedekind cuts or Cauchy sequences, which are actually sets of infinite cardinality, so they are of no use here. My question is: how would a real number like these be defined in a finitist axiom system? (Of course, we have no hope of constructing the entire set of real numbers, since that set is uncountably infinite.)
After doing a little research I found a constructivist definition on Wikipedia <http://en.wikipedia.org/wiki/Constructivism_(mathematics)#Example_from_real_analysis>, but we need a finitist definition of a function for this definition to work (because in the standard system, a function over the set of natural numbers is actually an infinite set).
So my question boils down to this: How can we define a function f over the natural numbers in a finitist axiom system?
*Original version of this question, [which had been closed during private beta](http://meta.math.stackexchange.com/questions/172/why-did-you-close-my-question-if-all-sets-were-finite), is as follows:*
>
> **If all sets were finite, how would mathematics be like?**
>
>
> If we replace the axiom that 'there
> exists an infinite set' with 'all sets
> are finite', how would mathematics be
> like? My guess is that, all the theory
> that has practical importance would
> still show up, but everything would be
> very very unreadable for humans. Is
> that true?
>
>
> We would have the natural numbers,
> although the class of all natural
> numbers would not be a set. In the
> same sense, we could have the rational
> numbers. But could we have the real
> numbers? Can the standard
> constructions be adapted to this
> setting?
>
>
>
|
2010/07/22
|
['https://math.stackexchange.com/questions/501', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/145/']
|
There is a fragment of mathematics that is given by a set of axioms known as the [Peano axioms](http://en.wikipedia.org/wiki/Peano_axioms). Using these rules you can carry out a vast amount of mathematics relating to natural numbers. For example you can prove lots of theorems in number theory using these axioms. The Peano axioms make no reference to sets at all, whether finite or infinite. The only things that exist in this theory are naturals. You can't even form the set of all integers. You can only talk about the naturals themselves. So a vast amount of mathematics would work absolutely fine.
Even though Peano's axioms are about naturals, you can already use them to talk about finite sets. The idea is that any finite set could be encoded as a finite sequence of symbols which in turn could be represented as naturals using [Godel numbering](http://en.wikipedia.org/wiki/G%C3%B6del_numbering). So questions like "is this set a subset of that one?" could be turned into purely arithmetical statements about Godel numbers.
So I'm pretty sure that declaring that there is no infinite set would make little difference to people working within the system defined by Peano's axioms. We'd still have all of the natural numbers to work with, we just wouldn't be able to assemble them into a single entity, the set of all natural numbers.
On the other hand, there are theorems that make essential use of an infinite set. Like [Goodstein's theorem](http://en.wikipedia.org/wiki/Goodstein%27s_theorem). Without infinite sets (or a substitute of some sort) it would be impossible to prove this result.
So the overall result would be, I think, that you could still do lots of mathematics fine. The mathematics you could do wouldn't be all that weird. And you'd simply be depriving yourself of a useful proof technique.
By the way, you'd still be able to say many things about real numbers. A real number can be thought of as a Cauchy sequence. A [Cauchy sequence](http://en.wikipedia.org/wiki/Cauchy_sequence) is a certain type of sequence of rational numbers. So many statements about real numbers, when unpacked, are really statements about rationals, and hence naturals, but in disguise.
Update: Uncovering precisely what parts of mathematics you need in order to prove things is a field known as [reverse mathematics](http://en.wikipedia.org/wiki/Reverse_mathematics). Hilbert and other mathematicians were interested in trying to prove as much mathematics as possible using finite methods. Although it was ultimately shown that you can't carry out all mathematics using finite methods, it's surprising how much you can. [Here](http://www.andrew.cmu.edu/user/avigad/Papers/elementary.pdf)'s a paper that talks about a system called EA which has no infinite sets. Amazingly we can use results from [analytic number theory](http://en.wikipedia.org/wiki/Analytic_number_theory) in EA. This is because propositions about analytic functions can be interpreted as statements about natural numbers.
|
Finitism still allows you to use infinitary definitions of real numbers, because a finitist is content with finite *proofs* even if the concepts mentioned by those proofs would seem to require infinite sets. For example, a finitist would still recognize that "ZFC proves that every bounded nonempty set of reals has a least upper bound" even if the finitist does not accept that infinite sets exist.
Proofs in various infinitary systems are of interest to finitists because of conservation results. In this setting, a conservation result would show that if a sentence about the natural numbers of a certain form is provable in some infinitary system, the sentence is actually provable in a finitistic system. For example, there are finitistic proofs that if any $\Pi^0\_2$ sentence about the natural numbers is provable in the infinitary system $\text{WKL}\_0$ of second order arithmetic, that sentence is also provable in the finitistic system $\text{PRA}$ of primitive-recursive arithmetic.
Many consistency results are proven finitistically. For example, there is a finitistic proof that if ZF set theory without the axiom of choice is consistent, then ZFC set theory with the axiom of choice is also consistent. This proof studies infinitary systems of set theory, but the objects actually handled are finite formal proofs rather than infinite sets.
|
501 |
An extreme form of constructivism is called *finitism*. In this form, unlike the standard axiom system, infinite sets are not allowed. There are important mathematicians, such as Kronecker, who supported such a system. I can see that the natural numbers and rational numbers can easily be defined in a finitist system, by easy adaptations of the standard definitions. But in order to do any significant mathematics, we need definitions for the irrational numbers that one is likely to encounter in practice, such as $e$ or $\sqrt{2}$. In the standard constructions, real numbers are defined as Dedekind cuts or Cauchy sequences, which are actually sets of infinite cardinality, so they are of no use here. My question is: how would a real number like these be defined in a finitist axiom system? (Of course, we have no hope of constructing the entire set of real numbers, since that set is uncountably infinite.)
After doing a little research I found a constructivist definition on Wikipedia <http://en.wikipedia.org/wiki/Constructivism_(mathematics)#Example_from_real_analysis>, but we need a finitist definition of a function for this definition to work (because in the standard system, a function over the set of natural numbers is actually an infinite set).
So my question boils down to this: How can we define a function f over the natural numbers in a finitist axiom system?
*Original version of this question, [which had been closed during private beta](http://meta.math.stackexchange.com/questions/172/why-did-you-close-my-question-if-all-sets-were-finite), is as follows:*
>
> **If all sets were finite, how would mathematics be like?**
>
>
> If we replace the axiom that 'there
> exists an infinite set' with 'all sets
> are finite', how would mathematics be
> like? My guess is that, all the theory
> that has practical importance would
> still show up, but everything would be
> very very unreadable for humans. Is
> that true?
>
>
> We would have the natural numbers,
> although the class of all natural
> numbers would not be a set. In the
> same sense, we could have the rational
> numbers. But could we have the real
> numbers? Can the standard
> constructions be adapted to this
> setting?
>
>
>
|
2010/07/22
|
['https://math.stackexchange.com/questions/501', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/145/']
|
Set theory with all sets finite has been studied, is a familiar theory in disguise, and is enough for most/all concrete real analysis.
Specifically, Zermelo-Fraenkel set theory with the Axiom of Infinity replaced by its negation (informally, "there is no infinite set") is equivalent to first-order Peano Arithmetic. Call this system *finite ZF*, the theory of hereditarily finite sets. Then under the Goedel arithmetic encoding of finite sets, Peano Arithmetic can prove all the theorems of Finite ZF, and under any of the standard constructions of integers from finite sets, Finite ZF proves all the theorems of Peano Arithmetic.
The implication is that theorems unprovable in PA involve intrinsically infinitary reasoning. Notably, finite ZF was used as an equivalent of PA in the Paris-Harrington paper "A Mathematical Incompleteness in Peano Arithmetic" which proved that their modification of the finite Ramsey theorem can't be proved in PA.
Real numbers and infinite sequences are not directly objects of the finite ZF universe, but there is a clear sense in which real (and complex, and functional) analysis can be performed in finite ZF or in PA. One can make statements about $\pi$ or any other explicitly defined real number, as theorems about a specific sequence of rational approximations ($\forall n P(n)$) and these can be formulated and proved using a theory of finite sets. PA can perform very complicated induction proofs, i.e., transfinite induction below $\epsilon\_0$. In practice this means any concrete real number calculation in ordinary mathematics. For the example of the prime number theorem, using complex analysis and the Riemann zeta function, see Gaisi Takeuti's *Two Applications of Logic to Mathematics*. More discussion of this in a MO thread and my posting there:
<https://mathoverflow.net/questions/31846/is-the-riemann-hypothesis-equivalent-to-a-pi-1-sentence>
<https://mathoverflow.net/questions/31846/is-the-riemann-hypothesis-equivalent-to-a-pi-1-sentence/31942#31942>
Proof theory in general and reverse mathematics in particular contain analyses of the logical strength of various theorems in mathematics (when made suitably concrete as statements about sequences of integers), and from this point of view PA, and its avatar finite set theory, are very powerful systems.
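As a concrete aside (my own illustration, not part of the sources cited above): the Goedel-style arithmetic encoding of hereditarily finite sets is the Ackermann coding, $\mathrm{ack}(s) = \sum_{x \in s} 2^{\mathrm{ack}(x)}$, and it fits in a few lines. The function names below are invented for the sketch.

```python
# Sketch of the Ackermann coding of hereditarily finite sets (names mine).
# A set is modeled as a frozenset of previously built sets.

def ack(s):
    """Natural-number code of a hereditarily finite set:
    ack(s) = sum of 2**ack(x) over the elements x of s."""
    return sum(2 ** ack(x) for x in s)

def unack(n):
    """Inverse direction: read the binary digits of n as a set of codes."""
    elems, i = set(), 0
    while n:
        if n & 1:
            elems.add(unack(i))
        n >>= 1
        i += 1
    return frozenset(elems)

# The first von Neumann naturals as hereditarily finite sets:
empty = frozenset()            # 0 is coded by 0
one = frozenset({empty})       # {0} is coded by 2**0 = 1
two = frozenset({empty, one})  # {0, 1} is coded by 2**0 + 2**1 = 3
```

The round trip ack/unack being a bijection is what lets Finite ZF and PA interpret each other's theorems.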
|
Disclaimer: I am not a finitist --- but as a theoretical computer scientist, I have a certain sympathy for finitism. The following is the result of me openly speculating what an "official" finitist response would be, based on grounds of computability.
The short version is this: **(a)** It depends on what you mean by a 'number', but there's a reasonable approach which makes it reasonable to talk about finitistic approaches to real numbers; **(b)** What you can do finitistically with numbers, real, rational, or otherwise, depends on how you represent those numbers.
1. **What is a number?** Is −1 a number? Is sqrt(2) a number? Is *i* = sqrt(−1) a number? What about quaternions? --- I'm going to completely ignore this question and suggest a pragmatic, formalist approach: a "number" is an element of a "number system"; and a "number system" is a collection of expressions which you can transform or describe properties of in some given ways (*i.e.* certain given arithmetic operations) and test certain properties (*e.g.* tests for equality, ordering, *etc.*) These expressions don't have to have a meaningful interpretation in terms of quantities or magnitudes as far as I'm concerned; *you* get to choose which operations/tests you care about.
A finitist would demand that any operation or property be described by an algorithm which provably terminates. That is, it isn't sufficient to prove existence or universality *a la* classical logic; existence proofs must be finite constructions --- of a "number", that is a representation in some "number system" --- and universality must be shown by a computable test.
2. **Representation of numbers:** How we represent the numbers matters. A finitist should have no qualms about rational numbers: ratios which ultimately boil down to ordered pairs. Despite this, the decimal expansions of these numbers may be infinitely long: 1/3 = 0.33333... what's going on here?
Well, the issue is that we have two representations for the same number, one of which is finite in length (and allows us to perform computations) and another which is not finite in length. However, the decimal expansion can be easily expressed as a function: for all *k*, the *k*th decimal place after the point is '3'; so you can still characterize it precisely in terms of a finite rule.
What's important is that there exists **some** finite way to express the number. But the way in which we choose to *define* the number (as part of a number system, using some way of expressing numbers) will affect what we can do with it...
there is now a question about what operations we can perform.
--- For rationals-as-ratios, we can add/subtract, multiply/divide, and test order/equality. So this representation is a very good one for rationals.
--- For rationals-as-decimal-expansions, we can still add/subtract and multiply/divide, by defining a new digit-function which describes how to compute the result from the decimal expansions; these will be messier than the representations as ratios. Order comparisons are still possible for *distinct* rationals; but you cannot test equality for arbitrary decimal-expansion representations, because you cannot necessarily verify that all decimal places of the difference |*a*−*b*| are 0. The best you can do in general is testing "equality up to precision ε", wherein you show that |*a*−*b*| < ε, for some desired precision ε. This is a number system which informally we may say has a certain amount of "vagueness"; but it is in principle completely specified --- there's nothing wrong with this in principle. It's just a matter of how you wish to define your system of arithmetic.
3. **What representation of reals?** Obviously, because there are uncountably many real numbers, you cannot represent all real numbers even if you *aren't* a finitist. But we can still express some of them. The same is true if you're a finitist: you just don't have access to as many, and/or you're restricted in what you can do with them, according to what your representation can handle.
--- Algebraic irrational numbers such as sqrt(2) can be expressed simply like that: "sqrt(2)". There's nothing wrong with the expressions "sqrt(2) − 1" or "[1 + sqrt(5)]/2" --- they express quantities perfectly well. You can perform arithmetic operations on them perfectly well; and you can also perform ordering/equality tests by transforming them into a normal form of the type "[sum of integers and roots of integers]/[positive integer]"; if the difference of two quantities is zero, the normal form of the difference will just end up being '0'. For order comparisons, we can compute enough decimal places of each term in the sum to determine whether the result is positive or negative, a process which is guaranteed to terminate.
--- Numbers such as π and e can be represented by decimal expansions, and computed with in this form, as with the rational numbers. The decimal expansions can be gotten from classical equalities (*e.g.* "infinite" series, except computing only *partial* sums; a number such as e may be expressed by some finite representation of such an 'exact' formula, together with a computable function which describes how many terms of the series are required to get a correct evaluation of the first *k* decimal places.) Of course, what you can do finitistically with these representations is limited in the same way as described above with the rationals; specifically, you cannot always test equality.
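A minimal sketch of the digit-function view described above (the names and the "equality up to precision ε" test are my own rendering of the idea; exact rationals stand in for the finite computations):

```python
from fractions import Fraction

# Sketch (names mine): a real number given as a computable digit rule,
# plus the "equality up to precision eps" test described above.

def digits_of_third(k):
    """k-th decimal place of 1/3 (k >= 1) -- a finite rule, not an infinite object."""
    return 3

def approx(digit_fn, k):
    """The rational 0.d1 d2 ... dk built from the first k decimal places."""
    return sum(Fraction(digit_fn(i), 10 ** i) for i in range(1, k + 1))

def close_to(digit_fn, q, eps):
    """Certify |x - q| < eps (up to the truncation error), where x is the
    real described by digit_fn.  Only finitely many digits are consulted."""
    k = 1
    while Fraction(1, 10 ** k) >= eps:
        k += 1
    return abs(approx(digit_fn, k) - q) < eps + Fraction(1, 10 ** k)
```

Note that close_to can only ever certify agreement up to the chosen ε, which is exactly the "vagueness" described above.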
|
501 |
An extreme form of constructivism is called *finitism*. In this form, unlike the standard axiom system, infinite sets are not allowed. There are important mathematicians, such as Kronecker, who supported such a system. I can see that the natural numbers and rational numbers can easily be defined in a finitist system, by easy adaptations of the standard definitions. But in order to do any significant mathematics, we need to have definitions for the irrational numbers that one is likely to encounter in practice, such as $e$ or $\sqrt{2}$. In the standard constructions, real numbers are defined as Dedekind cuts or Cauchy sequences, which are actually sets of infinite cardinality, so they are of no use here. My question is, how would a real number like those be defined in a finitist axiom system? (Of course we have no hope of constructing the entire set of real numbers, since that set is uncountably infinite.)
After doing a little research I found a constructivist definition in Wikipedia <http://en.wikipedia.org/wiki/Constructivism_(mathematics)#Example_from_real_analysis> , but we need a finitist definition of a function for this definition to work (Because in the standard system, a function over the set of natural numbers is actually an infinite set).
So my question boils down to this: How can we define a function f over the natural numbers in a finitist axiom system?
*Original version of this question, [which had been closed during private beta](http://meta.math.stackexchange.com/questions/172/why-did-you-close-my-question-if-all-sets-were-finite), is as follows:*
>
> **If all sets were finite, what would mathematics be like?**
>
>
> If we replace the axiom that 'there
> exists an infinite set' with 'all sets
> are finite', what would mathematics be
> like? My guess is that all the theory
> that has practical importance would
> still show up, but everything would be
> very very unreadable for humans. Is
> that true?
>
>
> We would have the natural numbers,
> although the class of all natural
> numbers would not be a set. In the
> same sense, we could have the rational
> numbers. But could we have the real
> numbers? Can the standard
> constructions be adapted to this
> setting?
>
>
>
|
2010/07/22
|
['https://math.stackexchange.com/questions/501', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/145/']
|
Disclaimer: I am not a finitist --- but as a theoretical computer scientist, I have a certain sympathy for finitism. The following is the result of me openly speculating what an "official" finitist response would be, based on grounds of computability.
The short version is this: **(a)** It depends on what you mean by a 'number', but there's a reasonable approach which makes it reasonable to talk about finitistic approaches to real numbers; **(b)** What you can do finitistically with numbers, real, rational, or otherwise, depends on how you represent those numbers.
1. **What is a number?** Is −1 a number? Is sqrt(2) a number? Is *i* = sqrt(−1) a number? What about quaternions? --- I'm going to completely ignore this question and suggest a pragmatic, formalist approach: a "number" is an element of a "number system"; and a "number system" is a collection of expressions which you can transform or describe properties of in some given ways (*i.e.* certain given arithmetic operations) and test certain properties (*e.g.* tests for equality, ordering, *etc.*) These expressions don't have to have a meaningful interpretation in terms of quantities or magnitudes as far as I'm concerned; *you* get to choose which operations/tests you care about.
A finitist would demand that any operation or property be described by an algorithm which provably terminates. That is, it isn't sufficient to prove existence or universality *a la* classical logic; existence proofs must be finite constructions --- of a "number", that is a representation in some "number system" --- and universality must be shown by a computable test.
2. **Representation of numbers:** How we represent the numbers matters. A finitist should have no qualms about rational numbers: ratios which ultimately boil down to ordered pairs. Despite this, the decimal expansions of these numbers may be infinitely long: 1/3 = 0.33333... what's going on here?
Well, the issue is that we have two representations for the same number, one of which is finite in length (and allows us to perform computations) and another which is not finite in length. However, the decimal expansion can be easily expressed as a function: for all *k*, the *k*th decimal place after the point is '3'; so you can still characterize it precisely in terms of a finite rule.
What's important is that there exists **some** finite way to express the number. But the way in which we choose to *define* the number (as part of a number system, using some way of expressing numbers) will affect what we can do with it...
there is now a question about what operations we can perform.
--- For rationals-as-ratios, we can add/subtract, multiply/divide, and test order/equality. So this representation is a very good one for rationals.
--- For rationals-as-decimal-expansions, we can still add/subtract and multiply/divide, by defining a new digit-function which describes how to compute the result from the decimal expansions; these will be messier than the representations as ratios. Order comparisons are still possible for *distinct* rationals; but you cannot test equality for arbitrary decimal-expansion representations, because you cannot necessarily verify that all decimal places of the difference |*a*−*b*| are 0. The best you can do in general is testing "equality up to precision ε", wherein you show that |*a*−*b*| < ε, for some desired precision ε. This is a number system which informally we may say has a certain amount of "vagueness"; but it is in principle completely specified --- there's nothing wrong with this in principle. It's just a matter of how you wish to define your system of arithmetic.
3. **What representation of reals?** Obviously, because there are uncountably many real numbers, you cannot represent all real numbers even if you *aren't* a finitist. But we can still express some of them. The same is true if you're a finitist: you just don't have access to as many, and/or you're restricted in what you can do with them, according to what your representation can handle.
--- Algebraic irrational numbers such as sqrt(2) can be expressed simply like that: "sqrt(2)". There's nothing wrong with the expressions "sqrt(2) − 1" or "[1 + sqrt(5)]/2" --- they express quantities perfectly well. You can perform arithmetic operations on them perfectly well; and you can also perform ordering/equality tests by transforming them into a normal form of the type "[sum of integers and roots of integers]/[positive integer]"; if the difference of two quantities is zero, the normal form of the difference will just end up being '0'. For order comparisons, we can compute enough decimal places of each term in the sum to determine whether the result is positive or negative, a process which is guaranteed to terminate.
--- Numbers such as π and e can be represented by decimal expansions, and computed with in this form, as with the rational numbers. The decimal expansions can be gotten from classical equalities (*e.g.* "infinite" series, except computing only *partial* sums; a number such as e may be expressed by some finite representation of such an 'exact' formula, together with a computable function which describes how many terms of the series are required to get a correct evaluation of the first *k* decimal places.) Of course, what you can do finitistically with these representations is limited in the same way as described above with the rationals; specifically, you cannot always test equality.
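To make the terminating order test for algebraic numbers concrete, here is a sketch under my own naming, covering only the simplest normal forms sqrt(n) and (1 + sqrt(5))/2: squaring turns each comparison into a finite integer comparison, so no decimal expansion is ever consulted.

```python
from fractions import Fraction

# Sketch (names mine) of the terminating order test: comparing sqrt(n) or
# (1 + sqrt(5))/2 against a rational reduces, by squaring, to comparing
# integers -- the test always halts.

def sqrt_gt(n, q):
    """Is sqrt(n) > q, for a positive rational q?  Exact:
    sqrt(n) > p/d  iff  n * d**2 > p**2."""
    return n * q.denominator ** 2 > q.numerator ** 2

def golden_gt(q):
    """Is (1 + sqrt(5))/2 > q, assuming 2q - 1 > 0?
    Equivalent to sqrt(5) > 2q - 1, again decided by squaring."""
    r = 2 * q - 1
    return 5 * r.denominator ** 2 > r.numerator ** 2
```

For example, sqrt_gt(2, Fraction(7, 5)) confirms sqrt(2) > 1.4 purely by comparing the integers 50 and 49.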
|
Finitism still allows you to use infinitary definitions of real numbers, because a finitist is content with finite *proofs* even if the concepts mentioned by those proofs would seem to require infinite sets. For example, a finitist would still recognize that "ZFC proves that every bounded nonempty set of reals has a least upper bound" even if the finitist does not accept that infinite sets exist.
Proofs in various infinitary systems are of interest to finitists because of conservation results. In this setting, a conservation result would show that if a sentence about the natural numbers of a certain form is provable in some infinitary system, the sentence is actually provable in a finitistic system. For example, there are finitistic proofs that if any $\Pi^0\_2$ sentence about the natural numbers is provable in the infinitary system $\text{WKL}\_0$ of second order arithmetic, that sentence is also provable in the finitistic system $\text{PRA}$ of primitive-recursive arithmetic.
Many consistency results are proven finitistically. For example, there is a finitistic proof that if ZF set theory without the axiom of choice is consistent, then ZFC set theory with the axiom of choice is also consistent. This proof studies infinitary systems of set theory, but the objects actually handled are finite formal proofs rather than infinite sets.
|
501 |
An extreme form of constructivism is called *finitism*. In this form, unlike the standard axiom system, infinite sets are not allowed. There are important mathematicians, such as Kronecker, who supported such a system. I can see that the natural numbers and rational numbers can easily be defined in a finitist system, by easy adaptations of the standard definitions. But in order to do any significant mathematics, we need to have definitions for the irrational numbers that one is likely to encounter in practice, such as $e$ or $\sqrt{2}$. In the standard constructions, real numbers are defined as Dedekind cuts or Cauchy sequences, which are actually sets of infinite cardinality, so they are of no use here. My question is, how would a real number like those be defined in a finitist axiom system? (Of course we have no hope of constructing the entire set of real numbers, since that set is uncountably infinite.)
After doing a little research I found a constructivist definition in Wikipedia <http://en.wikipedia.org/wiki/Constructivism_(mathematics)#Example_from_real_analysis> , but we need a finitist definition of a function for this definition to work (Because in the standard system, a function over the set of natural numbers is actually an infinite set).
So my question boils down to this: How can we define a function f over the natural numbers in a finitist axiom system?
*Original version of this question, [which had been closed during private beta](http://meta.math.stackexchange.com/questions/172/why-did-you-close-my-question-if-all-sets-were-finite), is as follows:*
>
> **If all sets were finite, what would mathematics be like?**
>
>
> If we replace the axiom that 'there
> exists an infinite set' with 'all sets
> are finite', what would mathematics be
> like? My guess is that all the theory
> that has practical importance would
> still show up, but everything would be
> very very unreadable for humans. Is
> that true?
>
>
> We would have the natural numbers,
> although the class of all natural
> numbers would not be a set. In the
> same sense, we could have the rational
> numbers. But could we have the real
> numbers? Can the standard
> constructions be adapted to this
> setting?
>
>
>
|
2010/07/22
|
['https://math.stackexchange.com/questions/501', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/145/']
|
Set theory with all sets finite has been studied, is a familiar theory in disguise, and is enough for most/all concrete real analysis.
Specifically, Zermelo-Fraenkel set theory with the Axiom of Infinity replaced by its negation (informally, "there is no infinite set") is equivalent to first-order Peano Arithmetic. Call this system *finite ZF*, the theory of hereditarily finite sets. Then under the Goedel arithmetic encoding of finite sets, Peano Arithmetic can prove all the theorems of Finite ZF, and under any of the standard constructions of integers from finite sets, Finite ZF proves all the theorems of Peano Arithmetic.
The implication is that theorems unprovable in PA involve intrinsically infinitary reasoning. Notably, finite ZF was used as an equivalent of PA in the Paris-Harrington paper "A Mathematical Incompleteness in Peano Arithmetic" which proved that their modification of the finite Ramsey theorem can't be proved in PA.
Real numbers and infinite sequences are not directly objects of the finite ZF universe, but there is a clear sense in which real (and complex, and functional) analysis can be performed in finite ZF or in PA. One can make statements about $\pi$ or any other explicitly defined real number, as theorems about a specific sequence of rational approximations ($\forall n P(n)$) and these can be formulated and proved using a theory of finite sets. PA can perform very complicated induction proofs, i.e., transfinite induction below $\epsilon\_0$. In practice this means any concrete real number calculation in ordinary mathematics. For the example of the prime number theorem, using complex analysis and the Riemann zeta function, see Gaisi Takeuti's *Two Applications of Logic to Mathematics*. More discussion of this in a MO thread and my posting there:
<https://mathoverflow.net/questions/31846/is-the-riemann-hypothesis-equivalent-to-a-pi-1-sentence>
<https://mathoverflow.net/questions/31846/is-the-riemann-hypothesis-equivalent-to-a-pi-1-sentence/31942#31942>
Proof theory in general and reverse mathematics in particular contain analyses of the logical strength of various theorems in mathematics (when made suitably concrete as statements about sequences of integers), and from this point of view PA, and its avatar finite set theory, are very powerful systems.
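As an illustration of the "theorems about a specific sequence of rational approximations" phrasing above (the series choice and names are mine): the Leibniz series yields exact rational brackets around π/4, and every quantity checked below is a finite computation.

```python
from fractions import Fraction

# Sketch (names and series choice mine): pi/4 handled finitistically as a
# rule producing exact rational brackets.  Leibniz: pi/4 = 1 - 1/3 + 1/5 - ...
# For an alternating series with decreasing terms, consecutive partial sums
# bracket the limit, and the bracket width is the first omitted term.

def leibniz(n):
    """Exact rational partial sum of the first n Leibniz terms."""
    return sum(Fraction((-1) ** i, 2 * i + 1) for i in range(n))

def bracket(n):
    """Rational (lower, upper) bounds on pi/4, of width 1/(2n + 1)."""
    a, b = leibniz(n), leibniz(n + 1)
    return (min(a, b), max(a, b))

lo, hi = bracket(1000)  # pi/4 is pinned down to within 1/2001
```

A "for all n" statement about π then becomes a claim about this sequence of brackets, each instance of which is finitely checkable.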
|
Finitism still allows you to use infinitary definitions of real numbers, because a finitist is content with finite *proofs* even if the concepts mentioned by those proofs would seem to require infinite sets. For example, a finitist would still recognize that "ZFC proves that every bounded nonempty set of reals has a least upper bound" even if the finitist does not accept that infinite sets exist.
Proofs in various infinitary systems are of interest to finitists because of conservation results. In this setting, a conservation result would show that if a sentence about the natural numbers of a certain form is provable in some infinitary system, the sentence is actually provable in a finitistic system. For example, there are finitistic proofs that if any $\Pi^0\_2$ sentence about the natural numbers is provable in the infinitary system $\text{WKL}\_0$ of second order arithmetic, that sentence is also provable in the finitistic system $\text{PRA}$ of primitive-recursive arithmetic.
Many consistency results are proven finitistically. For example, there is a finitistic proof that if ZF set theory without the axiom of choice is consistent, then ZFC set theory with the axiom of choice is also consistent. This proof studies infinitary systems of set theory, but the objects actually handled are finite formal proofs rather than infinite sets.
|
87,839 |
I have this image. I want to crop just the triangle, not its white background, for a logo I'm creating. How can I do it in Illustrator or Photoshop?[](https://i.stack.imgur.com/gLx3w.jpg)
|
2017/04/01
|
['https://graphicdesign.stackexchange.com/questions/87839', 'https://graphicdesign.stackexchange.com', 'https://graphicdesign.stackexchange.com/users/70543/']
|
If you want my opinion why your design doesn't work so well, I think it's because more of the contents lie outside the natural circle shape made by the original design - especially that bold "GAYPRIL" text.
You can see it here if I overlay a circle on both designs.
[](https://i.stack.imgur.com/r5wjD.png)
I think this could be improved if you ensure most of the design elements are within the bounds of the circle, basically to improve the composition.
Something like this perhaps
[](https://i.stack.imgur.com/1vei2.jpg)
Also you might want to rethink that slight wavy warp on the GAYPRIL text. Might be better if it was unwarped. Also the R and A are very similar - from a distance it might look like GAYPAIL. Perhaps consider changing the font.
|
You have addressed the biggest issue in your design with the changes to your fonts, and it looks so much better. Regarding your further question about the images, the hearts aren't completely working with the feel of your design. The sun, the new fonts, and the contained circular shape give the logo a flat look, while the hearts are out of sync with the rest of the image, possibly because they are overlapping, or perhaps because they're just stylistically different. I suggest that you try a star (or two stars representing a couple???) or another simplistic and flat looking image in place of the hearts.
|
87,839 |
I have this image. I want to crop just the triangle, not its white background, for a logo I'm creating. How can I do it in Illustrator or Photoshop?[](https://i.stack.imgur.com/gLx3w.jpg)
|
2017/04/01
|
['https://graphicdesign.stackexchange.com/questions/87839', 'https://graphicdesign.stackexchange.com', 'https://graphicdesign.stackexchange.com/users/70543/']
|
If you want my opinion why your design doesn't work so well, I think it's because more of the contents lie outside the natural circle shape made by the original design - especially that bold "GAYPRIL" text.
You can see it here if I overlay a circle on both designs.
[](https://i.stack.imgur.com/r5wjD.png)
I think this could be improved if you ensure most of the design elements are within the bounds of the circle, basically to improve the composition.
Something like this perhaps
[](https://i.stack.imgur.com/1vei2.jpg)
Also you might want to rethink that slight wavy warp on the GAYPRIL text. Might be better if it was unwarped. Also the R and A are very similar - from a distance it might look like GAYPAIL. Perhaps consider changing the font.
|
*I’ve already been working on this answer in my spare time and I wouldn’t want to see the effort go to waste, although it’s not strictly on-topic anymore. Nevertheless I’m posting it here in the hope that it might be useful, both for others and for the OP in the more general case. I’m addressing the first version of the Gaypril logo.*
There are few hard and fast rules in design, if any, and to my knowledge none that hasn't been successfully broken by designers. So, if the question is “How to make my logo better?”, then I’d have to ask back: “Well, what do you want to achieve? What means do you want to employ?” Leading to a back and forth exchange that quickly starts to feel like working with a client.
However, if the question is: “Here’s the model logo whose effect I tried to emulate. Here’s my own logo attempt, where, as far as I can see, I did everything like in the model. Why is it still not working?” That’s easier to answer, because I can point to the features of the Canada Road Trip logo and explain why these make the logo “work.” And I can explain how those features are in fact *not* present in the Gaypril logo. The Gaypril logo could employ very different means to achieve visual cohesion.
**Nothing that I’m presenting here is in any way mandatory! I’m discussing the options realised in the Canada Road Trip logo that you took as your model. There are also numerous other options.**
Because that’s what this is about: visual cohesion. Or in other words: composition. In design, you have to establish the rules by which your design works. I recently wrote in another context that design is about creating problems for yourself and then solving them. It’s in some ways like solving a puzzle, except that while solving you also invent the rules by which the puzzle has to be solved. That makes it actually harder!
The impression of boredom doesn't generally come from a lack of “interesting” elements. Rather it stems more often than not from the fact that those elements are put together in a manner that appears to the eye as arbitrary. Interest stems from the expectation of surprise, yes, but randomness isn't surprising. To take an extreme example: No two white noise patterns are exactly the same. And still nobody would look at them with any expectation of seeing an interesting placement of white or black pixels. If you want your work to be interesting you have to compose it in a way that the elements fit together meaningfully and provide a sense of unity in an interesting way.
This is harder to do the more heterogeneous the elements are that you want to compose. This is why beginners are often advised to keep it simple.
Please note that I’m not trying to discourage you! Or making the task seem exceptionally hard. On the contrary, I think you are on a good way and noticing that something’s not working is the first step towards improvement. The rest comes with practise and experience.
I should also point out that I don’t expect that my analysis of the Canada Road Trip is what its designer consciously had in mind, at least not everything. Many decisions in design are made intuitively. I’ll come back to that at the end.
Shape
-----
[](https://i.stack.imgur.com/M5l9O.png)
This logo has a clear circular form. Note how the mountains at the top protrude. This is because we perceive pictorial elements like these as a unity: The top line of the mountains (without the sunrise) is perceived as continuing the line of the circle. But since we perceive the mountains as a unity, our eyes average their height. If the mountains were placed more towards the centre of the circle, they would appear too low.
Note also how the “C” of “Canada” supports and continues the circle line. It, too, sticks out and has to, for the very same reason. The same is true, though to a lesser extent, for the letters “da”. In short, the form of the logo unambiguously evokes a circle and has little to counter that notion. The only element effectively going beyond the circular shape is the sunrise behind the mountains. But its lines carry much less visual emphasis, especially since the lines of the mountains are so much more dominant. And *because the logo is generally well composed* the slight disturbance of the circular shape appears as intentional and meaningful: It’s a well placed accent, so to say, the sunrise behind the logo itself.
[](https://i.stack.imgur.com/MLD9A.png)
Here the circle is fully supported only by the text at the bottom, which, in addition, is de-emphasized with a comparatively thin and wide-spaced font. The sun at the top has much more emphasis. It’s vaguely star-shaped at its upper outline (the rays), and the half circle of the sun doesn’t support the circle shape of the logo itself, but competes with it. In effect, the sun doesn’t just protrude, it counters and escapes from the attempted circularity. The same, more importantly, goes for “Gaypril”: It clearly stands out as the most important visual feature. But not only does it not support the circle, and not only does it protrude, it introduces a shape of its own: a parallelogram slightly warped to a wave form.
None of these is a “problem” per se. You *can* have a counter shape in rhythmical contrast to the main shape. You *can* have elements “escaping” from the structure you are trying to create. You are creating your own rules and you are breaking your own rules.
>
> Breaking the rules is good. **But** before you can break the rules, you have to establish the rules.
>
>
>
Here, the logo doesn’t successfully establish the circle.
Grey Values and negative space
------------------------------
[](https://i.stack.imgur.com/ixGsB.png)
Look at the grey values. (I hope “grey values” is the correct English word.) The average of black and white in the various elements is evenly distributed. If I added more blur, the logo would eventually become evenly grey. Only the bear stands out as a blacker area. As does, to a lesser extent, the mountain at the top, thus acting as a counter weight to the bear.
White space between the elements is roughly evenly distributed. Meaning: White space is neutral and not intended as a dominant element of the composition. There are works where white space (better dubbed “negative space”, then) and its shape is just as much important as the foreground elements itself, for instance in traditional Japanese woodcut printing. This is not one of those works.
[](https://i.stack.imgur.com/QSkws.png)
As you can see, all the weight is at the top, with “Gaypril” having the most weight by a large margin and the sun coming second. Note in particular how little weight the bottom text in a half circle has. This contributes largely to the logo not establishing its intended circle shape.
I have a gut feeling that maybe you intuitively were aiming for this: A circle with the upper half being darker and the lower half lighter. If that’s the case, then it’s a good instinct. If the logo were clearer, were successful in establishing the circular shape, if it would give the overall impression of being thoroughly composed, then this could work very well. As it stands, there’s too much visual noise for this to take effect.
Repeating Distances
-------------------
[](https://i.stack.imgur.com/WEGRb.png)
To some degree, this is an elaboration on white space, but with a different twist. The distance between various elements is *roughly* equal, in some cases very roughly. But that’s alright. The eye has little for comparison. If there were mostly straight lines, then more precision might have been necessary.
Note how the designer tilted the bear in order to have its back and front paw in equal distance to the circle:
[](https://i.stack.imgur.com/5LZCz.png)
(The dotted line is the line through the circle’s centre, orthogonal to the baseline of “Canada Road Trip”.) Often you have to cheat a little. And you may! What matters is the naked eye of the spectator, not what you can measure with a ruler. It’s a judgement call how much you can get away with. Here, if you draw a line from the top of the mountain through the centre of the circle, then the bear is *roughly* orthogonal to it. Maybe that contributes to the bear’s tilt looking good. Maybe it even establishes a secondary axis. I honestly don’t know. This is a matter of interpretation.
[](https://i.stack.imgur.com/GC7qw.png)
Here the various distances differ widely. Why am I measuring the radial distance? And not, for instance along the tilted vertical axis and perpendicular to it? Because of the comparison to the Canada logo, which works as a circle. Here it’s actually unclear, which leads to my second iteration over shape:
Shape, the second
-----------------
[](https://i.stack.imgur.com/RHeRb.png)
The Gaypril logo hints at a rectangular shape inside the attempted circle. Again I have a gut feeling that this might have been what you wanted on an intuitive level. And, again, a decision like that could very well work, if it were actually both clear by itself and not countered by conflicting visual clues.
Proportions
-----------
[](https://i.stack.imgur.com/BJOX3.png)
There’s a rhythm to the Canada logo: The heights measured along the baseline of “Canada Road Trip” are *roughly* repeating. I have the golden ratio proportions overlaid in red and blue. Again, there are no straight lines, so a rough correspondence is more than enough to establish a sense of rhythm to the naked eye.
The fact that it’s *roughly* the golden ratio matters less than the fact that it’s a repeating proportion. For instance, 1:2 or 1:3 would work just as well in terms of composition, though in a sense the golden ratio is probably more “neutral” in this context.
>
> To my surprise, I learned from the internet that some people find the golden ratio dubious. There’s nothing magical about it, though: If you need a proportion that is smaller than 1:1 and larger than 1:2, yet is still distinguished enough from either to look intentional, then you cannot but end up with what for all practical purposes is a “naked eye golden ratio”.
>
>
> For the most basic purposes the golden ratio is *roughly* 2:3. Like here where there are no straight lines providing a clear reference. Or, for instance, if you just want to divide a page into two parts. In such contexts, a more precise division isn’t discernible. Finer precisions like 1:1.618 become meaningful only when building up visual relations spanning multiple golden ratio divisions, since only then the golden ratio’s unique mathematical properties might come into play. The target audience might be relevant, though: If, for instance, you design a poster for an exhibition on Mies van der Rohe or for an auction of renaissance paintings or some such, then your target audience might be more sensitive.
>
>
>
This proportion is also present on a larger scale:
[](https://i.stack.imgur.com/SR6wb.png)
Again, you don’t *have* to use repeating proportions. But in the Canada logo it’s one aspect that provides a sense of intentionality in its composition.
Practising your Sense of Composition
------------------------------------
I mentioned that most designers probably make this kind of decision intuitively, often not consciously aware of it. But there is nothing mysterious about intuition. It comes with experience and it can be educated.
One very good exercise is what you just did: Trying to recreate something that you like. As you probably know, that takes time, though, and quantity does matter somewhat. Another good way to educate your sense for composition is to *draw* works that you like. Here’s an example from my sketch book:
[](https://i.stack.imgur.com/YSoVz.jpg)
[](https://i.stack.imgur.com/r3CNZ.jpg)
As you can see, I’m not really good at drawing, but it serves my purpose. I try to make a habit of drawing one work that I regard as masterful every day. Well, in reality it’s just every other day, or every third day, but I’m trying to be more disciplined. That drawing doesn’t have to be good. More importantly, the aim is not to *copy* the original. Nothing would be gained, if you just put tracing paper on the original. The goal is to *reconstruct* the work on your paper. It’s best to think of it as “thinking with your pencil”. It’s about discovering what visual relations matter for the composition. It’s also about committing options for composition to your subconsciousness.
|
87,839 |
I have this image. I want to crop just the triangle, not its white background, for a logo I'm creating. How can I do it in Illustrator or Photoshop?[](https://i.stack.imgur.com/gLx3w.jpg)
|
2017/04/01
|
['https://graphicdesign.stackexchange.com/questions/87839', 'https://graphicdesign.stackexchange.com', 'https://graphicdesign.stackexchange.com/users/70543/']
|
*I’ve already been working on this answer in my spare time and I wouldn’t want to see the effort go to waste, although it’s not strictly on-topic, anymore. Nevertheless I’m posting it here in the hope that it might be useful, both for others and for the OP in the more general case. I’m addressing the first version of the Gaypril logo.*
There are few hard and fast rules in design, if any, and to my knowledge none that hasn't been successfully broken by designers. So, if the question is “How to make my logo better?”, then I’d have to ask back: “Well, what do you want to achieve? What means do you want to employ?” Leading to a back and forth exchange that quickly starts to feel like working with a client.
However, if the question is: “Here’s the model logo whose effect I tried to emulate. Here’s my own logo attempt, where, as far as I can see, I did everything like in the model. Why is it still not working?” That’s easier to answer, because I can point to the features of the Canada Road Trip logo and explain why these make the logo “work.” And I can explain how those features are in fact *not* present in the Gaypril logo. The Gaypril logo could employ very different means to achieve visual cohesion.
**Nothing that I’m presenting here is in any way mandatory! I’m discussing the options realised in the Canada Road Trip logo that you took as your model. There are also numerous other options.**
Because that’s what this is about: visual cohesion. Or in other words: composition. In design, you have to establish the rules by which your design works. I recently wrote in another context that design is about creating problems for yourself and then solving them. It’s in some ways like solving a puzzle, except that while solving you also invent the rules by which the puzzle has to be solved. That makes it actually harder!
The impression of boredom doesn't generally come from a lack of “interesting” elements. Rather it stems more often than not from the fact that those elements are put together in a manner that appears to the eye as arbitrary. Interest stems from the expectation of surprise, yes, but randomness isn't surprising. To take an extreme example: No two white noise patterns are exactly the same. And still nobody would look at them with any expectation to see an interesting placement of white or black pixels. If you want your work to be interesting you have to compose it in a way that the elements fit together meaningfully and provide a sense of unity in an interesting way.
This is harder to do the more heterogeneous the elements are that you want to compose. This is why beginners are often advised to keep it simple.
Please note that I’m not trying to discourage you! Or making the task seem exceptionally hard. On the contrary, I think you are on a good way and noticing that something’s not working is the first step towards improvement. The rest comes with practise and experience.
I should also point out that I don’t expect that my analysis of the Canada Road Trip is what its designer consciously had in mind, at least not everything. Many decisions in design are made intuitively. I’ll come back to that at the end.
Shape
-----
[](https://i.stack.imgur.com/M5l9O.png)
This logo has a clear circular form. Note how the mountains at the top protrude. This is because we perceive pictorial elements like these as a unity: The top line of the mountains (without the sunrise) is perceived as continuing the line of the circle. But since we perceive the mountains as a unity, our eyes average their height. If the mountains were placed more towards the centre of the circle, they would appear as too low.
Note also how the “C” of “Canada” supports and continues the circle line. It, too, sticks out and has to, for the very same reason. The same is true, though to a lesser extent, for the letters “da”. In short, the form of the logo unambiguously evokes a circle and has little to counter that notion. The only element effectively going beyond the circular shape is the sunrise behind the mountains. But its lines carry much less visual emphasis, especially since the lines of the mountains are so much more dominant. And *because the logo is generally well composed* the slight disturbance of the circular shape appears as intentional and meaningful: It’s a well placed accent, so to say, the sunrise behind the logo itself.
[](https://i.stack.imgur.com/MLD9A.png)
Here the circle is fully supported only by the text at the bottom, which, in addition, is de-emphasized with a comparatively thin and wide-spaced font. The sun at the top has much more emphasis. It's vaguely star-shaped at its upper outline (the rays), and the half circle of the sun doesn’t support the circle shape of the logo itself, but competes with it. In effect, the sun doesn’t just protrude, it counters and escapes from the attempted circularity. The same, more importantly, goes for “Gaypril”: It clearly stands out as the most important visual feature. But not only does it not support the circle, and not only does it protrude, it introduces a shape of its own: a parallelogram slightly warped to a wave form.
None of these is a “problem” per se. You *can* have a counter shape in rhythmical contrast to the main shape. You *can* have elements “escaping” from the structure you are trying to create. You are creating your own rules and you are breaking your own rules.
>
> Breaking the rules is good. **But** before you can break the rules, you have to establish the rules.
>
>
>
Here, the logo doesn’t successfully establish the circle.
Grey Values and negative space
------------------------------
[](https://i.stack.imgur.com/ixGsB.png)
Look at the grey values. (I hope “grey values” is the correct English term.) The average of black and white in the various elements is evenly distributed. If I added more blur, the logo would eventually become evenly grey. Only the bear stands out as a blacker area. As does, to a lesser extent, the mountain at the top, thus acting as a counterweight to the bear.
White space between the elements is roughly evenly distributed. Meaning: White space is neutral and not intended as a dominant element of the composition. There are works where white space (better dubbed “negative space”, then) and its shape is just as important as the foreground elements themselves, for instance in traditional Japanese woodcut printing. This is not one of those works.
[](https://i.stack.imgur.com/QSkws.png)
As you can see, all the weight is at the top, with “Gaypril” having the most weight by a large margin and the sun coming second. Note in particular how little weight the bottom text in a half circle has. This contributes largely to the logo not establishing its intended circle shape.
I have a gut feeling that maybe you intuitively were aiming for this: A circle with the upper half being darker and the lower half lighter. If that’s the case, then it’s a good instinct. If the logo were clearer, were successful in establishing the circular shape, if it gave the overall impression of being thoroughly composed, then this could work very well. As it stands, there’s too much visual noise for this to take effect.
Repeating Distances
-------------------
[](https://i.stack.imgur.com/WEGRb.png)
To some degree, this is an elaboration on white space, but with a different twist. The distance between various elements is *roughly* equal, in some cases very roughly. But that’s alright. The eye has little to compare against. If there were mostly straight lines, then more precision might have been necessary.
Note how the designer tilted the bear in order to have its back and front paw in equal distance to the circle:
[](https://i.stack.imgur.com/5LZCz.png)
(The dotted line is the line through the circle’s centre, orthogonal to the baseline of “Canada Road Trip”.) Often you have to cheat a little. And you may! What matters is the naked eye of the spectator, not what you can measure with a ruler. It’s a judgement call how much you can get away with. Here, if you draw a line from the top of the mountain through the centre of the circle, then the bear is *roughly* orthogonal to it. Maybe that contributes to the bear’s tilt looking good. Maybe it even establishes a secondary axis. I honestly don’t know. This is a matter of interpretation.
[](https://i.stack.imgur.com/GC7qw.png)
Here the various distances differ widely. Why am I measuring the radial distance? And not, for instance along the tilted vertical axis and perpendicular to it? Because of the comparison to the Canada logo, which works as a circle. Here it’s actually unclear, which leads to my second iteration over shape:
Shape, the second
-----------------
[](https://i.stack.imgur.com/RHeRb.png)
The Gaypril logo hints at a rectangular shape inside the attempted circle. Again I have a gut feeling that this might have been what you wanted on an intuitive level. And, again, a decision like that could very well work, if it were actually both clear by itself and not countered by conflicting visual clues.
Proportions
-----------
[](https://i.stack.imgur.com/BJOX3.png)
There’s a rhythm to the Canada logo: The heights measured along the baseline of “Canada Road Trip” are *roughly* repeating. I have the golden ratio proportions overlaid in red and blue. Again, there are no straight lines, so a rough correspondence is more than enough to establish a sense of rhythm to the naked eye.
The fact that it’s *roughly* the golden ratio matters less than the fact that it’s a repeating proportion. For instance, 1:2 or 1:3 would work just as well in terms of composition, though in a sense the golden ratio is probably more “neutral” in this context.
>
> To my surprise, I learned from the internet that some people find the golden ratio dubious. There’s nothing magical about it, though: If you need a proportion that is smaller than 1:1 and larger than 1:2, yet is still distinguished enough from either to look intentional, then you cannot but end up with what for all practical purposes is a “naked eye golden ratio”.
>
>
> For the most basic purposes the golden ratio is *roughly* 2:3. Like here where there are no straight lines providing a clear reference. Or, for instance, if you just want to divide a page into two parts. In such contexts, a more precise division isn’t discernible. Finer precisions like 1:1.618 become meaningful only when building up visual relations spanning multiple golden ratio divisions, since only then the golden ratio’s unique mathematical properties might come into play. The target audience might be relevant, though: If, for instance, you design a poster for an exhibition on Mies van der Rohe or for an auction of renaissance paintings or some such, then your target audience might be more sensitive.
>
>
>
This proportion is also present on a larger scale:
[](https://i.stack.imgur.com/SR6wb.png)
Again, you don’t *have* to use repeating proportions. But in the Canada logo it’s one aspect that provides a sense of intentionality in its composition.
Practising your Sense of Composition
------------------------------------
I mentioned that most designers probably make this kind of decision intuitively, often not consciously aware of it. But there is nothing mysterious about intuition. It comes with experience and it can be educated.
One very good exercise is what you just did: Trying to recreate something that you like. As you probably know, that takes time, though, and quantity does matter somewhat. Another good way to educate your sense for composition is to *draw* works that you like. Here’s an example from my sketch book:
[](https://i.stack.imgur.com/YSoVz.jpg)
[](https://i.stack.imgur.com/r3CNZ.jpg)
As you can see, I’m not really good at drawing, but it serves my purpose. I try to make a habit of drawing one work that I regard as masterful every day. Well, in reality it’s just every other day, or every third day, but I’m trying to be more disciplined. That drawing doesn’t have to be good. More importantly, the aim is not to *copy* the original. Nothing would be gained, if you just put tracing paper on the original. The goal is to *reconstruct* the work on your paper. It’s best to think of it as “thinking with your pencil”. It’s about discovering what visual relations matter for the composition. It’s also about committing options for composition to your subconsciousness.
|
You have addressed the biggest issue in your design with the changes to your fonts, and it looks so much better. Regarding your further question about the images, the hearts aren't completely working with the feel of your design. The sun, the new fonts, and the contained circular shape give the logo a flat look, while the hearts are out of sync with the rest of the image, possibly because they are overlapping, or perhaps because they're just stylistically different. I suggest that you try a star (or two stars representing a couple?) or another simplistic, flat-looking image in place of the hearts.
|
27,553,515 |
I am currently working on a project which will use Entity Framework 6.1.1 and an Oracle 11g database backend. I will be accessing tables across multiple schemas some of which have foreign key relationships across schemas as well (look-up tables, enterprise data, etc...).
Traditionally we have used synonyms as a means of exposing these cross-schema tables to a particular login. My question is... how can I map these synonyms in EF6 using code first mapping? I have no problems mapping to tables directly within a single schema, but this of course won't be sufficient since my tables cross several schemas. So far my code first mappings do not recognize synonyms.
Has anyone been able to do code-first mappings to Oracle synonyms?
|
2014/12/18
|
['https://Stackoverflow.com/questions/27553515', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2701219/']
|
To have each column be a different color, all you have to do is set the `colorByPoint` property to `true`.
Reference:
* <http://api.highcharts.com/highcharts#plotOptions.column.colorByPoint>
Alternatively you can make each column a separate series, which gives you additional levels of control.
*OTOH, in the majority of cases, having each column a separate color serves no purpose except to clutter and confuse the data, and make the user work harder cognitively to interpret the chart.*
If you want to highlight a single column for a particular reason, you can do that by adding the fillColor property to the data array:
Something like:
```
data:[2,4,5,{y:9,fillColor:'rgba(204,0,0,.75)',note:'Wow, look at this one'},4,5,6]
```
|
I finally found a way to show more than 1 color for each column:
```
var charts1 = [];
var $containers1 = $('#container1');
var datasets1 = [{
name: 'Dalias',
data: [29]
},
{
name: 'Lilas',
data: [1]
},
{
name: 'Tulipanes',
data: [15]
}];
$('#container1').highcharts({
chart: {
type: 'column',
backgroundColor: 'transparent'
},
title: {
text: 'Montos pedidos por división'
},
tooltip: {
pointFormat: '<span style="color:{series.color};" />{series.name} </span>:<b>{point.y}</b>',
useHTML: true
},
plotOptions: {
column: {
pointPadding: 0.2,
borderWidth: 0
},
series : {
cursor: 'pointer',
point: {
events: {
/*click: function() {
verDetalle("Especialidades,"+ this.series.name);
}*/
}
}
}
},
credits:{
enabled: false
},
yAxis: {
min: 0,
title: {
text: ''
}
},
xAxis: {
categories: ['División']
},
series: datasets1
});
```
|
32,201 |
I am making a simple voltage regulator. The whole idea is to just use ADC to read voltage on output side and based on the result, adjust PWM power. I am using PWM on physical pin 5. That's the same pin as the one connected to Arduino pin 10 on this image below:

This means that if I try to flash the program, ATTiny will start to put PWM power into my Arduino. I don't want that to happen. How can I flash the program safely? Can I do something to prevent it from starting?
|
2016/12/12
|
['https://arduino.stackexchange.com/questions/32201', 'https://arduino.stackexchange.com', 'https://arduino.stackexchange.com/users/2955/']
|
>
> This means that if I try to flash the program, ATTiny will start to put PWM power into my Arduino. I don't want that to happen.
>
>
>
Why do you think that is a problem? The "PWM power" cannot be any higher than the supply voltage, and that is 5V. The Arduino has no problem with you providing a 5V PWM signal to an input pin.
The only time it could be a problem is if the pin you are sending the PWM to is set to an output, in which case you could risk overloading the pin. To get around that you just need to insert a small resistor between the ATTiny's pin and the Arduino's pin. 330Ω or so should do it. Just enough to limit the current to a safe value (< 20mA) but small enough that it won't interfere with the programming communication.
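As a sanity check on the suggested resistor value, Ohm's law gives the worst-case current through the resistor when the two pins fight each other across the full 5 V supply. A tiny helper (purely illustrative, names made up) confirms the margin claimed above:

```python
def series_resistor_current_ma(supply_v: float, resistance_ohm: float) -> float:
    """Worst-case current through a series resistor when the far end
    is driven to the opposite rail (Ohm's law: I = V / R), in mA."""
    return supply_v / resistance_ohm * 1000.0

# 5 V across 330 ohms -> about 15 mA, comfortably under the 20 mA limit
current = series_resistor_current_ma(5.0, 330.0)
print(f"{current:.1f} mA")  # → 15.2 mA
```

Any value in the same ballpark works; the trade-off is only that a much larger resistor could start to degrade the programming signals.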
A well designed schematic for an Arduino ICSP programmer would have had these resistors in all the data communication lines anyway.
|
That isn't a problem, but if you **really** want to prevent the ATtiny from generating the PWM signal right after flashing the firmware, then you might add a jumper to some free µC input and put a while loop at the beginning of the program which reads that input and waits until you remove the jumper. For example, you could enable the internal pull-up on some free input and put the jumper from that pin to GND. Then in the code you can just wait for the input to become high and then proceed with the rest of the code. So you insert the jumper, flash the firmware, remove the connections between the ATtiny and the Arduino, and then remove the jumper.
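The real guard would be a few lines of Arduino C (pin set to `INPUT_PULLUP`, a `while (digitalRead(pin) == LOW);` at the top of `setup()`), but the control flow can be sketched language-neutrally. Here is a small Python simulation of the poll loop, where the `read_pin` callback stands in for the digital read on the jumpered input (all names are made up for illustration):

```python
def wait_for_jumper_removal(read_pin):
    """Block until the input reads HIGH (1), i.e. the jumper to GND has
    been removed. With the internal pull-up enabled, the pin reads LOW (0)
    while the jumper is in place. Returns the number of polls taken."""
    polls = 0
    while read_pin() == 0:  # jumper still bridging the pin to GND
        polls += 1
    return polls

# Simulate: the pin reads LOW three times, then the jumper is pulled.
readings = iter([0, 0, 0, 1])
print(wait_for_jumper_removal(lambda: next(readings)))  # → 3
```

The key point is simply that no PWM is configured until the loop exits, so nothing is driven onto the shared pins while the programmer is still connected.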
|
32,201 |
I am making a simple voltage regulator. The whole idea is to just use ADC to read voltage on output side and based on the result, adjust PWM power. I am using PWM on physical pin 5. That's the same pin as the one connected to Arduino pin 10 on this image below:

This means that if I try to flash the program, ATTiny will start to put PWM power into my Arduino. I don't want that to happen. How can I flash the program safely? Can I do something to prevent it from starting?
|
2016/12/12
|
['https://arduino.stackexchange.com/questions/32201', 'https://arduino.stackexchange.com', 'https://arduino.stackexchange.com/users/2955/']
|
>
> This means that if I try to flash the program, ATTiny will start to put PWM power into my Arduino. I don't want that to happen.
>
>
>
Why do you think that is a problem? The "PWM power" cannot be any higher than the supply voltage, and that is 5V. The Arduino has no problem with you providing a 5V PWM signal to an input pin.
The only time it could be a problem is if the pin you are sending the PWM to is set to an output, in which case you could risk overloading the pin. To get around that you just need to insert a small resistor between the ATTiny's pin and the Arduino's pin. 330Ω or so should do it. Just enough to limit the current to a safe value (< 20mA) but small enough that it won't interfere with the programming communication.
A well designed schematic for an Arduino ICSP programmer would have had these resistors in all the data communication lines anyway.
|
The Arduino-as-ISP sketch will turn all the OUTPUT pins, connected to the ATTiny, [back to **INPUT**](https://github.com/arduino/Arduino/blob/2bfe164b9a5835e8cb6e194b928538a9093be333/build/shared/examples/11.ArduinoISP/ArduinoISP/ArduinoISP.ino#L433-L439). That way nothing bad can happen, when the ATTiny drives those pins HIGH of LOW.
|
5,213,670 |
I have three files; index.php, searchbar.php and search.php
Now when I have search.php show its results on its own page it's fine, but when I try to include the search page in index.php I get nothing.
So I include searchbox.php in index.php so I have a search bar. I then search for something and include the search.php page by using $\_GET['p'] on index.php, but the search always comes up blank. If I just leave search.php as its own page and don't try to include it, then I get my results, but I'd like them to be included on the page they were searched from.
index.php
```
<?php
if (isset($_GET['p']) && $_GET['p'] != "") {
$p = $_GET['p'];
if (file_exists('include/'.$p.'.php')) {
@include ('include/'.$p.'.php');
} elseif (!file_exists('include/'.$p.'.php')) {
echo 'Page you are requesting doesn´t exist<br><br>';
}
} else {
@include ('news.php');
}
?>
```
searchbox.php
```
<div id="searchwrapper"><form action="?p=search" method="get">
<input type="text" class="searchbox" name="query" value="" id="query"/>
<input type="image" src="search.png" class="searchbox_submit" value="" ALT="Submit Form" id="submit"/>
</form>
</div>
```
search.php
```
<?php
include 'connect.php';
$searchTerms = $_GET['query'];
$query = mysql_query("SELECT * FROM misc WHERE itemname LIKE '%$searchTerms%' ORDER BY itemname ");
{
echo "<table border='1' cellpadding='2' cellspacing='0' width=608 id='misc' class='tablesorter'><thead>";
echo "<tr> <th> </th> <th>Item Name</th> <th>Desc.</th></tr></thead><tbody>";
// keeps getting the next row until there are no more to get
while($row = mysql_fetch_array( $query )) {
// Print out the contents of each row into a table
echo "<tr><td width=50>";
echo $row['image'];
echo "</td><td width=150>";
echo $row['itemname'];
echo "</td><td width=250>";
echo $row['desc'];
echo "</td></tr>";
}
echo "</tbody></table>";;
}
if (mysql_num_rows($query) == 0)
{
echo 'No Results';
}
?>
```
|
2011/03/06
|
['https://Stackoverflow.com/questions/5213670', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/647328/']
|
I think that I would probably prefer an approach like
```
In[1]:= Physics[find_, have_:{}] := Solve[
{d == vf*t - (a*t^2)/2 (* , etc *)} /. have, find]
In[2]:= Physics[d]
Out[2]= {{d -> 1/2 (-a t^2 + 2 t vf)}}
In[2]:= Physics[d, {t -> 9.7, vf -> -104.98, a -> -9.8}]
Out[2]= {{d -> -557.265}}
```
Where the `have` variables are given as a list of replacement rules.
As an aside, in these types of physics problems, a nice thing to do is define your physical constants like
```
N[g] = -9.8;
```
which produces a `NValues` for `g`. Then
```
N[tf] = 9.7;N[vf] = -104.98;
Physics[d, {t -> tf, vf -> vf, a -> g}]
%//N
```
produces
```
{{d->1/2 (-g tf^2+2 tf vf)}}
{{d->-557.265}}
```
|
You are at least approaching this problem reasonably. I see a fine general purpose function and I see you're getting results, which is what matters primarily. There is no 'correct' solution, since there might be a large range of acceptable solutions. In some scenarios some solutions may be preferred over others, for instance because of performance, while that might be the other way around in other scenarios.
The only slight problem I have with your example is the dubious parameter name 'have'.
Why do you think this would be a wrong approach?
|
5,213,670 |
I have three files; index.php, searchbar.php and search.php
Now when I have search.php show its results on its own page it's fine, but when I try to include the search page in index.php I get nothing.
So I include searchbox.php in index.php so I have a search bar. I then search for something and include the search.php page by using $\_GET['p'] on index.php, but the search always comes up blank. If I just leave search.php as its own page and don't try to include it, then I get my results, but I'd like them to be included on the page they were searched from.
index.php
```
<?php
if (isset($_GET['p']) && $_GET['p'] != "") {
$p = $_GET['p'];
if (file_exists('include/'.$p.'.php')) {
@include ('include/'.$p.'.php');
} elseif (!file_exists('include/'.$p.'.php')) {
echo 'Page you are requesting doesn´t exist<br><br>';
}
} else {
@include ('news.php');
}
?>
```
searchbox.php
```
<div id="searchwrapper"><form action="?p=search" method="get">
<input type="text" class="searchbox" name="query" value="" id="query"/>
<input type="image" src="search.png" class="searchbox_submit" value="" ALT="Submit Form" id="submit"/>
</form>
</div>
```
search.php
```
<?php
include 'connect.php';
$searchTerms = $_GET['query'];
$query = mysql_query("SELECT * FROM misc WHERE itemname LIKE '%$searchTerms%' ORDER BY itemname ");
{
echo "<table border='1' cellpadding='2' cellspacing='0' width=608 id='misc' class='tablesorter'><thead>";
echo "<tr> <th> </th> <th>Item Name</th> <th>Desc.</th></tr></thead><tbody>";
// keeps getting the next row until there are no more to get
while($row = mysql_fetch_array( $query )) {
// Print out the contents of each row into a table
echo "<tr><td width=50>";
echo $row['image'];
echo "</td><td width=150>";
echo $row['itemname'];
echo "</td><td width=250>";
echo $row['desc'];
echo "</td></tr>";
}
echo "</tbody></table>";;
}
if (mysql_num_rows($query) == 0)
{
echo 'No Results';
}
?>
```
|
2011/03/06
|
['https://Stackoverflow.com/questions/5213670', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/647328/']
|
Let me show some advanges of Simon's approach:

|
You are at least approaching this problem reasonably. I see a fine general purpose function and I see you're getting results, which is what matters primarily. There is no 'correct' solution, since there might be a large range of acceptable solutions. In some scenarios some solutions may be preferred over others, for instance because of performance, while that might be the other way around in other scenarios.
The only slight problem I have with your example is the dubious parameter name 'have'.
Why do you think this would be a wrong approach?
|
5,213,670 |
I have three files; index.php, searchbar.php and search.php
Now when I have search.php show its results on its own page it's fine, but when I try to include the search page in index.php I get nothing.
So I include searchbox.php in index.php so I have a search bar. I then search for something and include the search.php page by using $\_GET['p'] on index.php, but the search always comes up blank. If I just leave search.php as its own page and don't try to include it, then I get my results, but I'd like them to be included on the page they were searched from.
index.php
```
<?php
if (isset($_GET['p']) && $_GET['p'] != "") {
$p = $_GET['p'];
if (file_exists('include/'.$p.'.php')) {
@include ('include/'.$p.'.php');
} elseif (!file_exists('include/'.$p.'.php')) {
echo 'Page you are requesting doesn´t exist<br><br>';
}
} else {
@include ('news.php');
}
?>
```
searchbox.php
```
<div id="searchwrapper"><form action="?p=search" method="get">
<input type="text" class="searchbox" name="query" value="" id="query"/>
<input type="image" src="search.png" class="searchbox_submit" value="" ALT="Submit Form" id="submit"/>
</form>
</div>
```
search.php
```
<?php
include 'connect.php';
$searchTerms = $_GET['query'];
$query = mysql_query("SELECT * FROM misc WHERE itemname LIKE '%$searchTerms%' ORDER BY itemname ");
{
echo "<table border='1' cellpadding='2' cellspacing='0' width=608 id='misc' class='tablesorter'><thead>";
echo "<tr> <th> </th> <th>Item Name</th> <th>Desc.</th></tr></thead><tbody>";
// keeps getting the next row until there are no more to get
while($row = mysql_fetch_array( $query )) {
// Print out the contents of each row into a table
echo "<tr><td width=50>";
echo $row['image'];
echo "</td><td width=150>";
echo $row['itemname'];
echo "</td><td width=250>";
echo $row['desc'];
echo "</td></tr>";
}
echo "</tbody></table>";;
}
if (mysql_num_rows($query) == 0)
{
echo 'No Results';
}
?>
```
|
2011/03/06
|
['https://Stackoverflow.com/questions/5213670', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/647328/']
|
I think that I would probably prefer an approach like
```
In[1]:= Physics[find_, have_:{}] := Solve[
{d == vf*t - (a*t^2)/2 (* , etc *)} /. have, find]
In[2]:= Physics[d]
Out[2]= {{d -> 1/2 (-a t^2 + 2 t vf)}}
In[2]:= Physics[d, {t -> 9.7, vf -> -104.98, a -> -9.8}]
Out[2]= {{d -> -557.265}}
```
Where the `have` variables are given as a list of replacement rules.
As an aside, in these types of physics problems, a nice thing to do is define your physical constants like
```
N[g] = -9.8;
```
which produces a `NValues` for `g`. Then
```
N[tf] = 9.7;N[vf] = -104.98;
Physics[d, {t -> tf, vf -> vf, a -> g}]
%//N
```
produces
```
{{d->1/2 (-g tf^2+2 tf vf)}}
{{d->-557.265}}
```
|
Let me show some advantages of Simon's approach:

|
37,163,656 |
I have a big list, and the goal is to retrieve the new list shown below.
Today:
```
number color brand size tiresize
-----------------------------------------
1 blue d 5 6
2 blue d 5 6
3 red b 3 3
4 red b 3 3
etc....
```
Goal:
```
number color brand size tiresize
-----------------------------------------
blue d 5 6
red b 3 3
```
The goal is to retrieve a distinct list with the "number" column removed.
This sample is a small one; in reality the class has about 26 data members.
I was thinking about Distinct(), but it takes all data members into account, and I don't want to include the column "number".
When you retrieve the new list, the request is to reuse the same class as before for the distinct list.
```
public class Car
{
    public int number;
    public string color;
    public string brand;
    public string size;
    public string tiresize;
}
```
Thank you!
|
2016/05/11
|
['https://Stackoverflow.com/questions/37163656', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/484390/']
|
I think you can use [`groupby`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html) with [`transform`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html):
```
import pandas as pd
import numpy as np
df = pd.DataFrame([[1,1,3],
[1,1,9],
[1,1,np.nan],
[2,2,8],
[2,1,4],
[2,2,np.nan],
[2,2,5]]
, columns=list('ABC'))
print df
A B C
0 1 1 3.0
1 1 1 9.0
2 1 1 NaN
3 2 2 8.0
4 2 1 4.0
5 2 2 NaN
6 2 2 5.0
df['C'] = df.groupby(['A', 'B'])['C'].transform(lambda x: x.fillna( x.mean() ))
print df
A B C
0 1 1 3.0
1 1 1 9.0
2 1 1 6.0
3 2 2 8.0
4 2 1 4.0
5 2 2 6.5
6 2 2 5.0
```
|
```
[df[i].fillna(df[i].mean(),inplace=True) for i in df.columns ]
```
This fills the NaN values in every column with that column's mean; in column 'C' they become 5.8, the mean of column 'C'.
```
Output
print df
A B C
0 1 1 3.0
1 1 1 9.0
2 1 1 5.8
3 2 2 8.0
4 2 1 4.0
5 2 2 5.8
6 2 2 5.0
```
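To make the difference between the two fills concrete, here is a small runnable sketch (reusing the frame from the question) showing that the column-wise fill uses the overall mean of `C` (5.8), while the group-wise `transform` fill uses each `(A, B)` group's own mean (6.0 and 6.5):

```python
import numpy as np
import pandas as pd

# Rebuild the example frame from the question
df = pd.DataFrame([[1, 1, 3], [1, 1, 9], [1, 1, np.nan],
                   [2, 2, 8], [2, 1, 4], [2, 2, np.nan], [2, 2, 5]],
                  columns=list('ABC'))

# Column-wise fill: every NaN in C becomes the overall mean of C (5.8)
col_filled = df.fillna(df.mean())

# Group-wise fill: each NaN becomes the mean of its (A, B) group
grp_filled = df.copy()
grp_filled['C'] = grp_filled.groupby(['A', 'B'])['C'].transform(
    lambda x: x.fillna(x.mean()))

print(col_filled['C'].tolist())  # [3.0, 9.0, 5.8, 8.0, 4.0, 5.8, 5.0]
print(grp_filled['C'].tolist())  # [3.0, 9.0, 6.0, 8.0, 4.0, 6.5, 5.0]
```

Which one is appropriate depends on whether the groups are meaningful for the missing values; the group-wise version degrades to NaN if an entire group is missing.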
|
39,326,512 |
I have the following two calculation using Math.round(...):
```
double x = 0.57145732;
x = x * 100;
x = Math.round(x * 10);
x = x / 10;
```
If I now print the value of x it will show me: 57.1.
```
double x = 0.57145732;
x = (Math.round((x * 100) * 10)) / 10;
// x = (Math.round(x * 1000)) / 10; //Also gives me 57.0.
```
If I now print the value of x it will show me: 57.0.
Why is there this difference in the outcome?
|
2016/09/05
|
['https://Stackoverflow.com/questions/39326512', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/587261/']
|
The `Math.round()` method returns an **integer** (of type `long` - as pointed out by [Ole V.V](https://stackoverflow.com/users/5772882/ole-v-v)). It's often thought to return a `float` or `double`, which gives rise to confusions like this one.
In the second calculation,
```
Math.round((x * 100) * 10)
```
returns `571`. Now, this value and `10` both are integers (571 is long, 10 is int). So when the calculation takes the form
```
x = 571 / 10
```
where x is double, `571/10` returns `57` instead of `57.1` since it is `int`. Then, `57` is converted to double and it becomes `57.0`
If you do
```
x = (double)Math.round((x * 100) * 10) / 10.0;
```
its value becomes `57.1`.
---
**Edit**: There are two versions of the `Math.round()` function. The one you used accepts a double (since x is double) and returns `long`. In your case, **implicit type casting** spares you the trouble of considering the precise little details.
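The same pitfall can be sketched outside Java too. As a purely illustrative analogue, Python's integer division (`//`) versus float division (`/`) reproduces the two outcomes:

```python
x = 0.57145732
r = round(x * 100 * 10)  # 571 -- an int, like the long Java's Math.round returns
print(r // 10)           # 57   -- integer division, like Java's long / int
print(r / 10)            # 57.1 -- float division, like casting to double first
```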
|
This is because Math.round() returns a long. If you do this step by step (as in the first example), you assign the result of Math.round() to a double variable, so the following calculation uses a floating-point division.
In the second example, you let the compiler decide which types to use (and it uses an integer division for the intermediate step). This is why the precision gets lost.
|
39,326,512 |
I have the following two calculation using Math.round(...):
```
double x = 0.57145732;
x = x * 100;
x = Math.round(x * 10);
x = x / 10;
```
If I now print the value of x it will show me: 57.1.
```
double x = 0.57145732;
x = (Math.round((x * 100) * 10)) / 10;
// x = (Math.round(x * 1000)) / 10; //Also gives me 57.0.
```
If I now print the value of x it will show me: 57.0.
Why is there this difference in the outcome?
|
2016/09/05
|
['https://Stackoverflow.com/questions/39326512', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/587261/']
|
The reason for the difference is that in the second formula you're performing a division of two integers. In order to have the same result, you have to add a cast to double:
```
double x = 0.57145732;
x = (double)(Math.round((x * 100) * 10)) / 10;
```
|
This is because Math.round() returns a long. If you do this step by step (as in the first example), you assign the result of Math.round() to a double variable, so the following calculation uses a floating-point division.
In the second example, you let the compiler decide which types to use (and it uses an integer division for the intermediate step). This is why the precision gets lost.
|
39,326,512 |
I have the following two calculation using Math.round(...):
```
double x = 0.57145732;
x = x * 100;
x = Math.round(x * 10);
x = x / 10;
```
If I now print the value of x it will show me: 57.1.
```
double x = 0.57145732;
x = (Math.round((x * 100) * 10)) / 10;
// x = (Math.round(x * 1000)) / 10; //Also gives me 57.0.
```
If I now print the value of x it will show me: 57.0.
Why is there this difference in the outcome?
|
2016/09/05
|
['https://Stackoverflow.com/questions/39326512', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/587261/']
|
The difference is between
```
x = Math.round(571.45732) / 10;
```
and
```
x = Math.round(571.45732);
x = x / 10;
```
Since `round(double)` returns a long, in the first case you divide a long by an int, giving the long 57. Converting back to double leads to 57.0. The second case is equivalent to
```
x = ((double)Math.round(571.45732)) / 10;
```
where a double is divided by an int, resulting in 57.1.
|
This is because Math.round() returns a long. If you do this step by step (as in the first example), you assign the result of Math.round() to a double variable, so the following calculation uses a floating-point division.
In the second example, you let the compiler decide which types to use (and it uses an integer division for the intermediate step). This is why the precision gets lost.
|
39,326,512 |
I have the following two calculation using Math.round(...):
```
double x = 0.57145732;
x = x * 100;
x = Math.round(x * 10);
x = x / 10;
```
If I now print the value of x it will show me: 57.1.
```
double x = 0.57145732;
x = (Math.round((x * 100) * 10)) / 10;
// x = (Math.round(x * 1000)) / 10; //Also gives me 57.0.
```
If I now print the value of x it will show me: 57.0.
Why is there this difference in the outcome?
|
2016/09/05
|
['https://Stackoverflow.com/questions/39326512', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/587261/']
|
The `Math.round()` method returns an **integer** (of type `long` - as pointed out by [Ole V.V](https://stackoverflow.com/users/5772882/ole-v-v)). It's often thought to return a `float` or `double`, which gives rise to confusions like this one.
In the second calculation,
```
Math.round((x * 100) * 10)
```
returns `571`. Now, this value and `10` both are integers (571 is long, 10 is int). So when the calculation takes the form
```
x = 571 / 10
```
where x is double, `571/10` returns `57` instead of `57.1` since it is `int`. Then, `57` is converted to double and it becomes `57.0`
If you do
```
x = (double)Math.round((x * 100) * 10) / 10.0;
```
its value becomes `57.1`.
---
**Edit**: There are two versions of the `Math.round()` function. The one you used accepts a double (since x is double) and returns `long`. In your case, **implicit type casting** spares you the trouble of considering the precise little details.
|
The reason for the difference is that in the second formula you're performing a division of two integers. In order to have the same result, you have to add a cast to double:
```
double x = 0.57145732;
x = (double)(Math.round((x * 100) * 10)) / 10;
```
|
39,326,512 |
I have the following two calculation using Math.round(...):
```
double x = 0.57145732;
x = x * 100;
x = Math.round(x * 10);
x = x / 10;
```
If I now print the value of x it will show me: 57.1.
```
double x = 0.57145732;
x = (Math.round((x * 100) * 10)) / 10;
// x = (Math.round(x * 1000)) / 10; //Also gives me 57.0.
```
If I now print the value of x it will show me: 57.0.
Why is there this difference in the outcome?
|
2016/09/05
|
['https://Stackoverflow.com/questions/39326512', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/587261/']
|
The `Math.round()` method returns an **integer** (of type `long` - as pointed out by [Ole V.V](https://stackoverflow.com/users/5772882/ole-v-v)). It's often thought to return a `float` or `double`, which gives rise to confusions like this one.
In the second calculation,
```
Math.round((x * 100) * 10)
```
returns `571`. Now, this value and `10` both are integers (571 is long, 10 is int). So when the calculation takes the form
```
x = 571 / 10
```
where x is double, `571/10` returns `57` instead of `57.1` since it is `int`. Then, `57` is converted to double and it becomes `57.0`
If you do
```
x = (double)Math.round((x * 100) * 10) / 10.0;
```
its value becomes `57.1`.
---
**Edit**: There are two versions of the `Math.round()` function. The one you used accepts a double (since x is double) and returns `long`. In your case, **implicit type casting** spares you the trouble of considering the precise little details.
|
The difference is between
```
x = Math.round(571.45732) / 10;
```
and
```
x = Math.round(571.45732);
x = x / 10;
```
Since `round(double)` returns a long, in the first case you divide a long by an int, giving the long 57. Converting back to double leads to 57.0. The second case is equivalent to
```
x = ((double)Math.round(571.45732)) / 10;
```
where a double is divided by an int, resulting in 57.1.
|
39,326,512 |
I have the following two calculation using Math.round(...):
```
double x = 0.57145732;
x = x * 100;
x = Math.round(x * 10);
x = x / 10;
```
If I now print the value of x it will show me: 57.1.
```
double x = 0.57145732;
x = (Math.round((x * 100) * 10)) / 10;
// x = (Math.round(x * 1000)) / 10; //Also gives me 57.0.
```
If I now print the value of x it will show me: 57.0.
Why is there this difference in the outcome?
|
2016/09/05
|
['https://Stackoverflow.com/questions/39326512', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/587261/']
|
The reason for the difference is that in the second formula you're performing a division of two integers. In order to have the same result, you have to add a cast to double:
```
double x = 0.57145732;
x = (double)(Math.round((x * 100) * 10)) / 10;
```
|
The difference is between
```
x = Math.round(571.45732) / 10;
```
and
```
x = Math.round(571.45732);
x = x / 10;
```
Since `round(double)` returns a long, in the first case you divide a long by an int, giving the long 57. Converting back to double leads to 57.0. The second case is equivalent to
```
x = ((double)Math.round(571.45732)) / 10;
```
where a double is divided by an int, resulting in 57.1.
|
66,111,346 |
I have an observable that has objects coming into it. I want to get a property from each object that matches a filter and build a single comma-separated string, across all the emissions, containing just the property I want. How can I accomplish this?
What I've tried:
```
data.pipe(
map(item => item.map(d => d.source === "SomeSource").join(",")
))
```
>
> Data is of the following type: Observable<MyEntity[]>
>
>
>
>
> Data = [{id:2,name:'MyName1',sourc:'SomeSource1'},{id:2,name:'MyName2',sourc:'SomeSource2'},{id:3,name:'MyName3',sourc:'SomeSource3'}]
>
>
>
However, this results in an observable string, but I just want a string in this format: `MyName1,MyName2,MyName3`.
|
2021/02/09
|
['https://Stackoverflow.com/questions/66111346', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1991118/']
|
Consider two points, `a` and `b`.
```py
a = [1,2]
b = [3,4]
```
When we zip them we get:
```py
print(list(zip(a, b)))  # [(1, 3), (2, 4)]
```
We can see that the first element of each are paired together, and similarly for the second element of each. This is just how zip works; I suspect this makes sense for you. If those are (x,y) points, then we've just grouped the x's and y's together.
Now; consider the signature of `plt.plot(x, y, ...)`. It expects the first argument to be all the x's, and the second argument to be all the y's. Well, the zip just grouped those together for us! We can use the `*` operator to spread those over the first two arguments. Notice that these are equivalent operations:
```py
p = list(zip(a, b))
plt.plot(*p)
plt.plot(p[0], p[1])
```
Side note: to expand to more points we just add the extra points into the zip:
```py
a = [1, 2]
b = [3, 4]
c = [5, 6]
print(list(zip(a, b, c)))  # [(1, 3, 5), (2, 4, 6)]
plt.plot(*zip(a, b, c)) # plots the 3 points
```
|
`*` inside a function call converts a list (or other iterable) into a `*args` kind of argument.
`zip` with several lists iterates through them pairing up elements:
```
In [1]: list(zip([1,2,3],[4,5,6]))
Out[1]: [(1, 4), (2, 5), (3, 6)]
```
If we define a list:
```
In [2]: alist = [[1,2,3],[4,5,6]]
In [3]: list(zip(alist))
Out[3]: [([1, 2, 3],), ([4, 5, 6],)]
```
That `zip` didn't do much. But if we star it:
```
In [4]: list(zip(*alist))
Out[4]: [(1, 4), (2, 5), (3, 6)]
```
Check the `zip` docs - see the `*args`:
```
In [5]: zip?
Init signature: zip(self, /, *args, **kwargs)
Docstring:
zip(*iterables) --> A zip object yielding tuples until an input is exhausted.
>>> list(zip('abcdefg', range(3), range(4)))
[('a', 0, 0), ('b', 1, 1), ('c', 2, 2)]
The zip object yields n-length tuples, where n is the number of iterables
passed as positional arguments to zip(). The i-th element in every tuple
comes from the i-th iterable argument to zip(). This continues until the
shortest argument is exhausted.
Type: type
Subclasses:
```
`*` could also be used with a function like `def foo(arg1, arg2, arg3):...`
In `plt.plot(*zip(X[j], X[i]), color='black')`, `plot` has a signature like `plot(x, y, **kwargs)`. I don't think this is any different from
```
plt.plot(X[j], X[i], color='black')
```
but I'd have to actually test some code.
edit
----
```
def foo(x,y):
print(x,y)
In [11]: X = np.arange(10).reshape(5,2)
In [12]: foo(X[1],X[0])
[2 3] [0 1]
In [13]: foo(*zip(X[1],X[0]))
(2, 0) (3, 1)
```
`list(zip(*alist))` is a list version of a matrix transpose.
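A quick runnable check of that transpose behaviour, using the same `alist` as above:

```python
alist = [[1, 2, 3], [4, 5, 6]]

# zip(*alist) pairs up the i-th elements of the inner lists: a transpose
transposed = [list(row) for row in zip(*alist)]
print(transposed)  # [[1, 4], [2, 5], [3, 6]]

# Transposing twice recovers the original shape
print([list(row) for row in zip(*transposed)])  # [[1, 2, 3], [4, 5, 6]]
```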
|
37,214 |
Is there any formal notation for dealing with lists, rather than sets?
e.g. if I have a set $X=\{x\_1,\dots,x\_n\}$ and I want to add a new item to the set, say $x\_{n+1}$, I can say "Let $X = X \cup \{x\_{n+1}\}$" and it is clearly understood that I want to add $x\_{n+1}$ to my set.
However, if $X$ is not a set but rather a list, or tuple (i.e. the elements are ordered and duplicates are allowed), is there any way of indicating that I am adding an element to the end of the list?
e.g. given $X=(x\_1,\dots,x\_n)$, how do I say add an element to $X$ such that $X=(x\_1,\dots,x\_n,x\_{n+1})$? i.e. how do I formally denote appending an element to $X$?
|
2011/05/05
|
['https://math.stackexchange.com/questions/37214', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/13/']
|
I don't think there is any standard notation.
One alternative would be to not use $(a,b)$ for ordered pairs but $a \times b$, which is the notation suggested by category theory. The $\times$ allows you to sweep lots of assocativity isomorphisms under the rug: it looks perfectly natural to write $(a \times b) \times c = a \times (b \times c) = a \times b \times c$, but not $((a,b),c) = (a,(b,c)) = (a,b,c)$.
Then if you have an $n$-tuple $x$ in $X^n$, you can write $x \times a$ for the $(n+1)$-tuple in $X^{n+1}$ obtained by appending $a$.
|
In addition to the answers mentioned above, I would like to stress that any list can be expressed as a set.
Formally, we can define a list to be a function, where the domain is a subset of the natural numbers. We can then express the function as a set of ordered pairs $(x,y)$, where $x$ is the input and $y$ is the output of the function.
Using this approach has the upshot that you retain *all* of the neat set notation and the reader will be familiar with the notation used in your work. On the other hand, the notation is a little (but not overwhelmingly) clunky.
As an example, if we have a list $<y\_{1},...,y\_{n}>$ of real numbers, this corresponds to a function $$f:\{1,...,n\}\rightarrow\mathbb{R},$$ where $f(1) = y\_{1}$, etc. We could then represent this list as the set $$Y =\{\hspace{3pt}(x,f(x)):x\in \{1,...,n\} \hspace{3pt}\}.$$
Consequently, to append an element to $Y$, we could consider $$Y\cup \{(n+1,y\_{n+1})\}$$
To remove the clutter from this approach, we can then slightly abuse notation by defining
$$S\cup \{y\_{n+1}\} = S\cup \{(n+1,y\_{n+1})\}.$$
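As a purely illustrative sketch (in Python, with hypothetical names), the "list as a set of (index, value) pairs" encoding and the append-by-union operation above look like this:

```python
# A list <y1, ..., yn> viewed as a set of (index, value) pairs
Y = {(1, 'a'), (2, 'b'), (3, 'c')}

def append(pairs, value):
    """Append by taking the union with {(n + 1, value)}."""
    n = max(i for i, _ in pairs)
    return pairs | {(n + 1, value)}

Y2 = append(Y, 'd')
print(sorted(Y2))  # [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]
```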
|
37,214 |
Is there any formal notation for dealing with lists, rather than sets?
e.g. if I have a set $X=\{x\_1,\dots,x\_n\}$ and I want to add a new item to the set, say $x\_{n+1}$, I can say "Let $X = X \cup \{x\_{n+1}\}$" and it is clearly understood that I want to add $x\_{n+1}$ to my set.
However, if $X$ is not a set but rather a list, or tuple (i.e. the elements are ordered and duplicates are allowed), is there any way of indicating that I am adding an element to the end of the list?
e.g. given $X=(x\_1,\dots,x\_n)$, how do I say add an element to $X$ such that $X=(x\_1,\dots,x\_n,x\_{n+1})$? i.e. how do I formally denote appending an element to $X$?
|
2011/05/05
|
['https://math.stackexchange.com/questions/37214', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/13/']
|
What you call a list is formally known as a sequence. There was a question about [which symbol to use for sequence concatenation](https://math.stackexchange.com/questions/298648/is-there-a-common-symbol-for-concatenating-two-finite-sequences). Unfortunately there is no accepted answer. The symbols `⋅`, `⌒` (the commenter actually used U+2322, the "frown" symbol, but it resists my attempts to copy it) and `∥` are mentioned in the comments.
According to the [Wikipedia article](http://en.wikipedia.org/wiki/Concatenation_%28mathematics%29), `∥` is an operator for concatenation of numbers (it doesn't specify which set of numbers, probably ℕ) but the article doesn't say much about sequences. In my opinion the same symbol is more commonly used for parallelism, so it may confuse the reader.
I haven't seen the `⌒` symbol before, but the commenters agree about it.
|
In addition to the answers mentioned above, I would like to stress that any list can be expressed as a set.
Formally, we can define a list to be a function, where the domain is a subset of the natural numbers. We can then express the function as a set of ordered pairs $(x,y)$, where $x$ is the input and $y$ is the output of the function.
Using this approach has the upshot that you retain *all* of the neat set notation and the reader will be familiar with the notation used in your work. On the other hand, the notation is a little (but not overwhelmingly) clunky.
As an example, if we have a list $<y\_{1},...,y\_{n}>$ of real numbers, this corresponds to a function $$f:\{1,...,n\}\rightarrow\mathbb{R},$$ where $f(1) = y\_{1}$, etc. We could then represent this list as the set $$Y =\{\hspace{3pt}(x,f(x)):x\in \{1,...,n\} \hspace{3pt}\}.$$
Consequently, to append an element to $Y$, we could consider $$Y\cup \{(n+1,y\_{n+1})\}$$
To remove the clutter from this approach, we can then slightly abuse notation by defining
$$S\cup \{y\_{n+1}\} = S\cup \{(n+1,y\_{n+1})\}.$$
|
24,750,593 |
I'm trying to use the `panoramaGL` framework and add it to my static library. I've imported it into the project and added the `CoreGraphics` framework, but I get the error `Unknown type name 'CGFloat'` in PLStructs.h. When I Cmd+click on `CGFloat` in Xcode, it takes me to `CGBase.h` in the `CoreGraphics` framework. I've tried cleaning the project and replacing the frameworks, but the result is the same. Waiting for your help.
|
2014/07/15
|
['https://Stackoverflow.com/questions/24750593', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2042311/']
|
The solution is simple:
```
#import <UIKit/UIKit.h>
```
|
The same problem occurred for me in Cocos2D.
The solution is:
1. Go to Build Settings. In the **Architectures** field you might have "Standard architectures (armv7, armv7s, arm64)".
2. The main cause of the problem is arm64, so the best way is to use "**armv7**" in the field.
3. We keep the standard architectures as-is in "**valid architectures**".
Hope it helps.
|
24,750,593 |
I'm trying to use the `panoramaGL` framework and add it to my static library. I've imported it into the project and added the `CoreGraphics` framework, but I get the error `Unknown type name 'CGFloat'` in PLStructs.h. When I Cmd+click on `CGFloat` in Xcode, it takes me to `CGBase.h` in the `CoreGraphics` framework. I've tried cleaning the project and replacing the frameworks, but the result is the same. Waiting for your help.
|
2014/07/15
|
['https://Stackoverflow.com/questions/24750593', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2042311/']
|
The solution is simple:
```
#import <UIKit/UIKit.h>
```
|
You actually don't need to import the full `UIKit`. This is enough:
```
#import <CoreGraphics/CoreGraphics.h>
```
|
24,750,593 |
I'm trying to use the `panoramaGL` framework and add it to my static library. I've imported it into the project and added the `CoreGraphics` framework, but I get the error `Unknown type name 'CGFloat'` in PLStructs.h. When I Cmd+click on `CGFloat` in Xcode, it takes me to `CGBase.h` in the `CoreGraphics` framework. I've tried cleaning the project and replacing the frameworks, but the result is the same. Waiting for your help.
|
2014/07/15
|
['https://Stackoverflow.com/questions/24750593', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2042311/']
|
The solution is simple:
```
#import <UIKit/UIKit.h>
```
|
You also don't need the full CoreGraphics.h. This is enough:
```
#import <CoreGraphics/CGBase.h>
```
|
24,750,593 |
I'm trying to use the `panoramaGL` framework and add it to my static library. I've imported it into the project and added the `CoreGraphics` framework, but I get the error `Unknown type name 'CGFloat'` in PLStructs.h. When I Cmd+click on `CGFloat` in Xcode, it takes me to `CGBase.h` in the `CoreGraphics` framework. I've tried cleaning the project and replacing the frameworks, but the result is the same. Waiting for your help.
|
2014/07/15
|
['https://Stackoverflow.com/questions/24750593', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2042311/']
|
You actually don't need to import the full `UIKit`. This is enough:
```
#import <CoreGraphics/CoreGraphics.h>
```
|
The same problem occurred for me in Cocos2D.
The solution is:
1. Go to Build Settings. In the **Architectures** field you might have "Standard architectures (armv7, armv7s, arm64)".
2. The main cause of the problem is arm64, so the best way is to use "**armv7**" in the field.
3. We keep the standard architectures as-is in "**valid architectures**".
Hope it helps.
|
24,750,593 |
I'm trying to use the `panoramaGL` framework and add it to my static library. I've imported it into the project and added the `CoreGraphics` framework, but I get the error `Unknown type name 'CGFloat'` in PLStructs.h. When I Cmd+click on `CGFloat` in Xcode, it takes me to `CGBase.h` in the `CoreGraphics` framework. I've tried cleaning the project and replacing the frameworks, but the result is the same. Waiting for your help.
|
2014/07/15
|
['https://Stackoverflow.com/questions/24750593', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2042311/']
|
You also don't need the full CoreGraphics.h. This is enough:
```
#import <CoreGraphics/CGBase.h>
```
|
The same problem occurred for me in Cocos2D.
The solution is:
1. Go to Build Settings. In the **Architectures** field you might have "Standard architectures (armv7, armv7s, arm64)".
2. The main cause of the problem is arm64, so the best way is to use "**armv7**" in the field.
3. We keep the standard architectures as-is in "**valid architectures**".
Hope it helps.
|
24,750,593 |
I'm trying to use the `panoramaGL` framework and add it to my static library. I've imported it into the project and added the `CoreGraphics` framework, but I get the error `Unknown type name 'CGFloat'` in PLStructs.h. When I Cmd+click on `CGFloat` in Xcode, it takes me to `CGBase.h` in the `CoreGraphics` framework. I've tried cleaning the project and replacing the frameworks, but the result is the same. Waiting for your help.
|
2014/07/15
|
['https://Stackoverflow.com/questions/24750593', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2042311/']
|
You also don't need the full CoreGraphics.h. This is enough:
```
#import <CoreGraphics/CGBase.h>
```
|
You actually don't need to import the full `UIKit`. This is enough:
```
#import <CoreGraphics/CoreGraphics.h>
```
|
73,724 |
I have a lot of computers in my network and I need to get info about the software and hardware installed on all of them. Is there any software for making such a network inventory and audit?
|
2009/11/21
|
['https://superuser.com/questions/73724', 'https://superuser.com', 'https://superuser.com/users/-1/']
|
If you want to gather inventory/audit information **programmatically**, then use [WMI](http://msdn.microsoft.com/en-us/library/aa394582(VS.85).aspx).
WMI has a good .NET interface that is readily available from within Visual Studio 2008 as a collection of library classes. PowerShell also exposes this interface for scripting.
|
You can try Spiceworks, which is free software. I've also heard that the network inventory software by ClearApps is widespread among sysadmins; it's not free, but it has wider functionality. Or you can just do a Google search and find everything here:
<http://www.google.com/search?hl=en&source=hp&q=pc+inventory+software&aq=f&oq=&aqi=g4g-m2>
|
73,724 |
I have a lot of computers in my network and I need to get info about the software and hardware installed on all of them. Is there any software for making such a network inventory and audit?
|
2009/11/21
|
['https://superuser.com/questions/73724', 'https://superuser.com', 'https://superuser.com/users/-1/']
|
If you want to gather inventory/audit information **programmatically**, then use [WMI](http://msdn.microsoft.com/en-us/library/aa394582(VS.85).aspx).
WMI has a good .NET interface that is readily available from within Visual Studio 2008 as a collection of library classes. PowerShell also exposes this interface for scripting.
|
You can try [OCS Inventory](http://www.ocsinventory-ng.org/), which is open-source software that allows you to do that.
|
67,794,186 |
I'm looking to create a custom, reusable Angular Material component for a slide toggle. I want to be able to pass different click functions into it and get the MatSlideToggleChange object back. Instead I get 'undefined'.
Here is my custom component ts and html files:
```
<mat-slide-toggle
[id]="toggleId"
color="primary"
labelPosition="after"
[required]="isRequired"
[formControlName]="controlName"
(change)="toggleClick.emit($event)"
[tabIndex]="tabNumber"
[disabled]="isDisabled"
[matTooltip]="toolTip"
>{{toggleText}}</mat-slide-toggle>
export class CustomSlideToggleComponent {
@Input() public toolTip: string = '';
@Input() public isDisabled: boolean = false;
@Input() public isRequired: boolean = false;
@Input() public toggleText: string = '';
@Input() public tabNumber: number = null;
@Input() public toggleId: string;
@Input() public controlName: string = '';
@Output() public readonly toggleClick = new EventEmitter<MatSlideToggleChange>();
public constructor() { }
}
```
I've tried to implement it this way using this html:
```
<custom-slide-toggle
[toggleId]="'enable_user'"
[controlName]="'enableUser'"
[toggleClick]="onEnableUser(toggleEvent)"
[tabNumber]="5"
[toggleText]="'A custom slide toggle'"
></custom-slide-toggle>
```
and this in the typescript:
```
public toggleEvent: MatSlideToggleChange;
public onEnableUser($event: MatSlideToggleChange) {
console.log('toggle was clicked' + $event);
}
```
When I click on the slide toggle $event always comes back 'undefined'. How can I use different click functions with this custom component?
|
2021/06/01
|
['https://Stackoverflow.com/questions/67794186', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2026659/']
|
I see two potential issues with your usage of the `custom-slide-toggle`:
1. `toggleClick` should be bound with `(toggleClick)` as it's only an `Output`:
```html
(toggleClick)="onEnableUser(toggleEvent)"
```
2. `toggleEvent` is not the event variable, it should be `$event`:
```html
(toggleClick)="onEnableUser($event)"
```
|
Event binding uses parentheses, so I think that
```
(toggleClick)="onEnableUser(toggleEvent)"
```
should work.
|
1,590,623 |
Reading this article: [Digital Trends CMD Commands](https://www.digitaltrends.com/computing/how-to-use-command-prompt/), I found a command called Finger.
Trying it out on my system, Windows 10, it won't let me finger anyone.
It says `Connect: Connection refused`.
[](https://i.stack.imgur.com/tRIqP.jpg)
I'm not sure if I'm entering the commands right, but I thought it was a funny command.
If anyone else has an updated or comprehensive list of CMD commands, please add it in the comments.
[everything you need to know about tcp ips finger utility](https://www.techrepublic.com/article/everything-you-need-to-know-about-tcp-ips-finger-utility/)
Partially related questions found here on Stack Overflow with the keywords CMD & finger:
[Capturing Username from Grunt Shell Command](https://stackoverflow.com/questions/26682312/capturing-username-from-grunt-shell-command)
[Get only name and login from cmd finger -s](https://stackoverflow.com/questions/3893190/how-to-get-only-name-and-login-from-cmd-finger-s)
[Implement an interactive shell over SSH in Python](https://stackoverflow.com/questions/35821184/implement-an-interactive-shell-over-ssh-in-python-using-paramiko/36948840#36948840)
Will update the question in a minute with some more resources.
|
2020/10/04
|
['https://superuser.com/questions/1590623', 'https://superuser.com', 'https://superuser.com/users/1018010/']
|
The funny name is not Windows-specific – the same program has existed under the same name in many operating systems of the past. "Finger" is one of the earliest ARPAnet services (along with Telnet, FTP, SMTP &c).
* [History of the Finger protocol](http://www.rajivshah.com/Case_Studies/Finger/Finger.htm)
* [RFC 742](https://www.rfc-editor.org/rfc/rfc742) containing a few usage examples from 1977
### Invocation
Your first problem is that the program expects a username and/or an Internet address (hostname or IP address) but you're giving it random numbers instead. Pay attention to the help text that it shows:
```
finger [-l] [user]@host
```
This means that `-l` and the username are optional, but the "@host" is mandatory and it actually has to indicate a host (much like ping or telnet also expect a host).
Note that the `-l` option is a lowercase L (indicating "long output"), *not* the number one.
On some other operating systems, `finger` also has a local mode where you can run it without any hostname and it'll directly collect information about local users. However, the Windows version will not do that – it will always try to connect to the 'fingerd' service running on localhost:79, which Windows systems simply don't have (hence the "Connection refused" message).
(Yes, this actually contradicts the help text somewhat – if Windows *had* a finger service listening on port 79, then the '@host' would really be optional.)
You can instead try the Windows-specific `quser` or `qwinsta` commands to get a general idea of how it would work. (They can even query a remote Windows system kinda like finger, although this too is disabled by default on non-server Windows versions.)
### Practical use
The second problem is that nearly all systems on the internet *no longer provide* the "finger" service, due to reasons of security (it reveals usernames) and privacy (it shows when a user is active and even which IP address they're connecting from). Basically it's a relic from the 1980s Internet.
But a few public systems (mainly "retro computing" sites) still deliberately provide this service, so you can still try it out:
* `finger @ssh.tilde.club` (basic Linux machine at the "tilde.club" social network)
* `finger @athena.dialup.mit.edu` (MIT's Linux cluster)
* `finger @samba.acc.umu.se` (Linux cluster at Umeå University)
* `finger @up.update.uu.se` (1970s [ITS](https://en.m.wikipedia.org/wiki/Incompatible_Timesharing_System) running in an emulator – you should use the `-l` option with this one)
* `finger @telehack.com` ([a game](http://telehack.com/telehack.html) that simulates 80s networks)
* `finger @nullroute.eu.org` (an entirely custom "cluster" fingerd)
In all of those commands, the @host can be prefixed with someone's username if you know it, e.g. `finger [email protected]`. The initial output will show you which usernames are currently logged in.
Some systems will automatically activate "long" mode whenever a username is given, but with some other systems you still need the `-l` option to get detailed output.
---
>
> The other system also has to be running Unix.
>
>
>
The remote system is not required to be Unix – OpenVMS, ITS, Cisco IOS also had this service, and it's possible to write one for Windows as well *(which I have done)*.
Because the protocol is very simple (just ASCII over TCP), some systems host a custom finger service that provides other kinds of data, such as weather information – e.g. CMU had a vending machine and MIT used to have a "bathroom occupancy" service ([see also](https://www.ucc.asn.au/services/drink.ucc)). Among new or surviving services, `finger [location]@graph.no` will give you a weather forecast.
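Because the protocol really is just a username followed by CRLF sent over TCP port 79, a client fits in a few lines. Here is a minimal sketch in Python (an illustration added for reference, not part of any tool mentioned above):

```python
import socket

def finger(host, user="", port=79, timeout=10):
    """Minimal finger client: send the (possibly empty) username plus CRLF,
    then read everything the server sends until it closes the connection."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(user.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# Equivalent of `finger @telehack.com`:
#   print(finger("telehack.com"))
# Equivalent of `finger someuser@ssh.tilde.club`:
#   print(finger("ssh.tilde.club", "someuser"))
```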
|
That article's information on finger is a bit of nonsense. Finger is a fairly simple command that is disabled pretty much everywhere because it is more of a security threat than a benefit. It was commonly deployed in the early Internet (prior to mass users and security concerns) and does not use encryption or security.
In order for the command to work, the remote system needs to be running a finger daemon (normally fingerd). As most systems are not running this, you get an error message when you try to use the command.
Likewise, the syntax requires a hostname, not a number, so your client is trying to connect to a nonexistent device, which is the specific cause of the failure you are seeing.
|
22,224,840 |
I want to use mechanize to log into a page and retrieve some information. But however I try to authenticate, it just fails with error code **HTTP 401**, as you can see below:
```
r = br.open('http://intra')
File "bui...e\_mechanize.py", line 203, in open
File "bui...g\mechanize\_mechanize.py", line 255,
in _mech_openmechanize._response.httperror_seek_wrapper: HTTP Error 401: Unauthorized
```
This is my code so far:
```
import mechanize
import cookielib
# Browser
br = mechanize.Browser()
# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
# Browser options
br.set_handle_equiv(True)
# br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
# Follows refresh 0 but not hangs on refresh > 0
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
# If the protected site didn't receive the authentication data you would
# end up with a 410 error in your face
br.add_password('http://intra', 'myusername', 'mypassword')
# User-Agent (this is cheating, ok?)
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
# Open some site, let's pick a random one, the first that pops in mind:
# r = br.open('http://google.com')
r = br.open('http://intra')
html = r.read()
# Show the source
print html
```
What am I doing wrong? Visiting `http://intra` (an internal page) with e.g. Chrome, a window pops open and asks for username/password once, and then all is good.
The dialogue which pops open looks like this:
*(screenshot of the browser's username/password dialogue)*
|
2014/03/06
|
['https://Stackoverflow.com/questions/22224840', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/576671/']
|
After tons of research I managed to find out the reason behind this.
First of all, the site uses so-called [NTLM authentication](http://hc.apache.org/httpclient-legacy/authentication.html#Authentication_Schemes), which is not supported by mechanize.
This can help you find out the authentication mechanism of a site:
```
wget -O /dev/null -S http://www.the-site.com/
```
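The same probe can be done from Python's standard library by reading the `WWW-Authenticate` header off the 401 challenge. This sketch is Python 3 (unlike the Python 2 code below) and reuses the placeholder URL from the wget example:

```python
# Read the WWW-Authenticate header from a 401 response to see which
# authentication scheme (Basic, Digest, NTLM, ...) the server wants.
import urllib.error
import urllib.request

def auth_scheme(url):
    """Return the WWW-Authenticate header of a 401 response, or None."""
    try:
        urllib.request.urlopen(url)
        return None  # no challenge at all; the page is not protected
    except urllib.error.HTTPError as err:
        if err.code == 401:
            return err.headers.get("WWW-Authenticate")
        raise

# auth_scheme("http://www.the-site.com/") -> e.g. "NTLM" or 'Basic realm="intra"'
```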
So the code was modified a little bit:
```
import sys
import urllib2
import mechanize
from ntlm import HTTPNtlmAuthHandler
print("LOGIN...")
user = sys.argv[1]
password = sys.argv[2]
url = sys.argv[3]
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, url, user, password)
# create the NTLM authentication handler
auth_NTLM = HTTPNtlmAuthHandler.HTTPNtlmAuthHandler(passman)
browser = mechanize.Browser()
handlersToKeep = []
for handler in browser.handlers:
if not isinstance(handler,
(mechanize._http.HTTPRobotRulesProcessor)):
handlersToKeep.append(handler)
browser.handlers = handlersToKeep
browser.add_handler(auth_NTLM)
response = browser.open(url)
response = browser.open("http://www.the-site.com")
print(response.read())
```
and finally mechanize needs to be patched, as mentioned [here](https://stackoverflow.com/questions/13649964/python-mechanize-with-ntlm-getting-attributeerror-httpresponse-instance-has-no):
```
--- _response.py.old 2013-02-06 11:14:33.208385467 +0100
+++ _response.py 2013-02-06 11:21:41.884081708 +0100
@@ -350,8 +350,13 @@
self.fileno = self.fp.fileno
else:
self.fileno = lambda: None
- self.__iter__ = self.fp.__iter__
- self.next = self.fp.next
+
+ if hasattr(self.fp, "__iter__"):
+ self.__iter__ = self.fp.__iter__
+ self.next = self.fp.next
+ else:
+ self.__iter__ = lambda self: self
+ self.next = lambda self: self.fp.readline()
def __repr__(self):
return '<%s at %s whose fp = %r>' % (
```
|
@theAlse : did you need to separately handle session cookies? I used your approach to authenticate against the SSO server but when I access the main site (ServiceNow) on the second "browser.open" call I still get a 401:Unauthorized error.
I tacked on a debug message on the mechanize \_response.py file to show the URL being visited and I was surprised that there is a secondary SSO server.
```
$ python s3.py
LOGIN...
[_DEBUG] Visiting https://sso.intra.client.com
[_DEBUG] Got past the first open statement.
[_DEBUG] Visiting https://clienteleitsm.service-now.com
[_DEBUG] Visiting <Request for https://ssointra.web.ipc.us.client.com/ssofedi/public/saml2sso?SAMLRequest=lVLB--snipped--&RelayState=https%3a%2f%2fclienteleitsm.service-now.com%2fnavpage.do>
[_DEBUG] Visiting <Request for https://ssointra.web.ipc.us.client.com/ssofedi/redirectjsp/FederationRedirectWDA.jsp?SAMLRequest=lVLBb--snipped--&SMPORTALURL=https%3A%2F%2Fssointra.web.ipc.us.client.com%2Fssofedi%2Fpublic%2Fsaml2sso>
[_DEBUG] Visiting <Request for https://ssointra.web.ipc.us.client.com/SSOI/ntlm/RedirectToWDA.jsp?TYPE=33554433&REALMOID=--snipped--%3D%26RelayState%3dhttps$%3a$%2f$%2fclienteleitsm%2eservice-now%2ecom$%2fnavpage%2edo%26SMPORTALURL%3dhttps$%3A$%2F$%2Fssointra%2eweb%2eipc%2eus%2eclient%2ecom$%2Fssofedi$%2Fpublic$%2Fsaml2sso>
[_DEBUG] Visiting <Request for https://ssointra.web.ipc.us.client.com/SSOI/ntlm/WDAProtectedPage.jsp?Target=HTTPS://ssointra.--snipped--&RelayState=https%3A%2F%2Fclienteleitsm.service-now.com%2Fnavpage.do&SMPORTALURL=https%3A%2F%2Fssointra.web.ipc.us.client.com%2Fssofedi%2Fpublic%2Fsaml2sso>
[_DEBUG] Visiting <Request for https://sso.intra.client.com/siteminderagent/ntlm/creds.ntc?CHALLENGE=&SMAGENTNAME=--snipped--https$%3A$%2F$%2Fssointra%2eweb%2eipc%2eus%2eclient%2ecom$%2Fssofedi$%2Fpublic$%2Fsaml2sso>
[Client-specific page about invalid username and password credential combination follows]
<HTML>
...
</HTML>
```
I already snipped a lot of the redirect URLs after the third debug line. The random strings are actually unique, as when I put them in a browser I get an error page. However, if I do it in an IE browser I don't even see the redirect pages.
Thanks.
|
23,517,225 |
I have an MVC 5 / Bootstrap application. On one of the pages, I have a number of fields all bound to the model associated with the page. However, I also have a simple unordered list which always starts out empty and the user can then add items to it. They do this by entering some text into a type ahead field. Once the user finds what he/she is looking for, he/she can click a button and have it added to the unordered list. Any number of items can be added to the list. All of this works fine.
My question is how I can get the contents of the unordered list posted back to the server along with the rest of the form contents, since the unordered list isn't part of the model?
|
2014/05/07
|
['https://Stackoverflow.com/questions/23517225', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/163534/']
|
Here is one way to skin this cat:
A) Add a collection to your model (which really should be a ViewModel, and not a domain model) to hold those items
B) In your button's click handler, create a hidden input field that conforms to the ASP.Net Wire Format: <http://www.hanselman.com/blog/ASPNETWireFormatForModelBindingToArraysListsCollectionsDictionaries.aspx>
If you had a collection of orders, you should end up generating controls like this:
```
<input type="hidden" name="Orders[0].Id" value="1" />
<input type="hidden" name="Orders[1].Id" value="2" />
```
Note that sequential ordering is important: if you start removing items, you'll need to re-sequence your name values.
|
There are a couple of ways to work it out.
If you don't want to add it to the model (which is what I would prefer), you can:
1. Directly access the items that were posted via the `Controller.Request` property;
2. Post the items separately via an Ajax request, and handle them in a different controller action.
|
40,522,008 |
Note: This is an opinionated question. I'm asking this as I was unable to find proper articles covering my concern.
PHP (alone or with a framework like laravel) can be used for both backend and frontend (with templating engines like Blade,Smarty,etc) development.
My concern is:
1. Is it good to use templating engine and create views in PHP?
2. Use PHP just as a backend tech and create APIs, let the frontend be built in any other language (like Angular,React,etc) chosen by the front end developer.
3. If I use templating-engine, is my application getting too tightly coupled between frontend tech choices and backend tech choices?
PS: I hope my concern is clear, if not I will explain it in an elaborated way.
|
2016/11/10
|
['https://Stackoverflow.com/questions/40522008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5193722/']
|
Your response is fine, but your parsing is not. So first of all, add Gson to your Gradle file.
`compile 'com.google.code.gson:gson:2.4'`
Now use the code below to parse your response:
```
try {
    JSONArray array = new JSONArray("put your response here");
    Gson gson = new Gson();
    List<SurvivorZAMQuestionnaire> list = new ArrayList<>();
    for (int i = 0; i < array.length(); i++) {
        list.add(gson.fromJson(array.getJSONObject(i).toString(), SurvivorZAMQuestionnaire.class));
    }
} catch (JSONException e) {
    e.printStackTrace();
}
```
Add each parsed object to a list and show it :)
|
The error clearly states that Gson received a `JsonArray` where it expected a `JsonObject`. In your case you can wrap the response `JsonArray` in a `JsonObject` with a key for that array, and give that key as an `annotation` in `SurvivorZAMQuestionList`. That way you can easily solve this problem.
Hope this is helpful :)
|
40,522,008 |
Note: This is an opinionated question. I'm asking this as I was unable to find proper articles covering my concern.
PHP (alone or with a framework like laravel) can be used for both backend and frontend (with templating engines like Blade,Smarty,etc) development.
My concern is:
1. Is it good to use templating engine and create views in PHP?
2. Use PHP just as a backend tech and create APIs, let the frontend be built in any other language (like Angular,React,etc) chosen by the front end developer.
3. If I use templating-engine, is my application getting too tightly coupled between frontend tech choices and backend tech choices?
PS: I hope my concern is clear, if not I will explain it in an elaborated way.
|
2016/11/10
|
['https://Stackoverflow.com/questions/40522008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5193722/']
|
Your response is fine, but your parsing is not. So first of all, add Gson to your Gradle file.
`compile 'com.google.code.gson:gson:2.4'`
Now use the code below to parse your response:
```
try {
    JSONArray array = new JSONArray("put your response here");
    Gson gson = new Gson();
    List<SurvivorZAMQuestionnaire> list = new ArrayList<>();
    for (int i = 0; i < array.length(); i++) {
        list.add(gson.fromJson(array.getJSONObject(i).toString(), SurvivorZAMQuestionnaire.class));
    }
} catch (JSONException e) {
    e.printStackTrace();
}
```
Add each parsed object to a list and show it :)
|
You can try using the Gson library -> `compile 'com.google.code.gson:gson:2.8.0'`
```
List<SurvivorZAMQuestionnaire> survivorZAMQuestionnaires;
...
Gson gson = new Gson();
Type listType = new TypeToken<List<SurvivorZAMQuestionnaire>>(){}.getType();
survivorZAMQuestionnaires = gson.fromJson(jsonString, listType);
```
`Type` is an instance of `java.lang.reflect.Type`.
|
40,522,008 |
Note: This is an opinionated question. I'm asking this as I was unable to find proper articles covering my concern.
PHP (alone or with a framework like laravel) can be used for both backend and frontend (with templating engines like Blade,Smarty,etc) development.
My concern is:
1. Is it good to use templating engine and create views in PHP?
2. Use PHP just as a backend tech and create APIs, let the frontend be built in any other language (like Angular,React,etc) chosen by the front end developer.
3. If I use templating-engine, is my application getting too tightly coupled between frontend tech choices and backend tech choices?
PS: I hope my concern is clear, if not I will explain it in an elaborated way.
|
2016/11/10
|
['https://Stackoverflow.com/questions/40522008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5193722/']
|
Your response is fine, but your parsing is not. So first of all, add Gson to your Gradle file.
`compile 'com.google.code.gson:gson:2.4'`
Now use the code below to parse your response:
```
try {
    JSONArray array = new JSONArray("put your response here");
    Gson gson = new Gson();
    List<SurvivorZAMQuestionnaire> list = new ArrayList<>();
    for (int i = 0; i < array.length(); i++) {
        list.add(gson.fromJson(array.getJSONObject(i).toString(), SurvivorZAMQuestionnaire.class));
    }
} catch (JSONException e) {
    e.printStackTrace();
}
```
Add each parsed object to a list and show it :)
|
Parse your JSON this way:
```
List<SurvivorZAMQuestionnaire> survivorZAMQuestionnaires = new Gson().fromJson(json, new TypeToken<List<SurvivorZAMQuestionnaire>>() {
}.getType());
```
|
40,522,008 |
Note: This is an opinionated question. I'm asking this as I was unable to find proper articles covering my concern.
PHP (alone or with a framework like laravel) can be used for both backend and frontend (with templating engines like Blade,Smarty,etc) development.
My concern is:
1. Is it good to use templating engine and create views in PHP?
2. Use PHP just as a backend tech and create APIs, let the frontend be built in any other language (like Angular,React,etc) chosen by the front end developer.
3. If I use templating-engine, is my application getting too tightly coupled between frontend tech choices and backend tech choices?
PS: I hope my concern is clear, if not I will explain it in an elaborated way.
|
2016/11/10
|
['https://Stackoverflow.com/questions/40522008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5193722/']
|
Your response is fine, but your parsing is not. So first of all, add Gson to your Gradle file.
`compile 'com.google.code.gson:gson:2.4'`
Now use the code below to parse your response:
```
try {
    JSONArray array = new JSONArray("put your response here");
    Gson gson = new Gson();
    List<SurvivorZAMQuestionnaire> list = new ArrayList<>();
    for (int i = 0; i < array.length(); i++) {
        list.add(gson.fromJson(array.getJSONObject(i).toString(), SurvivorZAMQuestionnaire.class));
    }
} catch (JSONException e) {
    e.printStackTrace();
}
```
Add each parsed object to a list and show it :)
|
Add a Gradle dependency on Gson:
```
compile 'com.google.code.gson:gson:2.4'
```
Update your code like this:
```
private void parseJSON(String jsonMessage) {
    if (jsonMessage.startsWith("[")) {
        Type type = new TypeToken<List<SurvivorZAMQuestionnaire>>() {}.getType();
        List<SurvivorZAMQuestionnaire> survivorZAMQuestionnaireList =
                new Gson().fromJson(jsonMessage, type);
        Log.d("Print List", survivorZAMQuestionnaireList.toString());
    }
}
```
|
40,522,008 |
Note: This is an opinionated question. I'm asking this as I was unable to find proper articles covering my concern.
PHP (alone or with a framework like laravel) can be used for both backend and frontend (with templating engines like Blade,Smarty,etc) development.
My concern is:
1. Is it good to use templating engine and create views in PHP?
2. Use PHP just as a backend tech and create APIs, let the frontend be built in any other language (like Angular,React,etc) chosen by the front end developer.
3. If I use templating-engine, is my application getting too tightly coupled between frontend tech choices and backend tech choices?
PS: I hope my concern is clear, if not I will explain it in an elaborated way.
|
2016/11/10
|
['https://Stackoverflow.com/questions/40522008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5193722/']
|
Your response is fine, but your parsing is not. So first of all, add Gson to your Gradle file.
`compile 'com.google.code.gson:gson:2.4'`
Now use the code below to parse your response:
```
try {
    JSONArray array = new JSONArray("put your response here");
    Gson gson = new Gson();
    List<SurvivorZAMQuestionnaire> list = new ArrayList<>();
    for (int i = 0; i < array.length(); i++) {
        list.add(gson.fromJson(array.getJSONObject(i).toString(), SurvivorZAMQuestionnaire.class));
    }
} catch (JSONException e) {
    e.printStackTrace();
}
```
Add each parsed object to a list and show it :)
|
You need a List of objects, not just a single object, because your JSON contains a list of objects. [How to Parse JSON Array in Android with Gson](https://stackoverflow.com/a/8371455/3529309)
|
40,522,008 |
Note: This is an opinionated question. I'm asking this as I was unable to find proper articles covering my concern.
PHP (alone or with a framework like laravel) can be used for both backend and frontend (with templating engines like Blade,Smarty,etc) development.
My concern is:
1. Is it good to use templating engine and create views in PHP?
2. Use PHP just as a backend tech and create APIs, let the frontend be built in any other language (like Angular,React,etc) chosen by the front end developer.
3. If I use templating-engine, is my application getting too tightly coupled between frontend tech choices and backend tech choices?
PS: I hope my concern is clear, if not I will explain it in an elaborated way.
|
2016/11/10
|
['https://Stackoverflow.com/questions/40522008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5193722/']
|
The error clearly states that Gson received a `JsonArray` where it expected a `JsonObject`. In your case you can wrap the response `JsonArray` in a `JsonObject` with a key for that array, and give that key as an `annotation` in `SurvivorZAMQuestionList`. That way you can easily solve this problem.
Hope this is helpful :)
|
You can try using the Gson library -> `compile 'com.google.code.gson:gson:2.8.0'`
```
List<SurvivorZAMQuestionnaire> survivorZAMQuestionnaires;
...
Gson gson = new Gson();
Type listType = new TypeToken<List<SurvivorZAMQuestionnaire>>(){}.getType();
survivorZAMQuestionnaires = gson.fromJson(jsonString, listType);
```
`Type` is an instance of `java.lang.reflect.Type`.
|
40,522,008 |
Note: This is an opinionated question. I'm asking this as I was unable to find proper articles covering my concern.
PHP (alone or with a framework like laravel) can be used for both backend and frontend (with templating engines like Blade,Smarty,etc) development.
My concern is:
1. Is it good to use templating engine and create views in PHP?
2. Use PHP just as a backend tech and create APIs, let the frontend be built in any other language (like Angular,React,etc) chosen by the front end developer.
3. If I use templating-engine, is my application getting too tightly coupled between frontend tech choices and backend tech choices?
PS: I hope my concern is clear, if not I will explain it in an elaborated way.
|
2016/11/10
|
['https://Stackoverflow.com/questions/40522008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5193722/']
|
The error clearly states that Gson received a `JsonArray` where it expected a `JsonObject`. In your case you can wrap the response `JsonArray` in a `JsonObject` with a key for that array, and give that key as an `annotation` in `SurvivorZAMQuestionList`. That way you can easily solve this problem.
Hope this is helpful :)
|
Parse your JSON this way:
```
List<SurvivorZAMQuestionnaire> survivorZAMQuestionnaires = new Gson().fromJson(json, new TypeToken<List<SurvivorZAMQuestionnaire>>() {
}.getType());
```
|
40,522,008 |
Note: This is an opinionated question. I'm asking this as I was unable to find proper articles covering my concern.
PHP (alone or with a framework like laravel) can be used for both backend and frontend (with templating engines like Blade,Smarty,etc) development.
My concern is:
1. Is it good to use templating engine and create views in PHP?
2. Use PHP just as a backend tech and create APIs, let the frontend be built in any other language (like Angular,React,etc) chosen by the front end developer.
3. If I use templating-engine, is my application getting too tightly coupled between frontend tech choices and backend tech choices?
PS: I hope my concern is clear, if not I will explain it in an elaborated way.
|
2016/11/10
|
['https://Stackoverflow.com/questions/40522008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5193722/']
|
The error clearly states that Gson received a `JsonArray` where it expected a `JsonObject`. In your case you can wrap the response `JsonArray` in a `JsonObject` with a key for that array, and give that key as an `annotation` in `SurvivorZAMQuestionList`. That way you can easily solve this problem.
Hope this is helpful :)
|
Add a Gradle dependency on Gson:
```
compile 'com.google.code.gson:gson:2.4'
```
Update your code like this:
```
private void parseJSON(String jsonMessage) {
    if (jsonMessage.startsWith("[")) {
        Type type = new TypeToken<List<SurvivorZAMQuestionnaire>>() {}.getType();
        List<SurvivorZAMQuestionnaire> survivorZAMQuestionnaireList =
                new Gson().fromJson(jsonMessage, type);
        Log.d("Print List", survivorZAMQuestionnaireList.toString());
    }
}
```
|
40,522,008 |
Note: This is an opinionated question. I'm asking this as I was unable to find proper articles covering my concern.
PHP (alone or with a framework like laravel) can be used for both backend and frontend (with templating engines like Blade,Smarty,etc) development.
My concern is:
1. Is it good to use templating engine and create views in PHP?
2. Use PHP just as a backend tech and create APIs, let the frontend be built in any other language (like Angular,React,etc) chosen by the front end developer.
3. If I use templating-engine, is my application getting too tightly coupled between frontend tech choices and backend tech choices?
PS: I hope my concern is clear, if not I will explain it in an elaborated way.
|
2016/11/10
|
['https://Stackoverflow.com/questions/40522008', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5193722/']
|
The Error clearly states that the Gson accepts `JsonObject` not `JsonArray`. In your case you can put the response `JsonArray` into a `JsonObject` with a key for that `JsonArray` and give that key as `annotation` in `SurvivorZAMQuestionList`. By this way you can easily sort this problem.
Hope this is Helpful :)
|
You need a List of objects, not just a single object, because your JSON contains a list of objects. [How to Parse JSON Array in Android with Gson](https://stackoverflow.com/a/8371455/3529309)
|
13,803,059 |
I need to run `knit2html` on the command line using `Rscript`. I tried the following code and it works
```
Rscript -e "(knitr::knit2html(text = '## good', fragment.only = TRUE))"
```
However, when I introduce R code chunks (or anything involving backticks), the process hangs. So the following does NOT work
```
Rscript -e "(knitr::knit2html(text = '## good\n `r 1 + 1`',fragment.only = T))"
```
For the purpose of my use, I only have access to the contents and hence can NOT pass a file to `knit2html`, which I know will work.
My question is how do I make this work. I know the problem is the backticks and I have tried looking at escaping them, but nothing seems to be working.
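For reference, the hang comes from the shell: inside double quotes, backticks trigger command substitution, so the shell tries to execute `r 1 + 1` as a command before Rscript ever sees the string. Invoking Rscript without a shell sidesteps the quoting problem entirely, for example via Python's `subprocess` module (a sketch; assumes `Rscript` is on the PATH):

```python
import subprocess

def run_rscript(r_code):
    """Run `Rscript -e <r_code>` with no shell in between, so backticks in
    r_code are passed through literally instead of being substituted."""
    result = subprocess.run(["Rscript", "-e", r_code],
                            capture_output=True, text=True)
    return result.stdout

# Example (requires R and knitr to be installed):
# html = run_rscript(
#     "knitr::knit2html(text = '## good\\n`r 1 + 1`', fragment.only = TRUE)")
```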
|
2012/12/10
|
['https://Stackoverflow.com/questions/13803059', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/235349/']
|
Android 2.3
```
// display the data
String baseUrl = "";
String mimeType = "text/html";
String encoding = "UTF-8";
html = sb.toString();
String historyUrl = "";
webViewDataViewer.loadDataWithBaseURL(baseUrl, html, mimeType, encoding, historyUrl);
```
|
The % symbol does not load in the Android 2.2 WebView. It has to be encoded.
|
63,768 |
Fluids exert hydrostatic pressure because their molecules hit each other or the immersed body, but why is that at a greater depth pressure is higher when molecules are the same ?
Assume density of fluid is uniform throughout the liquid.
|
2013/05/08
|
['https://physics.stackexchange.com/questions/63768', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']
|
Your statement:
>
> Assume density of fluid is same throughout.
>
>
>
conflicts with your actual question
>
> But why is that at a greater depth pressure is higher when molecules are the same?
>
>
>
If we imposed the very strict and non-physical constraint that the density of the fluid was uniform and isotropic, then we would have no variance in pressure whatsoever - but as I have said, this is non-physical.
What happens in reality is as follows... Each molecule in a given fluid has a mass. Gravity acts on this mass (let's assume it acts vertically, on a column of perfectly stacked molecules) to produce a weight for each molecule. If we take the limiting case where all molecules are stationary, then it should be easy to convince yourself that at some depth within the fluid a given molecule has a force acting upon it purely from gravity $Mg$ [$M$ is the molecule's mass] plus the weight of all those molecules above it $\Sigma\_{i}m\_{i}g$ [where $m\_{i}$ is the mass of the ith molecule], but also a reaction force provided by the contact with the molecule directly below it, $R$, where
$$R = Mg + \Sigma\_{i}m\_{i}g,$$
for some arbitrary stationary molecule. From this it should be clear that the absolute force acting on this molecule (in the vertical direction, for our ideal column) increases with the depth of the molecule. This is essentially what causes pressure in a fluid, and it increases the fluid density with depth.
*Note. Of course the above is very much simplified as there will be much more going on at the molecular level, but this example should provide you with a basic Newtonian view with which to build from.*
I hope this helps.
|
In short: because the weight makes it so.
Imagine a situation where several people are standing on top of each other in a small room: those at the top don't feel any discomfort, whereas those at the bottom are crushed by the weight of the ones above them. It's quite the same for the molecules.
63,768 |
Fluids exert hydrostatic pressure because their molecules hit each other or the immersed body, but why is that at a greater depth pressure is higher when molecules are the same ?
Assume density of fluid is uniform throughout the liquid.
|
2013/05/08
|
['https://physics.stackexchange.com/questions/63768', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/-1/']
|
Your statement:
>
> Assume density of fluid is same throughout.
>
>
>
conflicts with your actual question
>
> But why is that at a greater depth pressure is higher when molecules are the same?
>
>
>
If we imposed the very strict and non-physical constraint that the density of the fluid was uniform and isotropic, then we would have no variance in pressure whatsoever - but as I have said, this is non-physical.
What happens in reality is as follows... Each molecule in a given fluid has a mass. Gravity acts on this mass (let's assume it acts vertically, on a column of perfectly stacked molecules) to produce a weight for each molecule. If we take the limiting case where all molecules are stationary, then it should be easy to convince yourself that at some depth within the fluid a given molecule has a force acting upon it purely from gravity $Mg$ [$M$ is the molecule's mass] plus the weight of all those molecules above it $\Sigma\_{i}m\_{i}g$ [where $m\_{i}$ is the mass of the ith molecule], but also a reaction force provided by the contact with the molecule directly below it, $R$, where
$$R = Mg + \Sigma\_{i}m\_{i}g,$$
for some arbitrary stationary molecule. From this it should be clear that the absolute force acting on this molecule (in the vertical direction, for our ideal column) increases with the depth of the molecule. This is essentially what causes pressure in a fluid, and it increases the fluid density with depth.
*Note. Of course the above is very much simplified as there will be much more going on at the molecular level, but this example should provide you with a basic Newtonian view with which to build from.*
I hope this helps.
|
First, think of this in terms of psi:
This is a bit simplified, but: when you are standing at sea level, under S.T.P. (Standard Temperature and Pressure), you have a column of air some 120,000 feet high pushing on you. That weighs 14.7 pounds (for a column that has a cross-section of one square inch).
That is the pressure we feel, every day. It is what our bodies are accustomed to.
Now, think in the same terms when you are under the water. In addition to that column of air, you also have a column of water - however deep you are - pushing down on you. If you are 33 feet deep, a square-inch cross-section of water will weigh as much as the column of air above it... which is why at 33 feet, you are now experiencing 2 atmospheres of pressure. (An atmosphere being the equivalent of what you feel at sea level at S.T.P.)
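As a sanity check on the 33-foot figure (an added worked example, using round numbers and a seawater density of about 1025 kg/m³; 33 ft is about 10.1 m):
$$P = P\_0 + \rho g h \approx 101\ \mathrm{kPa} + (1025\ \mathrm{kg/m^3})(9.81\ \mathrm{m/s^2})(10.1\ \mathrm{m}) \approx 203\ \mathrm{kPa} \approx 2\ \mathrm{atm}.$$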
|
119,756 |
I am on a wireless network trying to play CS go with my friends. They all are in the same room with me. How can I create a server so I can play with my friends? In CS 1.6 we launch a hlds file which is in the cs 1.6 directory. We create a server through launching this hlds file. But in Counter Strike: Global Offensive I can't find hlds file, so now I don't know how to setup or create a server.
|
2013/06/09
|
['https://gaming.stackexchange.com/questions/119756', 'https://gaming.stackexchange.com', 'https://gaming.stackexchange.com/users/49842/']
|
Valve keeps the documentation for installing CS:GO servers on the [Valve Developer Wiki](https://developer.valvesoftware.com/wiki/Counter-Strike:_Global_Offensive_Dedicated_Servers). The docs don't present a direct step-by-step procedure, though, so I'll try to assist.
1. Download [SteamCMD](https://developer.valvesoftware.com/wiki/SteamCMD) (look for the "Windows zip file" link)
2. Unzip SteamCMD into a folder on your computer.
3. Double click steamcmd.exe
4. Wait while steamcmd gets the latest patches
5. At the `Steam>` prompt, type `login anonymous` to get connected to Steam
6. At the `Steam>` prompt, type `force_install_dir cs_go` to set the directory where CS:GO will be installed to. In this case, it's the folder cs\_go in the SteamCMD folder
7. At the `Steam>` prompt, type `app_update 740 validate` to initiate the dedicated server download.
8. Wait for the files to download (the time depends on Steam server load and your internet connection)
Once the download is complete, you can close SteamCMD.
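Steps 5-7 above can also be collected in a SteamCMD script file so you don't have to retype them on every update. A sketch (the file name `update_csgo.txt` is just an example); run it with `steamcmd.exe +runscript update_csgo.txt`:

```
login anonymous
force_install_dir cs_go
app_update 740 validate
quit
```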
To launch the server:
1. Open a command prompt (On Win7, you can open the Windows menu and type cmd.exe)
2. Get to the directory where you installed the dedicated server (ie, `cd C:\Users\YOUR USERNAME\Downloads\steamcmd_win32\cs_go`)
3. Launch the server exe with a command like: `srcds -game csgo -console -usercon +game_type 0 +game_mode 0 +mapgroup mg_bomb +map de_dust`
The actual server options will vary depending on what type of CS:GO game server you want to run. More examples are [here](https://developer.valvesoftware.com/wiki/Counter-Strike:_Global_Offensive_Dedicated_Servers) and complete srcds docs are [here](https://developer.valvesoftware.com/wiki/Command_Line_Options#Source_Games).
Running the server consumes some resources, so if you've got a PC that won't be playing the game, you can use that to run the server. If your PC is powerful enough, you might be able to run both the server and the game on the same machine. However, that will depend on a number of factors that are specific to your setup.
|
Have you tried adding the `-ip` command-line parameter with the LAN IP of the server?
If you activated gamemodes\_server.txt, are you sure it does not contain any syntax errors?
I've been setting up csgo servers since May 2013 and those were the most common issues.
|
9,434 |
I'm completely new to this, so I don't know if I'm doing something dumb or my regulator is broken. Here's what I'm doing:
Here's what I'm doing:
1. I connected [this regulator](http://www.beveragefactory.com/draftbeer/regulator/double/premium_double_gauge_542.html) to a 5 lb. CO2 cylinder.
2. Closed the blue output valve.
3. Turned the regulator knob all the way down (left, i.e. counter-clockwise, i.e. out, i.e. towards the minus sign).
4. Opened the valve on the CO2 cylinder.
5. As expected, at this point the input (high-pressure) gauge quickly goes up to about 600 psi.
6. NOT expected: the output (low-pressure) gauge quickly goes up to 50-60 psi (the upper limit), and the relief valve starts sputtering and releasing CO2 quickly.
7. Finally, as expected, when I shut off the cylinder, the input pressure slowly drops to zero (and then, if I hold open the relief valve or open the output, so does the output pressure.)
I can't find any configuration of knobs and valves that doesn't result in the output pressure spiking and the relief valve opening. Am I missing something? Is the regulator broken? Something else entirely?
NOTE: If I open the output, and open the regulator a little bit (so it's just releasing CO2 into the room) then the regulator knob does control the flow rate, as expected. Turning the knob to the left reduces the flow to a fairly low rate. If I then block the flow by closing the output valve, the output pressure builds to 60 PSI in about a second and the relief valve opens.
|
2013/02/21
|
['https://homebrew.stackexchange.com/questions/9434', 'https://homebrew.stackexchange.com', 'https://homebrew.stackexchange.com/users/3159/']
|
It sounds like what you're doing is correct. (And I guess you've tried turning it all the way to the right - clockwise?)
The relief valve can be quite sensitive on some regulators, causing it to fire a little prematurely, so it might have been that, but for the fact that you say the dial jumps to 60 psi.
I would double check that the relief lock isn't engaged - this will cause the relief valve to be open all the time. Try turning the ring fastened at the end of the relief valve.
If you still can't regulate the pressure with the knob, then the regulator isn't living up to its name! Sounds like you have a broken regulator.
|
CO2 regulator knobs are counterintuitive for first-time users.
When you "close" one clockwise like a faucet, you are in fact adjusting a screw that pushes a pin, allowing more CO2 flow.
In short: try twisting all the way counterclockwise.
If it still fails, have your reg checked.
|
4,658,963 |
Is this code correct from the point of view of memory management?
```
NSEntityDescription *description = [NSEntityDescription
entityForName:@"Event" inManagedObjectContext:managedObjectContext];
NSFetchRequest *eventRequest = [[[NSFetchRequest alloc] init] autorelease];
[eventRequest setEntity:description];
[description release];
NSPredicate *eventPredicate = [NSPredicate predicateWithFormat:
@"(event == %@)", [item objectForKey:@"event"]];
[eventRequest setPredicate:eventPredicate];
```
Or do I need to release `description` and `eventPredicate`?
Thanks
|
2011/01/11
|
['https://Stackoverflow.com/questions/4658963', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/258863/']
|
Looking at that code, the only object you own is the `eventRequest`. It is being autoreleased so you don't need to release it again.
From what I can see, based on naming convention, all the other objects aren't owned, so you don't need to release them.
The line `[description release];` will likely cause a crash for you somewhere down the line.
|
You don't need any releases for that code. You should read [Apple's documentation](http://developer.apple.com/library/mac/#documentation/cocoa/conceptual/MemoryMgmt/MemoryMgmt.html) to find out why.
|
11,906,750 |
I want to scroll to the bottom of my tableview that contains custom cells.
Here is the code I am using to scroll:
```
NSIndexPath *lastMessage = [NSIndexPath indexPathForRow:[self.conversation.messages count]-1 inSection:0];
[self.messageTable scrollToRowAtIndexPath:lastMessage atScrollPosition:UITableViewScrollPositionTop animated:YES];
```
This scrolls the view; however only the very top of the last cell is visible with ~3/4 of the cell still below the fold that I have to manually scroll down to. Any ideas on how to fix this?
|
2012/08/10
|
['https://Stackoverflow.com/questions/11906750', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/482255/']
|
Turned out to be a timing issue. The table view hadn't fully rendered yet when I called that method from `viewDidLoad` (it had a `contentSize` of 0). Calling this method in `viewDidAppear` works brilliantly though.
|
It seems like the UITableView is confused about how big your cells are. Set the `rowHeight` property on the UITableView to the height of your custom cell.
|
7,917,076 |
I tried the `CONVERT(TIME,sample_datetime)`, but my software does not recognize TIME as a type.
How do I take sample `datetime` and extract the time from it in one variable and the day of the week from it in another variable?
|
2011/10/27
|
['https://Stackoverflow.com/questions/7917076', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/971115/']
|
Using [DATEPART()](http://msdn.microsoft.com/en-us/library/ms174420.aspx) function:
```
-- returns 4
SELECT DATEPART(day, '2010-09-04 11:22:33')
-- returns 7
SELECT DATEPART(dw, '2010-09-04 11:22:33')
-- returns 11:22:33
SELECT CAST(DATEPART(HOUR, '2010-09-04 11:22:33') AS VARCHAR(2)) + ':'
+ CAST(DATEPART(MINUTE, '2010-09-04 11:22:33') AS VARCHAR(2)) + ':'
+ CAST(DATEPART(SECOND, '2010-09-04 11:22:33')AS VARCHAR(2))
```
Regarding the TIME data type, it was introduced in SQL Server 2008:
```
SELECT CAST('2010-09-04 11:22:33' as Time)
```
|
You didn't specify your SQL Server version, but
`select datepart(dw, yourdate)` should do it.
|
7,917,076 |
I tried the `CONVERT(TIME,sample_datetime)`, but my software does not recognize TIME as a type.
How do I take sample `datetime` and extract the time from it in one variable and the day of the week from it in another variable?
|
2011/10/27
|
['https://Stackoverflow.com/questions/7917076', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/971115/']
|
Using [DATEPART()](http://msdn.microsoft.com/en-us/library/ms174420.aspx) function:
```
-- returns 4
SELECT DATEPART(day, '2010-09-04 11:22:33')
-- returns 7
SELECT DATEPART(dw, '2010-09-04 11:22:33')
-- returns 11:22:33
SELECT CAST(DATEPART(HOUR, '2010-09-04 11:22:33') AS VARCHAR(2)) + ':'
+ CAST(DATEPART(MINUTE, '2010-09-04 11:22:33') AS VARCHAR(2)) + ':'
+ CAST(DATEPART(SECOND, '2010-09-04 11:22:33')AS VARCHAR(2))
```
Regarding the TIME data type, it was introduced in SQL Server 2008:
```
SELECT CAST('2010-09-04 11:22:33' as Time)
```
|
DATEPART or DATENAME with DW will work for the day of the week depending on which format you need
```
SELECT DATEPART(DW, GETDATE())
SELECT DATENAME(DW, GETDATE())
```
You can convert the datetime to a varchar with specific formatting to get just the time
```
SELECT CONVERT(VARCHAR, GETDATE(), 14)
```
SQL Server 2008 has a TIME datatype, but it doesn't sound like you're running that.
|
74,357,690 |
According to the Serverless [documentation](https://www.serverless.com/framework/docs/guides/parameters#), I should be able to define params within the dashboard/console. But when I navigate there, the inputs are disabled:
[](https://i.stack.imgur.com/gllCD.png)
I've tried following the [instructions](https://www.serverless.com/framework/docs/guides/parameters#cli-parameters) to update via CLI, with: `serverless deploy --param="domain=myapp.com" --param="key=value"`. The deploy runs successfully (I get a `✔ Service deployed to...` message with no errors), but nothing appears in my dashboard. Likewise, when I run a command to check whether there are any params stored: `serverless param list`, I get
```
Running "serverless" from node_modules
No parameters stored
```
|
2022/11/08
|
['https://Stackoverflow.com/questions/74357690', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11664580/']
|
Passing `param` flags will not upload the parameters to Dashboard/Console; it will only expose them in your configuration so you can access them with `${param:<param-name>}`. To the best of my knowledge, it is not possible to set Dashboard parameters with the CLI; you need to set them manually via the UI.
|
It was a permissions problem. The owner of the account updated the permissions and I was able to update the inputs.
|
48,646,089 |
I want to split this df into bins based on the variable Quality. However, it is extremely right-skewed.
```
TSI2 YRI Chromosome Quality
a1 0.03829518 0.050231431 22 0.860
a2 0.03110103 0.010192455 22 0.938
a3 0.03141379 0.060045625 22 0.848
```
This is a hist of Quality.
[](https://i.stack.imgur.com/7yPPx.png)
All of the ways I have tried to bin the data so far have resulted in bins with very different numbers of samples in each.
```
totalResults$groups = cut(totalResults$Quality, 10)
```
Is there a way to force the bins to have even numbers of samples in each?
thanks
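(For reference, the way to get near-equal counts per bin is to cut at the empirical quantiles rather than at equal-width breaks; in R that is `cut(x, breaks = quantile(x, probs = seq(0, 1, 0.1)), include.lowest = TRUE)`. A sketch of the same idea in Python with `pandas.qcut`, using simulated right-skewed values as a stand-in for the Quality column:)

```python
import numpy as np
import pandas as pd

# Simulated right-skewed "Quality" values (stand-in for totalResults$Quality).
rng = np.random.default_rng(0)
quality = pd.Series(rng.beta(8, 2, size=1000))

# qcut splits at the empirical quantiles, so each bin gets ~the same count.
groups = pd.qcut(quality, q=10)
print(groups.value_counts().tolist())  # ten bins of 100 each
```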
|
2018/02/06
|
['https://Stackoverflow.com/questions/48646089', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5784757/']
|
Applying the `FileStream` approach - as already mentioned - use the `FileStream` [constructor](https://learn.microsoft.com/en-us/dotnet/api/system.io.filestream.-ctor?view=netframework-4.8#System_IO_FileStream__ctor_System_String_System_IO_FileMode_System_IO_FileAccess_System_IO_FileShare_System_Int32_) that accepts a `bufferSize` argument, which specifies the amount of bytes being read into memory.
*(You can override the default value (`4096`) to fit your environment.)*
```
public FileStream(string path, FileMode mode, FileAccess access, FileShare share, int bufferSize);
```
>
> **bufferSize**:
>
> A positive System.Int32 value greater than 0 indicating
> the buffer size.
>
> The default buffer size is 4096.
>
>
>
```
public IActionResult GetFile()
{
var filePath = @"c:\temp\file.mpg"; // Your path to the audio file.
var bufferSize = 1024;
var fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read, bufferSize);
return File(fileStream, "audio/mpeg");
}
```
Note that there's no need to dispose the `fileStream`; the `File` method takes care of this.
---
To clarify:
When passing in a `FileStream`, its content is being read in chunks (matching the configured buffersize).
Concretely, this means that its `Read` method (`int Read(byte[] array, int offset, int count)`) is executed repeatedly until all bytes have been read, ensuring that no more than the given number of bytes is held in memory at once.
The scalability benefit is the lower memory usage: memory is a resource that can come under pressure when files are large, especially in combination with a high read frequency (of this or of other files),
which might cause out-of-memory problems.
|
*Posting as a community wiki, since it doesn't technically answer the question, but suggested code won't work as a comment.*
You can return a stream directly from `FileResult`, so there's no need to manually read from it. In fact, your code doesn't actually "stream", since you're basically reading the whole stream into memory, and then returning the `byte[]` at the end. Instead, just do:
```
// Note: no using block here -- the FileResult disposes the stream
// after the response has been written; disposing it earlier would break the response.
FileStream fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read);
return File(fileStream, "audio/mpeg");
```
Or even simpler, just return the file path, and let `FileResult` handle it completely:
```
return PhysicalFile(filePath, "audio/mpeg");
```
|
2,005,378 |
Let $p$ be a prime number and $A$ be a commutative ring with unity. We say that $A$ has characteristic $p$ if $p\cdot 1\_A=0$. I would like to know if you could have a ring $A$ with all residue fields (= $\operatorname{Frac}(A/\mathfrak{p}$) with $\mathfrak{p}$ a prime ideal) of characteristic $p$ but $A$ itself not being of characteristic $p$.
|
2016/11/08
|
['https://math.stackexchange.com/questions/2005378', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/217745/']
|
You can take $\mathbb{Z}/4\mathbb{Z}.$ This has characteristic $4$, the only prime is $(2)$ and the residue field $\mathbb{F}\_2$ is of characteristic $2$.
EDIT:
You may also say something positive (but not really surprising either):
>
> If $A$ is an integral domain such that all the residue fields are of equal characteristic $p$, then $\mathrm{char}(A)=p$.
>
>
>
This is because of the prime ideal $(0)$: in this case, we have $A \subseteq \mathrm{Frac}(A)=\mathrm{Frac}(A/(0)),$ and the claim follows.
|
Have you thought of $\mathbb{Z}\_{(p)}=\{\frac{a}{b}\mid p\nmid b\}$? This ring has characteristic $0$, but it is a local ring with unique maximal ideal $p\mathbb{Z}\_{(p)}$, so its residue field $\mathbb{Z}\_{(p)}/p\mathbb{Z}\_{(p)}$ is isomorphic to $\mathbb{F}\_p$, the integers modulo $p$.
|
236,953 |
Can anyone tell me how to tell whether the predictors I am using are collinear and cannot be used in a `geeglm` model? What is the threshold value, and is calculating the correlation the correct way of determining it?
|
2016/09/26
|
['https://stats.stackexchange.com/questions/236953', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/115530/']
|
You can start by looking at the Pearson pairwise correlations to get the strength and direction of the linear association between any two (continuous) predictors. This can give you some insights about the data. In R you can use:
`cor(dat[,names(dat)], use ="pairwise", method = "pearson")`
However, there is no exact threshold at which we can say that collinearity is too high (unless, of course, the Pearson correlation coefficient equals 1). If you try to fit a linear model, pairwise correlations are not the sole problem, we can have collinearity between more than two variables… We commonly evaluate multicollinearity through Variance Inflation Factors (VIFs). In R, after fitting the model we can use `vif(model)` from the package ‘car’. This gives the correlations between each predictor and all the other predictors used in the model. The rule of thumb is that VIF should not be larger than 10. If so, you remove the variable having the highest VIF, re-run the model and check again the VIF.
It may be sometimes that the predictor that you need to remove (according to VIF) is your predictor of interest (this can happen when your aim is not prediction, but rather to identify how certain predictors affect the outcome variable). In that case, you keep your predictor and look at the Pearson pairwise correlation matrix to identify which predictors are highly correlated with your main predictor and remove them one by one while checking VIF.
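As a sketch of what VIF actually computes (shown here in Python/NumPy for illustration; in R you would just call `car::vif(model)` as above): VIF for predictor $j$ is $1/(1 - R^2_j)$, where $R^2_j$ comes from regressing predictor $j$ on all the other predictors. The simulated data below is made up to show the rule-of-thumb cutoff of 10 being exceeded:

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R^2_j),
    where R^2_j is from regressing column j on the remaining columns."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    factors = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # add an intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        factors.append(np.inf if r2 >= 1.0 else 1.0 / (1.0 - r2))
    return factors

# x3 is (almost) x1 + x2, so its VIF blows up past the rule-of-thumb cutoff of 10.
rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=200), rng.normal(size=200)
x3 = x1 + x2 + 0.01 * rng.normal(size=200)
print(vif(np.column_stack([x1, x2, x3])))
```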
|
If what you are after is a list of covariates that are not collinear, you can use `lm()` to do the job for you.
Here is an example (with simulated data):
```
# Simulate data
x1 <- runif(100)
x2 <- runif(100)
x3 <- x1 + x2
y <- x1+x2+rnorm(100)
dat <- data.frame(y,x1,x2,x3)
# Run lm() on your data
reg <- lm(y~.,dat)
# Here is the list!
intersect(names(dat),names(reg$coef[!is.na(reg$coef)]))
```
|
33,487,368 |
Simple question that I have no idea of the answer to.
Is there a way to use just one xaml page (at least one "main" page) to emulate multiple pages. i.e the same as:
```
this.Frame.Navigate(typeof(Page2), null);
```
*but* using only one page?
Thanks so much, any help is appreciated.
|
2015/11/02
|
['https://Stackoverflow.com/questions/33487368', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2463166/']
|
>
> Currently when creating a FormData object, a checked checkbox is added with a value of "on", and an unchecked checkbox is not passed at all.
>
>
>
`on` is only used if the checkbox is missing a `value` attribute
>
> Do I have to hack in some hidden inputs to properly set checkboxes
>
>
>
No. That *is* properly handling checkboxes. It is how they have worked in forms since the form element was added to HTML.
Test for the presence or absence of the checkbox in the code that handles it.
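A minimal sketch of that presence/absence test (the helper name and field names are made up for illustration): given a form's submitted `[name, value]` entries, mark each known checkbox field as an explicit boolean.

```javascript
// Turn a form's [name, value] entries into an object where every known
// checkbox is an explicit boolean: present => true, absent => false.
function normalizeCheckboxes(entries, checkboxNames) {
  const data = Object.fromEntries(entries);
  for (const name of checkboxNames) {
    data[name] = entries.some(([key]) => key === name);
  }
  return data;
}

// "subscribe" was checked (sent as "on"); "tos" was left unchecked (not sent).
console.log(normalizeCheckboxes(
  [["user", "bob"], ["subscribe", "on"]],
  ["subscribe", "tos"]
));
```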
|
Try this:
```
var checkbox = $("#myForm").find("input[type=checkbox]");
$.each(checkbox, function(key, val) {
formData.append($(val).attr('name'), this.is(':checked'))
});
```
It always adds the field to `FormData` with either a value of `true` when checked, or `false` when unchecked.
|
33,487,368 |
Simple question that I have no idea of the answer to.
Is there a way to use just one xaml page (at least one "main" page) to emulate multiple pages. i.e the same as:
```
this.Frame.Navigate(typeof(Page2), null);
```
*but* using only one page?
Thanks so much, any help is appreciated.
|
2015/11/02
|
['https://Stackoverflow.com/questions/33487368', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2463166/']
|
>
> Currently when creating a FormData object, a checked checkbox is added with a value of "on", and an unchecked checkbox is not passed at all.
>
>
>
`on` is only used if the checkbox is missing a `value` attribute
>
> Do I have to hack in some hidden inputs to properly set checkboxes
>
>
>
No. That *is* properly handling checkboxes. It is how they have worked in forms since the form element was added to HTML.
Test for the presence or absence of the checkbox in the code that handles it.
|
I took a slightly different approach from the existing answers. I created my form data variable the standard jQuery way inside my form submit event handler:
```
var form = $(this).get(0);
var formData = new FormData(form);
```
Based on [Quentin's answer](https://stackoverflow.com/a/33487482), saying that it is only set to 'on' when the `value` attribute is not available, I just added a document event on change to all checkboxes to set the value when the user checks or unchecks the input.
```
$(document).on("change", "input[type='checkbox']", function () {
var value = $(this).prop('checked');
$(this).val(value);
});
```
When my form data object is created in the above way, all checked checkboxes now have the value of 'true' rather than 'on'.
This worked quite nicely for my purposes and seems to me to be a pretty simple fix. Interestingly, having `value='false'` doesn't do anything as the input is just ignored and not added to the form data object if it is not checked. But obviously that still works.
|
33,487,368 |
Simple question that I have no idea of the answer to.
Is there a way to use just one xaml page (at least one "main" page) to emulate multiple pages. i.e the same as:
```
this.Frame.Navigate(typeof(Page2), null);
```
*but* using only one page?
Thanks so much, any help is appreciated.
|
2015/11/02
|
['https://Stackoverflow.com/questions/33487368', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2463166/']
|
Try this:
```
var checkbox = $("#myForm").find("input[type=checkbox]");
$.each(checkbox, function(key, val) {
formData.append($(val).attr('name'), this.is(':checked'))
});
```
It always adds the field to `FormData` with either a value of `true` when checked, or `false` when unchecked.
|
I took a slightly different approach from the existing answers. I created my form data variable the standard jQuery way inside my form submit event handler:
```
var form = $(this).get(0);
var formData = new FormData(form);
```
Based on [Quentin's answer](https://stackoverflow.com/a/33487482), saying that it is only set to 'on' when the `value` attribute is not available, I just added a document event on change to all checkboxes to set the value when the user checks or unchecks the input.
```
$(document).on("change", "input[type='checkbox']", function () {
var value = $(this).prop('checked');
$(this).val(value);
});
```
When my form data object is created in the above way, all checked checkboxes now have the value of 'true' rather than 'on'.
This worked quite nicely for my purposes and seems to me to be a pretty simple fix. Interestingly, having `value='false'` doesn't do anything as the input is just ignored and not added to the form data object if it is not checked. But obviously that still works.
|
326,104 |
[](https://i.stack.imgur.com/Rdip2.jpg)Recently I got stuck with the following problem.
Imagine we have a uniform magnetic field whose induction points upwards. The field's strength is steadily decreasing. If we put an iron coil perpendicular to the magnetic induction vector, then, obviously, there will be an electric current induced in the coil.
However, as I understand, the coil itself is only a 'marker' that displays the electric field lines that actually make the electrons move. It means that the electric field is there even when there is no coil.
Now the problem:
I can imagine some coils being close to each other. It essentially means that in one of them the current will go one way and in the other, the opposite. How can this possibly be?
I looked at [this answer](https://physics.stackexchange.com/questions/8279/what-is-the-direction-of-the-induced-e-field-from-a-changing-uniform-magnetic-fi) as it is phrased very close to what I want, and still I couldn't get the idea. Could the answer be presented in more layman's terms?
|
2017/04/12
|
['https://physics.stackexchange.com/questions/326104', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/89976/']
|
Since the Maxwell's equations are linear partial differential equations, you can compute the magnetic field due to multiple sources by superposition.
A really important application that relies on the superposition principle for magnetic fields is the Biot–Savart law, i.e. the fact that the magnetic field is a vector sum of the fields created by each infinitesimal section of the wire individually.

$$\mathrm d\vec B = \frac{\mu\_0}{4\pi}\frac{I \; \mathrm d\vec l \times \vec R}{R^3}$$
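This superposition can be checked numerically: summing the Biot–Savart contributions of many small segments of a long straight wire reproduces the textbook field $B = \mu\_0 I / (2\pi d)$. A sketch in Python (the wire length and discretization are arbitrary choices):

```python
import numpy as np

mu0 = 4e-7 * np.pi   # vacuum permeability, T*m/A
I = 1.0              # current, A
d = 1.0              # distance from the wire, m

# Discretize a long straight wire along z and superpose each segment's dB.
z = np.linspace(-500.0, 500.0, 200001)   # segment endpoints
zm = 0.5 * (z[:-1] + z[1:])              # segment midpoints
dl = np.diff(z)                          # segment lengths (along z)
R = np.sqrt(d**2 + zm**2)                # distance from midpoint to field point
# |dl x R_hat| = dl * d / R, so dB = mu0/(4*pi) * I * dl * d / R^3
B = (mu0 / (4 * np.pi)) * I * np.sum(dl * d / R**3)

print(B, mu0 * I / (2 * np.pi * d))  # numerical sum vs. closed-form result
```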
|
You are correct, they follow superposition. The magnetic field is a vector field, so overlapping fields combine as a vector sum.
Maxwell's equations are linear ($\nabla \times$ and $\nabla \cdot$ are linear operators) and it follows that the solutions ($E$ and $B$) obey the superposition principle.
|
3,200,569 |
**Short Version:**
How can it be geometrically shown that non-singular 2D linear transformations take circles to ellipses?
*(Also, it's probably important to state I'd prefer an explanation that doesn't use SVD, as I don't really understand it yet...although I see it everywhere)*
**Long Version:**
Let's use the definition of an ellipse as being a circle stretched in two perpendicular directions. The two directions its stretched in will correspond to the two axes of the ellipse.
We begin by defining the circle as the endpoints of all possible 2D unit vectors. The ellipse *(or at least I'm TOLD it's an ellipse)* is the shape drawn by the endpoints of all these vectors after they have all been transformed in the same way *(aka multiplied by the same nonsingular matrix $A$)*.
1. For linear transformations represented by diagonal matrices, it's easy to see. We're just stretching the circle in the X and Y directions.
2. For linear transformations represented by symmetric matrices...its a little harder, but I can see the transformation because the eigenvectors of the symmetric matrix are perpendicular, and if we change to a basis where those eigenvectors are the basis vectors, the transformation can be represented by a diagonal matrix *(as for WHY symmetric matrices can be decomposed this way I don't yet really understand - but for the purpose of this question I'm just accepting that they can; I'm accepting that the eigenvectors of symmetric matrices are perpendicular to one another)*.
So, just like diagonal matrices, symmetric matrices also correspond to stretching a unit circle in perpendicular directions - but unless the symmetric matrix is diagonal, these are perpendicular directions different from the X and Y directions.
3. Buuut...what about for nonsymmetric matrices?
Thanks!
---
**EDIT:**
---------
I've now learned of the polar decomposition of any real matrix, and that provides a beautiful explanation for why any real matrix takes a circle to an ellipse!
$A=QS$, where $Q$ is an orthogonal matrix *(rotation)* and $S$ is a symmetric matrix *(stretching in the direction of the eigenvectors).*
The symmetric matrix will definitely correspond to making an ellipse *(since it scales in orthogonal directions, although perhaps not our regular $x$ and $y$ directions)* and all the orthonormal matrix will do is rotate this ellipse.
However, all the explanations I've seen so far that PROVE that polar decompositions of real matrices are **always** possible use an **algebraic** explanation instead of a geometric one...so they aren't really what I'm looking for.
Thanks again!
|
2019/04/24
|
['https://math.stackexchange.com/questions/3200569', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/493688/']
|
The equation of a circle is $x^2 + y^2 = r^2$, or in terms of vectors, $(x,y) \pmatrix{x\cr y} = r^2$. An invertible linear transformation $T$ takes $\pmatrix{x\cr y}$ to $\pmatrix{X\cr Y} = T\pmatrix{x\cr y}$. Thus $\pmatrix{x\cr y\cr} = T^{-1} \pmatrix{X\cr Y}$, and $(x,y) = (X, Y) (T^{-1})^\top$. The equation becomes
$$(X, Y) (T^{-1})^\top T^{-1} \pmatrix{X\cr Y} = r^2 $$
Note that $(T^{-1})^\top T^{-1}$ is a real symmetric matrix, so it can be diagonalized, and its eigenvalues are positive.
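This can also be verified numerically (a sketch in Python/NumPy; the particular matrix is an arbitrary nonsymmetric example): map points of the unit circle through $T$ and check that the images satisfy a quadratic form whose matrix is symmetric with positive eigenvalues -- i.e. they lie on an ellipse.

```python
import numpy as np

T = np.array([[2.0, 1.0],   # any nonsingular (here nonsymmetric) matrix
              [0.5, 1.5]])
Tinv = np.linalg.inv(T)
M = Tinv.T @ Tinv            # the matrix of the transformed conic

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.vstack([np.cos(theta), np.sin(theta)])   # unit circle, r = 1
image = T @ circle                                   # transformed points

# Every image point satisfies v^T M v = 1 ...
quad = np.einsum("ij,jk,ik->i", image.T, M, image.T)
print(np.allclose(quad, 1.0))

# ... and M is symmetric with positive eigenvalues, so this conic is an ellipse.
print(np.allclose(M, M.T), np.all(np.linalg.eigvalsh(M) > 0))
```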
|
Every real square matrix has a [polar decomposition](https://en.wikipedia.org/wiki/Polar_decomposition) into the product of an orthogonal matrix $U$ and a positive-semidefinite (symmetric) matrix $P$. If the original matrix is nonsingular, then $P$ is positive-definite. In 2-D, orthogonal matrices represent either rotations or reflections, which are both isometries, so they don’t affect the shape of the transformed circle. As you’ve mentioned, $P$ can be orthogonally diagonalized, so it represents a stretch in some set of perpendicular directions.
The existence of this decomposition is equivalent to the existence of the SVD, but can be shown without relying on the latter. In a similar vein, the SVD decomposes the matrix into the product of a rotation or reflection, a scaling, and another rotation or reflection.
You might also have a look at the [Steiner generation of an ellipse](https://en.wikipedia.org/wiki/Ellipse#Steiner_generation). This uses intersecting line segments drawn between points on the sides of a parallelogram to generate ellipses, including circles. Affine transformations preserve incidence relationships (the image of the intersection of a pair of lines is the intersection of the lines' images) and map parallelograms to parallelograms, so the image of an ellipse under an affine transformation is another ellipse.
|
3,200,569 |
**Short Version:**
How can it be geometrically shown that non-singular 2D linear transformations take circles to ellipses?
*(Also, it's probably important to state I'd prefer an explanation that doesn't use SVD, as I don't really understand it yet...although I see it everywhere)*
**Long Version:**
Let's use the definition of an ellipse as being a circle stretched in two perpendicular directions. The two directions its stretched in will correspond to the two axes of the ellipse.
We begin by defining the circle as the endpoints of all possible 2D unit vectors. The ellipse *(or at least I'm TOLD it's an ellipse)* is the shape drawn by the endpoints of all these vectors after they have all been transformed in the same way *(aka multiplied by the same nonsingular matrix $A$)*.
1. For linear transformations represented by diagonal matrices, it's easy to see. We're just stretching the circle in the X and Y directions.
2. For linear transformations represented by symmetric matrices...its a little harder, but I can see the transformation because the eigenvectors of the symmetric matrix are perpendicular, and if we change to a basis where those eigenvectors are the basis vectors, the transformation can be represented by a diagonal matrix *(as for WHY symmetric matrices can be decomposed this way I don't yet really understand - but for the purpose of this question I'm just accepting that they can; I'm accepting that the eigenvectors of symmetric matrices are perpendicular to one another)*.
So, just like diagonal matrices, symmetric matrices also correspond to stretching a unit circle in perpendicular directions - but unless the symmetric matrix is diagonal, these are perpendicular directions different from the X and Y directions.
3. Buuut...what about for nonsymmetric matrices?
Thanks!
---
**EDIT:**
---------
I've now learned of the polar decomposition of any real matrix, and that provides a beautiful explanation for why any real matrix takes a circle to an ellipse!
$A=QS$, where $Q$ is an orthogonal matrix *(rotation)* and $S$ is a symmetric matrix *(stretching in the direction of the eigenvectors).*
The symmetric matrix will definitely correspond to making an ellipse *(since it scales in orthogonal directions, although perhaps not our regular $x$ and $y$ directions)* and all the orthonormal matrix will do is rotate this ellipse.
However, all the explanations I've seen so far that PROVE that polar decompositions of real matrices are **always** possible use an **algebraic** explanation instead of a geometric one...so they aren't really what I'm looking for.
Thanks again!
|
2019/04/24
|
['https://math.stackexchange.com/questions/3200569', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/493688/']
|
The equation of a circle is $x^2 + y^2 = r^2$, or in terms of vectors, $(x,y) \pmatrix{x\cr y} = r^2$. An invertible linear transformation $T$ takes $\pmatrix{x\cr y}$ to $\pmatrix{X\cr Y} = T\pmatrix{x\cr y}$. Thus $\pmatrix{x\cr y\cr} = T^{-1} \pmatrix{X\cr Y}$, and $(x,y) = (X, Y) (T^{-1})^\top$. The equation becomes
$$(X, Y) (T^{-1})^\top T^{-1} \pmatrix{X\cr Y} = r^2 $$
Note that $(T^{-1})^\top T^{-1}$ is a real symmetric matrix, so it can be diagonalized, and its eigenvalues are positive.
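To finish the argument explicitly: since $M = (T^{-1})^\top T^{-1}$ is symmetric positive-definite, write $M = R^\top \Lambda R$ with $R$ a rotation and $\Lambda = \operatorname{diag}(\lambda_1, \lambda_2)$, $\lambda_i > 0$. In the rotated coordinates $\pmatrix{U\cr V} = R\pmatrix{X\cr Y}$ the equation becomes
$$\lambda_1 U^2 + \lambda_2 V^2 = r^2,$$
which is an ellipse with semi-axes $r/\sqrt{\lambda_1}$ and $r/\sqrt{\lambda_2}$ along the (perpendicular) eigenvector directions of $M$.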
|
The answers on this thread are quite insightful, but I am attempting a very geometric answer here, as the OP requested. For this I am going to use another interesting geometric interpretation of a linear transformation (which is easier to imagine).
An alternate geometric interpretation of a (dimension-preserving) linear transformation is that it is a transformation such that *any line in the original space is always transformed into a line* and the origin is not shifted.
Now imagine a freshly made pizza base/crust. Even more, let's make grids on it.
Our job is now to transform its shape such that all the grid lines (or any possible line) remain lines. So you see, at best what we can do is stretch it with equal forces on opposite sides. You are free to choose where to stretch, and of course we can also simply rotate it before or after all the stretching.
It's not difficult to imagine that we can only get an ellipse (or a bigger circle if we stretch from all sides with equal force).
Other interesting points:
(1): The direction of a stretch is an eigenvector, and the extent to which it is stretched is the corresponding eigenvalue;
(2): Each rectangle from the gridlines is now a parallelogram.
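The claim that lines go to lines (with the origin fixed) can be checked numerically. A minimal sketch in Python, with the matrix and the sample line chosen arbitrarily for illustration:

```python
def apply(m, p):
    """Apply a 2x2 matrix m (row-major nested lists) to a point p."""
    return (m[0][0] * p[0] + m[0][1] * p[1],
            m[1][0] * p[0] + m[1][1] * p[1])

def collinear(a, b, c, eps=1e-9):
    """True if the three points lie on one line (cross product ~ 0)."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (b[1] - a[1]) * (c[0] - a[0])) < eps

m = [[2.0, 1.0], [0.5, 3.0]]                          # arbitrary nonsingular matrix
pts = [(t, 2.0 * t - 1.0) for t in (-1.0, 0.0, 2.0)]  # three points on one line
images = [apply(m, p) for p in pts]

assert collinear(*images)                    # the images are still collinear
assert apply(m, (0.0, 0.0)) == (0.0, 0.0)    # the origin stays fixed
print("lines map to lines:", images)
```

The same check passes for any choice of line and any nonsingular matrix, which is exactly the property the pizza picture relies on.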
|
3,200,569 |
**Short Version:**
How can it be geometrically shown that non-singular 2D linear transformations take circles to ellipses?
*(Also, it's probably important to state I'd prefer an explanation that doesn't use SVD, as I don't really understand it yet...although I see it everywhere)*
**Long Version:**
Let's use the definition of an ellipse as being a circle stretched in two perpendicular directions. The two directions its stretched in will correspond to the two axes of the ellipse.
We begin by defining the circle as the endpoints of all possible 2D unit vectors. The ellipse *(or at least I'm TOLD it's an ellipse)* is the shape drawn by the endpoints of all these vectors after they have all been transformed in the same way *(aka multiplied by the same nonsingular matrix $A$)*.
1. For linear transformations represented by diagonal matrices, it's easy to see. We're just stretching the circle in the X and Y directions.
2. For linear transformations represented by symmetric matrices...it's a little harder, but I can see the transformation because the eigenvectors of the symmetric matrix are perpendicular, and if we change to a basis where those eigenvectors are the basis vectors, the transformation can be represented by a diagonal matrix *(as for WHY symmetric matrices can be decomposed this way I don't yet really understand - but for the purpose of this question I'm just accepting that they can; I'm accepting that the eigenvectors of symmetric matrices are perpendicular to one another)*.
So, just like diagonal matrices, symmetric matrices also correspond to stretching a unit circle in perpendicular directions - but unless the symmetric matrix is diagonal, these are perpendicular directions different from the X and Y directions.
3. Buuut...what about for nonsymmetric matrices?
Thanks!
---
**EDIT:**
---------
I've now learned of the polar decomposition of any real matrix, and that provides a beautiful explanation for why any real matrix takes a circle to an ellipse!
$A=QS$, where $Q$ is an orthogonal matrix *(rotation)* and $S$ is a symmetric matrix *(stretching in the direction of the eigenvectors).*
The symmetric matrix will definitely correspond to making an ellipse *(since it scales in orthogonal directions, although perhaps not our regular $x$ and $y$ directions)* and all the orthonormal matrix will do is rotate this ellipse.
However, all the explanations I've seen so far that PROVE that polar decompositions of real matrices are **always** possible use an **algebraic explanation instead of a geometric one**...so they aren't really what I'm looking for.
Thanks again!
|
2019/04/24
|
['https://math.stackexchange.com/questions/3200569', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/493688/']
|
Every real square matrix has a [polar decomposition](https://en.wikipedia.org/wiki/Polar_decomposition) into the product of an orthogonal matrix $U$ and a positive-semidefinite (symmetric) matrix $P$. If the original matrix is nonsingular, then $P$ is positive-definite. In 2-D, orthogonal matrices represent either rotations or reflections, which are both isometries, so they don’t affect the shape of the transformed circle. As you’ve mentioned, $P$ can be orthogonally diagonalized, so it represents a stretch in some set of perpendicular directions.
The existence of this decomposition is equivalent to the existence of the SVD, but can be shown without relying on the latter. In a similar vein, the SVD decomposes the matrix into the product of a rotation or reflection, a scaling, and another rotation or reflection.
You might also have a look at the [Steiner generation of an ellipse](https://en.wikipedia.org/wiki/Ellipse#Steiner_generation). This uses intersecting line segments drawn between points on the sides of a parallelogram to generate ellipses, including circles. Affine transformations preserve incidence relationships (the image of the intersection of a pair of lines is the intersection of the lines' images) and map parallelograms to parallelograms, so the image of an ellipse under an affine transformation is another ellipse.
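A small numerical illustration of this picture in Python. Note that $Q$ and $P$ below are chosen by hand, so $A = QP$ is a polar decomposition by construction rather than one computed from $A$:

```python
import math

def matmul(a, b):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(a, v):
    """Apply a 2x2 matrix to a point."""
    return (a[0][0] * v[0] + a[0][1] * v[1],
            a[1][0] * v[0] + a[1][1] * v[1])

t = math.radians(30)
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]   # orthogonal: a pure rotation
P = [[2.0, 1.0],
     [1.0, 2.0]]                    # symmetric, eigenvalues 3 and 1
A = matmul(Q, P)                    # A = QP, polar form by construction

# P stretches along the perpendicular eigenvectors (1,1) and (1,-1):
assert matvec(P, (1.0, 1.0)) == (3.0, 3.0)     # stretched by 3
assert matvec(P, (1.0, -1.0)) == (1.0, -1.0)   # stretched by 1

# Map the unit circle through A; the extreme radii of the image are the
# eigenvalues of P, because Q (a rotation) changes no lengths.
radii = []
for k in range(360):
    a = math.radians(k)
    x, y = matvec(A, (math.cos(a), math.sin(a)))
    radii.append(math.hypot(x, y))
print(round(max(radii), 3), round(min(radii), 3))  # semi-axes: close to 3 and 1
```

So the image of the unit circle is an ellipse with semi-axes 3 and 1, rotated by $Q$: the symmetric factor determines the shape, the orthogonal factor only its orientation.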
|
The answers on this thread are quite insightful, but I am attempting a very geometric answer here, as the OP requested. For this I am going to use another interesting geometric interpretation of a linear transformation (which is easier to imagine).
An alternate geometric interpretation of a (dimension-preserving) linear transformation is that it is a transformation such that *any line in the original space is always transformed into a line* and the origin is not shifted.
Now imagine a freshly made pizza base/crust. Even more, let's make grids on it.
Our job is now to transform its shape such that all the grid lines (or any possible line) remain lines. So you see, at best what we can do is stretch it with equal forces on opposite sides. You are free to choose where to stretch, and of course we can also simply rotate it before or after all the stretching.
It's not difficult to imagine that we can only get an ellipse (or a bigger circle if we stretch from all sides with equal force).
Other interesting points:
(1): The direction of a stretch is an eigenvector, and the extent to which it is stretched is the corresponding eigenvalue;
(2): Each rectangle from the gridlines is now a parallelogram.
|
1,726,843 |
When reading a PHP book I wanted to try my own (continue) example.
I made the following code but it doesn't work although everything seems to be ok
```
$num2 = 1;
while ($num2 < 19)
{
if ($num2 == 15) {
continue;
} else {
echo "Continue at 15 (".$num2.").<br />";
$num2++;
}
}
```
The output is
```
Continue at 15 (1).
Continue at 15 (2).
Continue at 15 (3).
Continue at 15 (4).
Continue at 15 (5).
Continue at 15 (6).
Continue at 15 (7).
Continue at 15 (8).
Continue at 15 (9).
Continue at 15 (10).
Continue at 15 (11).
Continue at 15 (12).
Continue at 15 (13).
Continue at 15 (14).
Fatal error: Maximum execution time of 30 seconds exceeded in /var/www/php/continueandbreak.php on line 20
```
Line 20 is that line
```
if ($num2 == 15) {
```
Would you please tell me what's wrong with my example?
I am sorry for such a Noob question
|
2009/11/13
|
['https://Stackoverflow.com/questions/1726843', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/210142/']
|
If you don't increment `$num2` before the `continue`, you will get into an infinite loop:
```
$num2 = 0;
while ($num2 < 18)
{
$num2++;
if ($num2 == 15) {
continue;
} else {
echo "Continue at 15 (".$num2.").<br />";
}
}
```
|
You don't even need `continue` there; your code is equivalent to:
```
$num2 = 1;
while ($num2 < 19){
if ($num2 != 15) {
echo "Continue at 15 (".$num2.").<br />";
$num2++;
}
}
```
If that's not what you're trying to achieve, you're using continue wrong.
|
1,726,843 |
When reading a PHP book I wanted to try my own (continue) example.
I made the following code but it doesn't work although everything seems to be ok
```
$num2 = 1;
while ($num2 < 19)
{
if ($num2 == 15) {
continue;
} else {
echo "Continue at 15 (".$num2.").<br />";
$num2++;
}
}
```
The output is
```
Continue at 15 (1).
Continue at 15 (2).
Continue at 15 (3).
Continue at 15 (4).
Continue at 15 (5).
Continue at 15 (6).
Continue at 15 (7).
Continue at 15 (8).
Continue at 15 (9).
Continue at 15 (10).
Continue at 15 (11).
Continue at 15 (12).
Continue at 15 (13).
Continue at 15 (14).
Fatal error: Maximum execution time of 30 seconds exceeded in /var/www/php/continueandbreak.php on line 20
```
Line 20 is that line
```
if ($num2 == 15) {
```
Would you please tell me what's wrong with my example?
I am sorry for such a Noob question
|
2009/11/13
|
['https://Stackoverflow.com/questions/1726843', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/210142/']
|
If you don't increment `$num2` before the `continue`, you will get into an infinite loop:
```
$num2 = 0;
while ($num2 < 18)
{
$num2++;
if ($num2 == 15) {
continue;
} else {
echo "Continue at 15 (".$num2.").<br />";
}
}
```
|
In PHP, use **foreach** for arrays and **for** for counted loops; with a `for` loop the increment can never be skipped, so `continue` is safe:

```
for ($num = 1; $num < 19; $num++) {
    if ($num == 15) {
        continue;   // skip 15
    }
    echo "Continue at 15 (" . $num . ").<br />";
}
```
|
1,726,843 |
When reading a PHP book I wanted to try my own (continue) example.
I made the following code but it doesn't work although everything seems to be ok
```
$num2 = 1;
while ($num2 < 19)
{
if ($num2 == 15) {
continue;
} else {
echo "Continue at 15 (".$num2.").<br />";
$num2++;
}
}
```
The output is
```
Continue at 15 (1).
Continue at 15 (2).
Continue at 15 (3).
Continue at 15 (4).
Continue at 15 (5).
Continue at 15 (6).
Continue at 15 (7).
Continue at 15 (8).
Continue at 15 (9).
Continue at 15 (10).
Continue at 15 (11).
Continue at 15 (12).
Continue at 15 (13).
Continue at 15 (14).
Fatal error: Maximum execution time of 30 seconds exceeded in /var/www/php/continueandbreak.php on line 20
```
Line 20 is that line
```
if ($num2 == 15) {
```
Would you please tell me what's wrong with my example?
I am sorry for such a Noob question
|
2009/11/13
|
['https://Stackoverflow.com/questions/1726843', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/210142/']
|
You don't even need `continue` there; your code is equivalent to:
```
$num2 = 1;
while ($num2 < 19){
if ($num2 != 15) {
echo "Continue at 15 (".$num2.").<br />";
$num2++;
}
}
```
If that's not what you're trying to achieve, you're using continue wrong.
|
In PHP, use **foreach** for arrays and **for** for counted loops; with a `for` loop the increment can never be skipped, so `continue` is safe:

```
for ($num = 1; $num < 19; $num++) {
    if ($num == 15) {
        continue;   // skip 15
    }
    echo "Continue at 15 (" . $num . ").<br />";
}
```
|
14,810,602 |
This is my code and it is not working correctly. I want to set `minDate` to the current date. How can I do it?
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
```
|
2013/02/11
|
['https://Stackoverflow.com/questions/14810602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1305280/']
|
You can use the `minDate` property, like this:
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
minDate: 0, // 0 days offset = today
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
});
```
You can also specify a date, like this:
```
minDate: new Date(), // = today
```
|
**Set minDate to current date in jQuery Datepicker :**
```
$("input.DateFrom").datepicker({
minDate: new Date()
});
```
|
14,810,602 |
This is my code and it is not working correctly. I want to set `minDate` to the current date. How can I do it?
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
```
|
2013/02/11
|
['https://Stackoverflow.com/questions/14810602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1305280/']
|
You can specify minDate as today by adding `minDate: 0` to the options.
```
$("input.DateFrom").datepicker({
minDate: 0,
...
});
```
**Demo**: <http://jsfiddle.net/2CZtV/>
**Docs**: <http://jqueryui.com/datepicker/#min-max>
|
Use this one :
```
onSelect: function(dateText) {
$("input#DateTo").datepicker('option', 'minDate', dateText);
}
```
This may be useful :
<http://jsfiddle.net/injulkarnilesh/xNeTe/>
|
14,810,602 |
This is my code and it is not working correctly. I want to set `minDate` to the current date. How can I do it?
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
```
|
2013/02/11
|
['https://Stackoverflow.com/questions/14810602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1305280/']
|
You can specify minDate as today by adding `minDate: 0` to the options.
```
$("input.DateFrom").datepicker({
minDate: 0,
...
});
```
**Demo**: <http://jsfiddle.net/2CZtV/>
**Docs**: <http://jqueryui.com/datepicker/#min-max>
|
You can use the `minDate` property, like this:
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
minDate: 0, // 0 days offset = today
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
});
```
You can also specify a date, like this:
```
minDate: new Date(), // = today
```
|
14,810,602 |
This is my code and it is not working correctly. I want to set `minDate` to the current date. How can I do it?
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
```
|
2013/02/11
|
['https://Stackoverflow.com/questions/14810602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1305280/']
|
You can use the `minDate` property, like this:
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
minDate: 0, // 0 days offset = today
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
});
```
You can also specify a date, like this:
```
minDate: new Date(), // = today
```
|
Use this one :
```
onSelect: function(dateText) {
$("input#DateTo").datepicker('option', 'minDate', dateText);
}
```
This may be useful :
<http://jsfiddle.net/injulkarnilesh/xNeTe/>
|
14,810,602 |
This is my code and it is not working correctly. I want to set `minDate` to the current date. How can I do it?
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
```
|
2013/02/11
|
['https://Stackoverflow.com/questions/14810602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1305280/']
|
You can specify minDate as today by adding `minDate: 0` to the options.
```
$("input.DateFrom").datepicker({
minDate: 0,
...
});
```
**Demo**: <http://jsfiddle.net/2CZtV/>
**Docs**: <http://jqueryui.com/datepicker/#min-max>
|
**Set minDate to current date in jQuery Datepicker :**
```
$("input.DateFrom").datepicker({
minDate: new Date()
});
```
|
14,810,602 |
This is my code and it is not working correctly. I want to set `minDate` to the current date. How can I do it?
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
```
|
2013/02/11
|
['https://Stackoverflow.com/questions/14810602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1305280/']
|
Use this one :
```
onSelect: function(dateText) {
$("input#DateTo").datepicker('option', 'minDate', dateText);
}
```
This may be useful :
<http://jsfiddle.net/injulkarnilesh/xNeTe/>
|
The `minDate` property can be set to the current date in either form: as a date string matching the configured `yy-mm-dd` format, or simply as `minDate: 0`.
|
14,810,602 |
This is my code and it is not working correctly. I want to set `minDate` to the current date. How can I do it?
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
```
|
2013/02/11
|
['https://Stackoverflow.com/questions/14810602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1305280/']
|
You can specify minDate as today by adding `minDate: 0` to the options.
```
$("input.DateFrom").datepicker({
minDate: 0,
...
});
```
**Demo**: <http://jsfiddle.net/2CZtV/>
**Docs**: <http://jqueryui.com/datepicker/#min-max>
|
The `minDate` property can be set to the current date in either form: as a date string matching the configured `yy-mm-dd` format, or simply as `minDate: 0`.
|
14,810,602 |
This is my code and it is not working correctly. I want to set `minDate` to the current date. How can I do it?
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
```
|
2013/02/11
|
['https://Stackoverflow.com/questions/14810602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1305280/']
|
Use this one :
```
onSelect: function(dateText) {
$("input#DateTo").datepicker('option', 'minDate', dateText);
}
```
This may be useful :
<http://jsfiddle.net/injulkarnilesh/xNeTe/>
|
**Set minDate to current date in jQuery Datepicker :**
```
$("input.DateFrom").datepicker({
minDate: new Date()
});
```
|
14,810,602 |
This is my code and it is not working correctly. I want to set `minDate` to the current date. How can I do it?
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
```
|
2013/02/11
|
['https://Stackoverflow.com/questions/14810602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1305280/']
|
You can use the `minDate` property, like this:
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
minDate: 0, // 0 days offset = today
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
});
```
You can also specify a date, like this:
```
minDate: new Date(), // = today
```
|
I set the starting date using this method, because the aforementioned code and other approaches didn't work for me (note that the JavaScript `Date` constructor takes year, month index, day, in that order):
```js
$(document).ready(function() {
    $('#dateFrm').datepicker('setStartDate', new Date(yyyy, MM, dd));
});
```
|
14,810,602 |
This is my code and it is not working correctly. I want to set `minDate` to the current date. How can I do it?
```
$("input.DateFrom").datepicker({
changeMonth: true,
changeYear: true,
dateFormat: 'yy-mm-dd',
maxDate: 'today',
onSelect: function(dateText) {
$sD = new Date(dateText);
$("input#DateTo").datepicker('option', 'minDate', min);
}
```
|
2013/02/11
|
['https://Stackoverflow.com/questions/14810602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1305280/']
|
You can specify minDate as today by adding `minDate: 0` to the options.
```
$("input.DateFrom").datepicker({
minDate: 0,
...
});
```
**Demo**: <http://jsfiddle.net/2CZtV/>
**Docs**: <http://jqueryui.com/datepicker/#min-max>
|
You can also use:
```
$("input.DateFrom").datepicker({
minDate: 'today'
});
```
|
55,409,656 |
I have a dataset in a CSV file and all the data is numeric. I want to apply k-Nearest Neighbors to my dataset.
I have some errors in my code and I don't know how I can fix them.
code:
[enter image description here][1]
[enter image description here][2]
|
2019/03/29
|
['https://Stackoverflow.com/questions/55409656', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11273598/']
|
To add to `cglacet`'s answer - if one wants to detect whether a loop is running and adjust automatically (ie run `main()` on the existing loop, otherwise `asyncio.run()`), here is a snippet that may prove useful:
```py
# async def main():
# ...
try:
loop = asyncio.get_running_loop()
except RuntimeError: # 'RuntimeError: There is no current event loop...'
loop = None
if loop and loop.is_running():
print('Async event loop already running. Adding coroutine to the event loop.')
tsk = loop.create_task(main())
# ^-- https://docs.python.org/3/library/asyncio-task.html#task-object
# Optionally, a callback function can be executed when the coroutine completes
tsk.add_done_callback(
lambda t: print(f'Task done with result={t.result()} << return val of main()'))
else:
print('Starting new event loop')
result = asyncio.run(main())
```
|
I found the [`unsync`](https://github.com/alex-sherman/unsync) package useful for writing code that behaves the same way in a Python script and the Jupyter REPL.
```py
import asyncio
from unsync import unsync
@unsync
async def demo_async_fn():
await asyncio.sleep(0.1)
return "done!"
print(demo_async_fn().result())
```
|
55,409,656 |
I have a dataset in a CSV file and all the data is numeric. I want to apply k-Nearest Neighbors to my dataset.
I have some errors in my code and I don't know how I can fix them.
code:
[enter image description here][1]
[enter image description here][2]
|
2019/03/29
|
['https://Stackoverflow.com/questions/55409656', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11273598/']
|
Just use this:
<https://github.com/erdewit/nest_asyncio>
```
import nest_asyncio
nest_asyncio.apply()
```
|
I found the [`unsync`](https://github.com/alex-sherman/unsync) package useful for writing code that behaves the same way in a Python script and the Jupyter REPL.
```py
import asyncio
from unsync import unsync
@unsync
async def demo_async_fn():
await asyncio.sleep(0.1)
return "done!"
print(demo_async_fn().result())
```
|
55,409,656 |
I have a dataset in a CSV file and all the data is numeric. I want to apply k-Nearest Neighbors to my dataset.
I have some errors in my code and I don't know how I can fix them.
code:
[enter image description here][1]
[enter image description here][2]
|
2019/03/29
|
['https://Stackoverflow.com/questions/55409656', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11273598/']
|
To add to `cglacet`'s answer - if one wants to detect whether a loop is running and adjust automatically (ie run `main()` on the existing loop, otherwise `asyncio.run()`), here is a snippet that may prove useful:
```py
# async def main():
# ...
try:
loop = asyncio.get_running_loop()
except RuntimeError: # 'RuntimeError: There is no current event loop...'
loop = None
if loop and loop.is_running():
print('Async event loop already running. Adding coroutine to the event loop.')
tsk = loop.create_task(main())
# ^-- https://docs.python.org/3/library/asyncio-task.html#task-object
# Optionally, a callback function can be executed when the coroutine completes
tsk.add_done_callback(
lambda t: print(f'Task done with result={t.result()} << return val of main()'))
else:
print('Starting new event loop')
result = asyncio.run(main())
```
|
Just use this:
<https://github.com/erdewit/nest_asyncio>
```
import nest_asyncio
nest_asyncio.apply()
```
|