Q:
Combination of sObjects (Tuples) as Map Values
Is there a way in Apex to create a Map<Integer, Tuple<SObject1, SObject2>> where the Integer is an index and the Tuple will hold the combination of SObjects? The process that I am trying to accomplish is where a trigger on SObject1 will try to create a record of type SObject2, and if SObject2 creation fails (DML exception), I want to be able to update the SObject1 status to error.
How: If the above map is possible, I can get the index of the Sobject2 record which failed and will be able to update SObject1 record at that index with the error status.
Is there a better way to do it? I am not sure if tuples are allowed on the Salesforce platform. Any help is appreciated.
A:
Apex does not have the concept of a tuple, at least not natively.
You could try to create a wrapper class that holds both SObjects, or use a Map<Integer, List<SObject>>. Both of those approaches end up running into the same issue though; given an instance of SObject2, you wouldn't be able to (efficiently) find the corresponding SObject1 instance. I mean, linear time isn't the worst we could do, but we can find that correlation in constant time.
You could tie them together using a Map<SObject2, SObject1> as mentioned in the comments on your question, but using SObjects as keys in a map is a dangerous game (if any field is changed by any amount, you'll lose the mapping, and I don't think you could get it back by undoing the change).
Instead, I'd recommend simply using 2 separate lists.
The first list holds your SObject1 records.
You'd iterate over that list to generate your SObject2 records.
The key here is that Lists are ordered collections. When you iterate over your first List, you start at index 0. When you add SObject2 to the second list, it too will be added starting at index 0. Assuming that you create only and exactly 1 SObject2 record per SObject1 record, your two lists will be in lock-step with one another without the need for keeping an explicit index.
Adrian, in the catch block of his example code, uses two methods from the DmlException class: getNumDml() and getDmlIndex(). Documentation on those can be found at the bottom of the documentation on built-in exception classes.
getNumDml() tells you how many failures you had, and getDmlIndex() tells you which index in the list you performed DML on was the cause of the exception.
Putting everything together, we get something like
// Trigger.new provides one of our required lists for us, so we only need to create
// a list for your SObject2 records.
List<SObject2> sobj2List = new List<SObject2>();
for (SObject1 record : Trigger.new) {
    // By virtue of iterating over a List, and creating a corresponding list, the two
    // lists are automatically correlated by the inherent list index.
    sobj2List.add(new SObject2(
        // set fieldName = value pairs here, each name = value pair separated
        // by a comma
    ));
}
try {
    insert sobj2List;
} catch (DmlException e) {
    for (Integer i = 0; i < e.getNumDml(); i++) {
        // We can use getDmlIndex with Trigger.new, and .addError to the corresponding
        // record.
        Trigger.new[e.getDmlIndex(i)].addError('Inserting corresponding SObject2 record failed');
    }
}
This approach doesn't assume that your two SObjects have any lasting relationship between them. If your two SObjects have some relationship (Master-Detail, Lookup, other...), then Adrian's approach may be better.
Q:
Python: Test if an argument is an integer
I want to write a python script that takes 3 parameters. The first parameter is a string, the second is an integer, and the third is also an integer.
I want to put conditional checks at the start to ensure that the proper number of arguments are provided, and they are the right type before proceeding.
I know we can use sys.argv to get the argument list, but I don't know how to test that a parameter is an integer before assigning it to my local variable for use.
Any help would be greatly appreciated.
A:
str.isdigit() can be used to test if a string is comprised solely of numbers.
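For example, a minimal sketch for the script in the question (the usage message and variable names are just placeholders; note that isdigit() rejects a leading minus sign, so this only accepts non-negative integers):
import sys

# expect: script name + 3 user arguments (a string and two integers)
if len(sys.argv) != 4:
    sys.exit("usage: myscript.py <name> <int1> <int2>")

if not (sys.argv[2].isdigit() and sys.argv[3].isdigit()):
    sys.exit("the second and third arguments must be integers")

name = sys.argv[1]
first = int(sys.argv[2])
second = int(sys.argv[3])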
A:
More generally, you can use isinstance to see if something is an instance of a class.
Obviously, in the case of script arguments, everything is a string, but if you are receiving arguments to a function/method and want to check them, you can use:
def foo(bar):
if not isinstance(bar, int):
bar = int(bar)
# continue processing...
You can also pass a tuple of classes to isinstance:
isinstance(bar, (int, float, decimal.Decimal))
A:
If you're running Python 2.7, try importing argparse. Python 3.2 also includes it, and it is the new preferred way to parse arguments.
This sample code from the Python documentation page takes in a list of ints and finds either the max or the sum of the numbers passed.
import argparse
parser = argparse.ArgumentParser(description='Process some integers.')
parser.add_argument('integers', metavar='N', type=int, nargs='+',
help='an integer for the accumulator')
parser.add_argument('--sum', dest='accumulate', action='store_const',
const=sum, default=max,
help='sum the integers (default: find the max)')
args = parser.parse_args()
print(args.accumulate(args.integers))
Q:
Can I visit the UK as a tourist before my Student Visa starts?
I planned a trip to Europe the month before my student visa starts but wanted to stop in the UK for 2 weeks before I continue traveling through Europe. Would I be allowed to enter the UK as a tourist for 2 weeks, leave the UK to go to France (etc.) and then enter later on my Student Visa?
A:
This is permitted. However, if you are required to speak to a border officer when you arrive, they may be concerned that you are intending to start your studies early. You should bring with you an itinerary for your travels, and your tickets to France. In general, proof of onward travel isn't required when entering the UK as a visitor, but in your case it would be helpful, since the border officer might want some evidence that you will leave before starting your studies. There's no need to present this information unless you are asked for it.
Q:
What is the mathematical reason behind the gradual increase in osmotic pressure?
Question:
Arrange the following in increasing order of osmotic pressure, $0.1~\mathrm{M}$ cane sugar, $0.1~\mathrm{M}$ $\ce{NaCl}$ solution, $0.1~\mathrm{M}$ $\ce{H2SO4}$.
Solution: As the number of particles/effective concentration of the solutions increase in the following order:
Cane sugar $(0.1~\mathrm{M})$ > $\ce{NaCl}$ $(0.2~\mathrm{M})$ > $\ce{H2SO4}$ $(0.3~\mathrm{M})$
So the osmotic pressure which is related to concentration by the relation $P=cRT$ varies as:
Cane sugar > $\ce{NaCl}$ > $\ce{H2SO4}$
Is this reasoning correct?
A:
That formula for the osmotic pressure is known as the van 't Hoff law. It is described in some detail at the Wikipedia page for osmotic pressure, which also includes a brief (albeit incomplete and rather sloppy) derivation. To reach the van't Hoff law
$$\Pi = cRT$$
you have to approximate $\ln x_v$ by a Taylor series ($\ln x_v = \ln (1 - x_\mathrm{solute}) \approx -x_\mathrm{solute}$, which holds for small values of $x_\mathrm{solute}$) to get
$$\Pi = \frac{x_\mathrm{solute}RT}{V_\mathrm{m}}$$
The Wikipedia page uses $V$ to refer to the molar volume of the solvent (which should really be represented by $V_\mathrm{m}$).[1] In any case, $x_\mathrm{solute} = n_\mathrm{solute}/n_\mathrm{tot}$ and $V_\mathrm{m} \approx V/n_\mathrm{tot}$ such that
$$\frac{x_\mathrm{solute}}{V_\mathrm{m}} \approx \frac{n_\mathrm{solute}}{V} = c_\mathrm{solute}$$
and if the solute is a 1:1 electrolyte, such as $\ce{NaCl}$, then you have to multiply by two to account for the total number of solute particles.
I am not sure about the term "effective concentration" (I personally have never seen it, and if I had to, I would probably write something like the "total concentration of solute particles") but the logic used is correct as long as you are in the regime where the van 't Hoff law is applicable i.e. very dilute solvent.
Lastly you should probably use $<$ instead of $>$ in your ordering to be clear.
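To make the comparison concrete, a rough back-of-the-envelope check (assuming complete dissociation and $T = 298~\mathrm{K}$, so $RT \approx 24.5~\mathrm{L~atm~mol^{-1}}$):
$$\Pi_{\text{cane sugar}} \approx (0.1)(24.5) \approx 2.4~\mathrm{atm} \quad < \quad \Pi_{\ce{NaCl}} \approx (0.2)(24.5) \approx 4.9~\mathrm{atm} \quad < \quad \Pi_{\ce{H2SO4}} \approx (0.3)(24.5) \approx 7.3~\mathrm{atm}$$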
[1] In the actual derivation this should strictly be the partial molar volume of the solvent
$$\overline{V}_{\!\!\mathrm{m,solvent}} = \left(\frac{\partial V}{\partial n_\mathrm{solvent}}\right)_{\!n_\mathrm{solute}}$$
but this can to a good extent be approximated by the molar volume of the pure solvent.
Q:
element inserted into an SVG not working
I'm dynamically inserting a <title> into my group in an SVG; however, its effect is not working. The element gets added at the proper location and everything, yet my group doesn't get a tooltip. The same element inserted by hand into the SVG works. Why isn't my dynamically inserted one working?
function setuptooltip() {
var shadowlegs = document.getElementById('shadow-legs');
var title = document.createElement('title');
var titletext = document.createTextNode("Hi there and greetings!");
title.appendChild(titletext);
// get the first child of shadowlegs, so we can insert before it
var firchild = shadowlegs.firstChild;
// insert before the first child
shadowlegs.insertBefore(title, firchild);
}
Here's the code: http://jsfiddle.net/bYjva/
A:
You're not creating a proper SVG element but an HTML DOM element; you have to do
var title = document.createElementNS('http://www.w3.org/2000/svg', 'title');
FIDDLE
Q:
Pointer arithmetic with variables not in an array
I have the following question: if a is an int array with 10 elements, I can define pointers
int* b = &a[3];
int* c = &a[2];
I can then do arithmetic operations with these pointers, like int d = b - c;, which will give the number of int values in the array between c and b. So my question is whether I am also allowed to do such pointer arithmetic operations for any variables which may not be in an array. For example:
int a=10;
int b=20;
int*c=&a;
int* d=&b;
and then do int e=d-c; or int*e=c+1;
The reason I ask is that I have received conflicting information about whether this leads to undefined behaviour.
A:
[expr.add] standard draft:
When an expression that has integral type is added to or subtracted from a pointer, the result has the type
of the pointer operand. If the expression P points to element x[i] of an array object x with n elements,86
the expressions P + J and J + P (where J has the value j) point to the (possibly-hypothetical) element
x[i + j] if 0 ≤ i + j ≤ n; otherwise, the behavior is undefined. Likewise, the expression P - J points to the
(possibly-hypothetical) element x[i − j] if 0 ≤ i − j ≤ n; otherwise, the behavior is undefined.
When two pointers to elements of the same array object are subtracted, the type of the result is an
implementation-defined signed integral type; this type shall be the same type that is defined as std::ptrdiff_t
in the header <cstddef> (21.2). If the expressions P and Q point to, respectively, elements x[i] and x[j]
of the same array object x, the expression P - Q has the value i − j; otherwise, the behavior is undefined.
[ Note: If the value i − j is not in the range of representable values of type std::ptrdiff_t, the behavior is
undefined. — end note ]
86) An object that is not an array element is considered to belong to a single-element array for this purpose; see 8.3.1. A
pointer past the last element of an array x of n elements is considered to be equivalent to a pointer to a hypothetical element
x[n] for this purpose; see 6.9.2.
c+1 is well defined, because it would be pointing one past the "single-element array" that the variable is treated as for the purpose of the quoted rule, and therefore satisfies 0 ≤ 0 + 1 ≤ 1. But it would not be well defined to indirect that pointer, since it points past the end of that "array".
d-c has undefined behaviour.
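To make the two cases concrete, a small sketch (the variable names mirror the question; the commented-out lines are the operations that the quoted wording leaves undefined):
#include <cstddef>

int main() {
    int a[10] = {};
    int *b = &a[3];
    int *c = &a[2];
    std::ptrdiff_t d = b - c;       // well defined: both point into the same array object, d == 1

    int x = 10;
    int y = 20;
    int *px = &x;
    int *py = &y;
    int *one_past = px + 1;         // well defined: x is treated as a one-element array, this is its past-the-end pointer
    // int bad = *one_past;         // undefined behaviour: dereferencing past the end
    // std::ptrdiff_t e = py - px;  // undefined behaviour: x and y are not elements of the same array object
    (void)d;
    (void)one_past;
    (void)py;
    return 0;
}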
Q:
How can I autoplay this pure CSS3 slideshow?
UPDATE: The issue here is (see the current CSS) that once the last (second) image comes up, the animation back to the first image happens straight away with no delay. I'd expect the same delay used before the animation to the second image to also apply before going back to the first one, instead of it jumping straight back to the first image (at translateX(0)).
I have a slideshow as shown in the code below:
.slideshowcontainer {
width:800px;
height:400px;
margin-left:auto;
margin-right:auto;
margin-top:0px;
text-align:center;
overflow:hidden;
position:relative;
top:30px;
border-style:solid;
border-width:10px;
border-color:white;
border-radius:15px;
}
.imagecontainer {
width:1600px;
height:400px;
clear:both;
position:relative;
-webkit-transition:left 3s;
-moz-transition:left 3s;
-o-transition:left 3s;
-ms-transition:left 3s;
transition:left 3s;
animation:scroller 16s infinite;
}
@keyframes scroller {
0% {transform:translateX(0);}
31.25% {transform:translateX(0);}
50% {transform:translateX(-800px);}
81.25% {transform:translateX(-800px);}
100% {transform:translateX(0);}
}
.slideshowimage {
float:left;
margin:0px;
padding:0px;
position:relative;
}
#slideshowimage-1:target ~ .imagecontainer {
left:0px;
}
#slideshowimage-2:target ~ .imagecontainer {
left:-800px;
}
.buttoncontainer {
position:relative;
top:-20px;
}
.button {
display:inline-block;
height:10px;
width:10px;
border-radius:10px;
background-color:darkgray;
-webkit-transition:background-color 0.25s;
-moz-transition:background-color 0.25s;
-o-transition:background-color 0.25s;
-ms-transition:background-color 0.25s;
transition:background-color 0.25s;
}
.button:hover {
background-color:gray;
}
Furthermore, I'd like to ask if anyone knows why, when I click the button for the next image right after loading the page, the image is displayed with no transition. The lack of transition happens only on the first click.
A:
You need to make some calculations for the animation keyframes.
For example, since you have 2 images, want to see each image for 5 seconds, and the slide from one to the other should last 1 second, you need a total of 12 seconds. So use animation:scroller 12s;.
For the actual keyframes each second is 100% / 12 = 8.33% of the animation.
@keyframes scroller {
0% {transform:translateX(0);}
41.6% {transform:translateX(0);} /*wait from 0% to 41%, which is 5 seconds*/
50% {transform:translateX(-800px);} /*slide for 1 second*/
91.6% {transform:translateX(-800px);} /*wait 5 seconds*/
100% {transform:translateX(0);} /* slide back for 1 second*/
}
.slideshowcontainer {
width:800px;
height:400px;
margin-left:auto;
margin-right:auto;
margin-top:0px;
text-align:center;
overflow:hidden;
position:relative;
top:30px;
border-style:solid;
border-width:10px;
border-color:white;
border-radius:15px;
}
.imagecontainer {
width:1600px;
height:400px;
clear:both;
position:relative;
-webkit-transition:left 3s;
-moz-transition:left 3s;
-o-transition:left 3s;
-ms-transition:left 3s;
transition:left 3s;
animation:scroller 12s;
}
@keyframes scroller {
0% {transform:translateX(0);}
41.6% {transform:translateX(0);} /*41.6% of 12 seconds is 5 seconds*/
50% {transform:translateX(-800px);} /*slide for 1 second*/
91.6% {transform:translateX(-800px);} /*wait 5 seconds*/
100% {transform:translateX(0);} /* slide back for 1 second*/
}
.slideshowimage {
float:left;
margin:0px;
padding:0px;
position:relative;
}
#slideshowimage-1:target ~ .imagecontainer {
left:0px;
}
#slideshowimage-2:target ~ .imagecontainer {
left:-800px;
}
.buttoncontainer {
position:relative;
top:-20px;
}
.button {
display:inline-block;
height:10px;
width:10px;
border-radius:10px;
background-color:darkgray;
-webkit-transition:background-color 0.25s;
-moz-transition:background-color 0.25s;
-o-transition:background-color 0.25s;
-ms-transition:background-color 0.25s;
transition:background-color 0.25s;
}
.button:hover {
background-color:gray;
}
<div class="slideshowcontainer">
<span id="slideshowimage-1"></span>
<span id="slideshowimage-2"></span>
<span id="slideshowimage-3"></span>
<div class="imagecontainer">
<img src="https://placehold.it/800x400" class="slideshowimage" style="width:800px;height:400px;">
<img src="https://placehold.it/800x400" class="slideshowimage" style="width:800px;height:400px;">
</div>
<div class="buttoncontainer">
<a href="#slideshowimage-1" class="button"></a>
<a href="#slideshowimage-2" class="button"></a>
</div>
</div>
If you want the auto-slide to go on forever, then use animation:scroller 12s infinite;
Q:
Get value from xml using xpath
I have a Word document converted to an XML file; this is a part of that file:
<w:tc>
<w:tcPr>
<w:tcW w:w="2130" w:type="dxa"/>
</w:tcPr>
<w:p w:rsidR="00255D05" w:rsidRPr="00FF409F" w:rsidRDefault="00255D05" w:rsidP="00D041E7">
<w:pPr>
<w:rPr>
<w:rFonts w:hint="cs"/>
<w:sz w:val="36"/>
<w:szCs w:val="36"/>
<w:rtl/>
<w:lang w:bidi="ar-JO"/>
</w:rPr>
</w:pPr>
<w:r w:rsidRPr="00FF409F">
<w:rPr>
<w:rFonts w:hint="cs"/>
<w:sz w:val="36"/>
<w:szCs w:val="36"/>
<w:rtl/>
<w:lang w:bidi="ar-JO"/>
</w:rPr>
<w:t>myWantedText</w:t>
</w:r>
</w:p>
</w:tc>
I am trying to get the value of 'myWantedText'; so far I have tried:
$xml = new SimpleXMLElement($fileContents);
foreach($xml->xpath('//w:t') as $t) {
var_dump($t);
}
but all I am getting is a bunch of object(SimpleXMLElement)[2]
A:
You are lacking a namespace in the input XML and a namespace declaration, as Stuart pointed out. Below is your XML, with the correct Word XML namespace.
<?php
$str = <<<XML
<?xml version="1.0" standalone="yes"?>
<w:tc xmlns:w="http://schemas.microsoft.com/office/word/2003/wordml">
<w:tcPr>
<w:tcW w:w="2130" w:type="dxa"/>
</w:tcPr>
<w:p w:rsidR="00255D05" w:rsidRPr="00FF409F" w:rsidRDefault="00255D05" w:rsidP="00D041E7">
<w:pPr>
<w:rPr>
<w:rFonts w:hint="cs"/>
<w:sz w:val="36"/>
<w:szCs w:val="36"/>
<w:rtl/>
<w:lang w:bidi="ar-JO"/>
</w:rPr>
</w:pPr>
<w:r w:rsidRPr="00FF409F">
<w:rPr>
<w:rFonts w:hint="cs"/>
<w:sz w:val="36"/>
<w:szCs w:val="36"/>
<w:rtl/>
<w:lang w:bidi="ar-JO"/>
</w:rPr>
<w:t>myWantedText</w:t>
</w:r>
</w:p>
</w:tc>
XML;
$xml = new SimpleXMLElement($str);
$xml->registerXPathNamespace('w', 'http://schemas.microsoft.com/office/word/2003/wordml');
foreach($xml->xpath('//w:t') as $t) {
var_dump($t);
}
?>
Output:
object(SimpleXMLElement)#2 (1) {
[0]=>
string(12) "myWantedText"
}
You can see this working here: http://codepad.org/YRIO6uk3
Q:
Google Sheets evaluates only first condition in array in SUMIFS statement
I found an example in Excel where one can use an array to set OR conditions. I want to do this in Google Sheets but I'm not sure how. Using the same syntax doesn't work.
SUM(SUMIFS({})
Excel University
Report
Sales 30,050 => formula: =SUM(SUMIFS($C$18:$C$28,$B$18:$B$28,{"Sales-Labor","Sales-Hardware","Sales-Software"}))
COS 21,136
Gross Profit 8,914
SG&A 2,054
Net Income 6,860
Data
Account Amount
Sales-Labor 15,050
Sales-Hardware 10,779
Sales-Software 4,221
COS-Labor 9,058
COS-Hardware 8,172
COS-Software 3,906
Supplies 256
Marketing 1,200
Trade shows 200
Telephone 299
Internet 99
If you pop the same values into Google Sheets with the same formula as marked above, you'll only get the value of the first criterion.
Source: http://www.excel-university.com/sumifs-with-or/
A:
A fairly simple solution is just to use SUMPRODUCT and ISNUMBER(MATCH)
=SUMPRODUCT(
$C$18:$C$28,
ISNUMBER(MATCH(
B18:B28,
{"Sales-Labor","Sales-Hardware","Sales-Software"},
0)))
Or with SUMIFS. You can make the delimiter something other than an empty string to avoid clashes.
=ArrayFormula(SUMIFS(
C18:C28,
FIND(B18:B28,
JOIN("",
{"Sales-Labor","Sales-Hardware","Sales-Software"})),
">0"))
Q:
Updating the mongoid gem to version 5.0.0 and rails to 4.0.0 gives the error "Bundler could not find compatible versions for gem "railties""
While running bundle install, the following error comes up:
Bundler could not find compatible versions for gem "railties":
In Gemfile:
devise (~> 3.2.4) was resolved to 3.2.4, which depends on
railties (< 5, >= 3.2.6)
factory_girl_rails (~> 4.4.0) was resolved to 4.4.1, which depends on
railties (>= 3.0.0)
jquery-payment-rails was resolved to 0.0.1, which depends on
railties (~> 4.0.0)
jquery-rails (~> 3.0.0) was resolved to 3.0.4, which depends on
railties (< 5.0, >= 3.0)
rails (~> 4.0.0) was resolved to 4.0.0, which depends on
railties (= 4.0.0)
rspec-rails (~> 3.4.0) was resolved to 3.4.2, which depends on
railties (< 4.3, >= 3.0)
sass-rails (~> 3.2.3) was resolved to 3.2.3, which depends on
railties (~> 3.2.0.beta)
Gemfile :
ruby '2.2.2'
## Sinatra App Gems
gem 'sinatra', '~> 1.4.4'
gem 'sass', '~> 3.4.13'
gem 'sinatra-assetpack', '~> 0.3.1', :require => 'sinatra/assetpack'
gem 'sinatra-env', '~> 0.0.2'
## Rails App Gems
gem 'rails', '~> 4.0.0'
gem 'foreman', '~> 0.78.0'
gem 'puma', '~> 2.14.0'
gem 'simple_form', '~> 2.1.3'
gem 'simple_enum', '~> 1.6.0', :require => 'simple_enum/mongoid'
gem 'mongoid', '~> 5.0.0'
gem 'devise', '~> 3.4.1'
gem 'possessive', '~> 1.0.1'
gem 'american_date', '~> 1.1.0'
gem 'sht_rails', '~> 0.2.2'
gem 'version', '~> 1.0.0'
gem 'rdiscount', '~> 2.1.7'
gem 'ssl_enforcer', '~> 0.2.3'
#
gem 'sidekiq', '~> 3.5.0'
gem 'slim', '~> 3.0.2'
group :development do
gem 'capistrano', '~> 3.4.0'
gem 'capistrano-rvm', '~> 0.1.2'
gem 'capistrano-rails', '~> 1.1.3'
gem 'capistrano-bundler', '~> 1.1.4'
gem 'capistrano-foreman', github: 'koenpunt/capistrano-foreman'
gem 'spring'
gem 'spring-commands-rspec', '~> 1.0.4'
end
group :assets do
gem 'pusher_rails', '~> 1.0.1'
gem 'sass-rails', '~> 3.2.3'
gem 'coffee-rails', '~> 3.2.1'
gem 'jquery-rails', '~> 3.0.0'
gem 'bootstrap-sass', '~> 2.3.0.0'
gem 'font-awesome-sass-rails', '~> 3.0.2.2'
gem 'uglifier', '>= 1.0.3'
gem 'modernizr-rails', '~> 2.7.1'
gem 'jquery-payment-rails', '~> 0.0.1'
gem 'jquery-validation-rails', '~> 1.13.1'
end
group :development, :test do
gem 'test-unit', '~> 3.0'
gem 'rspec-rails', '~> 3.3.3'
gem 'factory_girl_rails', '~> 4.5.0'
gem 'mongoid-rspec', '~> 1.13.0'
gem 'guard-rspec', '~> 4.6.4'
gem 'simplecov', '~> 0.10.0', require: false
end
group :test do
gem 'database_cleaner', '~> 1.5.1'
gem 'faker', '~> 1.5.0'
end
I have also removed the Gemfile.lock file, but it still gives the same error.
A:
OK, here's a little analysis of the railties version requirements, given the error reported to you:
1) railties (>= 3.0.0)
2) railties (~> 4.0.0) => (< 4.1, >= 4.0.0)
3) railties (< 5.0, >= 3.0)
4) railties (= 4.0.0)
5) railties (< 4.3, >= 3.0)
6) railties (~> 3.2.0.beta) => (< 3.2.1, >= 3.2.0.beta)
All version dependencies here can coexist except 6), which comes from the sass-rails gem. Try updating sass-rails to the latest version and you'll be fine (5.0.4 is the latest release), since it uses railties (>= 4.0.0, < 5.0). Also, all sass-rails versions after 4.0.0 will work too, since that's when the railties dependency changed - see this.
Update
I've checked your Gemfile, and the minimum changes that you could make in order to get the desired mongoid version are the following (changes are commented):
ruby '2.2.2'
## Sinatra App Gems
gem 'sinatra', '~> 1.4.4'
gem 'sass', '~> 3.4.13'
gem 'sinatra-assetpack', '~> 0.3.1', :require => 'sinatra/assetpack'
gem 'sinatra-env', '~> 0.0.2'
## Rails App Gems
gem 'rails', '~> 4.0.0'
gem 'foreman', '~> 0.78.0'
gem 'puma', '~> 2.14.0'
gem 'simple_form', '~> 3.0.0' # CHANGED
gem 'simple_enum', '~> 1.6.0', :require => 'simple_enum/mongoid'
gem 'mongoid', '~> 5.0.0'
gem 'devise', '~> 3.4.1'
gem 'possessive', '~> 1.0.1'
gem 'american_date', '~> 1.1.0'
gem 'sht_rails', '~> 0.2.2'
gem 'version', '~> 1.0.0'
gem 'rdiscount', '~> 2.1.7'
gem 'ssl_enforcer', '~> 0.2.3'
gem 'sidekiq', '~> 3.5.0'
gem 'slim', '~> 3.0.2'
group :development do
gem 'capistrano', '~> 3.4.0'
gem 'capistrano-rvm', '~> 0.1.2'
gem 'capistrano-rails', '~> 1.1.3'
gem 'capistrano-bundler', '~> 1.1.4'
gem 'capistrano-foreman', github: 'koenpunt/capistrano-foreman'
gem 'spring'
gem 'spring-commands-rspec', '~> 1.0.4'
end
group :assets do
gem 'pusher_rails', '~> 1.0.1'
gem 'sass-rails', '~> 4.0.1' # CHANGED
gem 'coffee-rails', '~> 4.0.0' # CHANGED
gem 'jquery-rails', '~> 3.0.0'
gem 'bootstrap-sass', '~> 2.3.0.0'
gem 'font-awesome-sass-rails', '~> 3.0.2.2'
gem 'uglifier', '>= 1.0.3'
gem 'modernizr-rails', '~> 2.7.1'
gem 'jquery-payment-rails', :git => 'https://github.com/thoughtbot/jquery-payment-rails.git', :ref => 'd401bf9' # CHANGED
gem 'jquery-validation-rails', '~> 1.13.1'
end
group :development, :test do
gem 'test-unit', '~> 3.0'
gem 'rspec-rails', '~> 3.4.0' # CHANGED
gem 'factory_girl_rails', '~> 4.5.0'
gem 'mongoid-rspec', '~> 3.0.0' # CHANGED
gem 'guard-rspec', '~> 4.6.4'
gem 'simplecov', '~> 0.10.0', require: false
end
group :test do
gem 'database_cleaner', '~> 1.5.1'
gem 'faker', '~> 1.5.0'
end
You should probably run bundle update after these changes, but be careful since this updates ALL gems according to your Gemfile.
Q:
Practical parameter file reading in java
I am starting with java and I am wondering which (text) file format I should use to read some parameter sets, such as:
Item1: // the item name is not important
- filename: item1.txt
- contentType: individual
- ...
Item2:
- filename: item2.txt
- contentType: group
- ...
...
The purpose is to give a list of files to be loaded into a DB, as well as some description of file content.
So my question is:
What practical parameter file format should I use?
And by practical I mean:
no (additional) external libraries required, so typically "standard" java and spring (the framework used)
low development cost: easy parsing of the loaded file content, such as:
List<Header> headers = read_file(headerFileName);
for(Header header : headers){
MyTable table = new MyTable(header.contentType);
table.loadFromFile(header.filename);
}
file format readability (yaml'd be nice, but it seems to require an external lib)
Note: this question is similar to What is the best practice for reading property files in Java EE?, but I don't know much about the java ecosystem so I cannot be sure (eg. I understood that spring is an alternative to JavaEE). Here I tried to be more precise on my needs, and in particular on the "shape" of the parameters.
A:
I recommend using XML files and using JAXB to load them.
Why? Because of the following pros:
Because it is awfully simple.
No external libraries are needed.
The config file is quite readable (simple XML); it can be edited with any text editor or advanced XML editor.
It is flexible enough to add other data later on to the parameters.
Also very easy to modify/save parameters from code at runtime (see at the end).
Thanks to XML you don't have to worry about character encoding (like in case of properties files).
Modelling:
First you need to create classes to "model" your parameters:
class Parameters {
@XmlElement(name = "item")
public List<Item> items;
}
class Item {
public String fileName;
public String contentType;
}
Example input XML file:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<parameters>
<item>
<fileName>item1.txt</fileName>
<contentType>individual</contentType>
</item>
<item>
<fileName>item2.txt</fileName>
<contentType>group</contentType>
</item>
</parameters>
Loading the parameters
And this is how you can load it, it's only 1 method call:
Parameters p = JAXB.unmarshal(new File("params.xml"), Parameters.class);
for (Item item : p.items)
System.out.println(item.fileName + ": " + item.contentType);
Output:
item1.txt: individual
item2.txt: group
Alternative (simplified) XML input
To make the input XML file shorter, more easily readable, we can make the following change:
class Item {
@XmlAttribute
public String fileName;
@XmlAttribute
public String contentType;
}
Here we basically specified to store/read the data of an Item as XML attributes and not as child elements. With this modification the input XML:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<parameters>
<item fileName="item1.txt" contentType="individual" />
<item fileName="item2.txt" contentType="group" />
</parameters>
Modifying and Saving parameters at runtime
If we want to modify the parameters and save them at runtime, it is just as easy as loading them: one line only. Below I modify the first item, and I also create and add a new third item:
// Modify item #1
p.items.get(0).fileName = "item11.txt";
p.items.get(0).contentType = "short";
// Create and add a new item
Item item3 = new Item();
item3.fileName = "item3.txt";
item3.contentType = "newtype";
p.items.add(item3);
// Save the modified parameters: 1 line:
JAXB.marshal(p, new File("params-out.xml"));
Output of the modified parameters:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<parameters>
<item fileName="item11.txt" contentType="short"/>
<item fileName="item2.txt" contentType="group"/>
<item fileName="item3.txt" contentType="newtype"/>
</parameters>
Q:
Quicksort in C with unit testing framework
This is my quicksort
There are many like it but
This quicksort is mine.
So, quicksort, in C, with a big framework to test it six ways from Sunday. Passed the tests nicely, but there may be warts, or subtle mistakes I didn’t think of, or code that’s hard to follow, or just better ways to do things. Have at it.
EDIT: Forgot another issue: I’m not handling memory allocation errors gracefully in this code. Suggestions on how professional-level production code might handle them are welcome. I’m thinking that functions using malloc() should return a value to be checked, and set errno to ENOMEM.
My quicksort implementation is somewhat slower than the library function; that’s only to be expected; library code is optimized and I don’t try to pick a good pivot with median-of-3 or such. No need to critique that, I wanted to keep it simple.
/* Quicksort implementation and testing framework */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/* 0: use qsort correctly, to test the rest of the framework
* 1: mess up the sort sometimes, to test if sorting errors are caught
* 2: mess up the sentinels sometimes, to test if sentinel errors are
* caught
* 3: use my quicksort implementation, to test it */
#define TEST_TYPE 3
/* Stop testing after this many errors */
#define MAX_ERRORS 6
/* Set to 1 to print all pre-sort permutations */
#define VERBOSE 0
/* Max array length to test; more than 12 will take a long time */
#define MAXARRAY 10
/* Size of array for run_big_test() */
#define BIGTEST_SIZE 2000
/* Sentinels to detect buffer overruns */
#define SENTINEL_LEFT 111
#define SENTINEL_RIGHT -222
/* Used to count errors globally */
int err_ct = 0;
void run_tests(size_t N);
void run_big_test(void);
int error_check(size_t N, int *sorted);
void print_error(size_t N, int *to_sort, int *sorted);
void print_array(size_t len, int *arr);
int next_perm(int n, int *dest);
int cmp_int(const void *a, const void *b);
void quicksort(void *base, size_t nmemb, size_t size,
int (*cmp)(const void *, const void *));
void swap(void *a, void *b, size_t size);
void memquit(void);
int main(void)
{
size_t len;
srand(42);
for (len = 0; len <= MAXARRAY; ++len)
run_tests(len);
run_big_test();
return EXIT_SUCCESS;
}
void run_tests(size_t N)
{
/* Tests:
* 1. Sort all permutations of N distinct numbers.
* 2. Sort all permutations of N numbers with some repeated.
* 3. Sort an array of N numbers that are all the same (may catch
* infinite loops or recursion).
*/
int distinct[MAXARRAY];
int repeats[MAXARRAY] = {0, 0, 1, 2, 3, 3, 3, 4};
int perm[MAXARRAY];
int to_sort[MAXARRAY];
int sorted[MAXARRAY + 2];
int *dataset[2];
int i;
int test;
int retval;
if (N > MAXARRAY) {
fprintf(stderr, "run_tests(%lu) exceeds max array size.\n", N);
exit(EXIT_FAILURE);
}
for (i = 0; i < (int) N; ++i)
distinct[i] = i;
for (i = 2; i < (int) N; ++i)
if (repeats[i] == 0)
repeats[i] = 5;
dataset[0] = distinct;
dataset[1] = repeats;
for (test = 0; test < 2; ++test) {
while ((retval = next_perm((int) N, perm)) == 1) {
for (i = 0; i < (int) N; ++i)
to_sort[i] = dataset[test][perm[i]];
#if VERBOSE
print_array(N, to_sort);
putchar('\n');
#endif
sorted[0] = SENTINEL_LEFT;
memcpy(sorted + 1, to_sort, N * sizeof(int));
sorted[N + 1] = SENTINEL_RIGHT;
quicksort(sorted + 1, (size_t) N, sizeof(int), cmp_int);
if (error_check(N, sorted))
print_error(N, to_sort, sorted);
}
if (retval == -1)
memquit();
}
for (i = 0; i < (int) N; ++i)
to_sort[i] = 6;
#if VERBOSE
print_array(N, to_sort);
putchar('\n');
#endif
sorted[0] = SENTINEL_LEFT;
memcpy(sorted + 1, to_sort, N * sizeof(int));
sorted[N + 1] = SENTINEL_RIGHT;
quicksort(sorted + 1, (size_t) N, sizeof(int), cmp_int);
if (sorted[0] != SENTINEL_LEFT ||
sorted[N + 1] != SENTINEL_RIGHT ||
memcmp(sorted + 1, to_sort, N * sizeof(int)))
print_error(N, to_sort, sorted);
}
void run_big_test(void)
{
/* Create a long array of random numbers, sort it, check
* correctness. */
int *to_sort;
int *sorted;
int i;
to_sort = malloc(BIGTEST_SIZE * sizeof(int));
sorted = malloc((BIGTEST_SIZE + 2) * sizeof(int));
if (!to_sort || !sorted)
memquit();
for (i = 0; i < BIGTEST_SIZE; ++i)
to_sort[i] = rand() % (BIGTEST_SIZE * 4);
#if VERBOSE
print_array(BIGTEST_SIZE, to_sort);
putchar('\n');
#endif
sorted[0] = SENTINEL_LEFT;
memcpy(sorted + 1, to_sort, BIGTEST_SIZE * sizeof(int));
sorted[BIGTEST_SIZE + 1] = SENTINEL_RIGHT;
quicksort(sorted + 1, BIGTEST_SIZE, sizeof(int), cmp_int);
if (error_check(BIGTEST_SIZE, sorted))
print_error(BIGTEST_SIZE, to_sort, sorted);
}
int error_check(size_t N, int *sorted)
{
/* Check sentinels, check that sorted part is non-decreasing */
size_t i;
if (sorted[0] != SENTINEL_LEFT ||
sorted[N + 1] != SENTINEL_RIGHT)
return 1;
for (i = 2; i <= N; ++i)
if (sorted[i] < sorted[i - 1])
return 1;
return 0;
}
void print_error(size_t N, int *to_sort, int *sorted)
{
/* Print pre-sort and post-sort arrays to show where error occurred.
* Quit if MAX_ERRORS was reached. */
printf("Error: ");
print_array(N, to_sort);
printf(" -> ");
print_array(N + 2, sorted);
putchar('\n');
if (++err_ct >= MAX_ERRORS)
exit(EXIT_FAILURE);
}
void print_array(size_t len, int *arr)
{
/* Pretty-print array. No newline at end. */
char *sep = "";
size_t i;
putchar('(');
for (i = 0; i < len; ++i) {
printf("%s%d", sep, arr[i]);
sep = ", ";
}
putchar(')');
}
int next_perm(int passed_n, int *dest)
{
/* Generate permutations of [0, n) in lexicographic order.
*
* First call: Set up, generate first permutation, return 1.
*
* Subsequent calls: If possible, generate next permutation and
* return 1. If all permutations have been returned, clean up and
* return 0. "First call" status is reset and another series may be
* generated.
*
* Return -1 to indicate a memory allocation failure.
*
* Caller may alter the values in `dest` freely between calls, and
* may pass a different `dest` address each time. `n` is ignored
* after the first call.
*
* The function maintains static data; it can only keep track of one
* series of permutations at a time. */
static int *perm;
static int new_series = 1;
static int n;
int i, j;
if (new_series) {
/* Set up first permutation, return it. */
new_series = 0;
n = passed_n;
if ((perm = malloc((size_t) n * sizeof(int))) == NULL)
return -1;
for (i = 0; i < n; ++i)
perm[i] = dest[i] = i;
return 1;
}
/* Generate and return next permutation. First, find longest
* descending run on right. */
i = n - 2;
while (i >= 0 && perm[i] > perm[i+1])
--i;
/* If all of perm is descending, the previous call returned the last
* permutation. */
if (i < 0) {
free(perm);
new_series = 1;
return 0;
}
/* Find smallest value > perm[i] in descending run. */
j = n - 1;
while (perm[j] < perm[i])
--j;
/* Swap [i] and [j]; run will still be descending. */
perm[i] ^= perm[j];
perm[j] ^= perm[i];
perm[i] ^= perm[j];
/* Reverse the run, and we're done. */
for (++i, j = n - 1; i < j; ++i, --j) {
perm[i] ^= perm[j];
perm[j] ^= perm[i];
perm[i] ^= perm[j];
}
for (i = 0; i < n; ++i)
dest[i] = perm[i];
return 1;
}
int cmp_int(const void *a, const void *b)
{
/* Compatible with qsort. a and b are treated as pointers to int.
* Return value is:
* < 0 if *a < *b
* > 0 if *a > *b
* 0 if *a == *b
*/
const int *aa = a;
const int *bb = b;
return *aa - *bb;
}
#if TEST_TYPE == 0
/* Use qsort(3), correctly */
void quicksort(void *base, size_t nmemb, size_t size,
int (*cmp)(const void *, const void *))
{
qsort(base, nmemb, size, cmp);
}
#endif
#if TEST_TYPE == 1
/* Mess up the sort with probability 1/256 */
void quicksort(void *base, size_t nmemb, size_t size,
int (*cmp)(const void *, const void *))
{
int *ibase = base;
qsort(base, nmemb, size, cmp);
if (rand() % 256 == 0) {
ibase[0] ^= ibase[nmemb - 1];
ibase[nmemb - 1] ^= ibase[0];
ibase[0] ^= ibase[nmemb - 1];
}
}
#endif
#if TEST_TYPE == 2
/* Mess up one of the sentinels with probability 1/256 */
void quicksort(void *base, size_t nmemb, size_t size,
int (*cmp)(const void *, const void *))
{
int *ibase = base;
int i;
qsort(base, nmemb, size, cmp);
if (rand() % 256 == 0) {
i = (rand() % 2) ? -1 : (int) nmemb;
ibase[i] = 42;
}
}
#endif
#if TEST_TYPE == 3
/* Use my implementation */
void quicksort(void *base, size_t nmemb, size_t size,
int (*cmp)(const void *, const void *))
{
/* Sort array with quicksort algorithm. Pivot is always leftmost
* element. */
char *cbase = base;
char *p, *q;
if (nmemb < 2)
return;
/* p at element 1, just past pivot */
p = cbase + size;
/* q at last element */
q = cbase + (nmemb - 1) * size;
while (p <= q) {
/* Move p right until *p >= pivot */
while (p <= q && cmp(p, base) < 0)
p += size;
/* Move q left until *q < pivot */
while (p <= q && cmp(q, base) >= 0)
q -= size;
if (p < q)
swap(p, q, size);
}
/* After partitioning:
* Pivot is element 0
* p = q + 1 (in terms of elements)
* Case 1: some elements < pivot, some >= pivot
* =<<<<>>>> q is rightmost <, p is leftmost >
* Case 2: all elements < pivot
* =<<<<<<<< q is rightmost <, p is one past end
* Case 3: all elements >= pivot
* =>>>>>>>> q is =, p is leftmost >
*
* If not case 3:
* Swap pivot with q
* Recurse on 0 to q - 1
* Recurse on p to nmemb - 1
*
* Pivot is left out of both recursive calls, so size is always
* reduced by at least one and infinite recursion cannot occur.
*/
if (q != cbase) {
swap(base, q, size);
quicksort(base, (size_t) (q - cbase) / size, size, cmp);
}
quicksort(p, nmemb - (size_t) (p - cbase) / size, size, cmp);
}
#endif
void swap(void *a, void *b, size_t size)
{
static size_t bufsize = 0;
static char *buf = NULL;
if (size != bufsize) {
bufsize = size;
buf = realloc(buf, bufsize);
if (!buf)
memquit();
}
memcpy(buf, a, size);
memcpy(a, b, size);
memcpy(b, buf, size);
}
void memquit(void)
{
fprintf(stderr, "Memory allocation failure\n");
exit(EXIT_FAILURE);
}
A:
#define TEST_TYPE 3
It would be more informative to represent this as an enum, with meaningfully named entries for your four test types.
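For example, a sketch (since TEST_TYPE is consumed by #if directives, plain enum constants are invisible to the preprocessor, so named macros, or an enum plus a runtime switch, are needed to keep that usage working; the names below are just suggestions):
/* Named test types; the values are still usable in #if directives */
#define TEST_QSORT_BASELINE   0  /* use qsort correctly, to test the framework */
#define TEST_BROKEN_SORT      1  /* mess up the sort sometimes                 */
#define TEST_BROKEN_SENTINELS 2  /* mess up the sentinels sometimes            */
#define TEST_MY_QUICKSORT     3  /* use my quicksort implementation            */

#define TEST_TYPE TEST_MY_QUICKSORT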
/* Set to 1 to print all pre-sort permutations */
#define VERBOSE 0
No reason not to represent this as an actual boolean using <stdbool.h>.
Since it seems like this is the only translation unit in your project, you should set all of your functions to be static.
Don't pre-declare your variables at the beginning of a function; this hasn't been needed for about 20 years. e.g. rewrite your main loop as:
for (size_t len = 0; len <= MAXARRAY; len++)
This especially applies to functions like run_tests, with a big pile of variables at the beginning.
In run_big_test, you should be freeing to_sort and sorted after you're done with them.
This:
i = n - 2;
while (i >= 0 && perm[i] > perm[i+1])
--i;
is better represented as a for loop:
for (int i = n-2; i >= 0; i--)
if (perm[i] <= perm[i+1])
break;
I suggest that you factor out your XOR swap into a function. The compiler will be smart enough to inline it.
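For instance, a minimal version for the int swaps used in next_perm (the guard matters: XOR-swapping a value with itself would zero it):
static void swap_ints(int *a, int *b)
{
    /* XOR swap of two distinct ints; if a and b point to the same object,
     * the result would be 0, so guard against that case. */
    if (a != b) {
        *a ^= *b;
        *b ^= *a;
        *a ^= *b;
    }
}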
Q:
embedded widget that relies of jquery, what if main page has jquery already loaded?
If I create a widget, where someone just links to:
www.example.com/my.js
which places a small widget on their website, and my widget relies on jQuery, how can I first check if it is already loaded on the page?
What if 1.3.1 is loaded and I require 1.4.2?
A:
If I recall, there is a property in jQuery, "$().jquery", that returns the version. Or "jQuery.fn.jquery".
To check whether jQuery is present at all, do something like
if(typeof(jQuery) != "undefined")
{
Bla bla code
}
Q:
Static Const Initialised Structure Array in C++ Class
I understand if I want a const array in a class namespace in C++ I cannot do:
class c
{
private:
struct p
{
int a;
int b;
};
static const p pp[2];
};
const c::p pp[2] = { {1,1},{2,2} };
int main(void)
{
class c;
return 0;
}
I must do:
class c
{
public:
struct p
{
int a;
int b;
};
static const p pp[2];
};
const c::p pp[2] = { {1,1},{2,2} };
int main(void)
{
class c;
return 0;
}
But this requires "p" and "pp" to be public, when I want them to be private. Is there no way in C++ to initialise private static arrays?
EDIT: -------------------
Thanks for the answers. In addition, I want this class to be a library, header files only, for use by a main project. Including the following initialiser results in "multiple definition of" errors when it is included by multiple files.
const c::p c::pp[2] = { {1,1},{2,2} };
How can I solve this?
A:
Your first code snippet is fine; you just need to change the out-of-class definition to:
const c::p c::pp[2] = { {1,1},{2,2} };
Q:
Magento layered navigation position
I want to show the layered navigation first and then the product list. How do I do this?
A:
Try this,
.catalog-category-view .main.container {display: flex;flex-direction: column;}
.catalog-category-view .col-main {order: 2 !important;}
.catalog-category-view .col-left {order: 1 !important;}
I also faced the same problem, so I fixed it this way. Use a media query if you want this only for mobile or for whichever devices you want.
Q:
need javascript to automatically reload page when minute updates
I am using the following code in my website which displays the current time
function startTime() {
var today = new Date();
var h = today.getHours();
var m = today.getMinutes();
var s = today.getSeconds();
m = checkTime(m);
s = checkTime(s);
document.getElementById('time').innerHTML =
h + ":" + m;
var t = setTimeout(startTime, 500);
}
function checkTime(i) {
if (i < 10) {i = "0" + i}; // add zero in front of numbers < 10
return i;
}
I am also using the automatic refresh tag in my HTML, which reloads the page every 60 seconds:
<meta http-equiv="refresh" content="60">
What I want is for the page to reload whenever the time changes to the next minute,
which means that if the current time is 14:05, then when it hits 14:06 the page reloads by reacting to this time change, and NOT by a 60-second interval counted from when the user opens the page.
A:
You can set the timeout by looking at the clock: just get the current seconds and wait until they reach 60 to reload:
var date = new Date();
setTimeout(function(){
window.location.reload(1);
},(60 - date.getSeconds())*1000)
Just put that in the head inside a script tag.
Q:
Hosting WCF Service that accesses SQL database onto IIS
I have a WCF service that accesses a SQL database to fetch data. I would like to deploy this service onto IIS. However, when I do this, my service is not able to access the database.
This is how my service accesses the DB
SqlConnection thisConnection = new SqlConnection(@"user id=SAIESH\Saiesh Natarajan;" +
"password=;server=SAIESH\\SQLEXPRESS;" +
"Trusted_Connection=yes;" +
"database=master; " +
"connection timeout=30");
I need to know what I should do to be able to access this DB from my WCF service hosted on IIS.
A:
Under IIS your service will usually be executed under the NETWORK SERVICE account. In your connection string you use Trusted_Connection=yes, so you need to grant access to the NETWORK SERVICE account. A better solution, though, is to change the authentication scheme and use a username/password to connect to the SQL server.
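A rough sketch of what the connection string could look like with SQL Server authentication (the login name and password below are placeholders; you would also create that login on the SQL Server instance and grant it the rights it needs):
// requires: using System.Data.SqlClient;
SqlConnection thisConnection = new SqlConnection(
    @"Server=SAIESH\SQLEXPRESS;" +
    "Database=master;" +
    "User Id=myWcfLogin;" +          // placeholder SQL Server login
    "Password=myStrongPassword;" +   // placeholder password
    "Connection Timeout=30");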
Actually, here is a similar question: WCF Impersonation and SQL trusted connections?
Q:
Request for Tag cleanup
I've recently created both the multi-part and multi-part-films tags. One of these is clearly redundant but I don't have permissions to delete it.
Can someone do the honours?
A:
Tags are deleted automatically (after 24 hours, I think) when they don't have any questions associated with them, so you shouldn't really worry about this. Wrong tags happen often enough, and they don't need to be deleted manually; nor are users able to do this, I think (if moderators are even able to do it at all).
Q:
Importing classes from playground page into another page
Note: This is a different question from importing generic Swift files (which can be done using the Sources folder).
I have a playground with 2 pages and I would like to use a protocol defined in the first page in the second page. I'll use an example of JSON conversion.
JSON.xcplaygroundpage
import Foundation
protocol JSONConvertible {
func jsonValue() -> String
}
JSONArray.xcplaygroundpage
import Foundation
//Undeclared type JSONConvertible
extension Array : JSONConvertible {
}
I have tried the following imports:
import MyPlayground
import MyPlayground.JSON
import JSON
import JSON.Contents (in finder the file name is actually Contents.swift)
I have also tried adding JSON.xcplaygroundpage into the Source folder of JSONArray as well as the Resources folder.
Note: I realize that I could put the protocol definition in a separate JSON.swift and include that in my project Sources folder. That doesn't really answer my question.
A:
This is working in Xcode 8.3.3.
For code common to multiple pages, put it in separate files under top-level Sources group. Be sure to have proper Swift access control keywords in the right places.
Note from http://help.apple.com/xcode/mac/8.2/#/devfa5bea3af:
...the auxiliary Swift source file must export it using the public keyword. This includes classes, methods, functions, variables, and protocols.
Common.swift:
public class DoIExist { public init(){ print("Woot!")} }
Then you can reference it in all of the other pages. Like this:
Page2:
//: [Previous](@previous)
let d = DoIExist()
//: [Next](@next)
You can see that it works because of the console output ("Woot!") and the view results gutter.
To achieve this, I followed the directions at the Apple Xcode documentation about Playgrounds. When Apple inevitably moves those docs and does not provide a forwarding link, read on for how I found it.
I searched for the clues found in the link in cocoapriest's answer: "ios recipes Playground Help Add Auxilliary Code to a Playground". This leads to a document Apple is currently calling "Xcode Help" in a chapter titled "Use playgrounds" in a section titled "Add auxiliary code to a playground".
Q:
Does READ COMMITTED always start over after serialization failures while SERIALIZABLE simply fails?
On the PostgreSQL Concurrency With MVCC page, it says:
know what you’re thinking though: what about a two transactions updating the same row at the same time? This is where transaction isolation levels come in. Postgres basically supports two models that allow you to control how this situation should be handled. The default, READ COMMITTED, reads the row after the inital transaction has completed and then executes the statement. It basically starts over if the row changed while it was waiting. For instance, if you issue an UPDATE with a WHERE clause, the WHERE clause will rerun after the initial transaction commits, and the UPDATE takes place if the WHERE clause is still satisfied.
The docs seem to suggest that READ COMMITTED is still subject to failures and should be retried.
Can READ COMMITTED be set to indefinitely retry with the same atomicity as SERIAZLIZABLE?
A:
Can READ COMMITTED be set to indefinitely retry with the same atomicity as SERIAZLIZABLE?
No.
READ COMMITTED doesn't retry. Neither does SERIALIZABLE. The application is expected to retry transactions that suffer deadlocks, serialization failures, etc.
That description in the docs is very misleading; I'll raise it on the docs list. PostgreSQL doesn't "start over" in READ COMMITTED at all, like some other DBs (e.g. Oracle) do. Instead it waits until the row it's waiting for commits or rolls back. If the other tx commits, PostgreSQL reads the updated row, checks that it still matches any WHERE clause or other predicate, and then continues execution. The details are a bit arcane, see EvalPlanQual in the sources.
In either READ COMMITTED or SERIALIZABLE, the application should be able to re-issue a transaction. READ COMMITTED transactions can fail in ways that will succeed on retry, including:
Deadlock detection due to lock upgrades, lock ordering issues, etc
Administrative query cancel requests
Administrative shutdown/restart of the DB
Unplanned crash/restart of the DB
Connectivity interruption
... etc
SERIALIZABLE just adds some more failure cases.
Well written applications will cope with query failures and re-issue the transaction.
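For illustration, a minimal retry-loop sketch in Java/JDBC (PostgreSQL reports serialization_failure as SQLSTATE 40001 and deadlock_detected as 40P01; the connection handling, the statements inside the transaction, and the retry limit are placeholders):
import java.sql.Connection;
import java.sql.SQLException;

class RetryingTransaction {
    static void runWithRetry(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        for (int attempt = 1; attempt <= 5; attempt++) {
            try {
                // ... execute the statements that make up the transaction ...
                conn.commit();
                return;
            } catch (SQLException e) {
                conn.rollback();
                String state = e.getSQLState();
                boolean retryable = "40001".equals(state)    // serialization_failure
                                 || "40P01".equals(state);   // deadlock_detected
                if (!retryable) {
                    throw e;    // not a transient failure, give up
                }
                // transient failure: loop around and re-issue the whole transaction
            }
        }
        throw new SQLException("transaction still failing after 5 attempts");
    }
}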
Q:
Text editor to search/replace newlines and tabs
A time-saving feature I often used while editing large text files (the kind used for batch processes of data exchanges, aka, "flat files" for feeds between multiple systems) was Notepad++'s extended Find/Change function where you can specify certain characters (such as tab, space, line feed, carriage return) in both the Find function and the Change function.
This is the detail on how it works:
Open the find/replace dialog. At the bottom will be some Search mode
options. Select "Extended (\n \r \t \0 \x...)" In either the Find
what or the Replace with field entries, you can use the following
escapes:
\n new line (LF)
\r carriage return (CR)
\s space character
\t tab character
This would make it very easy to edit lists of information, changing files from comma to tab delimited, or files with spaces in between into comma delimited, as well as being easy to go from spaces to tabs (or vice versa).
Anyone happen to know which text editors running on Mac OS have this feature (or have plugins to add this functionality)?
A:
TextEdit, which is available out of the box, supports searching for special characters:
Access "find" functionality with Cmd-F
Click the small magnifying glass at the beginning of the entry field and select "Insert Pattern" (last entry)
Pick your pattern (and repeat if necessary)
Move the cursor to the replacement field to select a pattern there (or just copy/paste from the search field)
If you do a lot of text-based file processing it might help to spend some time learning about basic Unix tools like awk, sed and friends. This would make things a lot easier in the long run.
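For example, the comma/tab conversions described in the question are one-liners in a Terminal (the file names here are placeholders):
# comma-delimited  ->  tab-delimited
tr ',' '\t' < input.csv  > output.tsv

# tab-delimited    ->  comma-delimited
tr '\t' ',' < output.tsv > roundtrip.csv

# collapse runs of spaces into single commas
sed 's/  */,/g' spaced.txt > commas.csv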
Q:
Drawable Resource in Mapbox Android
I have some icons as drawable resources that I would like to put at specific locations on a Mapbox map in Android Studio, but I don't know how.
I have tried converting my resource files into bitmaps, and then converting those bitmaps into strings so as to fill the "withIconImage" method of the SymbolOptions class. (I know it works with predefined strings such as "airport" and "fire-station".)
Can someone help me?
Thank you!
A:
This example from the Mapbox Android documentation shows how to add a local drawable resource from an Android application to your Mapbox map as a SymbolLayer. The initSpaceStationSymbolLayer helper method specifically takes care of this:
private void initSpaceStationSymbolLayer(@NonNull Style style) {
style.addImage(
"space-station-icon-id",
BitmapFactory.decodeResource(this.getResources(), R.drawable.iss)
);
style.addSource(new GeoJsonSource("source-id"));
style.addLayer(new SymbolLayer("layer-id", "source-id").withProperties(
iconImage("space-station-icon-id"),
iconIgnorePlacement(true),
iconAllowOverlap(true),
iconSize(.7f)
));
}
You mentioned SymbolOptions, however, so it is likely the case that you are using the Mapbox Annotation Plugin for Android rather than directly adding SymbolLayers. As indicated in the documentation for the SymbolOptions#withIconImage method, icon images are specified as Strings which reference the names of images in your style's sprite sheet. This example from the Mapbox Android Plugins demo app demonstrates how to add an image from the resources folder to your style, to then be used as the icon image in a SymbolManager. Namely, ID_ICON_AIRPORT is defined as "airport" here, then the helper method addAirplaneImageToStyle here adds the relevant image to the style, and finally a Symbol is created here using SymbolOptions#withIconImage and ID_ICON_AIRPORT passed as the argument. You use this same approach for adding your own drawable image.
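Putting that together, a rough sketch of the Annotation Plugin route with your own drawable (the icon id, drawable name, and coordinates below are placeholders):
// inside your Style.OnStyleLoaded callback, with the annotation plugin on the classpath
style.addImage("my-icon-id",
        BitmapFactory.decodeResource(getResources(), R.drawable.my_marker));

SymbolManager symbolManager = new SymbolManager(mapView, mapboxMap, style);
symbolManager.setIconAllowOverlap(true);

Symbol symbol = symbolManager.create(new SymbolOptions()
        .withLatLng(new LatLng(60.169, 24.939))
        .withIconImage("my-icon-id")
        .withIconSize(1.0f));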
Q:
How to have two columns with the same field but different values in a GridView
I'm currently doing a project in ASP.NET/C# where I have a GridView which gets two types of data, ID and Initials, from two different tables.
The table itself must be sort of like this:
UNIT 1
ID - INITIALS
And a Column right next to it:
UNIT 2
ID - INITIALS
The IDs are being called from Table 2, while the INITIALS are being called from Table 1. Each ID has its own INITIALS, but my problem is that, since the INITIALS are being called twice, they repeat themselves on the following and previous columns, as seen here.
My current code (ASPX):
<asp:GridView ID="GridView1" runat="server" CssClass= "table table-hover table-bordered" AllowPaging="false" PageSize="4" OnPageIndexChanging="OnPaging" OnPageIndexChanged="SearchByTagButton_Click" Style="max-width:75%;" AutoGenerateColumns="false">
<Columns>
<asp:TemplateField HeaderText="UNIDADE" HeaderStyle-ForeColor="#B70700" HeaderStyle-Width="34%">
<ItemTemplate>
<%# DataBinder.Eval(Container.DataItem, "unidade")%>
-
<%# DataBinder.Eval(Container.DataItem, "sigUnidade")%>
</ItemTemplate>
</asp:TemplateField>
<asp:TemplateField HeaderText="UNIDADE APOIADA" HeaderStyle-ForeColor="#B70700" HeaderStyle-Width="34%">
<ItemTemplate>
<%# DataBinder.Eval(Container.DataItem, "unidadeApoiada")%>
-
<%# DataBinder.Eval(Container.DataItem, "sigUnidade")%>
</ItemTemplate>
</asp:TemplateField>
</Columns>
And C#:
SqlDataAdapter da;
DataSet ds1 = new DataSet();
DataSet ds2 = new DataSet();
SqlConnection conn = new SqlConnection(strConn);
da = new SqlDataAdapter("SELECT ts.unidade, u.sigUnidade FROM T_SECRETARIAS ts INNER JOIN UNIDADES u on u.unidade = ts.unidade WHERE '%" + txtCodigoSearch.Text + "%' IS NULL OR LEN('%" + txtCodigoSearch.Text + "%') = 0 OR (ts.unidade='%" + txtCodigoSearch.Text + "%') OR ts.unidade LIKE '%" + txtCodigoSearch.Text + "%'", conn);
da.Fill(ds1);
da = new SqlDataAdapter("SELECT ts.unidadeApoiada, u.sigUnidade FROM T_SECRETARIAS ts INNER JOIN UNIDADES u on u.unidade = ts.unidadeApoiada WHERE '%" + txtCodigoSearch.Text + "%' IS NULL OR LEN('%" + txtCodigoSearch.Text + "%') = 0 OR (ts.unidadeApoiada='%" + txtCodigoSearch.Text + "%') OR ts.unidadeApoiada LIKE '%" + txtCodigoSearch.Text + "%'", conn);
da.Fill(ds2);
ds1.Merge(ds2);
GridView1.DataSource = ds1;
GridView1.DataBind();
I basically join two DataSets, one for each SQL query, as I found it the only way to join both queries to get both IDs (they're different, as in they're all a "main ID" in Table 1, but in Table 2 they can also be "sub IDs").
I tried to explain this as best I could, but I would like my table to be formatted as ID - INITIALS without any repetitions. Any help would be really appreciated.
A:
Starting from the comments, you can use the following code:
SqlDataAdapter da;
DataSet ds1 = new DataSet();
SqlConnection conn = new SqlConnection(strConn);
da = new SqlDataAdapter("SELECT ts.unidade, ts.unidadeApoiada, u1.sigUnidade as sigUnidade1, u2.sigUnidade as sigUnidade2 FROM T_SECRETARIAS ts INNER JOIN UNIDADES u1 on u1.unidade = ts.unidade INNER JOIN UNIDADES u2 on u2.unidade = ts.unidadeApoiada WHERE....", conn);
da.Fill(ds1);
GridView1.DataSource = ds1;
GridView1.DataBind();
and in aspx:
<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="false">
<Columns>
<asp:TemplateField HeaderText="UNIDADE">
<ItemTemplate>
<%# DataBinder.Eval(Container.DataItem, "unidade")%>
-
<%# DataBinder.Eval(Container.DataItem, "sigUnidade1")%>
</ItemTemplate>
</asp:TemplateField>
<asp:TemplateField HeaderText="UNIDADE APOIADA">
<ItemTemplate>
<%# DataBinder.Eval(Container.DataItem, "unidadeApoiada")%>
-
<%# DataBinder.Eval(Container.DataItem, "sigUnidade2")%>
</ItemTemplate>
</asp:TemplateField>
</Columns>
Q:
Breadth-first search on an 8x8 grid in Java
What I'm trying to do is count how many moves it takes to get to the goal using the shortest path. It must be done using a breadth first search. I put the 8x8 grid into a 2d array which is filled with one of four chars, E for empty (can move into these spots), B for blocked (can't move here), R for robot (starting point), or G for goal. The algorithm had to check for movable spaces in the order up, left, right, then down, which I believe I've done correctly. After a node is checked it changes its contents to a 'B'. If the goal cannot be reached, 0 should be returned.
I have changed my code to implement what Kshitij told me, and it works beautifully. I was just too tired to see that I wasn't initializing my queue after every new data set lol. Thanks for the help!
public static int bfSearch(){
Queue <int []> queue = new LinkedList <int []> ();
int [] start = {roboty,robotx,0};
queue.add(start);
while (queue.peek() != null){
int [] array = queue.remove();
if(array[0]-1 >= 0 && grid[array[0]-1][array[1]] != 'B'){
if (grid[array[0]-1][array[1]] == 'G'){
return array[2]+1;
}
else{
grid[array[0]-1][array[1]] = 'B';
int [] temp = {array[0]-1, array[1], array[2]+1};
queue.add(temp);
}
}
if(array[1]-1 >= 0 && grid[array[0]][array[1]-1] != 'B'){
if (grid[array[0]][array[1]-1] == 'G'){
return array[2]+1;
}
else{
grid[array[0]][array[1]-1] = 'B';
int [] temp = {array[0], array[1]-1, array[2]+1};
queue.add(temp);
}
}
if(array[1]+1 <= 7 && grid[array[0]][array[1]+1] != 'B'){
if (grid[array[0]][array[1]+1] == 'G'){
return array[2]+1;
}
else{
grid[array[0]][array[1]+1] = 'B';
int [] temp = {array[0], array[1]+1, array[2]+1};
queue.add(temp);
}
}
if(array[0]+1 <= 7 && grid[array[0]+1][array[1]] != 'B'){
if (grid[array[0]+1][array[1]] == 'G'){
return array[2]+1;
}
else{
grid[array[0]+1][array[1]] = 'B';
int [] temp = {array[0]+1, array[1], array[2]+1};
queue.add(temp);
}
}
}
return 0;
}
A:
You'll need to store 2 things in your queue. Let's call each item in your queue a node.
position (which you already store)
count (moves needed to get to this position from the start position)
You start off by assigning the count of your start position to 0.
The way the algorithm works is:
you pop a node from the queue
you determine where you can go from the position specified by the node you just popped. That is, if you treat this as "making a tree on the fly", you're determining the children of the node you popped from the queue
you add these children to the queue.
In your 3rd step, when you add a node child to the queue, you'd have to determine the count that needs to be added to this node. This count is simply the count of the parent node (that you popped in step 1) + 1
Finally, your return value would be the count associated with the node that carries the destination position.
For instance, lets work with a 4x4 grid, where position [0,0] is the start, and position [0,3] is the destination.
S E E B
E B E E
B B B E
G E E E
Initially, your queue would be:
[{(0, 0), 0}]
where the value inside the () is the position, and the second value inside the {} is the count.
You pop this node from your queue, and you determine that you can get to positions (0,1) and (1,0). So you add items {(0, 1), 1} and {(1, 0), 1} to the queue. Note that the count is 1 because the count of the popped node was 0 and we incremented that by 1. Your queue now looks like:
[{(0, 1), 1}, {(1, 0), 1}]
You pop the first element, realize that it has no viable children, so you move on.
You pop the remaining element, and find out that it gives you one node you can get to, at position (2, 0). Since the node you popped has count 1, you add this new position paired with count = 1 + 1 = 2 to the queue.
Eventually, you'll pop the goal node from your queue, and its count will be 9.
Edit
If you want to get the path from the source to the destination, the current encoding doesn't work as is. You'd need to maintain a separate 2D array of size 8x8 with the counts instead of encoding them in the node itself. And when you finally find the count for the destination, you backtrack from the destination to the source using the 2D count array. Essentially, if you can get to the destination in 9 moves, you can get to one of its adjacent positions in 8 moves. So you find the position that has count 8 and is adjacent to the destination. You iteratively repeat this until you get to the source.
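For reference, a rough sketch of that backtracking step; it assumes the BFS has filled a separate 8x8 count array (-1 for unvisited cells, 0 at the start) and that the goal was actually reached:
static java.util.List<int[]> backtrack(int[][] count, int goalY, int goalX) {
    java.util.LinkedList<int[]> path = new java.util.LinkedList<>();
    int y = goalY, x = goalX;
    path.addFirst(new int[]{y, x});
    int[][] moves = {{-1, 0}, {0, -1}, {0, 1}, {1, 0}};
    while (count[y][x] > 0) {
        // find an adjacent cell whose count is exactly one less
        for (int[] m : moves) {
            int ny = y + m[0], nx = x + m[1];
            if (ny >= 0 && ny <= 7 && nx >= 0 && nx <= 7
                    && count[ny][nx] == count[y][x] - 1) {
                y = ny;
                x = nx;
                break;
            }
        }
        path.addFirst(new int[]{y, x});
    }
    return path;
}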
The method you described, where you add an extra element to the nodes does not work. I'll leave it for you to find out why, since this is homework :)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
c++ interfaces error; identifier "class" is undefined
Using: VS2010 : New Project -> Windows32 Console App -> Empty Project
Language: C++
Problem: I'm attempting a simple interface test with two classes that implement the interface through the 1 virtual method. I also attempt to use constructors within those two classes to set default values. The main problem occurs when attempting to create an object of either class; the IDE complains the identifier "class" is undefined.
I have checked tens of other posts with the same error but most of them are errors in instantiation(proper term?) where they use
Class obj();
instead of
Class obj;
another issue in the other threads was that the header files were not included in main.cpp AND the concrete class(proper term?). Still other issues involved the ordering of includes(base class should be first), or even improper use of constructors/deconstructors.
I have checked over those issues to see if they would work for me but have been unable to figure out what the problem is.
Any insight would be appreciated!
Error: (all in main.cpp)
Error 1 error C2065: 'Human' : undeclared identifier 6 1 interfaceGameTest
Error 2 error C2146: syntax error : missing ';' before identifier 'hum1' 6 1 interfaceGameTest
Error 3 error C2065: 'hum1' : undeclared identifier 6 1 interfaceGameTest
Error 4 error C2065: 'Orc' : undeclared identifier 7 1 interfaceGameTest
Error 5 error C2146: syntax error : missing ';' before identifier 'orc1' 7 1 interfaceGameTest
Error 6 error C2065: 'orc1' : undeclared identifier 7 1 interfaceGameTest
Error 7 error C2065: 'hum1' : undeclared identifier 10 1 interfaceGameTest
Error 8 error C2227: left of '->getHP' must point to class/struct/union/generic type 10 1 interfaceGameTest
Code:
IPc.h
#ifndef IPC_H_HAS_BEEN_INCLUDED
#define IPC_H_HAS_BEEN_INCLUDED
class IPc {
private:
int hp;
int mana;
int endurance;
public:
void setHP(int h) { hp = h; }
void setMP(int m) { mana = m; }
void setEnd(int e) { endurance = e; }
int getHP() { return hp; }
int getMP() { return mana; }
int getEnd() { return endurance; }
virtual int Attack() = 0;
};
#endif
IPc.cpp
#include "IPc.h"
class Human: public IPc
{
public:
Human::Human()
{
this->setHP(10);
this->setMP(5);
this->setEnd(10);
}
Human::~Human(){}
int IPc::Attack() // I have tried just "int Attack()" as well
{
return 1;
}
};
class Orc: public IPc
{
public:
Orc::Orc()
{
this->setHP(20);
this->setMP(0);
this->setEnd(20);
}
Orc::~Orc() {}
int IPc::Attack()
{
return 5;
}
};
main.cpp
#include <iostream>
#include "IPc.h"
int main()
{
Human hum1; // error Human undefined
Orc orc1; // error Orc undefined
int humHP = 0;
humHP = hum1->getHP();
std::cout << "Human HP is: " << humHP << std::endl;
return 0;
}
Edit
I placed
class Human: public IPc { virtual int Attack() };
inside IPc.h (after the base class) and before #endif(also did one for Orc). I also changed
humHP = hum1->getHP();
to
humHP = hum1.getHP();
Which seems to have cleared up the previous issues(Thanks Mat and Rahul). I am now getting a 'class' type redefinition error. I believe this has to do with the include guards. Do I need to surround each individual class with its own set of guards in the .h file or have I implemented them improperly perhaps?
Edit2
I placed both class declarations in their own header files with their own guards. This seems to have resolved the issue of " 'class' type redefinition". Though there is now an error of unresolved external symbol on the virtual attack calls for both derived classes.
A:
You are not allowed to use qualified names in class member declarations. I.e. when declaring the constructor of class Human inside class Human, you are not allowed to call it Human::Human(). It should be just Human(). The same applies to all other methods.
Trying to declare method int IPc::Attack() inside other classes makes no sense at all. It should be just int Attack().
You declared classes Human and Orc inside IPc.cpp. These classes are only visible inside IPc.cpp and nowhere else. In C++ language class declarations for program-wide classes are typically placed in header files, just how you did it with IPc class.
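Putting those points together, a minimal sketch of what Human could look like in its own header (Orc would follow the same pattern; the file name and include guard are assumptions):
// Human.h
#ifndef HUMAN_H_HAS_BEEN_INCLUDED
#define HUMAN_H_HAS_BEEN_INCLUDED
#include "IPc.h"
class Human : public IPc
{
public:
    Human()          // unqualified name inside the class definition
    {
        setHP(10);
        setMP(5);
        setEnd(10);
    }
    ~Human() {}
    int Attack()     // unqualified; overrides the pure virtual from IPc
    {
        return 1;
    }
};
#endif
main.cpp would then #include "Human.h" and "Orc.h", and since hum1 is an object rather than a pointer, it would use hum1.getHP() (a dot, not ->).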
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Regular expression for only fixed 8 digit numeric value, no spaces allowed ?
I want a regular expression that accepts an employee id, where the employee id is an 8-digit number, no less and no more, without any spaces, letters, or other special characters.
A:
You can try to use this simple regex:
^\d{8}$
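For illustration, one way to use it (here in Java; the variable name is made up):
boolean valid = employeeId != null && employeeId.matches("\\d{8}");
// String.matches() anchors the whole string, so "\\d{8}" behaves like ^\d{8}$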
|
{
"pile_set_name": "StackExchange"
}
|
Q:
drag and drop from master to detail view in sapui5
I would like to enable drag and drop within an SAPUI5 app. To do so I'm using the jQuery draggable and droppable widgets. Whenever I'm dragging an element from the master view to the detail view or vice versa, the dragged element hides behind the other view. The drop is still recognized, the element just doesn't show up. Basically both views are just divs. It could have something to do with the overflow property, but I can't make any sense of it.
I'm using those parameters on my draggable function:
draggable({
helper: "clone",
cancel: true,
cursor: "pointer",
stack: "
})
Here's a sample jsbin: http://jsbin.com/werewuf/4/edit?html,output
A:
Since the sapMNav sapMSplitContainerMaster sapMSplitContainerMasterVisible element has the overflow: hidden property, no child of this parent can be visible outside of this container.
Set 'overflow' : 'visible' for the parent element, and also for the drag button's parent: dragButton.$().parent().css("overflow-y", "visible").css("overflow-x", "visible");
$('.sapMSplitContainerMasterVisible').css('overflow', 'visible');
JSBin Demo
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Internal calls on my proxied class don't get routed through proxy
I was using method swizzling to wrap all method invocations in a class with some extra functionality. Specifically I was:
Checking if the required object for this method call was in the cache
If the cache had that object return it.
If not, dispatch to the original implementation, populate the cache and return that.
I wanted to see if using NSProxy gave more readable code, plus thought NSProxy might make it easy to redirect method calls with an unknown number of arguments.
So, the NSProxy version works fine except for one important catch. Any internal calls on my sub-class don't go via the proxy.
Here's how I create the stand-in interceptor:
- (id)init
{
self = [super init];
if (self)
{
TyphoonAssemblyProxy* proxy = [[TyphoonAssemblyProxy alloc] initWithAssembly:self];
self = proxy; // 'self' should now be the proxy. . . but it seems not :(
}
return self;
}
Internal calls (ie calls to 'self' within the proxied class) don't get routed through proxy:
Now, in a sub-class of my proxied class, I have code want internal calls to go via the proxy. But they don't.
- (id)knight
{
return [TyphoonDefinition withClass:[Knight class] properties:^(TyphoonDefinition* definition)
{
//THIS NEXT CALL TO [self defaultQuest] SHOULD GO THROUGH PROXY MACHINERY!
[definition injectProperty:@selector(quest) withDefinition:[self defaultQuest]];
[definition injectProperty:@selector(damselsRescued) withValueAsText:@"12"];
[definition setScope:TyphoonScopeDefault];
}];
}
Question:
What is an easy way to get these internal calls to go via the proxy?
A:
Instead of doing that maybe message forwarding would work for you.
https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/ObjCRuntimeGuide/Articles/ocrtForwarding.html
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Use MultiThreading to read a file faster
I want to read a file of 500 Mb with the help of 2 threads, so that reading the file will be much faster. Someone please give me some code for the task using core java concepts.
A:
Multi-threading is not likely to make the code faster at all. This because reading a file is an I/O-bound process. You will be limited by the speed of the disk rather than your processor.
A:
Instead of trying to multi-thread the reading, you may benefit from multi-threading the processing of the data. This can make it look like using multiple threads to read helps, but in reality, using one thread to read and multiple threads to process is often better.
Processing often takes longer than reading and is CPU bound. Using multiple threads to read files usually only helps when you have multiple files on different physical disks (a rare occasion).
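As a rough illustration of that split, a single reader thread feeding a small pool of workers; the file name and the processLine body are placeholders:
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReadAndProcess {
    public static void main(String[] args) throws IOException {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        // One thread reads sequentially (I/O bound); the pool does the CPU-bound part.
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("big-file.txt"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                final String current = line;
                workers.submit(() -> processLine(current));
            }
        }
        workers.shutdown();
    }

    private static void processLine(String line) {
        // placeholder for the actual CPU-bound processing
    }
}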
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is there a encryption api's available to encrypt log file while using Microsoft Enterprise Library 3.1
How to encrypt a string message while it is being logged into the log file using Logger.Write(...) in Microsoft Enterprise Library 3.1. Are there any inbuilt API's in Microsoft Enterprise Library which does encryption?
A:
From msdn : http://msdn.microsoft.com/en-us/library/ff647732.aspx
The Logging Application Block formatters do not encrypt logging information. Trace listener destinations receive logging information as clear text. This means that attackers that can access a trace listener destination can read the information. You can prevent unauthorized access to sensitive information. One approach is to use access control lists (ACLs) to restrict access to flat files. You can also create a custom formatter that encrypts log information. For information about how to create a custom formatter, see Extending the Logging Application Block.
here is implementation of log message encryption : http://msdn.microsoft.com/en-us/magazine/cc188689.aspx
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Writing preprocessor directives to get string
Can you write preprocessor directives to return you a std::string or char*?
For example: In case of integers:
#define square(x) (x*x)
int main()
{
int x = square(5);
}
I'm looking to do the same but with strings, like a switch-case pattern: if I pass 1 it should return "One", 2 should return "Two", and so on.
A:
You don't want to do this with macros in C++; a function is fine:
char const* num_name(int n, char const* default_=0) {
// you could change the default_ to something else if desired
static char const* names[] = {"Zero", "One", "Two", "..."};
if (0 <= n && n < (sizeof names / sizeof *names)) {
return names[n];
}
return default_;
}
int main() {
cout << num_name(42, "Many") << '\n';
char const* name = num_name(35);
if (!name) { // using the null pointer default_ value as I have above
// name not defined, handle however you like
}
return 0;
}
Similarly, that square should be a function:
inline int square(int n) {
return n * n;
}
(Though in practice square isn't very useful, you'd just multiply directly.)
As a curiosity, though I wouldn't recommend it in this case (the above function is fine), a template meta-programming equivalent would be:
template<unsigned N> // could also be int if desired
struct NumName {
static char const* name(char const* default_=0) { return default_; }
};
#define G(NUM,NAME) \
template<> struct NumName<NUM> { \
static char const* name(char const* default_=0) { return NAME; } \
};
G(0,"Zero")
G(1,"One")
G(2,"Two")
G(3,"Three")
// ...
#undef G
Note that the primary way the TMP example fails is you have to use compile-time constants instead of any int.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
If a set A has 3 elements and set B has 5 elements then how many one-one function from A to B?
Is there a set formula to find No. Of one-one function?
If so, what theory is involved with it?
A:
Let $A$ be a set of $n$ elements, and $B$ be a set of $m$ elements, $n,m \in \mathbb{Z}$. To give a $1-1$ function $f : A \to B$ is the same as to say what $Im(f) \subseteq B$ should be, and then give a bijection $A \to Im(f)$. There are $m \choose n$ ways to decide $Im(f)$, and $n!$ ways to give the bijection.
Hence the answer is ${m \choose n} n!$. For the question asked, with $n = 3$ and $m = 5$, this gives ${5 \choose 3} \cdot 3! = 10 \cdot 6 = 60$ one-one functions.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Dynamic form CodeIgniter, Attribute Form in Database
I want ask something about Code Igniter Dynamic Form.
I want to make a form where every tag (input, textarea, etc.) and every attribute (id, class, etc.) of that form is stored in the database. No hard-coding in HTML/PHP: the controller only fetches the tags and attributes from the database and sends them to the view, which automatically displays the form exactly as saved in the database.
In this way, the form can be extended or reduced in the future without the developer needing to hard-code anything.
Can anyone tell me how to do something like that, and give me a reference link so I can learn it?
A:
Looking at this github project may help you to get the idea
Generate a form from a DB table
|
{
"pile_set_name": "StackExchange"
}
|
Q:
JavaScript Promise not executing then
I am trying to implement a promise into the following JavaScript code, however the process.then function never actually happens for some reason. Can anyone see why? I have set up the new promise and it executes as I have tested it with the console log, however it never executes the .then function
Thanks
function connect() {
'use strict';
//User Input
var query = document.getElementById('query').value;
//API key & URL
var apiUrl = 'https://community-wikipedia.p.mashape.com/api.php?action=opensearch&search=' + query + '&limit=20&namespace=0&format=json';
var apiKey = "xxxxx";
//While requesting the data from API set the innerHTML to loading.
//document.getElementById('suggestions').innerHTML='Loading your request...';
document.getElementById('spin').style.display = 'inline';
//Process the JSON data
var process = new Promise(function (resolve, reject) {
//Method for connecting to API
var httpRequest = new XMLHttpRequest();
//Opening the API URL
httpRequest.open('GET', apiUrl, true);
httpRequest.setRequestHeader("X-Mashape-Key", apiKey);
httpRequest.send(null);
//When state has changed then triggers processResponse function
httpRequest.onload = function() {
//Checks the response codes
if (httpRequest.readyState === 4) {
//document.getElementById('suggestions').innerHTML='';
if (httpRequest.status === 200) {
var response = JSON.parse(httpRequest.responseText);
//Clear any previous results
document.getElementById('suggestions').innerHTML = '';
//Remove spinner when data is input
document.getElementById('spin').style.display = 'none';
resolve(response);
} else {
alert('There was a problem with the request');
reject('No Good!');
}
}
}
process.then (function(response) {
//Set response to response
var response = response;
//Grab suggestions div from DOM
var suggestions = document.getElementById('suggestions');
//Create new element UL
var list = document.createElement('UL');
//Create new elements for li's
var newLi, newText;
//For all the text nodes
var textNodes = [];
//For all the li's
var liList = [];
//For all the links
var links = [];
//For loop to add and append all suggestions
for (var i = 0; i < response[1].length; i++) {
//Replace spaces with underscore
var setHTML = response[1][i].replace(/\s/g, '_');
//Creates the appropriate link
var link = 'http://en.wikipedia.org/wiki/'+setHTML;
//Create new a elements in array
links[i] = document.createElement('a');
//Adds the link to links array
links[i].href = link;
//Create new text node with the response from api
textNodes[i] = document.createTextNode(response[1][i]);
//Create a new element 'li' into array
liList[i] = document.createElement('li')
//Append the response(textnode) to the a in the array
links[i].appendChild(textNodes[i]);
//Append the a to the li in the array
liList[i].appendChild(links[i]);
//Append the li to the UL
list.appendChild(liList[i]);
}
//Append the UL to the suggestions DIV
suggestions.appendChild(list);
}
)}
)}
function init() {
'use strict';
document.getElementById("query").addEventListener("keyup", connect);
}
window.onload = init;
A:
You shouldn't place your process.then() in the new Promise() block.
Instead of:
var process = new Promise(function (resolve, reject) {
    // Code
    process.then(function(response) {
        // Code
    });
});
Use:
var process = new Promise(function (resolve, reject) {
    // Code
});
process.then(function(response) {
    // Code
});
Instead of trying to access a process variable in the promise's scope, this properly sets a then for your process promise.
Also, var response = response; is pretty pointless. It doesn't really add anything to your code.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Turn this integral into a Laplace transformation by Change of Variables
Question from Advanced Engineering Mathematics - Greenberg.
Page 268, section 5.4 question 6.
$C(T)$ = $\int_0^{\infty} e^{-0.0744v^2/T^2}p(v)dv$
is an approximate relation between frequency spectrum p(v) and the specific heat C(T) of a crystal, where T is the temperature.
Solve for $p(v)$ if $C(T) = T$.
Hint (given in book): By a suitable change of variables, the integral can be made to be a Laplace transform.
Spent hours on this one, got nowhere.
Looking for an appropriate change of variables to put $\int_0^{\infty} e^{-0.0744v^2/T^2}p(v)dv$ into the form:
$P(s) = \int_0^{\infty} e^{-st}p(t)dt$ - The Laplace Transform of $p(t)$. Then we can use the inverse table of the transformation to solve for $p(t)$.
Taking $s = -0.0744v/T^2$ doesn't seem to be very useful, because then the transformed function is of the form $P(-0.0744v/T^2) = T$, which is still a function of the variable $v$.
Any help is very much appreciated. Thanks.
A:
You want $-0.0744\, v^2/T^2$ to be $-s t$. Let us get rid of the square in $v^2$. We can take $u=v^2$, so $du=2v\,dv$. We substitute, and now we have
$$
\int_0^\infty\,e^{−0.0744 u/T^2}\,\left(\frac1{2\sqrt u}\,p(\sqrt u)\right)\,du.
$$
This is the Laplace Transform of the function in brackets at the point $s=0.0744/T^2$. So your equation is now
$$
T=F(0.0744/T^2).
$$
If we let $R=0.0744/T^2$, then $T=\sqrt{0.0744/R}$, and now the equation looks like
$$
\frac{\sqrt{0.0744}}{\sqrt{R}}=F(R)
$$
Looking at the inverse Laplace Transform of $\sqrt a/\sqrt s$, we get that
$$
\frac1{2\sqrt u}\,p(\sqrt u)=\frac{\sqrt{0.0744}}{\sqrt \pi\,\sqrt u}.
$$
As $\sqrt u=v$, we get
$$
p(v)=\frac{2\times\sqrt{0.0744}}{\sqrt \pi}.
$$
It is not that exciting that $p$ turns out to be constant...
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Simple Gridview and button problems. I am learning Asp.Net. Please help me out!
I would not say I wasted my time, but spent around few hours changing this. But invain. Could some one please help me out.
In the following code:
I need to use break in between Disable Location(Title) and the gridview
I want the border color of grid to none. I do not want any color.
I want both buttons, Disable or deactivate and Cancel, to be displayed just below the gridview and in the middle of the page.
Please help me out!! Thanks a lot.
<body>
<form id="form1" runat="server">
<div style="display: block; background: url(images/reusable_blue_bg.jpg) repeat-x 0 -15px;
border-left: #88b9c7 1px solid; border-bottom: #88b9c7 1px solid; border-top: #88b9c7 1px solid;
border-right: #88b9c7 1px solid; padding: 0px 2px; height: 236px; min-height: 236px;
height: auto; margin-left: auto; margin-right: auto;">
<table align="center" style="width: 554px; border-top-style: none; border-right-style: none;
border-left-style: none; border-bottom-style: none" id="TABLE1">
<tr>
<td align="center" colspan="5" style="font-weight: normal; font-size: 20px; margin: 0px;
font-family: Arial; color: #1e7c9b;">
Disable Location</td>
</tr>
I need number 1 over here..
<asp:GridView ID="disableloc" runat="server" AutoGenerateColumns="False" DataKeyNames="LocationName"
DataSourceID="" AllowPaging="True" EnableViewState="true" BorderStyle="None">
i want 2 over here, i guess
</asp:GridView>
I want 3 in here..
<tr align ="center" style="position:fixed">
<asp:ImageButton ID="btnDisable" runat="server" ImageAlign="Middle" ImageUrl="~/images/green-deactivate.gif" OnClick="btnDisable_Click"
ValidationGroup="group1" />
<asp:ImageButton ID="btnCancel" runat="server" ImageUrl="~/images/cancel.gif" OnClick="btnCancel_Click" />
</tr>
</table>
</div>
Thanks so much!!
A:
Use padding-bottom on the style of the header td to get the space between the header and the gridview.
Use gridlines="None" on the GridView.
Wrap both the GridView and the buttons in tds; right now they're directly in the tr, which is not valid HTML, so the buttons wind up in the top left corner of the table.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Bind different ip addresses to urllib2 object in seperate threads
The following code binds specified ip address to socket in main program globally.
import socket
true_socket = socket.socket
def bound_socket(*a, **k):
sock = true_socket(*a, **k)
sock.bind((sourceIP, 0))
return sock
socket.socket = bound_socket
Suppose the main program has 10 threads, each with a urllib2 instance running inside the thread. How can I bind a different ip address to each urllib2 object (10 in total)?
A:
You can define a dictionary mapping thread identifier to IP address or use threading.local() global object to define it per thread:
import threading
socket_data = threading.local()
socket_data.bind_ip = None
true_socket = socket.socket
def bound_socket(*a, **k):
sock = true_socket(*a, **k)
if socket_data.bind_ip is not None:
sock.bind((socket_data.bind_ip, 0))
return sock
socket.socket = bound_socket
def thread_target(bind_ip):
socket_data.bind_ip = bind_ip
# the rest code
for bind_ip in [...]:
thread = Thread(target=thread_target, args=(bind_ip,))
# ...
But note, that is rather dirty hack. A better way is to extend connect() method in subclass of HTTPConnection and redefine http_open() method in subclass of HTTPHandler.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can my girlfriend maximise her chances of getting into the UK on a tourist visa?
I am a British citizen, currently living in Japan on a working visa. My girlfriend is a Japanese national, but has only lived in Japan for the past year (before that she lived in the Philippines).
We'd like to go back to the UK to visit my family this Christmas, but we're a bit worried about visas.
Before getting my working visa (which my company sorted out before I moved here permanently) every time I came to Japan I just turned up at the border and was given a 3 month tourist visa. I (probably naively) never even thought about the possibility of being denied entry.
However, my girlfriend was looking at the visa application documents online and it suggests bringing bank statements and other supporting documents. This, to me, suggests that there is a chance that my girlfriend will be denied entry.
She is studying the language while she tries to find a full time job, but the studying is all self-study and private classes so she isn't tied to any institution providing her with proof she is a student. She is also currently only employed part-time, with a very variable income.
Will just turning up at the border be OK? Should we print out some bank statements just in case? Should we apply in advance in some way? Can I (a British citizen) 'vouch' for her in some way? If so, how? I assume I can't just follow her through immigration.
A:
As a Japanese citizen, your girlfriend does not need a visa to visit the UK:
You won’t need a visa to come to the UK
You can stay in the UK for up to 6 months without a visa.
However, you should bring the same documents you’d need to apply for a visa, to show to officers at the UK border.
You may want to apply for a visa if you have a criminal record or you’ve previously been refused entry into the UK.
In your case, perhaps the most important document to show is a return ticket (or onward ticket) for your girlfriend to show that she won't try to stay in the UK. The fact that you have long-term residence in Japan will work in your favour.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can one convert an environment variable to a string in groovy
In a jenkins file pipeline I have:
def task = readJSON(file: 'ecs/task-definition.json')
echo "Read Task Definition: $task"
task.containerDefinitions[0].image="${AWS_VERSION_IMAGE}"
echo "New Task Definition With Image Is: $task"
In the output value of the second echo statement i get:
New Task Definition With Image Is: [name:proxy, image:[bytes:[48, 48, 55, 49, 50, 54, 53, 56, 51, 55, 53, 55, 46, 100, 107, 114, 46]]
where AWS_VERSION_IMAGE is an environment variable defined as AWS_VERSION_IMAGE = "${AWS_DOCKER_REGISTRY}:${VERSION_TAG}" in an environment block.
A:
Thanks for the replies, I ended up fixing the issue by using String instead of def like this:
String image = "${AWS_VERSION_IMAGE}"
task.containerDefinitions[0].image=image
Now it works.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Compare JavaScript Array
Possible Duplicate:
Why [] == [] is false in javascript?
I would like to ask about a strange thing, i.e.:
var x = "pl";
var y = ["pl"];
[x] == y; // false - why?
x == y; // true - how ?
x === y; // false - okay
Can some one explain it?
Thanks in advance.
A:
The first one is false because you're comparing two arrays (which are objects) - a comparison which will always be false unless the objects are actually the same object, or if the objects are coerced to a different type of value like in the second comparison.
In the second comparison, y is coerced to be a string value, and then found to be equal to "pl".
For instance, this code:
["pl"] + "foo" → "plfoo"
Incidentally, this is why you should always use === instead of == - it doesn't result in any surprising coercions. That's why the third comparison is false.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
UIActionSheet orientation
I am having a problem trying to get the orientation of the device correct. I have to show an action sheet that should come depending on the orientation of the device. Here is the code I am using.
UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
switch (orientation)
{
case UIDeviceOrientationPortrait:
case UIDeviceOrientationFaceUp:
case UIDeviceOrientationFaceDown:
case UIDeviceOrientationUnknown:
case UIDeviceOrientationPortraitUpsideDown:
[self displayActionSheetInPotraitMode];
break;
case UIInterfaceOrientationLandscapeLeft:
case UIInterfaceOrientationLandscapeRight:
[self displayActionSheetInLandscapeMode];
break;
default:
[self displayActionSheetInPotraitMode];
break;
}
A:
Let's look what the UIDevice Class Reference tells us about the orientation property.
The value of this property always returns 0 unless orientation notifications have been enabled by calling beginGeneratingDeviceOrientationNotifications.
So, you should call
[[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications]
somewhere earlier in your code.
Also, I'd like to suggest a way to simplify your current code:
UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
if (UIDeviceOrientationIsLandscape(orientation)) {
[self displayActionSheetInLandscapeMode];
} else {
[self displayActionSheetInPotraitMode];
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Clone constant of hash into new variable without mutating constant on update with .each block?
I'm struggling with something. I've abstracted my code out to be as simple as possible, yet I still don't understand why it's having this behaviour.
I'm creating a constant consisting of a set of key-value pairs and freezing it. I'm then using the .dup method to copy the hash into a new variable.
However, when I iterate over an array and try to store it in the (previously empty) array in the new variable, it not only updates the new variable, but also the original constant. This only seems to be the case with the .each method - if I pass the new values directly as a new array, it works without updating the constant.
My abstracted code is below:
CONFIG_VALUES = { results: [], loop_count: 0 }.freeze
the_results = ["foo", "bar"]
abc = CONFIG_VALUES.dup
the_results.each do |res|
abc[:results] << res
end
abc
#=> {:results=>["foo", "bar"], :loop_count=>0}
CONFIG_VALUES
#=> {:results=>["foo", "bar"], :loop_count=>0}
A:
Hash#dup method isn't recursive. Anyway, if you use Ruby on Rails, and I think you do since you tagged it, you can use #deep_dup method: http://api.rubyonrails.org/classes/Hash.html#method-i-deep_dup
It's an ActiveSupport method, so you could just use the gem in case you aren't using Ruby on Rails.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to enter real full screen mode in macOS?
With the code below I tried to enter real full screen mode in macOS. If nil is passed as options then it enters kind of full screen mode, but no content is visible.
class ViewController: NSViewController {
override func viewDidLoad() {
super.viewDidLoad()
let opts: NSApplication.PresentationOptions = [.fullScreen]
var options = [NSView.FullScreenModeOptionKey: Any]()
options[.fullScreenModeAllScreens] = 0
// options[.fullScreenModeApplicationPresentationOptions] = opts.rawValue
view.enterFullScreenMode(NSScreen.main!, withOptions: options)
}
}
How to make the content visible or is there another way to enter full screen mode?
A:
Tell the ViewController's window to expand into fullscreen mode:
self.view.window?.toggleFullScreen(self)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
terminal mac - bash command not found, permission denied and '>' in prompt
I tried to change colours in Terminal following this guide:
http://voidcanvas.com/ubuntu-like-mac-terminal/
I created bash_profile file, saved and quitted Terminal. Now when i open Terminal I get this
Last login: Mon Oct 17 01:36:24 on ttys000
-bash: : command not found
-bash: : command not found
-bash: git: command not found
-bash: gt: command not found
-bash: /dev/null: Permission denied
->> $
the last line is changed in StackExchange's text editor, I do not understand why please look at the uploaded photo below - this is how it looks in my terminal
I deleted bash_profile and I still get the same.
Every command I write is followed by:
-bash: git: command not found
-bash: gt: command not found
-bash: /dev/null: Permission denied
What should I do it to stop it?
Update from comments: Here's what I put in the file.
export CLICOLOR=1
export LSCOLORS=GxBxCxDxexegedabagaced
parse_git_branch() {
git branch 2> /dev/null |
sed -e '/^[^*]/d' -e 's/* (.*)/ (\1)/'
}
export PS1="\e[0;35m->> \e[1;34m\W\e[0;32m\$(parse_git_branch)\e[0;37m $ "
A:
The &gt; is a syntax error, apparently caused by HTML markup in whatever source you copy/pasted this from. Where you see &gt; the author intended > and where you see &lt; the author intended <. If there's an &amp; it will need to be replaced with a literal &, etc.
See a listing of HTML entity codes for a somewhat more exhaustive list.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Prevent one image from overwriting another in the file input
I have this code:
$img = $_FILES['file']['name'];
$diretorio = "imagens/";
$tmp = $_FILES['file']['tmp_name'];
move_uploaded_file($tmp, $diretorio.$img);
If I upload, for example, bola.png and another user uploads bola.png afterwards, their image will replace mine. I would like that, if this file already exists in my imagens folder and in my database, it shows up as, for example, bola(1).png, then bola(2).png, and so on.
A:
Just use is_file and create a recursive function to avoid duplicates:
function increment_name($path)
{
    //If the file does not exist the name is accepted
    if (!is_file($path)) {
        return $path;
    }
    //Get the path information
    $info = pathinfo($path);
    //Get the name without the extension
    $name = $info['filename'];
    /*
     * If there is no format like "x (1).txt" yet,
     * start from zero so the increment begins at 1
     */
    $current = 0;
    /*
     * Check whether something like "x (1).txt" already exists;
     * if so, grab the number and put the regex captures into $out
     */
    if (preg_match('#\((\d+)\)$#', $name, $out)) {
        //Get the number that was between the parentheses
        $current = $out[1];
        //Remove the number and the parentheses from the end
        $name = rtrim(substr($name, 0, -strlen($current)-2));
    }
    //Increment the number
    $name .= ' (' . (++$current) . ')';
    //Recursively check whether the NEW name already exists
    return increment_name($info['dirname'] . '/' . $name . '.' . $info['extension']);
}
//Usage
$img = $_FILES['file']['name'];
$diretorio = "imagens/";
$tmp = $_FILES['file']['tmp_name'];
$new_name = increment_name($diretorio.$img);
move_uploaded_file($tmp, $new_name);
Using uniqid()
Following the suggestions in the comments, you can use uniqid(), but there is a catch: it is not 100% guaranteed to be unique, because it is based on the current time. To work around that you can add a rand() prefix and also check whether the name already exists, doing the check recursively.
Here is an example:
function create_ufilename($name, $dir = '.')
{
    //Get the extension of the original image
    $ext = pathinfo($name, PATHINFO_EXTENSION);
    //Generate a name based on the current time, with a random prefix
    $id = uniqid(rand(1, 100));
    //Build the path
    $path = $dir . '/' . $id . '.' . $ext;
    //If it already exists, try again; otherwise return the new name
    return is_file($path) ? create_ufilename($name, $dir) : $path;
}
//Usage
$img = $_FILES['file']['name'];
$diretorio = "imagens/";
$tmp = $_FILES['file']['tmp_name'];
//Pass the image name and the folder you want to save to, like this:
$salvar = create_ufilename($img, $diretorio);
move_uploaded_file($tmp, $salvar);
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How does kernel code knows which spi bus is using?
I modified the device tree file and enabled SPI using 4 GPIO pins, which support pinmux and can switch from GPIO to SPI function.
But in the Linux kernel code, how does the code know which SPI bus/pins are used?
For example, I found a Linux kernel driver, max1111.c, which drives an SPI ADC chip. But I checked its code and can't find where the SPI bus/pins are specified.
I paste max1111.c below.
/*
* max1111.c - +2.7V, Low-Power, Multichannel, Serial 8-bit ADCs
*
* Based on arch/arm/mach-pxa/corgi_ssp.c
*
* Copyright (C) 2004-2005 Richard Purdie
*
* Copyright (C) 2008 Marvell International Ltd.
* Eric Miao <[email protected]>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* publishhed by the Free Software Foundation.
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/err.h>
#include <linux/hwmon.h>
#include <linux/hwmon-sysfs.h>
#include <linux/spi/spi.h>
#include <linux/slab.h>
enum chips { max1110, max1111, max1112, max1113 };
#define MAX1111_TX_BUF_SIZE 1
#define MAX1111_RX_BUF_SIZE 2
/* MAX1111 Commands */
#define MAX1111_CTRL_PD0 (1u << 0)
#define MAX1111_CTRL_PD1 (1u << 1)
#define MAX1111_CTRL_SGL (1u << 2)
#define MAX1111_CTRL_UNI (1u << 3)
#define MAX1110_CTRL_SEL_SH (4)
#define MAX1111_CTRL_SEL_SH (5) /* NOTE: bit 4 is ignored */
#define MAX1111_CTRL_STR (1u << 7)
struct max1111_data {
struct spi_device *spi;
struct device *hwmon_dev;
struct spi_message msg;
struct spi_transfer xfer[2];
uint8_t tx_buf[MAX1111_TX_BUF_SIZE];
uint8_t rx_buf[MAX1111_RX_BUF_SIZE];
struct mutex drvdata_lock;
/* protect msg, xfer and buffers from multiple access */
int sel_sh;
int lsb;
};
static int max1111_read(struct device *dev, int channel)
{
struct max1111_data *data = dev_get_drvdata(dev);
uint8_t v1, v2;
int err;
/* writing to drvdata struct is not thread safe, wait on mutex */
mutex_lock(&data->drvdata_lock);
data->tx_buf[0] = (channel << data->sel_sh) |
MAX1111_CTRL_PD0 | MAX1111_CTRL_PD1 |
MAX1111_CTRL_SGL | MAX1111_CTRL_UNI | MAX1111_CTRL_STR;
err = spi_sync(data->spi, &data->msg);
if (err < 0) {
dev_err(dev, "spi_sync failed with %d\n", err);
mutex_unlock(&data->drvdata_lock);
return err;
}
v1 = data->rx_buf[0];
v2 = data->rx_buf[1];
mutex_unlock(&data->drvdata_lock);
if ((v1 & 0xc0) || (v2 & 0x3f))
return -EINVAL;
return (v1 << 2) | (v2 >> 6);
}
#ifdef CONFIG_SHARPSL_PM
static struct max1111_data *the_max1111;
int max1111_read_channel(int channel)
{
return max1111_read(&the_max1111->spi->dev, channel);
}
EXPORT_SYMBOL(max1111_read_channel);
#endif
/*
* NOTE: SPI devices do not have a default 'name' attribute, which is
* likely to be used by hwmon applications to distinguish between
* different devices, explicitly add a name attribute here.
*/
static ssize_t show_name(struct device *dev,
struct device_attribute *attr, char *buf)
{
return sprintf(buf, "%s\n", to_spi_device(dev)->modalias);
}
static ssize_t show_adc(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct max1111_data *data = dev_get_drvdata(dev);
int channel = to_sensor_dev_attr(attr)->index;
int ret;
ret = max1111_read(dev, channel);
if (ret < 0)
return ret;
/*
* Assume the reference voltage to be 2.048V or 4.096V, with an 8-bit
* sample. The LSB weight is 8mV or 16mV depending on the chip type.
*/
return sprintf(buf, "%d\n", ret * data->lsb);
}
#define MAX1111_ADC_ATTR(_id) \
SENSOR_DEVICE_ATTR(in##_id##_input, S_IRUGO, show_adc, NULL, _id)
static DEVICE_ATTR(name, S_IRUGO, show_name, NULL);
static MAX1111_ADC_ATTR(0);
static MAX1111_ADC_ATTR(1);
static MAX1111_ADC_ATTR(2);
static MAX1111_ADC_ATTR(3);
static MAX1111_ADC_ATTR(4);
static MAX1111_ADC_ATTR(5);
static MAX1111_ADC_ATTR(6);
static MAX1111_ADC_ATTR(7);
static struct attribute *max1111_attributes[] = {
&dev_attr_name.attr,
&sensor_dev_attr_in0_input.dev_attr.attr,
&sensor_dev_attr_in1_input.dev_attr.attr,
&sensor_dev_attr_in2_input.dev_attr.attr,
&sensor_dev_attr_in3_input.dev_attr.attr,
NULL,
};
static const struct attribute_group max1111_attr_group = {
.attrs = max1111_attributes,
};
static struct attribute *max1110_attributes[] = {
&sensor_dev_attr_in4_input.dev_attr.attr,
&sensor_dev_attr_in5_input.dev_attr.attr,
&sensor_dev_attr_in6_input.dev_attr.attr,
&sensor_dev_attr_in7_input.dev_attr.attr,
NULL,
};
static const struct attribute_group max1110_attr_group = {
.attrs = max1110_attributes,
};
static int setup_transfer(struct max1111_data *data)
{
struct spi_message *m;
struct spi_transfer *x;
m = &data->msg;
x = &data->xfer[0];
spi_message_init(m);
x->tx_buf = &data->tx_buf[0];
x->len = MAX1111_TX_BUF_SIZE;
spi_message_add_tail(x, m);
x++;
x->rx_buf = &data->rx_buf[0];
x->len = MAX1111_RX_BUF_SIZE;
spi_message_add_tail(x, m);
return 0;
}
static int max1111_probe(struct spi_device *spi)
{
enum chips chip = spi_get_device_id(spi)->driver_data;
struct max1111_data *data;
int err;
spi->bits_per_word = 8;
spi->mode = SPI_MODE_0;
err = spi_setup(spi);
if (err < 0)
return err;
data = devm_kzalloc(&spi->dev, sizeof(struct max1111_data), GFP_KERNEL);
if (data == NULL) {
dev_err(&spi->dev, "failed to allocate memory\n");
return -ENOMEM;
}
switch (chip) {
case max1110:
data->lsb = 8;
data->sel_sh = MAX1110_CTRL_SEL_SH;
break;
case max1111:
data->lsb = 8;
data->sel_sh = MAX1111_CTRL_SEL_SH;
break;
case max1112:
data->lsb = 16;
data->sel_sh = MAX1110_CTRL_SEL_SH;
break;
case max1113:
data->lsb = 16;
data->sel_sh = MAX1111_CTRL_SEL_SH;
break;
}
err = setup_transfer(data);
if (err)
return err;
mutex_init(&data->drvdata_lock);
data->spi = spi;
spi_set_drvdata(spi, data);
err = sysfs_create_group(&spi->dev.kobj, &max1111_attr_group);
if (err) {
dev_err(&spi->dev, "failed to create attribute group\n");
return err;
}
if (chip == max1110 || chip == max1112) {
err = sysfs_create_group(&spi->dev.kobj, &max1110_attr_group);
if (err) {
dev_err(&spi->dev,
"failed to create extended attribute group\n");
goto err_remove;
}
}
data->hwmon_dev = hwmon_device_register(&spi->dev);
if (IS_ERR(data->hwmon_dev)) {
dev_err(&spi->dev, "failed to create hwmon device\n");
err = PTR_ERR(data->hwmon_dev);
goto err_remove;
}
#ifdef CONFIG_SHARPSL_PM
the_max1111 = data;
#endif
return 0;
err_remove:
sysfs_remove_group(&spi->dev.kobj, &max1110_attr_group);
sysfs_remove_group(&spi->dev.kobj, &max1111_attr_group);
return err;
}
static int max1111_remove(struct spi_device *spi)
{
struct max1111_data *data = spi_get_drvdata(spi);
hwmon_device_unregister(data->hwmon_dev);
sysfs_remove_group(&spi->dev.kobj, &max1110_attr_group);
sysfs_remove_group(&spi->dev.kobj, &max1111_attr_group);
mutex_destroy(&data->drvdata_lock);
return 0;
}
static const struct spi_device_id max1111_ids[] = {
{ "max1110", max1110 },
{ "max1111", max1111 },
{ "max1112", max1112 },
{ "max1113", max1113 },
{ },
};
MODULE_DEVICE_TABLE(spi, max1111_ids);
static struct spi_driver max1111_driver = {
.driver = {
.name = "max1111",
.owner = THIS_MODULE,
},
.id_table = max1111_ids,
.probe = max1111_probe,
.remove = max1111_remove,
};
module_spi_driver(max1111_driver);
MODULE_AUTHOR("Eric Miao <[email protected]>");
MODULE_DESCRIPTION("MAX1110/MAX1111/MAX1112/MAX1113 ADC Driver");
MODULE_LICENSE("GPL");
A:
The SPI device driver (max1111 in your case) gets a pointer to its device on the underlying SPI controller (struct spi_device *spi) during the probe stage (max1111_probe). The driver should use it to send requests to the controller (using spi_sync, for example). The driver does not know which PINS the SPI controller uses.
Which SPI controller is passed to the SPI device driver? The SPI device should be declared in the DTS file inside the SPI controller node. The controller initialized from that SPI controller node is the one passed to the device's probe.
The SPI controller can be a hardware one (specific to the SoC) or SPI-GPIO. In case of hardware SPI, the pins are usually dedicated and specified in the SoC TRM. In case of SPI-GPIO, the GPIO names are specified inside the DTS properties of the SPI-GPIO node. The property names are: gpio-sck, gpio-miso, gpio-mosi, num-chipselects and cs-gpios (a list).
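For illustration only, a rough device-tree sketch of a bit-banged SPI-GPIO controller with the MAX1111 as a child device. The GPIO phandles, pin numbers and the exact compatible string are assumptions that depend on your board and kernel version:
spi_gpio {
        compatible = "spi-gpio";
        #address-cells = <1>;
        #size-cells = <0>;
        /* pins of the bit-banged bus (board-specific assumptions) */
        gpio-sck  = <&gpio 23 0>;
        gpio-miso = <&gpio 24 0>;
        gpio-mosi = <&gpio 25 0>;
        cs-gpios  = <&gpio 26 0>;
        num-chipselects = <1>;
        /* the ADC on chip-select 0; its spi_device is what max1111_probe() receives */
        adc@0 {
                compatible = "maxim,max1111";
                reg = <0>;
                spi-max-frequency = <1000000>;
        };
};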
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Double LinkedList Deep Copy in Kotlin with Generics and Thread Safety
Goal
Return a deep copy of a double LinkedList.
Each node also contains an additional random pointer, potentially to any node or null.
Code to start
data class Node<T>(
var data: T?,
var previous: Node<T>? = null,
var next: Node<T>? = null,
var random: Node<T>? = null
)
class LinkedList {
// TODO: Implement deep copy here.
}
Questions
Generics - Is there a better approach to handle the generic variance as to not need as T when passing in a generic type? i.e. linkedList.add(data = 1 as T)
Add thread-safety for operations - Are there any specific recommendations on thread-safety for this solution or broader topics to research to understand thread-safety considerations further?
Implement
See the full code on GitHub.
LinkedList.kt
class Node<T>(
var prev: Node<T>? = null,
var next: Node<T>? = null,
var rand: Node<T>? = null,
var data: T
)
class LinkedList<T>(
var first: Node<T>? = null,
var last: Node<T>? = null,
val randMap: HashMap<Node<T>?, Node<T>?> = hashMapOf()
) {
// Add Node to the end of LinkedList
fun add(data: T): Node<T> {
val temp = last
val newNode = Node(prev = temp, data = data)
last = newNode
if (temp == null)
first = newNode
else
temp.next = newNode
return newNode
}
fun deepCopyWithoutRandoms(prev: Node<T>?, node: Node<T>?): Node<T>? {
return if (node == null)
null
else {
val newNode = Node(data = node.data)
if (node.rand != null) {
newNode.rand = node.rand
randMap.put(node.rand, null)
}
newNode.prev = prev
newNode.next = deepCopyWithoutRandoms(newNode, node.next)
if (randMap.containsKey(node))
randMap.put(node, newNode)
return newNode
}
}
fun updateRandoms(node: Node<T>?): Node<T>? {
if (node != null) {
if (node.rand != null)
node.rand = randMap.get(node.rand!!)
updateRandoms(node.next)
return node
} else return null
}
fun clear() {
var node = first
while (node != null) {
node.prev = null
node.next = null
node.rand = null
node.data = 0 as T
node = node.next
}
}
fun toString(first: Node<T>?): String {
var output = ""
var node = first
while (node != null) {
output += String.format("(prev:%s next:%s data:%s random:%s)\n", node.prev, node.next, node.data, node.rand)
node = node.next
}
return output
}
}
A:
I'm not going to touch on your question on thread safety as it is a broad topic I am not familiar with. However, I can help with your questions about generics.
Right now, you're using generics great, except in one single place
node.data = 0 as T
The type of node.data is T. This code will fail if T is not Int - for example, if T is String, the code will look like this:
node.data = 0 as String
and that will throw a runtime exception.
Here's the important thing, though. There's no reason to do node.data = <anything>.
I assume the reason for having it originally was to "zero out" or get rid of the data as it's removed from the list - but that's what java will do for you automatically!
Let's say you have the following structure
linked list /--> node 1 /--> value 1
----------- | ------ | --------
first node ---/ data ---/ 7
when you delete the pointer to node 1, you end up in this situation
linked list node 1 /--> value 1
----------- ------ | --------
first node->null data ---/ 7
now that there is no reference anywhere to node 1, the jvm garbage collector deletes it
linked list value 1
----------- ------
first node->null 7
and because there is no reference to value 1, it's also deallocated.
This means that there's no reason to set the data field to anything - and, besides the point, there is no possible value you could set it to that would work for any value of T (in java, though, you could use null)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to be informed of mouse/keyboard events in a non-focus application?
I must implement a master application that starts other applications. If the user does not interact with the computer during a given time, the master application is supposed to kill the current slave application (only one can be started at a time).
So, for this I need to detect user actions (keyboard, mouse) even though the master application no longer has the focus. But I do not know how to do this. I'm using Visual C++ 2017, and Qt 5.9.1 for the GUI.
On Windows documentation I have seen some posts about "hook" functions. But I do not know if (and how) I can use it for this particular purpose.
Thanks for your help
A:
OK,
Finally I found the solution. Hook functions correspond to my needs. I found the perfect example for my problem: https://code.msdn.microsoft.com/CppWindowsHook-06957865
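For reference, a minimal sketch of the idea behind that sample (Win32, not production code): low-level hooks record the time of the last user input, and the master application can compare it against the allowed idle time on a timer.
#include <windows.h>

static volatile DWORD g_lastInput = 0;

static LRESULT CALLBACK MouseProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION)
        g_lastInput = GetTickCount();
    return CallNextHookEx(NULL, code, wParam, lParam);
}

static LRESULT CALLBACK KeyboardProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION)
        g_lastInput = GetTickCount();
    return CallNextHookEx(NULL, code, wParam, lParam);
}

void InstallIdleHooks()
{
    g_lastInput = GetTickCount();
    // WH_MOUSE_LL / WH_KEYBOARD_LL are global hooks, do not require a DLL,
    // and see input even when the application does not have focus.
    // The installing thread must run a message loop (a Qt GUI thread does).
    SetWindowsHookEx(WH_MOUSE_LL, MouseProc, GetModuleHandle(NULL), 0);
    SetWindowsHookEx(WH_KEYBOARD_LL, KeyboardProc, GetModuleHandle(NULL), 0);
}
// A periodic timer can then check GetTickCount() - g_lastInput against the
// allowed idle time and terminate the slave application when it is exceeded.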
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ArrayBuffer vs Blob and XHR2
XHR2 differences states
The ability to transfer ArrayBuffer, Blob, File and FormData objects.
What are the differences between ArrayBuffer and Blob ?
Why should I care about being able to send them over XHR2 ? (I can understand value of File and FormData)
A:
This is an effort to replace the old method which would take a "string" and cut sections of it out.
You would use an ArrayBuffer when you need a typed array because you intend to work with the data, and a blob when you just need the data of the file.
Blobs (according to spec anyway) have space for a MIME and easier to put into the HTML5 file API than other formats (it's more native to it).
The ArrayBuffer lets us work with typed arrays which is much faster than string manipulation to work with specific bytes and lets us define what type the array segments actually are. Since JavaScript is not strictly typed, it's hard to take a file that might be broken into an array of 32bit ints or perhaps 64bit floats (just imagine 8 bit ints-- that'd be a nightmare in terms of performance with string manipulation and bitwise calculations, especially with unicode).
As far as I can tell you can always move a blob to an array buffer or to a string representation, but this being native to XHR allows scripts to be faster which is the main advantage.
I'd use a blob for working with the file API, but I'd use the array for performing computation on the data.
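As a small sketch of the difference in practice (the URL is a placeholder): ask for an ArrayBuffer when you want to crunch the bytes with typed arrays, and wrap the data in a Blob when you only need to hand it on.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/data.bin", true);
xhr.responseType = "arraybuffer";            // could also be "blob"
xhr.onload = () => {
    const buffer = xhr.response as ArrayBuffer;
    // Interpret the same bytes through typed-array views, no string parsing.
    const bytes = new Uint8Array(buffer);
    const floats = new Float64Array(buffer, 0, Math.floor(buffer.byteLength / 8));
    console.log(bytes.length, floats.length);
    // A Blob is handy when the data just needs to be handed on, e.g. to the File API.
    const blob = new Blob([buffer], { type: "application/octet-stream" });
    console.log(blob.size, blob.type);
};
xhr.send();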
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to compress huge java-script framework into few files (including sub directory's)
I have a huge JavaScript framework with subdirectories that contains a lot of JavaScript includes to other JavaScript files. How can I take these JavaScript files and compress them into a single compressed JavaScript file, or very few files?
A:
You can do that with "jsmin" or "closure-compiler".
http://www.crockford.com/javascript/jsmin.html
https://developers.google.com/closure/compiler/
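With the Closure Compiler, for example, the invocation is roughly along these lines (the file names here are placeholders):
java -jar closure-compiler.jar --js src/a.js --js src/b.js --js_output_file dist/combined.min.js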
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can I use Named and Optional Arguments in ironpython
I want to load a .NET dll in IronPython.
But one of the static functions in the .NET dll has some named and optional arguments,
like Draw(weight:w, height:h, Area=1).
Can I only call it with the full set of arguments?
A:
Named and optional parameters are fully supported. .NET has had these for a long time for VB.NET support and so IronPython has supported that same way to do them since the beginning. The new C# syntax maps to the same underlying metadata as the old VB support.
For calling you use f(x = 42) which is Python's named parameter syntax. For optional parameters you can just leave them out. In your example case you can probably do Draw(weight, height) and leave Area out. Or you can call w/ weight and height as named parameters and leave Area out.
The underlying .NET meta data that IronPython looks for is either the OptionalAttribute or DefaultParameterValueAttribute. For optional we pass in default(T) unless the type is object in which case we pass in Missing.Value. This generally matches how reflection calls these APIs as well.
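A small sketch of what this looks like from IronPython; the assembly, namespace and member names below are assumptions that mirror the question rather than a real library:
import clr
clr.AddReference("Drawing")          # assumption: name of the .NET assembly
from DrawingLib import Drawer        # assumption: namespace and class

Drawer.Draw(10, 20)                  # optional Area left out, defaults to 1
Drawer.Draw(weight=10, height=20)    # named parameters, Python syntax
Drawer.Draw(10, 20, Area=5)          # override the optional parameter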
|
{
"pile_set_name": "StackExchange"
}
|
Q:
In c++, difference between linking a class object and just including it.
this is my first time here.
I'm a physics grad student and I've recently found a research group to work with. We study statistical mechanics using computer models, so there is a significant programming aspect to it.
My first job has been to take the old data analysis code (written in java) and clean it up in c++. The guy before me who wrote the original program used the traditional physics method of using an enormous amount of spaghetti code and shoving it all in a constructor. This irked me quite a bit, so I divided it up into separate classes and now it has a reasonable class structure.
In the past, whenever we had defined a class we would put its definitions and its methods in a header function, then to use it we would just include it. I know this breaks the rules of how to use .h and .cpp files, but is there any reason other than code etiquette to compile and link a class object rather than include it?
Sorry for the long question.
A:
The idea of headers is that we separate the public interface (i.e. declarations) from internal implementation details (i.e. the actual method bodies). This split has all kinds of advantages:
There's a helper function I need? Let's just put it into the .cpp file, and outside code cannot see it. I can also do things like using std without interfering with other code.
We are stuck in a C mindset and want to use a macro? Declare it in the .cpp file to avoid polluting outside code.
There is a bug in a method? After fixing it, we only need to recompile that single .cpp file and re-link the application. This is much faster than recompiling everything.
Related to that: header files are often included multiple times, and are therefore recompiled again and again … to conserve compilation time, put as little information as possible into the headers.
Headers make the handling of cyclic dependencies across multiple files possible.
There are also a few drawbacks with headers, but you can't do anything to change that: headers are code duplication, they are idiotic language design, the C preprocessor is of an outdated design, and encapsulation (the split between public interface and implementation details) doesn't work in C++ because private fields of a class are part of the public interface.
What do we take from that? Mainly the point about compilation time. Headers allow you to recompile only those parts that have changed. This should be supported by virtually any build chain – learn about the make program if you are currently doing everything by hand.
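As a tiny sketch of that split (the class name is made up): the header carries only the declarations other files need, while helpers and method bodies live in the .cpp file, so only that file is recompiled when they change.
// widget.h
#ifndef WIDGET_H
#define WIDGET_H
class Widget {
public:
    int value() const;
private:
    int value_ = 42;
};
#endif

// widget.cpp
#include "widget.h"
namespace {
    // file-local helper, invisible to every other translation unit
    int scale(int x) { return x * 2; }
}
int Widget::value() const { return scale(value_); }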
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Shopify Asset API updating collection.liquid using PUT method giving 404 with cURL
I am trying to update collection.liquid using Shopify API.
I am using below Shopify API wrapper with CodeIgniter,
Shopify API wrapper
This wrapper uses cURL to make API calls. I have used this library to make other apps for Shopify and it works just fine with GET and POST methods. This is the first time I have tried using the PUT method with it, and it's giving me the cURL error given below: ERROR #22: The requested URL returned error: 404 Not Found"
protected function modify_asset($theme_id)
{
$data = array(
'API_KEY' => $this->config->item('shopify_api_key'),
'API_SECRET' => $this->config->item('shopify_secret'),
'SHOP_DOMAIN' => $this->session->userdata('shop_domain'),
'ACCESS_TOKEN' => $this->session->userdata('access_token')
);
$this->load->library('shopify', $data);
$fields = array(
"asset" => array(
"key" => "templates\/collection.liquid",
"value" => "<p>We are busy updating the store for you and will be back within 10 hours.<\/p>"
)
);
$args = array(
'URL' => '/admin/themes/163760333/assets.json',
'METHOD' => 'PUT',
'RETURNARRAY' => TRUE,
'DATA' => $fields
);
try{
$modification_response = $this->shopify->call($args);
return $modification_response;
}
catch(Exception $e){
$modification_response = $e->getMessage();
log_message('error','In Get Active Theme Id' . $modification_response);
//redirect('/wrong/index');
var_dump('In modification response ' . $modification_response);
exit;
}
}
}
Above is my function implementing the API call. You can see the cURL options and their implementation at the link below:
cURL options and happening of Shopify API call
Note : This request is working just fine on POSTMAN.
A:
I've just been running some tests on this code using the information you provided, and was able to get a successful submission after removing the backslash in $fields['asset']['key'] as per the example above.
So
"key" => "templates\/collection.liquid",
Becomes:
"key" => "templates/collection.liquid",
It appears Shopify doesn't require forward slashes in file keys to be escaped.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Cancel connection of blocking socket?
I'm using the following code for client socket
int ConnectToServerSocket
(
char* _serverIP, //in
char* _serverPort, //in
SOCKET& _connectedSocket //out
)
{
struct addrinfo *addrResult = NULL,hints;
ZeroMemory(&hints, sizeof (hints));
hints.ai_socktype = SOCK_STREAM;
hints.ai_protocol = IPPROTO_TCP;
hints.ai_family = AF_UNSPEC;
int result = 0;
if (getaddrinfo(_serverIP, _serverPort, &hints, &addrResult))
{
int err = WSAGetLastError();
return err;
}
_connectedSocket = socket(addrResult->ai_family, addrResult->ai_socktype, addrResult->ai_protocol);
if (_connectedSocket == INVALID_SOCKET)
{
int err = WSAGetLastError();
freeaddrinfo(addrResult);
return err;
}
if (connect(_connectedSocket, addrResult->ai_addr, (int)addrResult->ai_addrlen) != 0)
{
int err = WSAGetLastError();
closesocket(_connectedSocket);
_connectedSocket = INVALID_SOCKET;
return err;
}
return 0; //successful
}
The problem is that I want to be able to cancel the connection at any time. In the cancel button's event handler I called closesocket(_connectedSocket);, but the call is blocked by the connect() function for a long time before the error is returned.
Can someone show me how to interrupt the connect() function immediately?
Many thanks,
T&T
A:
Have another thread do the connect. That will allow you to wait for that other thread using whatever method, for however long, and with whatever abort mechanism you wish.
You don't need to abort the connect itself.
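A rough sketch of that idea, assuming C++11 threads are available (the helper name and the shared-state struct are invented for the example; error handling is omitted):
#include <atomic>
#include <memory>
#include <string>
#include <thread>
struct ConnectState {
    SOCKET socket = INVALID_SOCKET;
    int result = -1;
    std::atomic<bool> done{false};
};
// Runs the existing blocking ConnectToServerSocket on its own thread.
// The caller keeps the shared state and polls done from the UI thread.
std::shared_ptr<ConnectState> StartConnect(std::string ip, std::string port)
{
    auto state = std::make_shared<ConnectState>();
    std::thread([state, ip, port]() {
        // The casts are only because the original function takes char*.
        state->result = ConnectToServerSocket(const_cast<char*>(ip.c_str()),
                                              const_cast<char*>(port.c_str()),
                                              state->socket);
        state->done = true;
    }).detach();
    return state;
}
If the user presses cancel, the UI thread simply stops waiting on the state; once done becomes true, the socket can be closed if it is no longer wanted. The blocking connect never runs on the UI thread, so nothing freezes.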
|
{
"pile_set_name": "StackExchange"
}
|
Q:
DBCC CHECKDB Notification
There are plenty of questions on DBA.SE regarding DBCC CHECKDB and how to resolve problems when errors are returned. My specific question is on actually getting notified that DBCC CHECKDB returned errors. Most all DBAs know that you can automate the command and should run it often.
I came across this article by Cindy Gross, which has some very good notes. In it she mentions use of SQL Server Agent that if it finds errors from the execution of the CHECKDB command it will fail that step (or job depending on configuration). She points to Paul Randal's blog post on the topic here.
Now I am curious whether the Check Database Integrity Task in a maintenance plan would do the same thing. MSDN does not mention that it will, and I have truthfully not been in an environment where it has come across a corruption issue, so I can't say that it does. This would be versus simply setting up a SQL Agent job with multiple steps that runs the specific command against each database, as Cindy suggested.
Thoughts? Obviously proof is in the pudding so providing more than just a guess would be helpful...
A:
The Check Database Integrity Task provided in the maintenance plan issues DBCC CHECKDB WITH NO_INFOMSGS against the databases selected. You can view its command by clicking View T-SQL in the task setup. If you doubt the generated SQL command, you can use SQL Profiler to see the SQL command it runs. If corruption is found, the agent job containing this maintenance task will raise an error and fail (with proper job step setup).
One thing to point out: running DBCC CHECKDB is equivalent to performing DBCC CHECKALLOC, DBCC CHECKTABLE, DBCC CHECKCATALOG and other validation. If you are running DBCC CHECKDB, you do not have to run them separately. Running them separately is usually done to perform a specific integrity check, or to spread the integrity check across smaller tasks when there is limited time to perform the entire DBCC CHECKDB. More information can be found here on MSDN.
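For illustration, the generated statement for one selected database looks roughly like this (the database name is just a placeholder):
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;
Any corruption message this returns causes the job step to fail, which is what lets the agent job notify you.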
A:
I setup a SQL Server 2008 R2 instance and have the following databases:
AdventureWorks2008R2
DemoFatalCorruption1
zzOtherDatabase
I setup a Maintenance Plan with the Check Database Integrity Task in it.
After running the plan it did fail:
After reviewing the log it showed that it will continue checking additional databases after hitting one that failed from corruption being found:
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to deselect the contents of a TextField in swift
I have a simple desktop app where a TextField should be focused when the window loads. I have this working, but it's a little annoying that, having loaded the user's content into the TextField, the entire contents of the field become selected automatically. The user may want to start editing the content, but they will rarely/never want to replace it all at once (imagine a text editor doing this, to see what I mean).
I see there is an Action for selectAll: but what I want is the opposite Action of selectNone:
I tried passing nil to the selectText method, but that doesn't work:
textField.selectText(nil)
I found a number of answers on StackOverflow that mention a selectedTextRange, but this appears to be outdated, because Xcode 6.3 doesn't recognize this as a valid property on TextField.
Can anyone explain how I do this?
A:
It's been a while since I've dealt with NSTextFields to this level (I work mostly in iOS these days).
After doing a little digging I found this on the net:
NSText* textEditor = [window fieldEditor:YES forObject:textField];
NSRange range = {start, length};
[textEditor setSelectedRange:range];
window is the window containing your field, textField.
This requires the field editor to be managing your field, which can be done simply by previously selecting the whole text of the field using the selectText:sender method.
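A rough Swift translation of the same idea (assuming you already have references to the window and the text field; the range here simply collapses the selection to the end of the text):
if let editor = window.fieldEditor(true, for: textField) {
    // NSRange works in UTF-16 units, so take the length via NSString
    let end = (textField.stringValue as NSString).length
    editor.selectedRange = NSRange(location: end, length: 0)
}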
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What could reasonably be done in case 7,500 airplanes must be landed quickly?
I read that 5,000 to 10,000 aircraft are flying at any given time. There are several very unexpected but plausible reasons to land all planes quickly, e.g. a war or terrorism event, a giant volcanic eruption, or a major meteorite impact.
Whatever the reason, in case there is a sudden need to stop flying, what would happen:
From an ATC standpoint, is there any impossibility to conduct such massive landing?
Is there enough runway throughput to allow planes to land quickly?
Is there enough room to park all aircraft?
Is there any preparation for this kind of operation in some countries?
A:
There's roundabout 400 paved runways over 2438 m (8000 ft) in the USA and EU, respectively, and about 200 or so in China and Russia, respectively, so that's about 1200. They are long enough to deal with most aircraft. Some might be untowered or unavailable, but then there are other countries, too. Estimates for "commercial" airports worldwide run around 4000 (there are about 9000 IATA codes, apparently).
So, every suitable runway would have to accommodate around 2 to 10 aircraft.
You can have around 50 to 70 movements per runway per hour, though you need separation of up to 3 minutes to avoid wake turbulence if a smaller jet arrives after a heavy. Let's say 30 landings per hour, and we're talking about 20 minutes, just in terms of pure runway capacity.
Now, to get all those planes lined up nicely... Parking might get crowded, too, but I'd think it would be doable - just fill up taxi ways progressively (presumably you don't care about take-offs for a while). See this diagram of aircraft parking prior to the Tenerife disaster (where many planes diverted to the small Tenerife airport due to a bomb in Las Palmas).
At any rate, the practicalities would be daunting. Would be interesting to see a proper feasibility study.
A:
Assuming that GA/small private and commercial aircraft land at separate locations:
Say that around 80% of those airplanes are GA flights (which they probably are); you would have around 6000 GA airplane flights based on your average number of 7500. Based on that number, it would be theoretically possible to stack quite a few landings concurrently à la Oshkosh. Given that three runways there can land and park somewhere around 8000 airplanes in a couple of days (from 6am to 8pm), I would say that spreading that many planes out across all the airports in the world, you could ground almost all GA aircraft within an hour or so.
Note that this kind of operation requires a bit of setup and coordination, so it would probably take longer to actually pull this operation off.
Commercial aircraft would take a bit longer though, as a good number of them would be flying at cruise altitude, from which it takes ~30 minutes to comfortably descend. However, some of these aircraft would be on an over-water flight and might take 4 hours just to reach a decent airport, after which they would have to get into the pattern, which might add another hour or so.
Under the best conditions, I would say that to clear the skies would take around 4.5 hours, but it could probably take longer than that due to airplane concentration in certain areas and some people messing up their landings.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Django Rest Framework different format response format for same URL and HTTP method
I am working on an application which uses Django Rest Framework to handle queries, and I use the django-rest-framework-datatables plugin to help me handle pagination for datatables.
This works fine, but when I request a single record it keeps giving me the datatables json format, like this:
{
"count": 1,
"next": null,
"previous": null,
"results": [{
"id": 1,
"name": "University of Passo Fundo",
"country": "Brazil"
}]
}
This is not a big issue, but I would prefer to receive just the results field. How can I define two different response formats for the same URL and the same method, just by checking request parameters, in Django Rest Framework?
Follow my code:
urls.py
router = routers.DefaultRouter()
router.register(r'institution', InstitutionViewSet, base_name='Institution')
urlpatterns = [
path('admin/', admin.site.urls),
path('api-auth/', include('rest_framework.urls')),
# api
path('api/', include(router.urls)),
# views
url(r'^$', Home.as_view(), name='index'),
url(r'institution/', Institution.as_view(), name='institution'),
]
serializer.py
class InstitutionSerializer(serializers.ModelSerializer):
class Meta:
model = Institution
fields = '__all__'
datatables_always_serialize = ('id', 'name', 'country')
models.py
class Institution(models.Model):
id = models.AutoField(db_column='id', primary_key=True)
name = models.CharField(db_column='name', max_length=255, null=False)
country = models.CharField(db_column='country', max_length=255, null=False)
class Meta:
db_table = 'Institution'
managed = True
verbose_name = 'Institutions'
verbose_name_plural = 'Institutions'
ordering = ['id']
def __str__(self):
return self.name
views.py
class InstitutionViewSet(viewsets.ModelViewSet):
serializer_class = InstitutionSerializer
def get_queryset(self):
if 'type' in self.request.GET and self.request.GET['type'] == 'edit':
return Institution.objects.filter(id=self.request.GET['id'])
return Institution.objects.all().order_by('id')
A:
First of all, that's the way Django Rest Framework renders a paginated response:
it is how you can see the next or previous list of items based on the page.
Second, you should override the list method of the viewset to be like this:
class InstitutionViewSet(viewsets.ModelViewSet):
serializer_class = InstitutionSerializer
pagination_class = None
def list(self, request, *args, **kwargs):
queryset = self.filter_queryset(self.get_queryset())
serializer = self.get_serializer(queryset, many=True)
return Response(serializer.data)
Here we are overriding the list method, which is responsible for rendering the list API. It will first get all items in the queryset, then pass them to the serializer to write them in a specific format, and finally return that list as json in the response.
Also, remember that I set pagination_class = None, so Django will not use pagination for this API anymore.
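If you would rather keep the datatables-style response for normal list requests and only drop pagination on demand, one possible variation (the plain query parameter is made up for this example) is to disable the paginator conditionally instead of globally:
class InstitutionViewSet(viewsets.ModelViewSet):
    serializer_class = InstitutionSerializer
    @property
    def paginator(self):
        # Skip pagination only when the client asks for a plain list,
        # e.g. /api/institution/?plain=1
        if self.request.query_params.get('plain') == '1':
            return None
        return super().paginator
    # get_queryset() stays exactly as in the question
When paginator is None, DRF's default list() returns the bare serialized list, so the count/next/previous wrapper disappears only for those requests.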
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Firefox 5, 6, 7 and XULRunner: Which versions are which?
I'm trying to recompile a Firefox extension that has binary components for use with Firefox 5 now that the beta is out. According to this I need to rebuild the binary components. What I can't figure out is which xulrunner to download and build against from here.
Is there a table that matches up FF versions (5, 6, 7) with code names (Beta, Central, Aurora) with Xul Runner versions (2, etc)?
Any decent guide would be great.
Update
It looks like the SDK / Mozilla version number has been changed to match Firefox's version number. Based on that my guess is now this:
Firefox 5 | Beta | XULRunner 5.0
Firefox 6 | Aurora | XULRunner 6.0
Firefox 7 | Central | XULRunner 7.0
A:
Mozilla's wiki has a section of the Firefox page that gives the mappings you're looking for, but it only covers the already-released versions.
Another page called Releases lists upcoming versions and their codenames, but doesn't indicate the underlying XULRunner version.
It would appear that you'd want "latest-mozilla-beta" (which is listed as XULRunner 5) for now. Judging by the version numbers in the newer nightlies, it looks like the XULRunner versions are going to be shifted so that they match up with their corresponding Firefox versions.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
powershell dictionary of arrays - dynamically table-format output
So here's what I'm trying to do...
basically i have a dictionary of arrays
$data = @{
"123" = @('ABC',
'DEF',
'GHI'
)
"234" = @(
'JKL',
'MNO',
'PQR'
)
"345" = @(
'STU',
'VWX',
'YZ'
)
}
$serverArray = @('one', 'two', 'three')
If I do this...
$alignment = @()
$alignment += @{label="Type";Expression={$_.name};alignment="left"}
$alignment += @{label=$serverArray[0];Expression={$_.value[0]};alignment="left"}
$alignment += @{label=$serverArray[1];Expression={$_.value[1]};alignment="left"}
$alignment += @{label=$serverArray[2];Expression={$_.value[2]};alignment="left"}
$data.GetEnumerator() | sort name | Format-Table $alignment -autosize
I get the correct desired output.
Type one two three
123 ABC DEF GHI
234 JKL MNO PQR
345 STU VWX YZ
What I was trying to do though is make it so that no matter the length of my $serverArray it would format with the appropriate columns.
So I tried a few different variations (using for loop and foreach) to this but nothing seems to work ...
$alignment = @()
$alignment += @{label="Type";Expression={$_.name};alignment="left"}
for([int]$s=0; $s -lt $serverArray.length; $s++) {
write-output $s
$alignment += @{label=$serverArray[$s];Expression={$_.value[$s]};alignment="left"}
}
$websites.GetEnumerator() | sort name | Format-Table $alignment -autosize
Seems like since the expression is being stored - it's storing $s literally instead of the actual value that it represents at the time.
How can I make it store the actual value in the expression for $s (0, 1, or 2) instead?
Ideally I could have a serverArray of "one, two, three, four, five" and since it's in a loop everything work just the same.
Any suggestions? Greatly appreciate the help!
A:
Looks like the problem is the evaluation of $s in the creation of the expression part of the hash table: the script block stores $s literally instead of its value at the time the block is created.
I'd try this:
for([int]$s=0; $s -lt $serverArray.length; $s++) {
$scr = [scriptblock]::Create('$_.value[' + $s + ']')
$alignment += @{label=$serverArray[$s];Expression=$scr;alignment="left"}
}
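Putting that together with the rest of the original script (and using $data as in the first snippet), the dynamic version would look roughly like this:
$alignment = @()
$alignment += @{label="Type";Expression={$_.name};alignment="left"}
for([int]$s=0; $s -lt $serverArray.length; $s++) {
    # bake the current index into the script block's text
    $scr = [scriptblock]::Create('$_.value[' + $s + ']')
    $alignment += @{label=$serverArray[$s];Expression=$scr;alignment="left"}
}
$data.GetEnumerator() | sort name | Format-Table $alignment -autosize
Because the index is baked into the script block's text at creation time, a $serverArray of any length produces one correctly bound column per server.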
|
{
"pile_set_name": "StackExchange"
}
|
Q:
FX Task, returning a value always returns null
I am trying to return a String from a FX Task.
Expected output: true,testABC.
Real output: true,null.
The call to the task:
public static void main(String[] args) {
ExecutorService pool = Executors.newCachedThreadPool();
Future<String> my_String = (Future<String>) pool.submit(new my_task());
try {
Thread.sleep(500);
System.out.println(my_String.isDone());
System.out.println(my_String.get());//getting the String from the future
} catch (InterruptedException | ExecutionException ex) {
Logger.getLogger(Return_test.class.getName()).log(Level.SEVERE, null, ex);
}}
The task:
public class my_task extends Task<String>{
@Override
protected String call() throws Exception {
String tempString = "testABC";
return tempString;
}}
A:
Task implements Runnable, but not Callable. So when you call pool.submit(myTask), you are calling the overloaded form of ExecutorService.submit(...) taking a Runnable. In general, of course, Runnables do not return values, so the Future that is returned from that version of submit(...), as Aerus has already pointed out, just returns null from its get() method (it cannot get a value from the Runnable).
However, Task also extends FutureTask directly, so you can use your task directly to get the result. Just do
Task<String> myTask = new my_task(); // Please use standard naming conventions, i.e. new MyTask();
pool.submit(myTask);
try {
// sleep is unnecessary, get() will block until ready anyway
System.out.println(myTask.get());
} catch (InterruptedException | ExecutionException ex) {
Logger.getLogger(Return_test.class.getName()).log(Level.SEVERE, null, ex);
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What is GAS and how can I avoid it?
What is this GAS that all the pros talk about?
It sounds pretty bad. How do I avoid it?
Is there a cure?
A:
"GAS" is a joking acronym for "Gear Acquisition Syndrome" — basically, a hobby that is adjacent to photography. See also "Lens Acquisition Syndrome" and similar. It means buying new equipment for its own sake, and a drive to keep doing so, probably with an ever-increasing budget.
I don't think it's necessarily bad. There can be something fun about following the technology trends, or being a collector. As collection hobbies go, lenses (either old or new) seem just as reasonable as stamps or baseball cards. And it's worth noting that this isn't specific to photography — one hears similar stories in other areas where there is both a creative aspect and a tools component. I think the term actually came from musicians (as that's where I can find the earliest use of the term), and there is a similar thing with buying tools in woodworking.
This "syndrome" is often accompanied by a fixation on technical data and an obsession with review sites, forums, and rumor sites. I'm sure a lot of us here can at least relate — answering questions on this site is another photography-adjacent hobby, and (um) people with "GAS" are often quite knowledgeable about all kind of technical details.
The problem is when what you really want to do is actual photography, but you're kind of itchy and unsatisfied, and you start to think that buying something new will fix it. (See for example How to get back to photography after having a long break?) And buying something new sometimes works! Maybe that new lens or flash or lighting modifier will help you see in a new way and break the rut. But, it might also only be a temporary jolt, leaving you back where you were, only to spend more money again and again.
One fix? Before you buy something, try something new with what you have. Only buy something new when it fills a specific need — never because you feel bored with what you have.
Another suggestion: if you're in the habit of reading camera gear focused blogs and sites (I'm lookin' at you, Digital Photography Review) daily, switch it up. Those sites basically run on GAS (sorry — can't help the pun!) so of course they have incentive to encourage it. Instead, make an internet reading list centered on making photographs. Frequent sites which are all about discussing, showing, and sharing photographs rather than technology or equipment.
If you must buy something, try a book — maybe something about the creative side of photography from Michael Freeman's The Photographer's Eye series, but even better, try a book of photographs by someone you admire. Or, a compromise: Why Photographs Work (which never offers "this works because it was shot with the latest full-frame and $3k lens").
As the humorous acronym suggests, this isn't usually a serious term. It's said by enthusiasts who spend a lot of money on an expensive hobby to poke fun at themselves a bit, to maybe assuage a bit of guilt through self-deprecating humor. Occasionally, I do see it as a complaint or a warning, particularly when someone seems obsessed with buying a full-frame camera or big heavy lenses with special-colored rings when they don't seem to really be getting the most out of what they have in terms of actually making photographs. But even then — it's okay for people to have this adjacent hobby, as long as that's what they want and they're not fooling themselves.
A:
The only thing missing from the other answers is a reference to the well circulated Letter to George, which is written by Michael Johnston, the former well known editor of Camera & Darkroom magazine who later became editor-in-chief of Photo Technique. I always took it to be written from the point of view of a camera salesperson who wants to give a friend the 'inside scoop' on how to not fall for all of the other camera salesperson's tricks to maximize their sales commissions.
In it, "Mike" explains to "George" why he recommended an expensive, top-of-the-line camera right off the bat to "George" when he asked what camera he should buy to start doing photography. It was to save him the time and expense of spending thousands upon thousands of dollars on GAS (Gear Acquisition Syndrome) before he bought that model anyway.
Then there is this one from the archives of What the Duck, a cartoon strip that took a humorous look at the many aspects of photography. In this strip we meet a duck who started out with photography as a hobby but then transitioned to another hobby: justifying purchases of photographic equipment.
A:
The less obvious risks of GAS (other than to your bank balance) were nicely summarized by Ansel Adams in his introduction to The Camera:
It is easy to confuse the hope for accomplishment with the desire to
possess superior instruments
|
{
"pile_set_name": "StackExchange"
}
|
Q:
BFS implementation to find connected components taking too long
This is in continuation to a question I asked here. Given the total number of nodes (employees) and the adjacency list (friendship amongst employees), I need to find all the connected components.
public class Main {
static HashMap<String, Set<String>> friendShips;
public static void main(String[] args) throws IOException {
BufferedReader in= new BufferedReader(new InputStreamReader(System.in));
String dataLine = in.readLine();
String[] lineParts = dataLine.split(" ");
int employeeCount = Integer.parseInt(lineParts[0]);
int friendShipCount = Integer.parseInt(lineParts[1]);
friendShips = new HashMap<String, Set<String>>();
for (int i = 0; i < friendShipCount; i++) {
String friendShipLine = in.readLine();
String[] friendParts = friendShipLine.split(" ");
mapFriends(friendParts[0], friendParts[1], friendShips);
mapFriends(friendParts[1], friendParts[0], friendShips);
}
Set<String> employees = new HashSet<String>();
for (int i = 1; i <= employeeCount; i++) {
employees.add(Integer.toString(i));
}
Vector<Set<String>> friendBuckets = bucketizeEmployees(employees);
System.out.println(friendBuckets.size());
}
public static void mapFriends(String friendA, String friendB, Map<String, Set<String>> friendsShipMap) {
if (friendsShipMap.containsKey(friendA)) {
friendsShipMap.get(friendA).add(friendB);
} else {
Set<String> friends = new HashSet<String>();
friends.add(friendB);
friendsShipMap.put(friendA, friends);
}
}
public static Vector<Set<String>> bucketizeEmployees(Set<String> employees) {
Vector<Set<String>> friendBuckets = new Vector<Set<String>>();
while (!employees.isEmpty()) {
String employee = getHeadElement(employees);
Set<String> connectedEmployeesBucket = getConnectedFriends(employee);
friendBuckets.add(connectedEmployeesBucket);
employees.removeAll(connectedEmployeesBucket);
}
return friendBuckets;
}
private static Set<String> getConnectedFriends(String friend) {
Set<String> connectedFriends = new HashSet<String>();
connectedFriends.add(friend);
Set<String> queuedFriends = new LinkedHashSet<String>();
if (friendShips.get(friend) != null) {
queuedFriends.addAll(friendShips.get(friend));
}
while (!queuedFriends.isEmpty()) {
String poppedFriend = getHeadElement(queuedFriends);
connectedFriends.add(poppedFriend);
if (friendShips.containsKey(poppedFriend))
for (String directFriend : friendShips.get(poppedFriend)) {
if (!connectedFriends.contains(directFriend) && !queuedFriends.contains(directFriend)) {
queuedFriends.add(directFriend);
}
}
}
return connectedFriends;
}
private static String getHeadElement(Set<String> setFriends) {
Iterator<String> iter = setFriends.iterator();
String head = iter.next();
iter.remove();
return head;
}
}
I have tested my code using the following script, the results of which I consume as sdtIn
#!/bin/bash
echo "100000 100000"
for i in {1..100000}
do
r1=$(( $RANDOM % 100000 ))
r2=$(( $RANDOM % 100000 ))
echo "$r1 $r2"
done
While I was able to verify (for trivial inputs) that my answer is correct, when I try with huge inputs as with the above script, I see that the run takes a long time (~20 s).
Is there anything I can do better in my implementation ?
A:
First of all, two things to read or search for: Cluster analysis (just a hint, I'm not an expert about that) and Linked: The New Science of Networks by Albert-László Barabási.
Barabási shows in his book that networks usually have some nodes with far more connections than the others. The real-world distribution is therefore not the same as the one the sample shell script generates.
The code is quite good; I like your variable and method names and the separated methods. I wonder why no one has reviewed it yet.
Vector<Set<String>> friendBuckets = bucketizeEmployees(employees);
I'd use a simple List or ArrayList here. Vector is considered obsolete.
In the getConnectedFriends method the
Set<String> queuedFriends = new LinkedHashSet<String>();
could be a Queue. It has a poll method. As far as I tested it's faster than the currently used iterator-based remove.
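A small sketch of that change (only the declaration and the dequeue call differ; note that a plain queue no longer deduplicates, so keep the contains-check against connectedFriends, or mark nodes as visited as you enqueue them):
Queue<String> queuedFriends = new ArrayDeque<String>();
if (friendShips.containsKey(friend)) {
    queuedFriends.addAll(friendShips.get(friend));
}
while (!queuedFriends.isEmpty()) {
    String poppedFriend = queuedFriends.poll();
    // ... rest of the loop as before
}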
public class Main {
Main isn't a good class name. Everyone can have a Main. What's its purpose? Try to find a more descriptive name.
static HashMap<String, Set<String>> friendShips;
HashMap<...> reference types should be simply Map<...>. See: Effective Java, 2nd edition, Item 52: Refer to objects by their interfaces
Should I always use the private access modifier for class fields?; Item 13 of Effective Java 2nd Edition: Minimize the accessibility of classes and members.
static HashMap<String, Set<String>> friendShips;
Instead of Map<String, Set<String>> you could use Guava's Multimap (doc, javadoc) which was designed exactly for that. It would reduce the size of the mapFriends method:
public static void mapFriends(final String friendA, final String friendB,
final Multimap<String, String> friendsShipMap) {
friendsShipMap.put(friendA, friendB);
}
So, it could be removed.
public static Vector<Set<String>> bucketizeEmployees(Set<String> employees) {
...
}
This method calls getConnectedFriends(employee), which is the following:
private static Set<String> getConnectedFriends(String friend) {
...
}
It's confusing: what is the difference between an employee and friend? Are they the same?
if (friendShips.get(friend) != null) {
The following is the same:
if (friendShips.containsKey(friend)) {
A guard clause would be even better.
if (!friendShips.containsKey(friend)) {
return connectedFriends;
}
if (!connectedFriends.contains(directFriend) && !queuedFriends.contains(directFriend)) {
queuedFriends.add(directFriend);
}
The !queuedFriends.contains(directFriend) condition is unnecessary, it's a set which can't contain elements twice and adding an already added element to a LinkedHashSet doesn't modify anything. From the javadoc:
Note that insertion order is not affected if an element is re-inserted into the set.
The following pattern occurs more than once in the code:
if (map.containsKey(key)) {
String value = map.get(key);
...
}
...
It might be a micro-optimization, but if you profile the code and the results show it as a bottleneck, the following structure is equivalent:
String value = map.get(key);
if (value != null) {
...
}
...
A few guard clauses would help to flatten getConnectedFriends:
while (!queuedFriends.isEmpty()) {
final String poppedFriend = getHeadElement(queuedFriends);
connectedFriends.add(poppedFriend);
if (!friendShips.containsKey(poppedFriend)) {
continue;
}
for (final String directFriend: friendShips.get(poppedFriend)) {
if (connectedFriends.contains(directFriend)) {
continue;
}
queuedFriends.add(directFriend);
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to compile all dependencies and shared libs into one binary
I want to compile all dependencies and shared libraries into the binary.
How can I do that?
g++ -std=c++11 txtocr.cpp -o txtocr -llept -ltesseract
Tesseract depends on leptonica and some shared tesseract libraries. But how do I compile everything into the binary so that it is 100% portable?
A:
I believe the answer is "It depends". If you only have the shared library, without the source code of the library, I am afraid the answer will be NO, as not all the information you need in order to build a static application is contained in your dynamic library.
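Assuming static archives (.a files) of leptonica, tesseract and all of their own dependencies are installed on the build machine, you can ask the linker to prefer static libraries, for example:
g++ -std=c++11 txtocr.cpp -o txtocr -static -llept -ltesseract
In practice you will usually also have to list every transitive dependency explicitly (libpng, libjpeg, zlib, and so on), and even a binary built this way is only as portable as the system interfaces it still relies on.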
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to organize(group) nodes under a closed element - XSLT
I have tried simple grouping of XML with XSLT 1.0 and it worked, but here I have something more complicated and actually a different situation.
So the XML structure is basically this:
<Main>
<TB>
--> some elements and stuff - not relevant
<City>
<Area>
<Position>5</Position>
<House>
--> some elements and stuff
</House>
</Area>
<Area>
<Position>5</Position>
<Block>
--> some elements and stuff
</Block>
</Area>
<Area>
<Position>6</Position>
<House>
--> some elements and stuff
</House>
</Area>
<Area>
<Position>6</Position>
<Block>
--> some elements and stuff
</Block>
</Area>
</City>
<City>
--> same structure but with several repetitions of Position 7 and 8.
</City>
</TB>
</Main>
What I need is to group the Blocks and Houses which are under the same Position and remove the repetition of Position numbers. For example, it would end up like this:
<City>
<Area>
<Position>5</Position>
<House>
--> some elements and stuff
</House>
<Block>
--> some elements and stuff
</Block>
</Area>
<Area>
<Position>6</Position>
<House>
--> some elements and stuff
</House>
<Block>
--> some elements and stuff
</Block>
</Area>
</City>
<City>
--> same structure for Position 7 and 8.
</City>
It's harder because the Position is not an attribute of the Area, so I basically have to identify the value of the Position of the Area, then grab the House and Block that fall under the same Position, and put them together surrounded by the same <Area> </Area>.
A:
This looks like a fairly standard Muenchian grouping problem to me, grouping Area elements (not House or Block elements directly) by their Position.
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:strip-space elements="*" />
<xsl:output method="xml" indent="yes" />
<xsl:key name="areaByPosition" match="Area" use="Position" />
<xsl:template match="@*|node()">
<xsl:copy><xsl:apply-templates select="@*|node()" /></xsl:copy>
</xsl:template>
<!-- for the first Area in each Position -->
<xsl:template match="Area[generate-id() =
generate-id(key('areaByPosition', Position)[1])]">
<Area>
<!-- copy in the Position element once only -->
<xsl:apply-templates select="Position" />
<!-- copy in all sub-elements except Position from all matching Areas -->
<xsl:apply-templates select="
key('areaByPosition', Position)/*[not(self::Position)]" />
</Area>
</xsl:template>
<!-- ignore all other Area elements -->
<xsl:template match="Area" />
</xsl:stylesheet>
This assumes there are no other elements named Area elsewhere in the document, if any of the "some elements and stuff" may be named Area then you need to be a bit more specific, for example limiting the grouping to Area elements that are direct children of a City:
<xsl:key name="areaByPosition" match="City/Area" use="Position" />
<xsl:template match="City/Area[generate-id() =
generate-id(key('areaByPosition', Position)[1])]"
priority="2">
...
</xsl:template>
<xsl:template match="City/Area" priority="1" />
(with explicit priorities because without that both templates would have the same default priority of 0.5)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to remove .php, .html, .htm extensions with .htaccess?
I'm developing a website (s7info) and want to remove the extensions in order to make the URLs more user and search friendly. I stumbled across tutorials on how to remove the .php extension from a PHP page. What about the .html? I want to remove those as well.
A:
try this,
.htaccess file
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.*)$ $1.php
#RewriteRule ^([a-z]+)\/?$ $1.php [NC]
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.*)$ $1.html
#RewriteRule ^([a-z]+)\/?$ $1.html [NC]
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.htm -f
RewriteRule ^(.*)$ $1.htm
#RewriteRule ^([a-z]+)\/?$ $1.htm [NC]
</IfModule>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
running git on a linux server vs github
I'm considering using git for source code management. I'm a little confused as to whether I should set up an Ubuntu server with git or just use GitHub. Is there really any advantage to using GitHub? Also, how would you deploy the code from git (or GitHub) to the production server if the production server does not have git installed? Do I have to write some kind of deployment script in PHP (using FTP functions) or use a shell script?
Thanks
A:
Git doesn't need a server to operate at all.
You can install it on your local machine, create a repository and start using it.
A remote git installation (which you can do on your own remote server or which github.com provides for free) is only needed to create a remote copy of your git repository (useful if you're going to be programming from multiple locations, or if you want to share your source code's history with the world)
The advantage of github - public repos are free, it has GUI clients for Windows and Mac (don't know about Linux).
Deploying the code to a production server is a completely different thing - it has nothing to do with git. There's no need for git to be installed on the production server.
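If you only need to push a snapshot of the code to a server that has no git installed, one simple option (paths and host are examples) is to export an archive locally and copy it over:
git archive --format=tar HEAD | gzip > release.tar.gz
scp release.tar.gz user@production:/tmp/
ssh user@production "tar -xzf /tmp/release.tar.gz -C /var/www/app"
git archive exports only the tracked files for the given commit, so no .git directory ends up on the production server.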
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to create a random path between two known points in matrix in MATLAB
If there is a matrix and two known points, how to create one random path (no need to be the shortest) between these two points with:
a path that can have a level of deviation
a path can be totally random (but not necessary)
the next step can be only from 4 neighbors
ex.
5x5 matrix with two known points: (2,1) and (5,5)
After input: pt1 = [2,1]; pt2 = [5,5];.
How could I get a pattern such as the following, with the path recorded in a parameter, such as path = [2,1;2,2;3,2;4,2;4,3;4,4;4,5;5,5]?
X X X X X
o o X X X
X o X X X
X o o o o
X X X X o
A:
PART A - Aim is to find coordinates of a line/path connecting two points on a 2D domain such that no two neighboring coordinates are diagonal to each other i.e. that is left/right/top/bottom only.
Function codes
function pts_array = points_array(pt1,pt2)
if pt1(1)==pt2(1)
if pt2(2)>pt1(2)
pts_array = [repmat(pt1(1),(pt2(2)-pt1(2)+1),1) (pt1(2):pt2(2))'];
elseif pt2(2)<pt1(2)
pts_array = flipud([repmat(pt1(1),(pt1(2)-pt2(2)+1),1) (pt2(2):pt1(2))']);
else
pts_array = pt1;
end
elseif pt1(2)==pt2(2)
if pt2(1)>pt1(1)
pts_array = [(pt1(1):pt2(1))' repmat(pt1(2),(pt2(1)-pt1(1)+1),1)];
elseif pt2(1)<pt1(1)
pts_array = flipud([(pt2(1):pt1(1))' repmat(pt1(2),(pt1(1)-pt2(1)+1),1)]);
else
pts_array = pt1;
end
else
gslope1_org = (pt2(2)-pt1(2))/(pt2(1)-pt1(1));
if gslope1_org <1
pt1 = fliplr(pt1);
pt2 = fliplr(pt2);
end
gslope1 = (pt2(2)-pt1(2))/(pt2(1)-pt1(1));
off1 = 1;
pts_array = [pt1];
gpt1 = pt1;
while 1
slope1 = (pt2(2)-gpt1(2))/(pt2(1)-gpt1(1));
if (slope1<gslope1)
gpt1 = [gpt1(1)+off1 gpt1(2)];
pts_array = [pts_array; gpt1];
else
new_y = floor(gpt1(2)+slope1);
range_y = (gpt1(2)+1 : floor(gpt1(2)+slope1))';
gpt1 = [gpt1(1) new_y];
pts_array = [pts_array ; [repmat(gpt1(1),[numel(range_y) 1]) range_y]];
end
if isequal(gpt1,pt2)
break;
end
end
if gslope1_org <1
pts_array = fliplr(pts_array);
end
end
function pts_array = points_array_wrap(pt1,pt2) %%// Please remember that this needs points_array.m
x1 = pt1(1);
y1 = pt1(2);
x2 = pt2(1);
y2 = pt2(2);
quad4 = y2<y1 & x2>x1; %% when pt2 is a lower height than pt1 on -slope
quad3 = y2<y1 & x2<x1; %% when pt2 is a lower height than pt1 on +slope
quad2 = y2>y1 & x2<x1; %% when pt2 is a higher height than pt1 on -slope
if quad4
y2 = y2+ 2*(y1 - y2);
end
if quad2
y2 = y2 - 2*(y2 - y1);
t1 = x1;t2 = y1;
x1 = x2;y1 = y2;
x2 = t1;y2 = t2;
end
if quad3
t1 = x1;t2 = y1;
x1 = x2;y1 = y2;
x2 = t1;y2 = t2;
end
pts_array = points_array([x1 y1],[x2 y2]);
if quad4
offset_mat = 2.*(pts_array(:,2)-pt1(2));
pts_array(:,2) = pts_array(:,2) - offset_mat;
end
if quad3
pts_array = flipud(pts_array);
end
if quad2
offset_mat = 2.*(pt1(2)-pts_array(:,2));
pts_array(:,2) = pts_array(:,2) + offset_mat;
pts_array = flipud(pts_array);
end
return;
Script
pt1 = [2 1];
pt2 = [5 5];
pts_array = points_array_wrap(pt1,pt2);
plot(pts_array(:,1),pts_array(:,2),'o'), grid on, axis equal
for k = 1:size(pts_array,1)
text(pts_array(k,1),pts_array(k,2),strcat('[',num2str(pts_array(k,1)),',',num2str(pts_array(k,2)),']'),'FontSize',16)
end
Output
pts_array =
2 1
2 2
3 2
3 3
4 3
4 4
4 5
5 5
Plot
PART B - Aim is to find coordinates of a line/path connecting two points on a 2D domain through given spaces.
In this special case, we are assuming that there are some given spaces and that the path is to be connected only through them. This is not asked by the OP, but I thought it could be interesting to share. So, for this, the spaces would be the o's as shown in the OP's question.
Code
function your_path = path_calc(mat1,starting_pt,final_pt)
[x1,y1] = find(mat1);
pt1 = [x1 y1];
d1 = pdist2(pt1,final_pt,'euclidean');
[~,ind1] = sort(d1,'descend');
path1 = pt1(ind1,:);
your_path = path1(find(ismember(path1,starting_pt,'rows')):end,:);
return;
Run - 1
%%// Data
mat1 = zeros(5,5);
mat1(2,1:2) = 1;
mat1(3,2) = 1;
mat1(4,2:5) = 1;
mat1(5,5) = 1;
starting_pt = [2 1];
final_pt = [5 5];
%%// Path traces
path = path_calc(mat1,starting_pt,final_pt);
Gives -
mat1 =
0 0 0 0 0
1 1 0 0 0
0 1 0 0 0
0 1 1 1 1
0 0 0 0 1
path =
2 1
2 2
3 2
4 2
4 3
4 4
4 5
5 5
Run - 2
%%// Data
mat1 = zeros(5,5);
mat1(2,1:2) = 1;
mat1(3,2) = 1;
mat1(4,2:5) = 1;
mat1(5,5) = 1;
mat1 = fliplr(mat1');
%%// Notice it starts not from the farthest point this time
starting_pt = [2 3];
final_pt = [5 1];
%%// Path traces
path = path_calc(mat1,starting_pt,final_pt);
Gives
mat1 =
0 0 0 1 0
0 1 1 1 0
0 1 0 0 0
0 1 0 0 0
1 1 0 0 0
path =
2 3
2 2
3 2
4 2
5 2
5 1
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I interpolate an object from point A to B such that it accelerates, overshoots, and bounces back to target position?
How The Game Works
You grab and drag an object around. Once you let go, it interpolates to a certain, variable position, and that's what I'm trying to do here.
Problem Description
I'm talking about vectors, but simplifying that to ordinary numbers for the sake of an example, if A is 0 and B is 1, the object would go something like this:
0.0
0.2
0.5
0.9
1.3
0.9
1.1
1.0
What I've Tried
private IEnumerator GoTo(Vector2 endPosition) {
float elapsed = 0;
float duration = 1f;
while (elapsed <= duration) {
transform.position = Vector2.LerpUnclamped(transform.position, endPosition, animCurve.Evaluate(elapsed / duration));
distance = CalculateDistance(transform.position, endPosition);
elapsed += Time.deltaTime;
yield return new WaitForEndOfFrame();
}
}
Where animCurve is:
The highest point is approx. 1.3, and the plot converges to 1.0 from there.
Result
This doesn't work at all. Unless something's wrong with Unity's Vector2.LerpUnclamped(), I'm lost.
A:
A few small fixes:
Cache your initial position, and lerp from there to your end, to avoid a feedback loop where transform.position is being used to modify itself.
Advance your elapsed before updating position - that way you finish the loop at your end position, rather than one frame before your end position.
Yield return null to resume next frame with next frame's delta, not at the end of this frame.
All together:
private IEnumerator GoTo(Vector2 endPosition) {
float elapsed = 0;
float duration = 1f;
Vector2 startPosition = (Vector2)transform.position;
while (elapsed <= duration) {
elapsed += Time.deltaTime;
float t = animCurve.Evaluate(elapsed / duration);
transform.position = Vector2.LerpUnclamped(startPosition, endPosition, t);
distance = CalculateDistance(transform.position, endPosition);
yield return null;
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
cloning a dictionary that has List as values
how do you clone a dictionary such as this one :
Dictionary<int, List<User>>()
Every attempt that I make of cloning it ends up failing.
If I have this :
Dictionary<int, List<User>> dict1 = new Dictionary<int, List<User>>();
User user1=new User{Name="Mey"};
dict1.Add(1, new List<User> { user1 });
doing this :
var dict2 = new Dictionary<int, List<User>>(dict1);
dict2 will still be referencing user1 instead of a new User object.
I want the User object to be duplicated so that changing the clone properties is not reflected on the original object.
Edit :
So I wrote the following code snippet :
var dict2 = new Dictionary<int, List<User>>();
//clone the dict1 dictionary
foreach (var item in dict1)
{
var list = new List<User>();
foreach (var u in item.Value)
{
list.Add(new User{ Name = u.Name, Total=u.Total});
}
dict2.Add(item.Key, list);
}
class User
{
public string Name{get;set;}
public double Total{get;set;}
}
A:
.Net collections do not have built-in cloning support.
You need to create a new dictionary, loop through all of the entries in the original dictionary, add a corresponding entry with a new List<User> in the new dictionary, and loop through the original list to add copies of the User objects to the new list.
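A minimal sketch of that, essentially the same as the snippet in the question's edit (reusing the User shape defined there):
var dict2 = new Dictionary<int, List<User>>(dict1.Count);
foreach (var pair in dict1)
{
    var copies = new List<User>(pair.Value.Count);
    foreach (var u in pair.Value)
    {
        // Copy each User so edits to the clone don't touch the original.
        copies.Add(new User { Name = u.Name, Total = u.Total });
    }
    dict2.Add(pair.Key, copies);
}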
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Jehovah's Witnesses and Shunning Reconciled with New Testament
Jehovah's Witnesses shun people who fall foul of scriptural law,
Jehovah’s Witness kids grow up knowing that if they ever mess up, their parents will leave them — and that’s scary (quote from person interviewed in link above)
and looking at what I believe are scriptural reasons for shunning, they seem to be from the following
Proverbs 4:15
Shun it, do not take it;Turn away from it, and pass it by. (NWT)
Job 2:3
And Jehovah said to Satan: “Have you taken note of my servant Job? There is no one like him on the earth. He is an upright man of integrity, fearing God and shunning what is bad. He is still holding firmly to his integrity, even though you try to incite me against him to destroy him for no reason.” (NWT)(Job said "It is unthinkable for me to declare you men righteous! Until I die, I will not renounce my integrity!" - Job 27:5 - NWT)
but how can Jehovah's Witnesses reconcile shunning in the manner prescribed by the organisation with Matthew 7:1
“Stop judging that you may not be judged; (NWT)
Do not judge, or you too will be judge (NIV)
Romans 12:17-19
17 Return evil for evil to no one. Take into consideration what is fine from the viewpoint of all men.18 If possible, as far as it depends on you, be peaceable with all men.19 Do not avenge yourselves, beloved, but yield place to the wrath; for it is written: “‘Vengeance is mine; I will repay,’ says Jehovah.” (NWT)
and
You must love your neighbor as yourself. - Matthew 22:39-40, Romans 13:9, Galatians 5:14 & James 2:8
Plus, should you not be trying to steer them back to the correct way?
A:
Before reconciling the shunning arrangement with these Scriptures, I'll first explain the situation.
Being shunned by Jehovah's Witnesses is a consequence of a person being either disfellowshipped or disassociated. A disfellowshipped person is someone who was previously a baptized member of Jehovah's Witnesses but then was found to be unrepentant in committing a serious sin. A disassociated person is someone who was previously baptized as one of Jehovah's Witnesses but who later expressed that they no longer wished to be known as one of Jehovah's Witnesses.
Shunning is a last resort for those who have not heeded the counsel of the elders in their congregation. A great effort has already been made to steer the person back, but it has failed. Now it is the responsibility of the elders to maintain purity and unity within the congregation. When a person is jeopardizing that purity and poses a danger to the faith of others in the congregation, the congregation must cease all association with that person. In time, the person may discover the error of their decision, and at that time the elders would be happy to help this person return to the fold.
The Biblical basis for the practice of shunning is found in these scriptures:
1 Corinthians 5:1-6, 11-13
1 Actually sexual immorality is reported among you, and such immorality as is not even found among the nations—of a man living with his father’s wife. 2 And are you proud of it? Should you not rather mourn, so that the man who committed this deed should be taken away from your midst? 3 Although absent in body, I am present in spirit, and I have already judged the man who has done this, as if I were actually with you. 4 When you are gathered together in the name of our Lord Jesus, and knowing that I am with you in spirit along with the power of our Lord Jesus, 5 you must hand such a man over to Satan for the destruction of the flesh, so that the spirit may be saved in the day of the Lord.
6 Your boasting is not good. Do you not know that a little leaven ferments the whole batch of dough?
...
11 But now I am writing you to stop keeping company with anyone called a brother who is sexually immoral or a greedy person or an idolater or a reviler or a drunkard or an extortioner, not even eating with such a man. 12 For what do I have to do with judging those outside? Do you not judge those inside, 13 while God judges those outside? “Remove the wicked person from among yourselves.”
From this first scripture, we can see that Paul had the authority to judge a man in the congregation at Corinth. The command was to stop associating with the man who had committed sexual immorality, which is a serious sin. This drastic action was necessary in order to avoid the "leaven" from spreading to the rest of the congregation.
Romans 16:17-20
17 Now I urge you, brothers, to keep your eye on those who create divisions and causes for stumbling contrary to the teaching that you have learned, and avoid them. 18 For men of that sort are slaves, not of our Lord Christ, but of their own appetites, and by smooth talk and flattering speech they seduce the hearts of unsuspecting ones. 19 Your obedience has come to the notice of all, and so I rejoice over you. But I want you to be wise as to what is good, but innocent as to what is evil. 20 For his part, the God who gives peace will crush Satan under your feet shortly. May the undeserved kindness of our Lord Jesus be with you.
In this scripture, we can see the command to avoid those who create divisions and causes for stumbling which disrupt the peace and unity of the congregation. Obedience to this command is a cause for rejoicing because it is seen by all.
2 Timothy 2:16-18
16 But reject empty speeches that violate what is holy, for they will lead to more and more ungodliness, 17 and their word will spread like gangrene. Hy·me·naeʹus and Phi·leʹtus are among them. 18 These men have deviated from the truth, saying that the resurrection has already occurred, and they are subverting the faith of some.
1 Timothy 1:20
20 Hy·me·naeʹus and Alexander are among these, and I have handed them over to Satan so that they may be taught by discipline not to blaspheme.
In these two scriptures, we can see the repeated mention of Hy·me·naeʹus, a man who participated in misleading the congregation and was thereafter disciplined by being "handed over to Satan," the same discipline that was given to the sexually immoral man in Corinth.
2 John 9-11
9 Everyone who pushes ahead and does not remain in the teaching of the Christ does not have God. The one who does remain in this teaching is the one who has both the Father and the Son. 10 If anyone comes to you and does not bring this teaching, do not receive him into your homes or say a greeting to him. 11 For the one who says a greeting to him is a sharer in his wicked works.
This scripture discusses how to treat those who do not remain in "the teaching of the Christ." It makes it very clear that even greeting these persons causes us to become associated with their wrongdoing.
We can see throughout all of these verses that shunning these wrongdoers has several purposes:
To prevent the congregation from following after their sinful course
To restore peace and unity to the congregation
To protect the reputation of God's people
To discipline the wrongdoer so that they might return to Jehovah
To reconcile these purposes with the commands to "stop judging" and to "love your neighbor as yourself," it's important to note the example of Jesus himself in how he demonstrated these principles. Did he show a judgemental attitude in how he dealt with others? Did he completely hold back from counseling others? No. Jesus was able to discipline others with righteousness by using the Scriptures. (2 Timothy 3:16)
A:
As a JW, this is an entirely false premise. "Jehovah’s Witness kids grow up knowing that if they ever mess up, their parents will leave them."
This is "trying" to describe "disfellowshipping." Which would only happen if an ordained minister turns their back on the congregation unrepentantly. A child that isn't a baptized minister can not be disfellowshipped. Someone who sins, but is repentant generally isn't disfellowshipped (unless the sin is extreme, in which case they could be disfellowshipped even if repentant). And even in the case of disfellowshipped people, this breaks spiritual bonds, not family ones.
In the book “Keep yourselves in God’s Love” we find:
In some instances, the disfellowshipped family member may still be living in the same home as part of the immediate household. Since his being disfellowshipped does not sever the family ties, normal day-to-day family activities and dealings may continue. Yet, by his course, the individual has chosen to break the spiritual bond between him and his believing family. So loyal family members can no longer have spiritual fellowship with him. For example, if the disfellowshipped one is present, he would not participate when the family gets together for family worship. However, if the disfellowshipped one is a minor child, the parents are still responsible to instruct and discipline him. Hence, loving parents may arrange to conduct a Bible study with the child.*—Proverbs 6:20-22; 29:17."
There are some situations where people are encouraged to avoid disfellowshipped people so as not to enable their behaviors. But to oversimplify this to "if a child messes up their parent will leave them" is offensively dishonest. Some parents in any religion (including our own) have definitely disowned or neglected their children... this is a bad thing we do not approve of.
A:
In order to answer your question, it is important to establish that discipline in the form of removing an unrepentant wrongdoer from the congregation is a form of judgement authorized by God to preserve the holiness of the congregation.
Examples are recorded throughout the scriptures from the time Gods people were organized into a congregation.
For instance, this direction was given to the nation of Israel in this regard:
Deut 13:6...
“If your brother, the son of your mother, or your son or your daughter or your cherished wife or your closest companion should try to entice you in secrecy, saying, ‘Let us go and serve other gods, gods that neither you nor your forefathers have known, from the gods of the peoples all around you, whether near you or those far away from you, from one end of the land to the other end of the land, you must not give in to him or listen to him, nor should you show pity or feel compassion or protect him; instead, you should kill him without fail. Your hand should be the first to come upon him to put him to death, and the hand of all the people afterward. And you must stone him to death, because he has sought to turn you away from Jehovah your God, who has brought you out of the land of Egypt, out of the house of slavery. Then all Israel will hear and become afraid, and they will never again do anything bad like this among you.”
This verse is significant in regards to your question, because despite God being the originator of marriage & family, when an individual rebelled against God, punishment was to be carried out despite family ties. This is not a contradiction of Gods command that “a man shall stick to his wife”, but rather shows how seriously God views rebellion against him. In Israel the punishment meant death, a permanent “cutting off”. Imagine how difficult it would have been to carry out Gods law. (Christians everywhere should be grateful we are not under the Mosiac Law). This verse also shows the discipline was to serve as a deterrent so others would not follow the same course.
Another reason a wrongdoer was to be removed from the congregation, was it impeded the flow of Gods Holy Spirit within the entire congregation. We can see this from another account recorded in the scriptures...
In Joshua 7: 1-26, One individual, Achan, secretly stole some items. Because he violated God’s explicit instructions, when Israel went to conquer the next city in Canaan, Jehovah withheld his blessing. When Joshua asked Jehovah why they had lost the battle, he was told “I will not be with you again unless you annihilate from your midst what was devoted to destruction”. Achan and his family (who were probably aware of his sin) were “cut off”—executed. Once that rebellious influence was removed from the congregation, God’s Holy Spirit flowed freely and Israel was successful again.
Regarding other serious sin (as when a death occurred) God commissioned Israelite elders to investigate. They were to establish facts, weigh carefully a manslayer’s motive, attitude, and previous conduct when deciding whether to show mercy. They had to determine whether the fugitive acted “out of hatred” and “with malicious intent.” (Numbers 35:20-24) If the testimony of witnesses was considered, at least two witnesses had to substantiate a charge of intentional murder. —Num. 35:30.
You can see from all these instances, the punishment of “cutting off” was for serious sins: apostasy, theft, murder. (Other sins that also required “cutting off” included disrespect of Jehovah, idolatry, child sacrifice, spiritism, desecration of sacred things, and practices as incest, bestiality, and sodomy.)
Today, Christians, while not under the Mosiac law, are also commanded to keep the Christian congregation clean, free from the influence of willful violators who deliberately “practice” sin. Some of the offenses that could merit disfellowshipping from the Christian congregation are fornication, adultery, homosexuality, greed, extortion, thievery, lying, drunkenness, reviling, spiritism, murder, idolatry, apostasy, and the causing of divisions in the congregation. (1Co 5:9-13; 6:9, 10; Tit 3:10, 11; Re 21:8)
This would not be a contradiction to “ not judge your brother”, rather it would be in harmony with the direction given by the apostles to preserve the holiness of the Christian congregation. In fact, the scriptures make clear while we are NOT to judge those outside the congregation (that responsibility belongs to God), older men were given the responsibility to judge willful violators within the congregation....
1 Cor 5:11-13. But now I am writing you to stop keeping company with anyone called a brother who is sexually immoral or a greedy person or an idolater or a reviler or a drunkard or an extortioner, not even eating with such a man. For what do I have to do with judging those outside? Do you not judge those inside, while God judges those outside? “Remove the wicked person from among yourselves.”
By not permissively overlooking willful sinfulness, a high standard of conduct befitting God's holy people would (and should) be the norm in the congregation. Hypocrisy among those claiming to represent God was something Jesus fully condemned. This assures:
1) the free flow of God's Holy Spirit in the congregation
2) others are not influenced by bad conduct
3) the congregation does not bring reproach upon God.
Thus, when someone visits a congregation of Jehovah’s Witnesses (or one of Jehovah’s Witnesses knocks on your door), you can be assured that person at least meets the Bible’s minimum code of conduct.
This all being said, disfellowshipping is a rare occurrence.
Firstly, because before one can be baptized as one of Jehovah’s Witnesses, an individual must have a good knowledge of the scriptures. They must fully understand the standard of conduct required to be in good standing in the congregation, be active in the door-to-door public ministry and attend all meetings. Obviously, someone who has reached that point in their determination to do God's will is already dedicated, well before they present themselves for baptism. There are no surprises and there is no infant baptism. (Unlike in Israel, where you were “born” into a covenant relationship with God and didn’t have a choice.)
Secondly, it’s really hard to be disfellowshipped. It’s a last resort when someone has changed their Christian course and just does not want to do what the Bible says. But the spirit behind disfellowshipping is never harsh. We are friends, families who have known each other, sometimes grown up together. The elders are our brothers, our friends and family as well. Members of the congregation truly mourn. When someone is disfellowshipped, they are encouraged to still attend meetings. They can sit with their families, go to the literature counter to request publications and can approach the elders. The elders also arrange to visit disfellowshipped ones to offer encouragement. In this way the congregation is obedient to scriptural counsel, but still keeps the door open for an individual who wants to turn their course around.
For further reading on this subject, please follow the links below.
https://wol.jw.org/en/wol/d/r1/lp-e/2015287
https://wol.jw.org/en/wol/d/r1/lp-e/1995003
https://wol.jw.org/en/wol/d/r1/lp-e/1102002039
|
{
"pile_set_name": "StackExchange"
}
|
Q:
React widgets within text (static HTML)
I'm writing a book with a bunch of small interactive widgets within the text content. So I have text like this in my database:
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit,
sed do eiusmod tempor incididunt ut labore et dolore.</p>
<div class="interactive-graph" />
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit,
sed do eiusmod tempor incididunt ut labore et dolore.</p>
These widgets require JS to run. Some of them are re-used in other chapters of the book, but most of them are one-offs.
I understand that React requires a single root component, and I've used React to make web apps before. But I don't understand how I would be able to write these widgets as React components and automatically mount them as the dynamic text is rendered on the page.
Is React just a bad fit for this? Can you think of any examples that demonstrate how to do this?
A:
Your situation is perfectly suitable for React.
Simplest example that would work for you:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<title>Hello React!</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.7/react.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.7/react-dom.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js"></script>
</head>
<body>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore.</p>
<div id="interactive-graph" />
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore.</p>
<script type="text/babel">
ReactDOM.render(
<h1>Hello, world!</h1>, document.getElementById('interactive-graph') );
</script>
</body>
</html>
This example was taken from the React Getting Started guide. Generally, though, you will want to introduce a build system that converts your JSX files into a JS bundle using browserify or webpack, and include that bundle in your main HTML. That requires some initial learning curve, and the mentioned link is the best place to start.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
getting common joins for every combination of entries from a table
I have a table of users with userid and username.
Another table has a list of interests, with interestid and name.
A third table is a join table, with userid and interestid.
For each pair of users, I want to get the count of interests they have in common. I've tried a lot of things, the most recent is this:
SELECT u1.username AS me, u2.username AS you, COUNT(j.interestid) AS commoninterests
FROM users u1, users u2
INNER JOIN interests_join j
ON u1.id = j.id
WHERE u1.id != u2.id
GROUP BY u1.name
I just can't get a working query on this. Any help?
A:
This is a self join on interests_join:
select ij1.userid, ij2.userid, count(*)
from interests_join ij1 join
interests_join ij2
on ij1.interestid = ij2.interestid and
ij1.userid < ij2.userid
group by ij1.userid, ij2.userid;
Note: this version only brings back the ids and only one pair for two users: (a, b) but not (b, a).
Now, this gets trickier if you want to include user pairs that have no common interests. If so, you need to first generate the user pairs using a cross join and then bring in the interests:
select u1.username, u2.username, count(ij2.userid)
from users u1 cross join
users u2 left join
interests_join ij1
on ij1.userid = u1.userid left join
interests_join ij2
on ij2.userid = u2.userid and
ij1.interestid = ij2.interestid
group by u1.username, u2.username;
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Verifying the correctness of a Sudoku solution
A Sudoku is solved correctly, if all columns, all rows and all 9 subsquares are filled with the numbers 1 to 9 without repetition. Hence, in order to verify if a (correct) solution is correct, one has to check by definition 27 arrays of length 9.
Q1: Are there verification strategies that reduce this number of checks ?
Q2: What is the minimal number of checks that verify the correctness of a (correct) solution ?
(Image sources from Wayback Machine: first, second)
The following simple observation yields an improved verification algorithm: At first enumerate rows, columns and subsquares as indicated in pic 2. Suppose the columns $c_1,c_2,c_3$ and the subsquares $s_1, s_4$ are correct (i.e. contain exactly the numbers 1 to 9). Then it's easy to see that $s_7$ is correct as well. This shows:
(A1) If all columns, all rows and 4 subsquares are correct, then the solution is correct.
Now suppose all columns and all rows up to $r_9$ and the subsquares $s_1,s_2,s_4,s_5$ are correct. By the consideration above, $s_7,s_8,s_9$ and $s_3,s_6$ are correct. Moreover, $r_9$ has to be correct, too. For, suppose a number, say 1, occurs twice in $r_9$. Since the subsquares are correct, the two 1's have to be in different subsquares, say $s_7,s_8$. Hence the 1's from rows $r_7, r_8$ both have to lie in $s_9$, i.e. $s_9$ isn't correct. This is the desired contradiction.
Hence (A1) can be further improved to
(A2) If all columns and all rows up to one and 4 subsquares are correct, then the solution is correct.
This gives as upper bound for Q2 the need of checking 21 arrays of length 9.
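For concreteness, here is a minimal sketch of the 21-check verification (A2) in Python. This is my own illustration, not part of the original question, and it assumes the row-major block numbering used above, so the four blocks checked are $s_1,s_2,s_4,s_5$ and the omitted row is $r_9$.
def is_unit(cells):
    # a length-9 check passes iff it contains exactly the numbers 1..9
    return sorted(cells) == list(range(1, 10))

def verify_A2(grid):
    # grid is a 9x9 list of lists with entries 1..9
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]              # 9 checks
    rows = [grid[r] for r in range(8)]                                     # 8 checks (r_9 omitted)
    blocks = [[grid[3*bi + i][3*bj + j] for i in range(3) for j in range(3)]
              for bi in (0, 1) for bj in (0, 1)]                           # 4 checks: s_1, s_2, s_4, s_5
    return all(is_unit(u) for u in cols + rows + blocks)                   # 21 checks in total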
Q3: Can the handy algorithm (A2) be further improved ?
A:
The consequence relation $\models$ defined in Emil Jeřábek's answer is a matroid. In fact, it is a linear matroid.
Let $X=\{r_1,\ldots,r_9,c_1,\ldots,c_9,b_1,\ldots,b_9\}$ be the set of possible checks. Recall that given $S \subset X$ and $x \in X$, the notation $S \models x$ means that every Sudoku grid which is valid on $S$ is also valid on $x$.
We may embed $X$ into the free abelian group $V$ generated by the 81 cells of the Sudoku grid, by mapping a check $x \in X$ to the formal sum of the cells contained in $x$. The span of $X$ has rank $21$, and the kernel of the natural map $\mathbf{Z}X \to V$ is generated by the six relations of the form $r_1+r_2+r_3-b_1-b_2-b_3$.
Proposition. We have $S \models x$ if and only if $x \in \operatorname{Vect}(S)$.
Proof. By Proposition 2 from Emil's answer, the consequence relations $\models$ and $\vdash$ coincide, so we may work with $\vdash$. Let us prove that $S \vdash x$ implies $x \in \operatorname{Vect}(S)$. By transitivity, we may assume $S=D \backslash \{x\}$ for some $D \in \mathcal{D}$. It is straightforward to check that $x \in \operatorname{Vect}(D \backslash \{x\})$ in each case (i)-(iv).
Conversely, let us assume $x=\sum_{s \in S} \lambda_s s$ for some $\lambda_s \in \mathbf{Z}$. Since the elements of $X$ have degree 9, we have $\sum_{s \in S} \lambda_s = 1$. Any Sudoku grid provides a linear map $\phi : V \to E$, where $E$ is the free abelian group with basis $\{1,\ldots,9\}$ (map each cell to the digit it contains). If the grid is valid on $S$ then $\phi(s)=[1]+\cdots+[9]$ for every $s \in S$, and thus $\phi(x)=[1]+\cdots+[9]$, which means that the grid is valid on $x$. QED
Note that a set of checks $S$ is complete if and only if $\operatorname{Vect}(S)=\operatorname{Vect}(X)$. In particular, the minimal complete sets are those which form a basis of $\operatorname{Vect}(X)$, and it is now clear that every such set has cardinality $21$.
We also obtain a description of the independent sets : these are exactly the sets which are linearly independent when considered in $V$. Any independent set may be extended to a minimal complete set (we may have worked with $\mathbf{Q}$-coefficients instead of $\mathbf{Z}$-coefficients above).
A:
$\DeclareMathOperator\span{span}$Here is an argument which works for general $n\times n$ Sudokus, $n\ge 2$, using some ideas from the other answers (namely, casting the problem in terms of linear algebra as in François Brunault’s answer, and the notion of alternating paths below is related to the even sets as in Tony Huynh’s answer, attributed to Zack Wolske).
I will denote the cells as $s_{ijkl}$ with $0\le i,j,k,l< n$, where $i$ identifies the band, $j$ the stack, $k$ the row within band $i$, and $l$ the column within stack $j$. Rows, columns, and blocks are denoted $r_{ik},c_{jl},b_{ij}$ accordingly. Let $X=\{r_{ik},c_{jl},b_{ij}:i,j,k,l< n\}$ be the set of all $3n^2$ checks. For $S\subseteq X$ and $x\in X$, I will again denote by $S\models x$ the consequence relation “every Sudoku grid satisfying all checks from $S$ also satisfies $x$”.
Let $V$ be the $\mathbb Q$-linear space with basis $X$, and $V_0$ be the span of the vectors $\sum_kr_{ik}-\sum_jb_{ij}$ for $i< n$, and $\sum_lc_{jl}-\sum_ib_{ij}$ for $j< n$.
Lemma 1: If $x\in\span(S\cup V_0)$, then $S\models x$.
Proof: A grid $G$ induces a linear mapping $\phi_G$ from $V$ into an $n^2$-dimensional space such that for any $x'\in X$, the $i$th coordinate of $\phi_G(x')$ gives the number of occurrences of the number $i$ in $x'$. We have $\phi_G(V_0)=0$, and $G$ satisfies $x'$ iff $\phi_G(x')$ is the constant vector $\vec 1$. If $x=\sum_i\alpha_ix_i+y$, where $x_i\in S$ and $y\in V_0$, then $\phi_G(x)=\vec\alpha$ for $\alpha:=\sum_i\alpha_i$. The same holds for every grid $G'$ satisfying $S$; in particular, it holds for any valid grid, which has $\phi_{G'}(x)=\vec1$, hence $\alpha=1$. QED
We intend to prove that the converse holds as well, so assume that $x\notin\span(S\cup V_0)$. We may assume WLOG $x=r_{00}$ or $x=b_{00}$, and we may also assume that $r_{i0}\notin S$ whenever $r_{ik}\notin S$ for some $k$, and $c_{j0}\notin S$ whenever $c_{jl}\notin S$ for some $l$. By assumption, there exists a linear function $\psi\colon V\to\mathbb Q$ such that $\psi(S\cup V_0)=0$, and $\psi(x)\ne0$. The space of all linear functions on $V$ vanishing on $V_0$ has dimension $3n^2-2n$, and one checks easily that the following functions form its basis:
$\omega_{ik}$ for $0\le i< n$, $0< k< n$: $\omega_{ik}(r_{ik})=1$, $\omega_{ik}(r_{i0})=-1$.
$\eta_{jl}$ for $0\le j< n$, $0< l< n$: $\eta_{jl}(c_{jl})=1$, $\eta_{jl}(c_{j0})=-1$.
$\xi_{ij}$ for $i,j< n$: $\xi_{ij}(r_{i0})=\xi_{ij}(c_{j0})=\xi_{ij}(b_{ij})=1$.
(The functions are zero on basis elements not shown above.) We can thus write
$$\psi=\sum_{ik}u_{ik}\omega_{ik}+\sum_{jl}v_{jl}\eta_{jl}+\sum_{ij}z_{ij}\xi_{ij}.$$
If $r_{ik}\in S$, $k\ne0$, then $0=\psi(r_{ik})=u_{ik}$, and similarly $c_{jl}\in S$ for $l\ne0$ implies $v_{jl}=0$. Thus, the functions $\omega_{ik}$ and $\eta_{jl}$ that appear in $\psi$ with a nonzero coefficient individually vanish on $S$. The only case when they can be nonzero on $x$ is $\omega_{0k}$ if $x=r_{00}$ and $r_{00},r_{0k}\notin S$, but then taking any valid grid and swapping cells $s_{0000}$ and $s_{00k0}$ shows that $S\nvDash x$ and we are done. Thus we may assume that the first two sums in $\psi$ vanish on $S\cup\{x\}$, and therefore the third one vanishes on $S$ but not on $x$, i.e., WLOG
$$\psi=\sum_{ij}z_{ij}\xi_{ij}.$$
That $\psi$ vanishes on $S$ is then equivalent to the following conditions on the matrix $Z=(z_{ij})_{i,j< n}$:
$z_{ij}=0$ if $b_{ij}\in S$,
$\sum_jz_{ij}=0$ if $r_{i0}\in S$,
$\sum_iz_{ij}=0$ if $c_{j0}\in S$.
Let us say that an alternating path is a sequence $e=e_p,e_{p+1},\dots,e_q$ of pairs $e_m=(i_m,j_m)$, $0\le i_m,j_m< n$, such that
$i_m=i_{m+1}$ if $m$ is even, and $j_m=j_{m+1}$ if $m$ is odd,
the indices $i_p,i_{p+2},\dots$ are pairwise distinct, except that we may have $e_p=e_q$ if $q-p\ge4$ is even,
likewise for the $j$s.
If $m$ is even, the incoming line of $e_m$ is the column $c_{j_m0}$, and its outgoing line is the row $r_{i_m0}$. If $m$ is odd, we define it in the opposite way. An alternating path for $S$ is an alternating path $e$ such that $b_{i_mj_m}\notin S$ for every $m$, and either $e_p=e_q$ and $q-p\ge4$ is even ($e$ is an alternating cycle), or the incoming line of $e_p$ and the outgoing line of $e_q$ do not belong to $S$.
Every alternating path $e$ induces a matrix $Z_e$ which has $(-1)^m$ at position $e_m$ for $m=p,\dots,q$, and $0$ elsewhere. It is easy to see that if $e$ is an alternating path for $S$, then $Z_e$ satisfies conditions 1, 2, 3.
Lemma 2: The space of matrices $Z$ satisfying 1, 2, 3 is spanned by matrices induced by alternating paths for $S$.
Proof:
We may assume that $Z$ has integer entries, and we will proceed by induction on $\|Z\|:=\sum_{ij}|z_{ij}|$. If $Z\ne 0$, pick $e_0=(i_0,j_0)$ such that $z_{i_0j_0}>0$. If the outgoing line of $e_0$ is outside $S$, we put $q=0$, otherwise condition 2 guarantees that $z_{i_0,j_1}< 0$ for some $j_1$, and we put $i_1=i_0$, $e_1=(i_1,j_1)$. If the outgoing line of $e_1$ is outside $S$, we put $q=1$, otherwise we find $i_2$ such that $z_{i_2j_1}>0$ by condition 3, and put $j_2=j_1$. Continuing in this fashion, one of the following things will happen sooner or later:
The outgoing line of the last point $e_m$ constructed contains another point $e_{m'}$ (and therefore two such points, unless $m'=0$). In this case, we let $p$ be the maximal such $m'$, we put $q=m+1$, $e_q=e_p$ to make a cycle, and we drop the part of the path up to $e_{p-1}$.
The outgoing line of $e_m$ is outside $S$. We put $q=m$.
In the second case, we repeat the same construction going backwards from $e_0$. Again, either we find a cycle, or the construction stops with an $e_p$ whose incoming line is outside $S$. Either way, we obtain an alternating path for $S$ (condition 1 guarantees that $b_{i_mj_m}\notin S$ for every $m$). Moreover, the nonzero entries of $Z_e$ have the same sign as the corresponding entries of $Z$, thus $\|Z-Z_e\|<\|Z\|$. By the induction hypothesis, $Z-Z_e$, and therefore $Z$, is a linear combination of some $Z_e$s. QED
Now, Lemma 2 implies that we may assume that our $\psi$ comes from a matrix $Z=Z_e$ induced by an alternating path $e=e_p,\dots,e_q$. Assume that $G$ is a valid Sudoku grid that has $1$ in cells $s_{i_mj_m00}$ for $m$ even, and $2$ for $m$ odd. Let $G'$ be the grid obtained from $G$ by exchanging $1$ and $2$ in these positions. Then $G'$ violates the following checks:
$b_{i_mj_m}$ for each $m$.
If $e$ is not a cycle, the incoming line of $e_p$, and the outgoing line of $e_q$.
Since $e$ is an alternating path for $S$, none of these is in $S$. On the other hand, $\psi(x)\ne0$ implies that $x$ is among the violated checks, hence $S\nvDash x$.
It remains to show that such a valid grid $G$ exists. We can now forget about $S$, and then it is easy to see that every alternating path can be completed to a cycle, hence we may assume $e$ is a cycle. By applying Sudoku permutations and relabelling the sequence, we may assume $p=0$, $i_m=\lfloor m/2\rfloor$, $j_m=\lceil m/2\rceil$ except that $i_q=j_q=j_{q-1}=0$. We are thus looking for a solution of the following grid:
$$\begin{array}{|ccc|ccc|ccc|ccc|ccc|}
\hline
1&&&2&&&&&&&&&&&&\\
\strut&&&&&&&&&&&&&&&\\
\strut&&&&&&&&&&&&&&&\\
\hline
&&&1&&&2&&&&&&&&&\\
&&&&&&&&&&&&&&\cdots&\\
&&&&&&&&\ddots&&&&&&&\\
\hline
2&&&&&&&&&1&&&&&&\\
\strut&&&&&&&&&&&&&&&\\
\strut&&&&&&&&&&&&&&&\\
\hline
\strut&&&&&&&&&&&&&&&\\
\strut&&&&\vdots&&&&&&&&&&&\\
\strut&&&&&&&&&&&&&&&\\
\hline
\end{array}$$
where the upper part is a $q'\times q'$ subgrid, $q'=q/2$.
If $q'=n$, we can define the solution easily by putting $s_{ijkl}=(k+l,j-i+l)$, where we relabel the numbers $1,\dots,n^2$ by elements of $(\mathbb Z/n\mathbb Z)\times(\mathbb Z/n\mathbb Z)$, identifying $1$ with $(0,0)$ and $2$ with $(0,1)$. In the general case, we define $s_{ijkl}=(k+l+a_{ij}-b_{ij},l+a_{ij})$. It is easy to check that this is a valid Sudoku if the columns of the matrix $A=(a_{ij})$ and the rows of $B=(b_{ij})$ are permutations of $\mathbb Z/n\mathbb Z$. We obtain the wanted pattern if we let $a_{ij}=b_{ij}=j-i\bmod{q'}$ for $i,j< q'$, and extend this in an arbitrary way so that the columns of $A$ and the rows of $B$ are permutations.
This completes the proof that $x\notin\span(S\cup V_0)$ implies $S\nvDash x$. This shows that $\models$ is a linear matroid, and we get a description of maximal incomplete sets of checks by means of alternating paths.
We can also describe the minimal dependent sets. Put
$$D_{R,C}=\{r_{ik}:i\in R,k< n\}\cup\{c_{jl}:j\in C,l< n\}\cup\{b_{ij}:(i\in R\land j\notin C)\lor(i\notin R\land j\in C)\}$$
for $R,C\subseteq\{0,\dots,n-1\}$. If $R$ or $C$ is nonempty, so is $D_{R,C}$, and
$$\sum_{i\in R}\Bigl(\sum_kr_{ik}-\sum_jb_{ij}\Bigr)-\sum_{j\in C}\Bigl(\sum_lc_{jl}-\sum_ib_{ij}\Bigr)\in V_0$$
shows that $D_{R,C}$ is dependent. On the other hand, if $D$ is a dependent set, there is a linear combination
$$\sum_i\alpha_i\Bigl(\sum_kr_{ik}-\sum_jb_{ij}\Bigr)-\sum_j\beta_j\Bigl(\sum_lc_{jl}-\sum_ib_{ij}\Bigr)\ne0$$
where all basic vectors with nonzero coefficients come from $D$. If (WLOG) $\alpha:=\alpha_{i_0}\ne0$, put $R=\{i:\alpha_i=\alpha\}$ and $C=\{j:\beta_j=\alpha\}$. Then $R\ne\varnothing$, and $D_{R,C}\subseteq D$.
On the one hand, this implies that every minimal dependent set is of the form $D_{R,C}$. On the other hand, $D_{R,C}$ is minimal unless it properly contains some $D_{R',C'}$, and this can happen only if $R'\subsetneq R$ and $C=C'=\varnothing$ or vice versa. Thus $D_{R,C}$ is minimal iff $|R|+|C|=1$ or both $R,C$ are nonempty.
This also provides an axiomatization of $\models$ by rules of the form $D\smallsetminus\{x\}\models x$, where $x\in D=D_{R,C}$ is minimal. It is easy to see that if $R=\{i\}$ and $C\ne\varnothing$, the rules for $D_{R,C}$ can be derived from the rules for $D_{R,\varnothing}$ and $D_{\varnothing,\{j\}}$ for $j\in C$, hence we can omit these. (Note that the remaining sets $D_{R,C}$ are closed, hence the corresponding rules have to be included in every axiomatization of $\models$.)
To sum it up:
Theorem: Let $n\ge2$.
$S\models x$ if and only if $x\in\span(S\cup V_0)$. In particular, $\models$ is a linear matroid.
All minimal complete sets of checks have cardinality $3n^2-2n$. (One such set consists of all checks except for one row from each band, and one column from each stack.)
The closed sets of $\models$ are intersections of maximal closed sets, which are complements of Sudoku permutations of the sets
$\{b_{00},b_{01},b_{11},b_{12},\dots,b_{mm},b_{m0}\}$ for $0< m< n$
$\{c_{00},b_{00},b_{01},b_{11},b_{12},\dots,b_{mm},r_{m0}\}$ for $0\le m< n$
$\{c_{00},b_{00},b_{01},b_{11},b_{12},\dots,b_{m-1,m},c_{m1}\}$ for $0\le m< n$
The minimal dependent sets of $\models$ are the sets $D_{R,C}$, where $R,C\subseteq\{0,\dots,n-1\}$ are nonempty, or $|R|+|C|=1$.
$\models$ is the smallest consequence relation such that $D_{R,C}\smallsetminus\{x\}\models x$ whenever $x\in D_{R,C}$ and either $|R|,|C|\ge2$, or $|R|+|C|=1$.
A:
One can use information theoretic considerations to obtain lower bounds for the number of checks. I'll prove that at least 15 checks are necessary.
Proof. First note that for any two rows $r_i$ and $r_j$ (contained in the same band), it is easy to construct a Sudoku which is correct everywhere except $r_i$ and $r_j$. Thus, one must check at least 2 rows from each band, and hence at least 6 rows. By symmetry, one must also check at least 6 columns.
Next, we define a $4$-set of $3 \times 3$ squares to be a corner set if they are the corners of a rectangle. For any corner set $S$, it is easy to construct a Sudoku which is correct on all rows, columns, and squares except for $S$. Note that any set of squares which meets all corner sets must have size at least 3. Thus, we must check at least 3 squares.
$6+6+3=15.$
Edit. Here is an improvement that shows that 16 checks are in fact necessary. This idea is due to Zack Wolske (see the comments below). Call a subset of $3 \times 3$ squares an even set if it contains an even number of squares from each row and column of squares.
Note that a corner set is an even set.
Lemma. If $S$ is a set of at most three squares, then the complement of $S$ contains a non-empty even set.
The only non-trivial verification is if $S$ is a transversal, in which case the complement of $S$ is itself an even set of size 6. This lemma shows that at least 4 squares must be checked. To see this suppose that we have only checked at most three squares. By the Lemma, we may select a non-empty even set $E$ contained in the squares we have not checked. We next label the center cell of each square in $E$ with a $1$ or a $2$ such that each row and column is either completely unlabelled or contains exactly one $1$ and one $2$. Clearly, we can extend this partial labelling to a fully correct Sudoku. If we then flip $1$ and $2$ in the center cells of $E$, we obtain a Sudoku that is incorrect on each square in $E$, but correct on all other squares, rows and columns. Thus, we must check 4 squares as claimed.
$6+6+4=16$.
Edit 2. I now can prove that at least 18 checks are necessary. Recall that we have so far established that at least 6 rows (at least 2 from each band), and 6 columns (at least 2 from each stack), and 4 squares are necessary. Therefore, suppose in a minimum set of checks $V$ we have checked $6+r'$ rows, $6+c'$ columns and $4+s'$ squares.
Note that for each unchecked square $x$, it cannot be the case that at most two columns of $x$ and at most two rows of $x$ are checked. If that were so, there would be a cell of $x$ such that the row containing that cell, the column containing that cell, and the square containing it (namely $x$) are all unchecked, which is a contradiction.
If $s' \geq 2$, then we are done. So, we have checked at most 5 squares. In particular, the set of unchecked squares are not all in the same column or same row. Thus, there are two unchecked squares that are in different rows and in different columns. As mentioned, both of these unchecked squares must have all rows checked or all columns checked. Therefore, $r'+c' \geq 2$, and we are done.
Edit 3. I can now prove that at least 19 checks are necessary. Using the notation from the
previous edit, if $s' \geq 3$, we are done. We define a band $B$ to be tight (for $V$), if $V$ uses all three rows of $B$. If $s'=2$, then at least one band or one stack must be tight, so we are done. If $s'=1$, then by the previous edit we have $r'+c' \geq 2$, and we are done.
The only remaining possibility is if $s'=0$. Thus, there are 5 unchecked squares. Observe that any set of 5 squares must either contain a transversal, a band, or a stack.
If the unchecked squares contain a transversal, then $r'+c' \geq 3$ (since the sum of the tight bands and tight stacks must be at least 3). By symmetry, we may assume that there
is an unchecked band.
Lemma. If there is an unchecked band $B$, then at least two stacks are tight.
Proof. If not, by symmetry we may assume that $s_1, s_2, s_3$ are unchecked and that $c_1$ and $c_4$ are unchecked. By taking a correct Sudoku and swapping the first entry and fourth entries of the first row, we obtain a Sudoku that is correct everywhere, except $s_1, s_2, c_1$, and $c_4$, which is a contradiction.
By the lemma, there are at least two tight stacks. If there are three, then $c' \geq 3$, so we are done. If there are exactly two tight stacks, then the band $B$ itself must be tight, otherwise there is a cell whose row, column and square are all unchecked. Hence $c'+r' \geq 3$, and we are again done.
Remark. There is quite a bit of slack in these arguments, so with enough case analysis, I think one can get to 21 with $\epsilon$ new ideas.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MySQL how to get number of values from a user?
I have this 3 tables:
Users:
user_id|user_nick
1 | a
2 | b
Category:
cat_id|cat_type
1 | a
2 | b
3 | c
4 | d
Meta:
met_id|met_name|met_user|met_type
10 | bla | 1 | 1
11 | blabla | 2 | 2
12 | foo | 1 | 3
13 | blafoo | 2 | 4
14 | foofoo | 1 | 4
15 | foobla | 1 | 4
How can I return something like this ?
user_id|met_type|total
1 | 1 | 1
1 | 2 | 0
1 | 3 | 1
1 | 4 | 2
For just one user and not for all of them.
met_type is a foreign key from Category.
I've tried like this but no success :/
SELECT met_user, met_type, COUNT(*) FROM Meta GROUP BY met_user WHERE met_user = '1'
A:
Query:
SELECT met_user, met_type, count(*)
FROM Meta
WHERE met_user='1'
GROUP BY met_type;
To get empty groups, you can use generate_series() here (note that generate_series() is a PostgreSQL function):
SELECT m.met_user, g.meta_type, count(m)
FROM generate_series(1, 4) AS g(meta_type)
LEFT OUTER JOIN Meta AS m
ON m.met_user='1'
AND m.met_type=g.meta_type
GROUP BY g.meta_type, m.met_user
ORDER BY g.meta_type;
Check it out! I made an sql fiddle.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to move local git repo from one hdd to another hdd?
I'm currently setting up git on one of my local hdds, but will eventually be moving all my git repos on to another hdd (I want to split my work files from my own files, but I do not have the drives yet). Would it be possible to move those local work git repos to the new hdds by just dragging and dropping?
I have a Mac OSX Lion, and am setting up local repos for multiple macs that use my home NAS server as the mothership. I am still a beginner at this git stuff, so any tips are much appreciated. Thanks!
A:
Yes, you can simply copy all the files to your other computer. But make sure to also copy the "hidden" files, meaning the files whose names start with a dot.
Here's a link to a page describing how to make the hidden files appear in the Finder.
Each and every git-repository has a hidden folder called .git in its top-most directory. This folder contains all the history, revisions and so on. Inside that folder you can also find a file called config, that you could modify to your wishes after moving the repository.
So basically this .git folder is everything that makes the difference between your bare project files and a git-enabled repository.
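Not from the original answer, just an illustration: if you prefer to do the copy programmatically rather than by dragging in the Finder, a plain recursive copy already brings the hidden .git folder along with everything else. A minimal Python sketch, with hypothetical paths:
import shutil

src = "/Volumes/OldDrive/work/my-repo"   # hypothetical source path
dst = "/Volumes/NewDrive/work/my-repo"   # hypothetical destination path

# copytree copies the whole tree, including dot-files such as .git and .gitignore
shutil.copytree(src, dst)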
|
{
"pile_set_name": "StackExchange"
}
|
Q:
About Apache Commons EqualsBuilder and HashCodeBuilder and null values
The classes EqualsBuilder and HashCodeBuilder from the Apache Commons Lang library can be used for object comparison purposes.
E.g., one can test equality between two Person objects like follows:
Person p1 =...;
Person p2 =...;
boolean equals = new EqualsBuilder().
append(p1.name, p2.name).
append(p1.secondname, p2.secondname).
append(p1.surname, p2.surname).
append(p1.age, p2.age).
isEquals();
Suppose that a field is not mandatory, e.g. secondname. How do EqualsBuilder and HashCodeBuilder handle this? Is the comparison done on this field or not? Can the comparison on a null field be skipped as a special option?
A:
These two methods will consider p1.name and p2.name to be equal if they're both null. Here's the relevant part of the freely available source code:
public EqualsBuilder append(Object lhs, Object rhs) {
if (isEquals == false) {
return this;
}
if (lhs == rhs) {
return this;
}
...
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to move the cursor to the end of the content (including spans) of a contenteditable element after focus
I am building a text input using contenteditable. I need a JavaScript button that, when clicked, gives focus to that field (I got that far), but after that I need to place the cursor at the end of the content so the user can keep editing, so that it behaves like a regular input. Keep in mind that inside this div there can also be divs and spans. Here is what I already have:
$(document).on('click', '.btn', function (e) {
$('.formfield').focus();
});
.formfield{
width: 200px;
height: 30px;
display: block;
margin-bottom: 15px;
border: solid 1px #000000;
resize: none;
}
.btn{
width: 80px;
height: 30px;
line-height: 30px;
display: block;
margin-bottom: 15px;
background-color: #380303;
color: #ffffff;
text-align: center;
cursor: pointer;
}
<div class="formfield" contenteditable="true">Text <span>aa</span></div>
<div class="btn">get focus</div>
<script src="https://code.jquery.com/jquery-1.12.3.js" integrity="sha256-1XMpEtA4eKXNNpXcJ1pmMPs8JV+nwLdEqwiJeCQEkyc=" crossorigin="anonymous"></script>
A:
Here is a solution using Range and Selection:
$(document).on('click', '.btn', function (e) {
var el = document.querySelector(".formfield");
var range = document.createRange();
var sel = window.getSelection();
range.setStart(el, 1);
range.collapse(true);
sel.removeAllRanges();
sel.addRange(range);
});
.formfield{
width: 200px;
height: 30px;
display: block;
margin-bottom: 15px;
border: solid 1px #000000;
resize: none;
}
.btn{
width: 80px;
height: 30px;
line-height: 30px;
display: block;
margin-bottom: 15px;
background-color: #380303;
color: #ffffff;
text-align: center;
cursor: pointer;
}
<div class="formfield" contenteditable="true">Texto de exemplo</div>
<div class="btn">get focus</div>
<script src="https://code.jquery.com/jquery-1.12.3.js" integrity="sha256-1XMpEtA4eKXNNpXcJ1pmMPs8JV+nwLdEqwiJeCQEkyc=" crossorigin="anonymous"></script>
This answer is based on this answer from the English Stack Overflow.
A:
LazyFox's answer is almost correct, but child elements affect the cursor. In this case range.setStart(el, 1); will move the cursor within the parent, but any child Element (a textNode "doesn't count") also has to go through the same behavior, so the cursor ends up being stopped at the first child element it finds. What you can do is take the last child HTML element (if one exists) and apply setStart to it, for example:
$(document).on('click', '.btn', function (e) {
var el = document.querySelector(".formfield");
if (el.lastElementChild) el = el.lastElementChild; // if there are child elements
var range = document.createRange();
var sel = window.getSelection();
range.setStart(el, 1);
range.collapse(true);
sel.removeAllRanges();
sel.addRange(range);
});
.formfield{
width: 200px;
height: 30px;
display: block;
margin-bottom: 15px;
border: solid 1px #000000;
resize: none;
}
.btn{
width: 80px;
height: 30px;
line-height: 30px;
display: block;
margin-bottom: 15px;
background-color: #380303;
color: #ffffff;
text-align: center;
cursor: pointer;
}
<div class="formfield" contenteditable="true">Texto de <b>TESTE</b></div>
<div class="btn">get focus</div>
<script src="https://code.jquery.com/jquery-1.12.3.js" integrity="sha256-1XMpEtA4eKXNNpXcJ1pmMPs8JV+nwLdEqwiJeCQEkyc=" crossorigin="anonymous"></script>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can't make image fit into container
So far the following code has always worked for me in order to make an image fit into its container:
img {
height: auto !important;
max-width: 100% !important;
width: auto;
}
Today it doesn't work: the image just appears at its normal size, and if it's too big for the container (which has a fixed width and height) it's simply cropped. How is that possible?
Just to give some context here's a screenshot showing Firebug's output when selecting the image followed by another one when selecting its container:
A:
The following CSS should work. I would also recommend removing overflow: hidden from the parent div:
img {
height: auto;
width: auto;
max-height: 100%;
max-width: 100%;
}
EDIT: I removed the !important after each directive as the OP said it works without.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
parsing a text file in sas
So I have a rather messy text file I'm trying to convert to a SAS data set. It looks something like this (though much bigger):
0305679 SMITH, JOHN ARCH05 001 2
ARCH05 005 3
ARCH05 001 7
I'm trying to set 5 separate variables (ID, name, job, time, hours) but clearly only 3 of the variables appear after the first line. I tried this:
infile "C:\Users\Desktop\jobs.txt" dlm = ' ' dsd missover;
input ID $ name $ job $ time hours;
and didn't get the right output. Then I tried to parse it:
infile "C:\Users\Desktop\jobs.txt" dlm = ' ' dsd missover; input
allData $; id = substr(allData, find(allData,"305")-2, 7);
but I'm still not getting the right output. Any ideas?
EDIT: I'm now trying to use .scan() and .substr() to pull apart the larger data set; how do I subset a single line from the data?
A:
Your data might not be all that messy; it just might be in a hierarchical format where the first row contains all five variables and subsequent rows contain values for variables 3-5. In other words, ID and NAME should be retained as you read through the file.
If that is correct (it's a hierarchical layout), here is a possible solution:
data have;
retain ID NAME;
informat ID 7. JOB $6. TIME 3. HOURS 1.;
input @1 test_string $7. @;
if notdigit(test_string) = 0
then input @1 ID NAME $12. JOB time hours;
else input @1 JOB time hours;
drop test_string;
datalines;
0305679 SMITH, JOHN ARCH05 001 2
ARCH05 005 3
ARCH05 001 7
0305680 JONES, MARY ARCH06 002 4
ARCH06 005 3
ARCH07 001 7
run;
The key thing is to really understand how your raw file is organized. Once you know the rules, using SAS to read it is a snap!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Recaptcha provided by Google is not 100% accurate
I am using Recaptcha using the following link
http://www.google.com/recaptcha
But I have observed that it is not 100% accurate.
It validates as a success even if we type 2-3 letters wrongly.
Is there a setting which will make it 100% accurate?
Right now, if a user types "tentace veri", it still validates as SUCCESS.
Why can't it be 100% accurate?
A:
No, Recaptcha only knows about one word, it's using crowdsourcing for the other, that's why it says "digitizing ..." it's actually using humans to digitize pieces of books. You could type in tentace helium and it would work. There's even an internet movement to replace the second word with a dirty word. You can tell which one it knows about because it distorts it.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Uncaught Syntax Error: Unexpected Token : getJSON
I am trying to populate my page dynamically using the data from my JSON file. I am getting this error:
"Uncaught Syntax Error: Unexpected Token :" on line 2.
So here's my JSON file; there is more to it, but I didn't want to post the entire file.
{
"jobs": [
{
"title": "Graduate IT Development Programme #1",
"path": "/path/to/job",
"type": "Graduate job",
"location": [
"North West",
"North East"
],
"closingDate": "20/05/2014",
"continuous": false,
"skills": [
"HTML",
"CSS",
"JavaScript",
"Java",
"CI",
"Testing"
],
"contract": "Permanent",
"salary": {
"lower": 14501,
"upper": 17000,
"currency": "£"
},
"employer": {
"name": "Mercer",
"href": "/path/to/employer",
"logo": "img/mercer-logo.png"
}
},
{
"title": "Web Developer",
"path": "/path/to/job",
"type": "Graduate job",
"location": ["Greater London"],
"continuous": true,
"skills": [
"HTML",
"CSS",
"JavaScript"
],
"salary": {
"lower": 16000,
"upper": 21000,
"currency": "€"
},
"employer": {
"name": "FDM plc",
"href": "/path/to/employer",
"logo": "img/fdm-logo.png"
}
},
{
"title": "Front-end Web Developer",
"path": "/path/to/job",
"type": "Graduate scheme",
"location": ["Greater London"],
"closingDate": "20/04/2014",
"continuous": false,
"skills": [
"HTML",
"CSS",
"Java",
"Testing"
],
"salary": {
"lower": 17001,
"upper": 19500,
"currency": "£"
},
"employer": {
"name": "British Airways plc",
"href": "/path/to/employer",
"logo": "img/british-airways-logo.png"
}
}
]
}
And here's my .getJSON function (document.write is just temporary until it's working)
$(document).ready(function() {
$.getJSON( 'js/jobs.json',function( result ){
document.write(result.jobs.title);
});
});
So I'm not sure what the problem is. Having looked at other questions and other solutions I feel somewhat more confused than I was before.
A:
If you look at the JSON structure, jobs is an array of objects, so title cannot be accessed directly. You should get it by index, for example:
$(document).ready(function () {
$.getJSON('js/jobs.json', function (result) {
// in case the result is not in json data type
// otherwise not necessary
result = JSON.parse(result);
result.jobs.map(function (v) {
console.log(v.title);
document.write(v.title);
});
});
});
DEMO
EDITED DEMO
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Sitecore azure can't start
We have a problem with deploying a Sitecore application to an Azure environment. After updating the Cloud Service it cannot start, giving the following information:
Unhandled Exception: Microsoft.ApplicationServer.Caching.DataCacheException. In the WaIISHost process logs I'm finding this error:
0 : [00003180:00000006, 2014/09/09 06:35:16.89, ERROR] Unhandled exception: IsTerminating 'True', Message 'System.TimeoutException: We waited for 'Boolean <CreateSymbolicLink>b__1()' that didn't finish within 00:00:30.
at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
at System.Environment.get_StackTrace()
at Sitecore.Azure.Sys.Retryer.Do.Until(Func`1 predicate, TimeSpan timeout)
at RoleRootConfigurator.CreateSymbolicLink(String relativePathToAppRoot, DirectoryInfo localResourceDir)
at WebRole.RoleRootConfigurator.ConfigureSymbolicLinksForApproot(DirectoryInfo localResourceDir)
at WebRole.OnStart()
at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.InitializeRoleInternal(RoleType roleTypeEnum)
at Microsoft.WindowsAzure.ServiceRuntime.Implementation.Loader.RoleRuntimeBridge.<InitializeRole>b__0()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
at Sitecore.Azure.Sys.Retryer.Do.Until(Func`1 predicate, TimeSpan timeout)
at WebRole.RoleRootConfigurator.CreateSymbolicLink(String relativePathToAppRoot, DirectoryInfo localResourceDir)
at WebRole.RoleRootConfigurator.ConfigureSymbolicLinksForApproot(DirectoryInfo localResourceDir)
at WebRole.OnStart()
at Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.InitializeRoleInternal(RoleType roleTypeEnum)
at Microsoft.WindowsAzure.ServiceRuntime.Implementation.Loader.RoleRuntimeBridge.<InitializeRole>b__0()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()'
We have created our custom WebRole, based on code prepared by the Sitecore developers. Here is the code responsible for creating symbolic links:
public void ConfigureSymbolicLinksForApproot(DirectoryInfo localResourceDir)
{
if (RoleEnvironment.IsEmulated)
return;
Trace.TraceInformation(" -- Configure app root starting...");
this.CreateSymbolicLink("temp", localResourceDir);
this.CreateSymbolicLink("App_Data/debug", localResourceDir);
this.CreateSymbolicLink("App_Data/diagnostics", localResourceDir);
this.CreateSymbolicLink("App_Data/indexes", localResourceDir);
this.CreateSymbolicLink("App_Data/logs", localResourceDir);
this.CreateSymbolicLink("App_Data/packages", localResourceDir);
this.CreateSymbolicLink("App_Data/viewstate", localResourceDir);
this.CreateSymbolicLink("App_Data/MediaCache", localResourceDir);
this.CreateSymbolicLink("App_Data/Submit_Queue", localResourceDir);
}
private void CreateSymbolicLink(string relativePathToAppRoot, DirectoryInfo localResourceDir)
{
DirectoryInfo appRootDir = new DirectoryInfo(Path.Combine(this.AppRoot.FullName, relativePathToAppRoot));
Do.ThisOnce((Action)(() => RmDir.RemoveDir(appRootDir))).Until((Func<bool>)(() => !Directory.Exists(appRootDir.FullName)));
DirectoryInfo tempLocalResourceDir = new DirectoryInfo(Path.Combine(localResourceDir.FullName, relativePathToAppRoot));
Do.ThisOnce(new Action(tempLocalResourceDir.CreateIfNotExists)).Until((Func<bool>)(() => Directory.Exists(tempLocalResourceDir.FullName)));
Do.ThisOnce((Action)(() => MkLink.CreateLink(appRootDir, tempLocalResourceDir))).WithTracePing("Waiting for '{0}' to be created as symbolic link in app root", (object)appRootDir.FullName).Until((Func<bool>)(() => Directory.Exists(appRootDir.FullName)));
}
I've also found this information in Event Viewer:
Faulting application name: CacheService.exe, version: 1.0.5137.0, time stamp: 0x52304f01
Faulting module name: KERNELBASE.dll, version: 6.2.9200.16864, time stamp: 0x531d34d8
Exception code: 0xe0434352
Fault offset: 0x0000000000047b8c
Faulting process id: 0x1e80
Faulting application start time: 0x01cfcc0ca7dac7a3
Faulting application path: F:\plugins\Caching\CacheService.exe
Faulting module path: D:\Windows\system32\KERNELBASE.dll
Report Id: ee6a3966-37ff-11e4-93f6-00155d67d4ca
Faulting package full name:
Faulting package-relative application ID:
and
Application: CacheService.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: Microsoft.ApplicationServer.Caching.DataCacheException
Stack:
at Microsoft.ApplicationServer.Caching.AzureCommon.AzureUtility.ProcessException(System.Exception)
at Microsoft.ApplicationServer.Caching.Colocatedservice.CacheService.<OnStart>b__0(System.Object)
at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
at System.Threading.ThreadPoolWorkQueue.Dispatch()
Based on this I've tried to update the Windows Azure Cache libraries using NuGet, either the ones provided by Sitecore or those placed in the Azure SDK 2.2 folder, but nothing has changed. Any help will be appreciated.
A:
Jacbar.
Based on the initial exception, the deployment process failed while creating Windows Symbolic Links for the following directories:
\temp
\App_Data\debug
\App_Data\diagnostics
\App_Data\indexes
\App_Data\logs
\App_Data\packages
\App_Data\viewstate
\App_Data\MediaCache
Sitecore Azure uses this trick to avoid overflowing the last disk (usually disk F:/) on a Virtual Machine, which has a limited size of 1.5 GB (it used to be 1 GB). This disk is used to keep the ASP.NET Web Application you deploy to PaaS.
As .NET Reflector shows me, the Sitecore.Azure.Sys.Retryer.Do.Until(Func predicate) method uses a hardcoded timeout of 30 seconds. It looks like 30 seconds is not enough in your case to remove the old directory under the F:\approot (aka Website) folder, create a new directory in the Azure Local Storage Resources, and link it using Symbolic Links.
I would recommend opening a support ticket with Sitecore Support to figure out the best way to address the hardcoded 30-second value.
Best Wishes, Oleg
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to do search filtering through a collection of arrays?
I am building an advanced search function in Laravel 5. I query the 'itemregistrations' table, filtering on a few fields such as negeriID, categoryID and operasiID. I need to map over the results to calculate an age value for each item and put it in the array. Getting the values from the itemregistrations table and calculating the age works fine, but the problem is in searching through the if statements: it cannot search and retrieve the values through the array in the collection.
$newitem = DB::table('itemregistrations')
->select('itemregistrations.*')
->get();
//added code to get 'age' value:
$newitem->map(function ($detail) {
$detail->age = \Carbon\Carbon::createFromFormat('Y',$detail->lahir_yy)->diffInYears();
return $detail;
});
if ($request->has('negeri_lahir')) {
$newitem->where('NegeriID', '==', $request->negeri_lahir);
}
if ($request->has('kategori')) {
$newitem->where('CategoryID', $request->kategori);
}
if ($request->has('pangkat')) {
$newitem->where('OperasiID', $request->pangkat);
}
dd($newitem->get());
The problem is caused by the array map that was added, which turns the collection into array values. It produces this error:
Type error: Too few arguments to function Illuminate\Support\Collection::get(), 0 passed in C:\xampp\htdocs\
This is the array list in the collection for dd($newitem);
#items: array:1123 [▼
0 => {#709 ▶}
1 => {#680 ▶}
2 => {#681 ▶}
3 => {#712 ▶}
Collection {#671 ▼
#items: array:1123 [▼
0 => {#709 ▼
+"ItemRegistrationID": 1
+"COID": 109064
+"FType": ""
+"STNo": "0"
+"RegistrationDate": "2005-12-01"
and more attributes...
How can I enable searching through the array list?
A:
First of all, you don't need to use select() in the query.
It looks better to do the filtering in the DB query itself using when().
Try:
$newitem = DB::table('itemregistrations')
->when(request('age'), function($query){
$query->whereRaw('YEAR(curdate()) - lahir_yy >= ?', [request('age')]);
})
->when(request('negeri_lahir'), function($query){
$query->where('NegeriID', request('negeri_lahir'));
})
->when(request('kategori'), function($query){
$query->where('CategoryID', request('kategori'));
})
->when(request('pangkat'), function($query){
$query->where('OperasiID', request('pangkat'));
})
->get();
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Complex query melting my brain! Rails and Postgres
I apologize if I'm missing something really obvious here, but hopefully you'll humour me!
I have these models
Employee - with id, first_name, last_name
Shift Type - with id, shift_name
Date Indices - with id, date
Locations - with id, location
Allocated shifts - with employee_id, shift_type_id, date_index_id, location_id
Now I can write queries that show me allocated shifts and join with locations, names etc., but what I want is to be able to produce a table that takes dates as columns and employees as rows to produce a roster like such:
______________________________________________
|employee|date 1 |date 2 | date 3 |
|'dave' |early shift|late shift |day off |
|'martha'|day off |early shift|early shift|
etc.
I'm sure I'm just pretty dumb, but how can I create these 'virtual' columns and link them to the employee?
A:
You are looking for a "pivot" or "crosstab" query. Postgres has the additional module tablefunc for that. More info in this related answer:
PostgreSQL Crosstab Query
And many links to similar questions on SO from there.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Cannot instantiate cyclic dependency
I am using @ngrx/effects with @angular/router. (Angular 2 version RC4)
If I add this line private router: Router in the effects:
@Injectable()
export class RouterEffects {
constructor(
private updates$: StateUpdates<AppState>,
private router: Router // <- this line
) {}
}
I will get this error:
EXCEPTION: Cannot instantiate cyclic dependency! (Token Application
Initializer -> Token @ngrx/effects Bootstrap Effects -> Router ->
ApplicationRef -> ApplicationRef_)
How can I solve this? Thanks
A:
Thanks Anthony @qdouble and Mike Ryan @MikeRyan52 on gitter.
https://gitter.im/ngrx/effects?at=57850fc0b79455146fa4236f
Application initializers will be deprecated in the next RC. So it will
probably be fixed around then.
And Anthony's workaround is here:
https://gitter.im/ngrx/effects?at=576ff574bb1de91c546fde19
|
{
"pile_set_name": "StackExchange"
}
|
Q:
My C++ window program runs but I cannot see the window
Below is the code. When run, there's no window shown, and there are no error messages. What did I do wrong?
//WinApp.h
#pragma once
#include<Windows.h>
class WinApp
{
private: HWND hWnd;
MSG msg;
static WinApp *instance;
public:
WinApp(void);
~WinApp(void);
void CreateWnd(HINSTANCE hInstance, int iCmdShow);
int Run(HINSTANCE hInstance, int iCmdShow);
void Release();
static HRESULT CALLBACK WndProc(HWND hwnd, UINT imsg, WPARAM wParam, LPARAM lParam);// window proc
static WinApp* GetInstance();
};
//WinApp.cpp
#include "WinApp.h"
#include<Windows.h>
WinApp::WinApp(void)
{
hWnd=NULL;
ZeroMemory(&msg,sizeof(MSG));
}
WinApp::~WinApp(void)
{
delete hWnd;
}
void WinApp::CreateWnd(HINSTANCE hInstance, int iCmdShow){
WNDCLASSEX WndClassex;
ZeroMemory(&WndClassex, sizeof(WNDCLASSEX));
WndClassex.cbSize=sizeof(WNDCLASSEX);
WndClassex.hCursor=LoadCursor(NULL, IDC_ARROW);
WndClassex.hIcon=LoadIcon(NULL, IDI_APPLICATION);
WndClassex.hbrBackground=(HBRUSH) GetStockObject(WHITE_BRUSH);
WndClassex.hInstance=hInstance;
WndClassex.style=CS_HREDRAW|CS_VREDRAW;
WndClassex.lpszClassName=L" ";
WndClassex.lpszMenuName=NULL;
WndClassex.lpfnWndProc=&WinApp::WndProc;
RegisterClassEx(&WndClassex);
hWnd=CreateWindowEx(0,
L" ",
L"UNUSUAL",
WS_CAPTION|WS_MINIMIZEBOX|WS_SYSMENU|WS_THICKFRAME,
0,
0,
512,
512,
NULL,
NULL,
hInstance,
NULL
);
ShowWindow(hWnd, SW_RESTORE);
UpdateWindow(hWnd);
}
void WinApp::Release(){
delete this;
}
int WinApp::Run(HINSTANCE hInstance, int iCmdShow){
this->CreateWnd(hInstance, iCmdShow);
while(GetMessage(&msg, NULL, 0, 0)){
TranslateMessage(&msg);
DispatchMessage(&msg);
}
return (int)msg.wParam;
}
WinApp* WinApp::instance=0;
WinApp* WinApp::GetInstance(){
if(instance==NULL){
instance=new WinApp;
}
return instance;
}
HRESULT WinApp::WndProc(HWND hwnd, UINT imsg,WPARAM wParam, LPARAM lParam ){
switch (imsg)
{
case WM_PAINT:
HDC hdc;
PAINTSTRUCT ps;
RECT rect;
hdc = BeginPaint (hwnd, &ps);
GetClientRect (hwnd, &rect);
FillRect(hdc, &ps.rcPaint, (HBRUSH) (COLOR_WINDOW+1));
EndPaint (hwnd, &ps);
break;
case WM_QUIT:
PostQuitMessage(0);
WinApp::GetInstance()->Release();
break;
default: DefWindowProc(hwnd, imsg, wParam, lParam);
break;
}
return 0;
}
//main.cpp
#include<Windows.h>
#include"WinApp.h"
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,LPSTR lpszCmdLine, int iCmdShow){
WinApp::GetInstance()->Run(hInstance, iCmdShow);
return 0;
}
A:
There are many things wrong with your code.
But apart from that, the reason why your window is not showing up is because WinApp::WndProc doesn't return DefWindowProc's result.
So simply change
default:
DefWindowProc(hwnd, imsg, wParam, lParam);
break;
to
default:
return DefWindowProc(hwnd, imsg, wParam, lParam);
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What does the green dotted vertical line represent in the flame chart on the timeline panel?
The blue dotted line around 475ms represents the DOMContentLoaded event. The red dotted line around 760ms represents the load event. But when I hover over the green dotted line it just says Frame at 499ms. What does that mean?
A:
This is the first paint. The hover area to trigger the tooltip is incredibly small and towards the top of the bulk at the top.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
how to add scrollbars to ttk::combobox?
Is there a way to enable vertical and horizontal scrolling in ttk::combobox?
I looked at the https://www.tcl.tk/man/tcl/TkCmd/ttk_combobox.htm
this manual page doesn't have any option for scrollbars.
Regards,
David
A:
Vertical scrolling should automatically enable once you have sufficient values to choose between. There's no option to enable horizontal scrolling (and it isn't very good UX practice to have very long values in there); try keeping your individual values relatively short.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Finding the differential equation, given a solution
I am unable to understand how to find the differential equation when a general solution has been given. Here are a few example solutions, which require their differential equations to be found:
(a) $y = ax^2 + bx + c$
(b) $y^2 = 4ax$
(c) $x^2 - 2xy + y^2 = a^2$
Since I have my test coming up, I would be grateful if someone could explain the logic of solving such a question. You could perhaps help me with 2 of the questions, and I will try the third one.
Hoping to receive some help soon.
Thank you
A:
Remember that an expression with $n$ arbitrary constants will yield a differential equation of order $n$. So to get the $n^{th}$ order derivative you'll have to differentiate the expression $n$ times, and in that process you'll obtain $n$ more relations so that now you have a total of $n+1$ relations from which you can eliminate the $n$ arbitrary constants to obtain the differential equation.
Most of the times though the constants more or less dissappear by themselves. For example,consider
$y=ax^2+bx+c$.
There are 3 arbitrary constants $a$,$b$ and $c$ so just differentiate 3 times to obtain the DE $y'''=0$
Now consider $y^2=4ax$. Since there is only one constant $a$, differentiate once to get $2yy'=4a$. Now eliminate $4a$ to obtain the DE $2xy'=y$
I think with that in mind you can find the DE for a given solution.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Parametric solution of the Diophantine equation $\frac{1}{p}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z},\ x,y,z\in\mathbb{Z}^+$
I have proved that, for any given positive integer $p,$ a parametric solution of the Diophantine equation
$$\frac{1}{p}=\frac{1}{x}+\frac{1}{y}$$
can be written in the form $x=ac(a+b),y=bc(a+b),$ where $p=abc.$
Proof
Let
$\frac{1}{p}=\frac{1}{x}+\frac{1}{y},\ x,y\in\mathbb{Z}^+.$
Then $x+y=t$ and $xy=pt$ for some $t\in\mathbb{Z}^+.$
Now the quadratic equation $z^2-tz+pt=0$ has two integer roots $x,y.$
The discriminant of this equation can be written as $\Delta_{x,y}=t^2-4pt=q^2,\ q\in\mathbb{Z}^+.$
The quadratic equation $t^2-4pt-q^2=0$ gives the value of $t.$
$\Delta_t=16p^2+4q^2=4r^2,\ r\in\mathbb{Z}^+.$
$4p^2+q^2=r^2,\ r\in\mathbb{Z}^+.$
This equation has the form of the Pythagorean equation.
Therefore $p=abc,\ q=(a^2-b^2)c$ and $r=(a^2+b^2)c$ where $a,b,c$ are parameters.
Backward substitution gives that $t=(a+b)^2 c.$
Hence we can obtain that
$x=ac(a+b),y=bc(a+b).$
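As a quick sanity check of this parametrisation (my own addition, assuming Python with the standard fractions module), one can verify numerically that $x=ac(a+b)$, $y=bc(a+b)$ with $p=abc$ satisfy $\frac{1}{p}=\frac{1}{x}+\frac{1}{y}$:
from fractions import Fraction
from itertools import product

for a, b, c in product(range(1, 5), repeat=3):
    p, x, y = a*b*c, a*c*(a + b), b*c*(a + b)
    assert Fraction(1, x) + Fraction(1, y) == Fraction(1, p)
print("identity verified for all a, b, c in 1..4")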
Then I tried to find the general parametric solution of the Diophantine equation
$$\frac{1}{p}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z},\ x,y,z\in\mathbb{Z}^+.$$
I have found some particular solutions like,
$$\frac{1}{n}=\frac{1}{n+2}+\frac{1}{n(n+1)}+\frac{1}{(n+1)(n+2)}$$
$$\frac{1}{n}=\frac{1}{n+1}+\frac{1}{n(n+2)}+\frac{1}{n(n+1)(n+2)}$$
$$\frac{1}{n}=\frac{1}{n+1}+\frac{1}{(n+1)^2} +\frac{1}{n(n+1)^2 }$$
$$\frac{1}{n}=\frac{1}{n+1}+\frac{1}{n(2n+1)}+\frac{1}{(n+1)(2n+1)}$$
$$\frac{1}{n}=\frac{1}{n+1}+\frac{1}{(n^2+n+1)}+\frac{1}{n(n+1)(n^2+n+1)}.$$
But still I have no idea about how to attack the general one.
Here I have three questions.
1) Is there any different proof for general solution of first equation than my proof ?
2) Is there any general parametric solution for the second Diophantine equation ?
3) Is there any reference for these type of Diophantine equations ?
A:
Got it. Your equation is $$ xy = px + py, $$
$$ xy - px - py = 0, $$
$$ xy - px - py + p^2 = p^2, $$
$$ (x-p)(y-p) = p^2. $$
Apparently this observation occurs at Number of solution for $xy +yz + zx = N$
All solutions are given by finding a divisor $w$ of $p^2,$ with triple
$$ \color{magenta}{ \left( p, \; \; p + w, \; \; p + \frac{p^2}{w} \; \right).} $$
If $w < p$ these are in order, if $w=p$ it is just $(p,2p,2p),$ if $w > p$ it is a repeat but out of order. So, the total number of solutions is
$$ \frac{1 + d(p^2)}{2}, $$ where $d(n)$ is the number of positive divisors of $n.$
Note that the primitive triples, $\gcd(p,x,y),$ come when my $w$ is $1$ or some other square, so $p^2/w$ is also a square, in addition we require $\gcd(w,p^2/w)= 1$; for example $(6,10,15)$ with $w=4$ and $p^2/w = 9.$
OR $$ (30,31,930); \; \; (30,34,255); \; \; (30,39,130); \; \; (30,55,66). $$
$p$ up to $30.$
p x y
1 2 2
2 4 4
2 3 6
3 6 6
3 4 12
4 8 8
4 6 12
4 5 20
5 10 10
5 6 30
6 12 12
6 10 15
6 9 18
6 8 24
6 7 42
7 14 14
7 8 56
8 16 16
8 12 24
8 10 40
8 9 72
9 18 18
9 12 36
9 10 90
10 20 20
10 15 30
10 14 35
10 12 60
10 11 110
11 22 22
11 12 132
12 24 24
12 21 28
12 20 30
12 18 36
12 16 48
12 15 60
12 14 84
12 13 156
13 26 26
13 14 182
14 28 28
14 21 42
14 18 63
14 16 112
14 15 210
15 30 30
15 24 40
15 20 60
15 18 90
15 16 240
16 32 32
16 24 48
16 20 80
16 18 144
16 17 272
17 34 34
17 18 306
18 36 36
18 30 45
18 27 54
18 24 72
18 22 99
18 21 126
18 20 180
18 19 342
19 38 38
19 20 380
20 40 40
20 36 45
20 30 60
20 28 70
20 25 100
20 24 120
20 22 220
20 21 420
21 42 42
21 30 70
21 28 84
21 24 168
21 22 462
22 44 44
22 33 66
22 26 143
22 24 264
22 23 506
23 46 46
23 24 552
24 48 48
24 42 56
24 40 60
24 36 72
24 33 88
24 32 96
24 30 120
24 28 168
24 27 216
24 26 312
24 25 600
25 50 50
25 30 150
25 26 650
26 52 52
26 39 78
26 30 195
26 28 364
26 27 702
27 54 54
27 36 108
27 30 270
27 28 756
28 56 56
28 44 77
28 42 84
28 36 126
28 35 140
28 32 224
28 30 420
28 29 812
29 58 58
29 30 870
30 60 60
30 55 66
30 50 75
30 48 80
30 45 90
30 42 105
30 40 120
30 39 130
30 36 180
30 35 210
30 34 255
30 33 330
30 32 480
30 31 930
jagy@phobeusjunior:~$
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
All primitive solutions are given by finding a divisor $w$ of $p$ such that $\gcd(w,p/w) = 1$ with triple
$$ \color{magenta}{ \left( p, \; \; p + w^2, \; \; p + \frac{p^2}{w^2} \; \right).} $$ To keep them ordered we also choose $w \leq \sqrt p.$ If $p$ is a square in the first place, larger than $1,$ then $w=\sqrt p$ does not ever give a primitive solution anyway, that just gives $(p,2p,2p).$
Here are just the primitive ones for $p \leq 30$ and then $p=210.$
p x y
1 2 2
2 3 6
3 4 12
4 5 20
5 6 30
6 7 42
6 10 15
7 8 56
8 9 72
9 10 90
10 11 110
10 14 35
11 12 132
12 13 156
12 21 28
13 14 182
14 15 210
14 18 63
15 16 240
15 24 40
16 17 272
17 18 306
18 19 342
18 22 99
19 20 380
20 21 420
20 36 45
21 22 462
21 30 70
22 23 506
22 26 143
23 24 552
24 25 600
24 33 88
25 26 650
26 27 702
26 30 195
27 28 756
28 29 812
28 44 77
29 30 870
30 31 930
30 34 255
30 39 130
30 55 66
jagy@phobeusjunior:~$
210 211 44310
210 214 11235
210 219 5110
210 235 1974
210 246 1435
210 259 1110
210 310 651
210 406 435
jagy@phobeusjunior:~$
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
|
{
"pile_set_name": "StackExchange"
}
|
Q:
On Error not triggering on Rundeck from a Kubernetes app
I'm running a Kubernetes/Docker job on Rundeck 2.11.5-1. My job looks like:
Job
Sub Job 1
Remote Command (kubectl run command)
On error
Sub Job
Sub Job 2
Remote Command (kubectl run command)
On error
Sub Job
The problem I'm having is that if Sub Job 1 fails, its "On Error" does not trigger and Sub Job 2 runs as if all was well.
Is there something that kubectl needs to return to indicate there was an error?
What are some things I should look for/do to cause my job to stop on error?
A:
I found the solution. Add --restart=Never to the kubectl statement and the Rundeck job now correctly terminates on the app's failure.
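For reference, a sketch of what the adjusted remote command might look like (the image name and job name here are placeholders, not taken from the original job):
kubectl run my-task --image=registry.example.com/my-app:latest --restart=Never --attach
With --restart=Never, kubectl run creates a bare Pod instead of a managed, restarting resource, so a failing container isn't silently restarted and the step can surface the failure to Rundeck.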
|
{
"pile_set_name": "StackExchange"
}
|
Q:
multiple choice question on group of matrices
Consider the set of matrices $$G=\left\{ \left( \begin{array}{ll}s&b\\0&1 \end{array}\right) \;\middle|\; b \in \mathbb{Z},\ s \in \{1,-1\} \right\}.$$ Then which of the following are true?
G forms a group under addition
G forms an abelian group under multiplication
Every element of G is diagonalizable over $\mathbb{C}$
G is finitely generated group under multiplication
I am getting
1) Is false since it is not closed under addition
2) Forms a group under multiplication (abelian or not, I don't know)
3) Not true if $s=1$ (and $b\neq 0$)
4) Don't know
Please help me to complete.
A:
$1$ is false:
Your approach is correct: for example, $\begin{pmatrix}1&*\\0&1 \end{pmatrix}+\begin{pmatrix}1&*\\0&1 \end{pmatrix}=\begin{pmatrix}2&*\\*&* \end{pmatrix} \notin G$
$2$ is false:
Take $b \neq 0$.$$\begin{pmatrix}1&b\\0&1 \end{pmatrix}\begin{pmatrix}-1&b\\0&1 \end{pmatrix}=\begin{pmatrix}-1&2b\\0&1 \end{pmatrix}$$ whereas $$\begin{pmatrix}-1&b\\0&1 \end{pmatrix}\begin{pmatrix}1&b\\0&1 \end{pmatrix}=\begin{pmatrix}-1&\color{red}{0}\\0&1 \end{pmatrix}$$
$3$ is false too:
Since, for example, $\begin{pmatrix}1&b\\0&1 \end{pmatrix}$ is not diagonalizable when $b \neq 0$
$4$ is true
The finite set $$\left\{\begin{pmatrix}1&1\\0&1 \end{pmatrix},\begin{pmatrix}1&-1\\0&1 \end{pmatrix},\begin{pmatrix}-1&0\\0&1 \end{pmatrix}\right\}$$ generates $G$ (verify!).
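A quick sketch of the verification: for $b \geq 0$,
$$\begin{pmatrix}1&1\\0&1 \end{pmatrix}^{b}=\begin{pmatrix}1&b\\0&1 \end{pmatrix},\qquad \begin{pmatrix}1&-1\\0&1 \end{pmatrix}^{b}=\begin{pmatrix}1&-b\\0&1 \end{pmatrix},$$
which gives every element with $s=1$; multiplying such an element on the left by $\begin{pmatrix}-1&0\\0&1 \end{pmatrix}$ flips the sign of both entries in the top row, which gives every element with $s=-1$.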
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What silverlight dev must learn to use arcGIS silverlight?
I am already familiar with Silverlight programming but I don't have any experience with GIS.
My role as a Silverlight developer is only to display existing GIS data.
If you have any experience with the ArcGIS Silverlight control & API, what else do you think I must learn to be able to use it?
Any learning reference would be helpful. Thanks.
A:
You don't need a lot. You can download the SDK from ESRI and then check out their help site; they have loads of examples, both downloadable source and live samples (with the source code). If you have a license, you can use Bing Maps in the ESRI Silverlight control -- there are assemblies in the SDK for that.
As an aside, the SDK also includes the WPF assemblies.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Tilde in author field changes S letter with Estonian babel
BibTeX generates the following code in the .bbl file.
\bibitem{Bell_1964}
J.~S. Bell, \enquote{On the {Einstein-Podolsky-Rosen} paradox,} Physics
\textbf{1}.
and similarly
\bibitem{BellSpeakableAndNot}
J.~S. Bell, \emph{Speakable and Unspeakable in Quantum Mechanics; 2nd ed.}
(Cambridge Univ. Press, Cambridge, 2004), chap. Introduction to the
Hidden-Variable Question, pp. 37--38, Collected papers on quantum philosophy.
The problem is that S is replaced with Š in the output and I don't know why, nor how to fix it. I've tried different BibTeX formats, encodings and {} placements, but the problem remains.
I know you could work around this by directly removing the ~ in front of S in the .bbl file, or by formatting the whole name as {J. S. Bell}, but I have lots of references in my thesis and I'd like a clean solution.
I have googled for a whole day and I really hope you know how to help me.
EDIT:
As it turned out, the babel package shorthands are to blame. So now I know a minimal example to reproduce the error is as follows:
\documentclass[12pt]{article}
\usepackage[estonian]{babel} % Estonian babel!
\usepackage[utf8]{inputenc}
\usepackage{filecontents} % Just for the in-file .bib data
\begin{filecontents*}{\jobname.bib}
@article{Bli74,
author = {Blinder, Alan S.}
}
\end{filecontents*}
\begin{document}
\section{Foobars}
barbarbar \cite{Bli74}
\bibliographystyle{plain}
\bibliography{\jobname}
\end{document}
And the corresponding output after latex + bibtex + latex + latex will be:
1 Foobars
barbarbar [1]
Viited
[1] AlanŠ. Blinder.
A:
The only explanation is that you're declaring
\usepackage[estonian]{babel}
If you don't need the combinations such as ~S for producing accented Estonian characters, because your document is UTF-8 encoded, then say
\addto\extrasestonian{\let~\nobreakspace}
just after loading babel. If you need the babel shortcuts, then the only options are:
Disable the shorthand before typesetting the bibliography
\let~\nobreakspace
\bibliography{<filename>}
Remove ~ from the .bib file. It's just a "search and replace", after all, and the ~ there does nothing, as names are surely at the start of a line.
(Note: I changed the way to disable the shortcut as \shorthandoff{~} has an undesired effect.)
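For reference, the first fix dropped into the minimal example from the question would look roughly like this (assuming the document really is UTF-8 encoded and the Estonian shorthands are not needed):
\documentclass[12pt]{article}
\usepackage[estonian]{babel} % Estonian babel!
\addto\extrasestonian{\let~\nobreakspace} % just after loading babel
\usepackage[utf8]{inputenc}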
|
{
"pile_set_name": "StackExchange"
}
|
Q:
TeamCity, version numbers from file in repo and assembly patcher
Context
In our repo, we have a file called version.txt that contains the major and minor version number: 0.7.
I added a TeamCity build step with a PowerShell script that sets this value into a config parameter, based on this answer:
$version = Get-Content version.txt
Write-Host "##teamcity[setParameter name='UserMajorDotMinor' value='$version']"
The UserMajorDotMinor parameter defaults to 0.6 on TeamCity.
I have a config parameter called %UserVersionNumber% that is used to set the actual version number, which is defined as
%UserMajorDotMinor%.0.%system.build.number%
The Problem
While this prints 0.7 in the TeamCity build log, it doesn't seem to properly set UserVersionNumber, because the number that the assembly patcher writes into the DLL is still 0.6.0.xxxxx.
What do I have to change so TeamCity will actually write the correct version number into the DLLs?
A:
The Assembly Info Patcher build feature runs before any build step; therefore, changes made to parameters within a step won't affect the Assembly Info Patcher.
If you really need to use the Major.Minor info from the version.txt file, then I would set up a separate build configuration that reads the file and provides the content as the build parameter %UserMajorDotMinor%. Basically what you already did.
Then you can add the newly created config as a dependency of the actual build and set the %UserVersionNumber% parameter to %dep.[buildconfigname].UserMajorDotMinor%.0.%system.build.number%
As an alternative, use a script to patch your AssemblyInfo.cs files in a separate build step instead of the Assembly Info Patcher feature.
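A rough sketch of that script alternative as a PowerShell build step (the paths and the parameter reference are illustrative, not taken from the original setup):
$version = "%UserVersionNumber%"   # TeamCity substitutes this before the script runs
Get-ChildItem -Recurse -Filter AssemblyInfo.cs | ForEach-Object {
    (Get-Content $_.FullName) `
        -replace 'AssemblyVersion\("[^"]*"\)', "AssemblyVersion(""$version"")" `
        -replace 'AssemblyFileVersion\("[^"]*"\)', "AssemblyFileVersion(""$version"")" |
        Set-Content $_.FullName
}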
|
{
"pile_set_name": "StackExchange"
}
|
Q:
pipenv install giving Failed to load paths errors
I am running pipenv install --dev which is giving me the following errors
Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead. You can set PIPENV_VERBOSITY=-1 to suppress this warning.
Installing dependencies from Pipfile.lock (2df4c1)…
Failed to load paths: /bin/sh: /Users/XXXX/.local/share/virtualenvs/my-service-enGYxXYk/bin/python: No such file or directory
Output:
Failed to load paths: /bin/sh: /Users/XXXX/.local/share/virtualenvs/my-service-enGYxXYk/bin/python: No such file or directory
Output:
Failed to load paths: /bin/sh: /Users/XXXX/.local/share/virtualenvs/my-service-enGYxXYk/bin/python: No such file or directory
I don't really want to change the command around; I would rather solve the underlying issue, as it is part of a package.json file in a project others are using rather than something I am just trying to run on my own machine.
Thanks
A:
Remove your Pipfile.lock and try rerunning pipenv install to rebuild your dependencies from your Pipfile. It is looking for a virtual environment that does not exist. By removing your Pipfile.lock, you force pipenv to create a new environment.
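In practice that amounts to something like this, run from the project directory:
rm Pipfile.lock
pipenv install --dev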
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Parameterizable return-from in Common Lisp
I'm learning blocks in Common Lisp and did this example to see how blocks and the return-from command work:
(block b1
(print 1)
(print 2)
(print 3)
(block b2
(print 4)
(print 5)
(return-from b1)
(print 6)
)
(print 7))
It will print 1, 2, 3, 4, and 5, as expected. Changing the return-from to (return-from b2), it'll print 1, 2, 3, 4, 5, and 7, as one would expect.
Then I tried to turn this into a function and parametrize the label in the return-from:
(defun test-block (arg) (block b1
(print 1)
(print 2)
(print 3)
(block b2
(print 4)
(print 5)
(return-from (eval arg))
(print 6)
)
(print 7)))
and using (test-block 'b1) to see if it works, but it doesn't. Is there a way to do this without conditionals?
A:
Using a conditional like CASE to select a block to return from
The recommended way to do it is using case or similar. Common Lisp does not support computed returns from blocks. It also does not support computed gos.
Using a case conditional expression:
(defun test-block (arg)
(block b1
(print 1)
(print 2)
(print 3)
(block b2
(print 4)
(print 5)
(case arg
(b1 (return-from b1))
(b2 (return-from b2)))
(print 6))
(print 7)))
One can't compute lexical go tags, return blocks or local functions from names
CLTL2 says about the restriction for the go construct:
Compatibility note: The ``computed go'' feature of MacLisp is not supported. The syntax of a computed go is idiosyncratic, and the feature is not supported by Lisp Machine Lisp, NIL (New Implementation of Lisp), or Interlisp. The computed go has been infrequently used in MacLisp anyway and is easily simulated with no loss of efficiency by using a case statement each of whose clauses performs a (non-computed) go.
Since features like go and return-from are lexically scoped constructs, computing the targets is not supported. Common Lisp has no way to access lexical environments at runtime and query those. This is for example also not supported for local functions. One can't take a name and ask for a function object with that name in some lexical environment.
Dynamic alternative: CATCH and THROW
The typically less efficient and dynamically scoped alternative is catch and throw. There the tags are computed.
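For example (a sketch, not necessarily how you would want to structure real code), the function from the question can be rewritten with catch and throw so that the tag really is computed at run time:
(defun test-block (arg)
  (catch 'b1
    (print 1)
    (print 2)
    (print 3)
    (catch 'b2
      (print 4)
      (print 5)
      (throw arg nil)   ; arg evaluates to b1 or b2 at run time
      (print 6))
    (print 7)))

(test-block 'b1) ; prints 1 2 3 4 5
(test-block 'b2) ; prints 1 2 3 4 5 7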
A:
I think these sorts of things boils down to the different types of namespaces bindings and environments in Common Lisp.
One first point is that a slightly more experienced novice learning Lisp might try to modify your attempted function to say (eval (list 'return-from ,arg)) instead. This seems to make more sense but still does not work.
Namespaces
A common beginner mistake in a language like scheme is having a variable called list as this shadows the top level definition of this as a function and stops the programmer from being able to make lists inside the scope for this binding. The corresponding mistake in Common Lisp is trying to use a symbol as a function when it is only bound as a variable.
In Common Lisp there are namespaces which are mappings from names to things. Some namespaces are:
The functions. To get the corresponding thing either call it: (foo a b c ...), or get the function for a static symbol (function foo) (aka #'foo) or for a dynamic symbol (fdefinition 'foo). Function names are either symbols or lists of setf and one symbol (e.g. (setf bar)). Symbols may alternatively be bound to macros in this namespace in which case function and fdefinition signal errors.
The variables. This maps symbols to the values in the corresponding variable. This also maps symbols to constants. Get the value of a variable by writing it down, foo or dynamically as (symbol-value). A symbol may also be bound as a symbol-macro in which case special macro expansion rules apply.
Go tags. This maps symbols to labels to which one can go (like goto in other languages).
Blocks. This maps symbols to places you can return from.
Catch tags. This maps objects to the places which catch them. When you throw to an object, the implementation effectively looks up the corresponding catch in this namespace and unwinds the stack to it.
classes (and structs, conditions). Every class has a name which is a symbol (so different packages may have a point class)
packages. Each package is named by a string and possibly some nicknames. This string is normally the name of a symbol and therefore usually in uppercase
types. Every type has a name which is a symbol. Naturally a class definition also defines a type.
declarations. Introduced with declare, declaim, proclaim
there might be more. These are all the ones I can think of.
The catch-tag and declarations namespaces aren’t like the others as they don’t really map symbols to things but they do have bindings and environments in the ways described below (note that I have used declarations to refer to the things that have been declared, like the optimisation policy or which variables are special, rather than the namespace in which e.g. optimize, special, and indeed declaration live which seems too small to include).
Now let’s talk about the different ways that this mapping may happen.
The binding of a name to a thing in a namespace is the way in which they are associated, in particular, how it may come to be and how it may be inspected.
The environment of a binding is the place where the binding lives. It says how long the binding lives for and where it may be accessed from. Environments are searched for to find the thing associated with some name in some namespace.
static and dynamic bindings
We say a binding is static if the name that is bound is fixed in the source code and a binding is dynamic if the name can be determined at run time. For example let, block and tags in a tagbody all introduce static bindings whereas catch and progv introduce dynamic bindings.
Note that my definition for dynamic binding is different from the one in the spec. The spec definition corresponds to my dynamic environment below.
Top level environment
This is the environment where names are searched for last and it is where toplevel definitions go to, for example defvar, defun, defclass operate at this level. This is where names are looked up last after all other applicable environments have been searched, e.g. if a function or variable binding can not be found at a closer level then this level is searched. References can sometimes be made to bindings at this level before they are defined, although they may signal warnings. That is, you may define a function bar which calls foo before you have defined foo. In other cases references are not allowed, for example you can’t try to intern or read a symbol foo::bar before the package FOO has been defined. Many namespaces only allow bindings in the top level environment. These are
constants (within the variables namespace)
classes
packages
types
Although (excepting proclaim) all bindings are static, they can effectively be made dynamic by calling eval which evaluates forms at the top level.
Functions (and [compiler] macros) and special variables (and symbol macros) may also be defined top level. Declarations can be defined toplevel either statically with the macro declaim or dynamically with the function proclaim.
Dynamic environment
A dynamic environment exists for a region of time during the programs execution. In particular, a dynamic environment begins when control flow enters some (specific type of) form and ends when control flow leaves it, either by returning normally or by some nonlocal transfer of control like a return-from or go. To look up a dynamically bound name in a namespace, the currently active dynamic environments are searched (effectively, ie a real system wouldn’t be implemented this way) from most recent to oldest for that name and the first binding wins.
Special variables and catch tags are bound in dynamic environments. Catch tags are bound dynamically using catch while special variables are bound statically using let and dynamically using progv. As we shall discuss later, let can make two different kinds of binding and it knows to treat a symbol as special if it has been defined with defvar or defparameter, or if it has been declared as special.
Lexical environment
A lexical environment corresponds to a region of source code as it is written and a specific runtime instantiation of it. It (slightly loosely) begins at an opening parenthesis and ends at the corresponding closing parenthesis, and is instantiated when control flow hits the opening parenthesis. This description is a little complicated so let's have an example with variables, which are bound in a lexical environment unless they are special (by convention the names of special variables are wrapped in * symbols).
(defun foo ()
(let ((x 10))
(bar (lambda () x))))
(defun bar (f)
(let ((x 20))
(funcall f)))
Now what happens when we call (foo)? Well if x were bound in a dynamic environment (in foo and bar) then the anonymous function would be called in bar and the first dynamic environment with a binding for x would have it bound to 20.
But this call returns 10 because x is bound in a lexical environment so even though the anonymous function gets passed to bar, it remembers the lexical environment corresponding to the application of foo which created it and in that lexical environment, x is bound to 10. Let’s now have another example to show what I mean by ‘specific runtime instantiation’ above.
(defun baz (islast)
(let ((x (if islast 10 20)))
(let ((lx (lambda () x)))
(if islast
lx
(frob lx (baz t))))))
(defun frob (a b)
(list (funcall a) (funcall b)))
Now running (baz nil) will give us (20 10) because the first function passed to frob remembers the lexical environment for the outer call to baz (where islast is nil) whilst the second remembers the environment for the inner call.
For variables which are not special, let creates static lexical bindings. Block names (introduced statically by block), go tags (scoped inside a tagbody), functions (by flet or labels), macros (macrolet), and symbol macros (symbol-macrolet) are all bound statically in lexical environments. Bindings from a lambda list are also lexically bound. Declarations can be created lexically using (declare ...) in one of the allowed places or by using (locally (declare ...) ...) anywhere.
We note that all lexical bindings are static. The eval trick described above does not work because eval happens in the toplevel environment but references to lexical names happen in the lexical environment. This allows the compiler to optimise references to them to know exactly where they are without running code having to carry around a list of bindings or accessing global state (e.g. lexical variables can live in registers and the stack). It also allows the compiler to work out which bindings can escape or be captured in closures or not and optimise accordingly. The one exception is that the (symbol-)macro bindings can be dynamically inspected in a sense as all macros may take an &environment parameter which should be passed to macroexpand (and other expansion related functions) to allow the macroexpander to search the compile-time lexical environment for the macro definitions.
Another thing to note is that without lambda-expressions, lexical and dynamic environments would behave the same way. But note that if there were only a top level environment then recursion would not work as bindings would not be restored as control flow leaves their scope.
Closure
What happens to a lexical binding captured by an anonymous function when that function escapes the scope it was created in? Well there are two things that can happen
Trying to access the binding results in an error
The anonymous function keeps the lexical environment alive for as long as the functions referencing it are alive and they can read and write it as they please.
The second case is called a closure and happens for functions and variables. The first case happens for control flow related bindings because you can’t return from a form that has already returned. Neither happens for macro bindings as they cannot be accessed at run time.
Nonlocal control flow
In a language like Java, control (that is, program execution) flows from one statement to the next, branching for if and switch statements, looping for others with special statements like break and return for certain kinds of jumping. For functions control flow goes into the function until it eventually comes out again when the function returns. The one nonlocal way to transfer control is by using throw and try/catch where if you execute a throw then the stack is unwound piece by piece until a suitable catch is found.
In C there is no throw or try/catch but there is goto. The structure of C programs is secretly flat with the nesting just specifying that "blocks" end in the opposite order to the order they start. What I mean by this is that it is perfectly legal to have a while loop in the middle of a switch with cases inside the loop and it is legal to goto the middle of a loop from outside of that loop. There is a way to do nonlocal control transfer in C: you use setjmp to save the current control state somewhere (with the return value indicating whether you have successfully saved the state or just nonlocally returned there) and longjmp to return control flow to a previously saved state. No real cleanup or freeing of memory happens as the stack unwinds and there needn't be checks that you still have the function which called setjmp on the call stack, so the whole thing can be quite dangerous.
In Common Lisp there’s a range of ways to do nonlocal control transfer but the rules are more strict. Lisp doesn’t really have statements but rather everything is built out of a tree of expressions and so the first rule is that you can’t nonlocally transfer control into a deeper expression, you may only transfer out. Let’s look at how these different methods of control transfer work.
block and return-from
You’ve already seen how these work inside a single function but recall that I said block names are lexically scoped. So how does this interact with anonymous functions?
Well suppose you want to search some big nested data structure for something. If you were writing this function in Java or C then you might implement a special search function to recurse through your data structure until it finds the right thing and then return it all the way up. If you were implementing it in Haskell then you would probably want to do it as some kind of fold and rely on lazy evaluation to not do too much work. In Common Lisp you might have a function which applies some other function passed as a parameter to each item in the data structure. And now you can call that with a searching function. How might you get the result out? Well just return-from to the outer block.
tagbody and go
A tagbody is like a progn but instead of evaluating single symbols in the body, they are called tags and any expression within the tagbody can go to them to transfer control to it. This is partly like goto, if you’re still in the same function but if your go expression happens inside some anonymous function then it’s like a safe lexically scoped longjmp.
catch and throw
These are most similar to the Java model. The key difference between block and catch is that block uses lexical scoping and catch uses dynamic scoping. Therefore their relationship is like that between special and regular variables.
Finally
In Java one can execute code to tidy things up if the stack has to unwind through it as an exception is thrown. This is done with try/finally. The Common Lisp equivalent is called unwind-protect which ensures a form is executed however control flow may leave it.
Errors
It’s perhaps worth looking a little at how errors work in Common Lisp. Which of these methods do they use?
Well it turns out that the answer is that errors instead of generally unwinding the stack start by calling functions. First they look up all the possible restarts (ways to deal with an error) and save them somewhere. Next they look up all applicable handlers (a list of handlers could, for example, be stored in a special variable as handlers have dynamic scope) and try each one at a time. A handler is just a function so it might return (ie not want to handle the error) or it might not return. A handler might not return if it invokes a restart. But restarts are just normal functions so why might these not return? Well restarts are created in a dynamic environment below the one where the error was raised and so they can transfer control straight out of the handler and the code that threw the error to some code to try to do something and then carry on. Restarts can transfer control using go or return-from. It is worth noting that it is important here that we have lexical scope. A recursive function could define a restart on each successive call and so it is necessary to have lexical scope for variables and tags/block names so that we can make sure we transfer control to the right level on the call stack with the right state.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
SSL on node.js API running in StatefulSet on GKE
I have an app with the following structure:
An R Shiny app which functions as a UI and it lets the user upload files and stores them on a gcePersistentDisk.
A node.js server which reads those files on the gcePersistentDisk, processes them, and provides an API for the Shiny app to retrieve the results.
This runs in a GKE cluster with the following structure:
a StatefulSet with a pod containing two containers to allow simultaneous access for both the client and the server to the volume.
a headless service for the StatefulSet.
an Ingress with a fixed IP to where the domain points.
a NodePort as a backend for the Ingress with the selector pointing to the 0th pod of the StatefulSet
At least this is what I did to make this work; I'm not too good at DevOps or networking in general. Now the client came up with a request that a third-party app would also use the node.js API, but it wishes to do so over HTTPS.
My first try was to use greenlock-express.js; however, it needs a public-facing IP, but the server can only see its cluster IP.
I don't know if this could/should be changed and if not what other approaches should I take?
Thanks!
YAMLs
apiVersion: v1
kind: Service
metadata:
name: plo-set-service
labels:
app: plo
spec:
clusterIP: None
selector:
app: plo
ports:
- name: web
port: 80
protocol: TCP
targetPort: ploweb-port
- name: api
port: 3300
protocol: TCP
targetPort: ploapi-port
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: plo-set
spec:
serviceName: plo-set-service
replicas: 1
selector:
matchLabels:
app: plo
template:
metadata:
labels:
app: plo
spec:
containers:
- name: plo-server
image:
readinessProbe:
httpGet:
path: /healthz
port: 3300
initialDelaySeconds: 15
periodSeconds: 15
ports:
- name: ploapi-port
containerPort: 3300
volumeMounts:
- mountPath: /data
name: plo-volume
- name: plo-client
image:
ports:
- name: ploweb-port
containerPort: 80
volumeMounts:
- mountPath: /data
name: plo-volume
volumeClaimTemplates:
- metadata:
name: plo-volume
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 500Gi
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: plo-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: plo-ip
spec:
backend:
serviceName: plo-web
servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
name: plo-web
spec:
type: NodePort
externalTrafficPolicy: Local
selector:
statefulset.kubernetes.io/pod-name: plo-set-0
ports:
- name: web
port: 80
protocol: TCP
targetPort: 80
A:
By default the Nginx Ingress Controller serves on ports "HTTP/80" and "HTTPS/443" regardless of the backend protocol and port.
So in your case, you don't have to change anything to use the HTTPS protocol for your backend if you just use the Nginx Ingress Controller out of the box.
For example, I have the Ingress Controller service below serving on ports 80 and 443, which was created when the Nginx Controller was deployed:
$ kubectl get svc nginx-ingress-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-controller LoadBalancer 10.15.254.182 <external-ip-addr> 80:32594/TCP,443:31949/TCP 2d
Also I have a service for the my-nginx deployment listening on port 80:
$ kubectl get svc my-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx ClusterIP 10.15.252.11 <none> 80/TCP 30m
And I have deployed a simple Ingress resource similar to yours:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-nginx-ingress
spec:
backend:
serviceName: my-nginx
servicePort: 80
Now you can call your service via HTTP or HTTPS by sending requests to the Nginx Controller LoadBalancer IP from outside the cluster, or to its DNS name/ClusterIP from within the cluster.
HTTP request within the cluster from another pod:
# curl -I http://nginx-ingress-controller
HTTP/1.1 200 OK
Server: nginx/1.15.9
...
HTTPS request:
# curl -Ik https://nginx-ingress-controller
HTTP/2 200
server: nginx/1.15.9
...
Hope this helps you.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Remap of C-R registers and "\xx" syntax
I tried following this tip from vim wikia:
:inoremap \fn <C-R>=expand("%:t:r")<CR>
For me this doesn't make any difference to the functionality of Ctrl-R % in insert mode (it still just gives the full path and file extension), so I was looking at that and some linked tips and vim help sections, to no avail.
Why doesn't this work? Should it? If so could there be something in my config or an extension that ruins it?
Somewhat tangentially, what is the \fn about? I've seen variations on this in other map commands but can't find any explanation of it. If someone could break this line down completely and explain the meaning that would be great.
A:
Ah, you misread the mapping. It doesn't map <c-r>, it maps \fn. Typing \fn while in insert mode is what gets you the filename.
If you wrote it this way, perhaps it might be clearer:
inoremap <leader>fn <c-r>=expand("%:t:r")<cr>
(The default leader is \.)
|
{
"pile_set_name": "StackExchange"
}
|