_id | partition | text | language | title
---|---|---|---|---
d6801 | train | To get the content of cell (B,ROW()) do:
= INDIRECT(CONCATENATE("B", ROW()))
If you just want to calculate the average of a given line of numbers (e.g. first 10 cells in row 2):
= AVERAGE(A2:J2)
The ':' represents an area from the upper left corner (A2) to the lower right (J2).
As mentioned by @MattClarke, you can use =AVERAGE(n:n) to calculate the average of a whole row (here row n) when you don't know the exact number of data fields. Pay attention not to put this formula in that same row n, to avoid a circular reference. (Thanks @MattClarke for the hint!)
A: Getting the average across a fixed range of cells is pretty easy: just use a formula like =average(A4:H4) where the parameter specifies the first and last cell. If you want the average of a whole row and you don't know how many columns of data there are, then you can use something like =average(8:8), where the number 8 is the row number of the data to be averaged. | unknown | |
d6802 | train | Use this documentation to draw the polygons
Use this to listen for map clicks
Use this to determine if a touch is inside one of the polygons
I'm not sure that geometry library can run on android, so feel free to replace the third component.
EDIT:
Misread the question and associated it with google maps, sorry.
A: Here is how to draw and fill a polygon (rectangle example):
ArrayList<GeoPoint> bgRectPoints = new ArrayList<>();
GeoPoint pt1 = new GeoPoint(-15.953548, 126.036911);
bgRectPoints.add(pt1);
GeoPoint pt2 = pt1.destinationPoint(10000, 0);
bgRectPoints.add(pt2);
GeoPoint pt3 = pt2.destinationPoint(10000, 90);
bgRectPoints.add(pt3);
GeoPoint pt4 = pt3.destinationPoint(10000, 180);
bgRectPoints.add(pt4);
bgRectPoints.add(pt1);
Polygon polygon = new Polygon();
polygon.setPoints(bgRectPoints);
polygon.setFillColor(Color.BLACK);
mapView.getOverlays().add(polygon);
To receive touch/Tap initialize polygon like this:
Polygon polygon = new Polygon(){
@Override
public boolean onSingleTapConfirmed(MotionEvent event, MapView mapView) {
Toast.makeText(context, "Polygon clicked!",Toast.LENGTH_SHORT).show();
return super.onSingleTapConfirmed(event, mapView);
}
};
More can be found here | unknown | |
d6803 | train | I made a trivial mistake which cost me hours of pain. Silly me: the problem was that my class name in struts.xml and the id in register.xml were not matching, hence the issue. | unknown | |
d6804 | train | You are printing the length of the input given by the user; that's why 10 is printed out (see statement no. 6 inside the main() function).
phoneNumber = str(phoneNumber)
length = len(phoneNumber)
index = 0
print(length) # <--- this statement is printing the length of the input | unknown | |
d6805 | train | Your function has two parameters, so you need two placeholders in your bind expression.
std::bind(&ParentClass::someFunction, this, std::placeholders::_2)
needs to be
std::bind(&ParentClass::someFunction, this, std::placeholders::_1, std::placeholders::_2)
Alternatively you can simplify this with a lambda like
[this](auto a, auto b){ this->someFunction(a, b); }
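For context, here is a compilable sketch of the fixed wiring. The Storage type and the void(int, int) signature are assumptions, since the real declarations aren't shown in the question:
#include <functional>
#include <iostream>

// Hypothetical stand-in for the storage class used in the question.
struct Storage {
    std::function<void(int, int)> fn;
    void AddFunction(std::function<void(int, int)> f) { fn = std::move(f); }
};

struct ParentClass {
    Storage storageClass;

    void someFunction(int a, int b) { std::cout << a + b << '\n'; }

    void wire() {
        // Both placeholders are forwarded, so fn(1, 2) calls someFunction(1, 2).
        storageClass.AddFunction(std::bind(&ParentClass::someFunction, this,
                                           std::placeholders::_1,
                                           std::placeholders::_2));
    }
};

int main() {
    ParentClass p;
    p.wire();
    p.storageClass.fn(1, 2); // prints 3
}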
A: You should also pass the first placeholder, not just change _1 to _2.
storageClass.AddFunction(std::bind(&ParentClass::someFunction, this, std::placeholders::_1, std::placeholders::_2)); | unknown | |
d6806 | train | So RECURSIVE is the property on FLATTEN you want to use here:
with data as (
select parse_xml('<Nodes>
<Node Id="1">
<Nodes>
<Node Id="2">
</Node>
<Node Id="3">
<Nodes>
<Node Id="4">
</Node>
<Node Id="5">
<Nodes>
<Node Id="6">
</Node>
</Nodes>
</Node>
<Node Id="7">
</Node>
</Nodes>
</Node>
<Node Id="8">
</Node>
</Nodes>
</Node>
<Node Id="9">
<Nodes>
<Node Id="10">
</Node>
</Nodes>
</Node>
</Nodes>') as xml
)
select
GET(f.value, '@Id') as id
,f.path as path
,len(path) as p_len
from data,
TABLE(FLATTEN(INPUT=>get(xml,'$'), recursive=>true)) f
where get(f.value, '@') = 'Node'
;
gives:
ID PATH P_LEN
1 [0] 3
2 [0]['$']['$'][0] 16
3 [0]['$']['$'][1] 16
4 [0]['$']['$'][1]['$']['$'][0] 29
5 [0]['$']['$'][1]['$']['$'][1] 29
6 [0]['$']['$'][1]['$']['$'][1]['$']['$'] 39
7 [0]['$']['$'][1]['$']['$'][2] 29
8 [0]['$']['$'][2] 16
9 [1] 3
10 [1]['$']['$'] 13
from this you can now rebuild the hierarchy by finding all the matches of path and taking the longest match.
OR
you can do a double nested loop like:
select
GET(f1.value, '@Id') as id
,GET(f2.value, '@Id') as id
,f1.value
,f2.*
, get(f2.value, '@')
from data,
TABLE(FLATTEN(INPUT=>get(xml,'$'), recursive=>true)) f1,
TABLE(FLATTEN(INPUT=>GET(xmlget(f1.value,'Nodes'), '$'))) f2
where get(f1.value, '@') = 'Node'
;
BUT it doesn't give you the first row, and Snowflake behaves differently when expanding the nodes
<node>
<nodes>
<node></node>
</nodes>
</node>
and
<node>
<nodes>
<node></node>
<node></node>
</nodes>
</node>
which means you have to try to handle both, which is really gross.
EDIT:
So you can get closer by noting that if the second sub-case happens you get node name get(f2.value, '@') = 'Node', so we have something we can put into an IFF; and in the first case, the value of the flatten is 'Node', so we can hard-code fetching the parent -> Nodes -> Node, thus:
select
GET(f1.value, '@Id') as parent_id
,iff(get(f2.value, '@') = 'Node', GET(f2.value, '@Id'), GET(xmlget(xmlget(f1.value,'Nodes'),'Node'), '@Id')) as child_id
from data,
TABLE(FLATTEN(INPUT=>get(xml,'$'), recursive=>true)) f1,
TABLE(FLATTEN(INPUT=>GET(xmlget(f1.value,'Nodes'), '$'))) f2
where get(f1.value, '@') = 'Node'
and (get(f2.value, '@') = 'Node' OR f2.value = 'Node')
;
gives you:
PARENT_ID CHILD_ID
1 2
1 3
1 8
3 4
3 5
3 7
5 6
9 10
which is only missing the NULL, 1 and NULL, 9 rows that you wanted.
EDIT 2
So going back to my original suggestion, pulling the node id's and the paths out and then doing a LEFT JOIN on the nodes with a QUALIFY to keep the longest match can be done like so, and gives the desired output:
with data as (
select parse_xml('<Nodes>
<Node Id="1">
<Nodes>
<Node Id="2">
</Node>
<Node Id="3">
<Nodes>
<Node Id="4">
</Node>
<Node Id="5">
<Nodes>
<Node Id="6">
</Node>
</Nodes>
</Node>
<Node Id="7">
</Node>
</Nodes>
</Node>
<Node Id="8">
</Node>
</Nodes>
</Node>
<Node Id="9">
<Nodes>
<Node Id="10">
</Node>
</Nodes>
</Node>
</Nodes>') as xml
), nodes AS (
select
GET(f1.value, '@Id') as id
,f1.path as path
,len(path) as l_path
from data,
TABLE(FLATTEN(INPUT=>get(xml,'$'), recursive=>true)) f1
where get(f1.value, '@') = 'Node'
)
SELECT p.id as parent_id
,c.id as child_id
FROM nodes c
LEFT JOIN nodes p
ON LEFT(c.path,p.l_path) = p.path AND c.id <> p.id
QUALIFY row_number() over (partition by c.id order by p.l_path desc ) = 1
;
gives:
PARENT_ID CHILD_ID
null 1
1 2
1 3
3 4
3 5
5 6
3 7
1 8
null 9
9 10 | unknown | |
d6807 | train | One reason for me is that I prefer writing this:
<div class="entry">
<h1>{{title}}</h1>
<div class="body">
{{body}}
</div>
</div>
Over writing this:
var createEntryTemplate = function(obj) {
return '<div class="entry">' +
'<h1>' + obj.title + '</h1>' +
'<div class="body">' + obj.body +
'</div>' +
'</div>';
};
The latter method is also more error prone - if not for you then maybe for another person. Imagine you're working with a designer who doesn't have a lot of programming experience and he needs to go in and replace a significant chunk of HTML.
Oh crap...
A: Basically, using a client-side templating engine trades server-side rendering against client-side execution, so these come to mind:
* Pro: You might easily save significant bandwidth, as the raw data is most often much smaller than the HTML rendering
* Pro: You might easily save significant server CPU cycles by doing rendering work on the client
* Pro: The client might have more, or more easily accessible, knowledge about the rendering restrictions (e.g. screen size)
* Con: You move the rendering from a well-known and stable environment to a moving target outside your control
* Con: A non-interactive client (e.g. a search engine) will not see your final rendering, making SEO, indexing etc. hard | unknown | |
d6808 | train | Your function with some changes:
myfunC1<-function(t1) {
n1<-13.8065/(1+exp(-(t1-11.8532)/26.4037))
y1<-unlist(lapply(n1*2.4, rpois, n=1))
c<-log(2.7/2.4)*(y1/n1-(2.7-2.4)/(log(2.7)-log(2.4)))
return(c)
}
Your output:
t1<-seq(1,10,1)
myfunC1(t1)
[1] -0.043210706 0.076575495 0.006905820 -0.139863770 -0.045328088 0.006866088 -0.037032547 -0.079171724
[9] -0.083574188 0.018280450
About the second part of your question, you can use an approach like this one:
L<-runif(10,1,10)
G<-runif(10,1,10)
myfunC2<-function(G,L,t)
{
return(max(0,0.85*G[t-1]+L[t]))
}
unlist(lapply(rep(1:length(L)),myfunC2, G=G, L=L))
[1] 0.000000 14.094739 7.489582 14.268056 16.318365 9.115776 11.729936 7.091494 16.030881 9.289892 | unknown | |
d6809 | train | You can use conditional sum:
SELECT
il.warehouse_id,
w.code as warehouse_code,
w.name as warehouse_name,
il.item_id,
i.code as item_code,
i.name as item_name,
il.lot,
il.date_expiry,
il.location_id,
sum( if( il.direction in (1,4,5), il.qty, 0 ) ) as positive_quantity,
sum( if( il.direction in (2, 3, 6), il.qty, 0 ) ) as negative_quantity
FROM warehouses w
INNER JOIN inventory_logs il
ON w.id = il.warehouse_id
INNER JOIN items as i
ON il.item_id = i.id
WHERE il.location_id IN (1,3) AND il.date_posted BETWEEN '2019-01-01' AND '2019-01-31'
GROUP BY
il.warehouse_id,
il.item_id,
il.lot,
il.date_expiry,
il.location_id
Btw, your images don't match the query. It is always best to post the question as text instead of an image, preferably as an SQLFiddle. | unknown | |
d6810 | train | *
*toString() method returns the String representation of an Object. The default implementation of toString() for an object returns the HashCode value of the Object. We'll come to what HashCode is.
Overriding the toString() is straightforward and helps us print the content of the Object. @ToString annotation from Lombok does that for us. It Prints the class name and each field in the class with its value.
The @ToString annotation also takes in configuration keys for various behaviours. Read here. callSuper just denotes that an extended class needs to call the toString() from its parent.
hashCode() for an object returns an integer value, generated by a hashing algorithm. This can be used to identify whether two objects have similar Hash values which eventually help identifying whether two variables are pointing to the same instance of the Object. If two objects are equal from the .equals() method, they must share the same HashCode
*They are supposed to be defined on the entity, if at all you need to override them.
*Each of the three have their own purposes. equals() and hashCode() are majorly used to identify whether two objects are the same/equal. Whereas, toString() is used to Serialise the object to make it more readable in logs.
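As a hand-written illustration of that contract, here is a hypothetical Point class, roughly what Lombok's @ToString and @EqualsAndHashCode would generate for a two-field class:
import java.util.Objects;

public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        // Equal objects (per equals) must produce the same hash code.
        return Objects.hash(x, y);
    }

    @Override
    public String toString() {
        return "Point(x=" + x + ", y=" + y + ")";
    }
}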
From Effective Java:
You must override hashCode() in every class that overrides equals().
Failure to do so will result in a violation of the general contract
for Object.hashCode(), which will prevent your class from functioning
properly in conjunction with all hash-based collections, including
HashMap, HashSet, and HashTable. | unknown | |
d6811 | train | From the summary:
If there isn't such an ancestor, it returns null.
So:
if (e.target.closest('.my-class') !== null)
In the event that e.target itself may be a .my-class, and you want to exclude that, you need to start from the element's parent:
if (e.target.parentNode.closest('.my-class') !== null)
but if e.target is guaranteed never to be a .my-class the first example will suffice. | unknown | |
d6812 | train | I have created a jsFiddle below. Based on my understanding on your question, you want to add a class to the li if it contains a certain text on it. Please update me if this answers your question. Thanks
$(function(){
$('#availList li').each(function(i,val){
if($(this).text() == "Area Sold"){
$('#mapArea li:nth-child('+(i+1)+')').addClass('areaSold');
}else{
$('#mapArea li:nth-child('+(i+1)+')').addClass('notSold');
}
});
});
Check sample here:
http://jsfiddle.net/tBLhN/ | unknown | |
d6813 | train | Try this; it might be a bit bulky:
<?php
function get_random_line($number, $file='file.txt'){
$trimmed = file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$string = "Text $number";
$array = array();
foreach ($trimmed as $key => $line) {
if($key % 2 == 0){
$arr_key = $line;
if(!array_key_exists($arr_key,$array)){
$array[$arr_key]['total'] = 0;
}
}else{
$array[$arr_key]['total']++;
$array[$arr_key]['rows'][] = $line;
}
}
if(key_exists($string,$array)){
$key = array_rand($array[$string]['rows'],1);
return $array[$string]['rows'][$key];
}else
{
return 'Provide a Valid Key';
}
}
echo '<pre>';
print_r(get_random_line(3,'file.txt')); // Get Random Lines based on Label Value
echo '</pre>';
?> | unknown | |
d6814 | train | Feels like you're trying to use the second array as a lookup into the first. Here's a way to do this by transforming it into an object:
function toLookupTable(shirtColors) {
//keys will be image names, values will be colors
const lookupTable = {};
shirtColors.forEach(shirtColor => {
//use array destructuring
const [ image, color ] = shirtColor;
lookupTable[image] = color;
});
return lookupTable;
}
const colorLookup = toLookupTable( [["image1","blue"],["image2","red"]] );
console.log(colorLookup["image2"]); //outputs "red"
A: Use Array#reduce and Array#findIndex
I want to return the color from the second array to a variable.
const arr1 = [["image1","shirt", "collared",40],["image3","shirt", "buttoned",40]]
const arr2 = [["image1", "blue"],["image2","red"]]
const res = arr2.reduce((a,[image,color])=>{
if(arr1.findIndex(([n])=>n===image) > -1) a.push(color);
return a;
}, []);
console.log(res);
A: You can use reduce
let arr1 = [["image1","shirt", "collared",40],["image3","shirt", "buttoned",40]];
let arr2 = [["image1","blue"],["image2","red"]];
let op = arr1.reduce((out,inp,index)=>{
if(arr2[index].includes(inp[0])){
out.push(arr2[index][1])
}
return out
},[] )
console.log(op) | unknown | |
d6815 | train | I think you might be looking for render_to_string.
from django.template.loader import render_to_string
context = {'foo': 'bar'}
rendered_template = render_to_string('template.html', context) | unknown | |
d6816 | train | The HTML snippet you provided belongs to iframe <iframe id="dnn_ctr1579_View_VoterLookupFrame" src="https://www.electionsfl.org/VoterInfo/vflookup.html?county=lee" width="100%" height="2000" frameborder="0"></iframe>, so you should navigate to URL https://www.electionsfl.org/VoterInfo/vflookup.html?county=lee instead of http://www.lee.vote/voters/check-your-registration-status/.
I navigated https://www.electionsfl.org/VoterInfo/vflookup.html?county=lee in Chrome and checked XHR logged after I submit the data via Developer Tools (F12), Network tab:
Seems that is simple POST XML HTTP request with payload in JSON format, like:
{'LastName':'Doe', 'BirthDate':'01/01/1980', 'StNumber':'10025', 'County':'lee', 'FirstName':'', 'challengeValue':'', 'responseValue':''}
That XHR uses no cookies or any other authorization data neither in headers nor payload, so I tried to reproduce the same request using the following code:
Option Explicit
Sub Test_Submit_VoterInfo()
Dim sLastName As String
Dim sBirthDate As String
Dim sStNumber As String
Dim sFormData As String
Dim bytFormData
Dim sContent As String
' Put the necessary data here
sLastName = "Doe"
sBirthDate = "01/01/1980"
sStNumber = "10025"
' Combine form payload
sFormData = "{" & _
"'LastName':'" & sLastName & "', " & _
"'BirthDate':'" & sBirthDate & "', " & _
"'StNumber':'" & sStNumber & "', " & _
"'County':'lee', " & _
"'FirstName':'', " & _
"'challengeValue':'', " & _
"'responseValue':''" & _
"}"
' Convert string to UTF-8 binary
With CreateObject("ADODB.Stream")
.Open
.Type = 2 ' adTypeText
.Charset = "UTF-8"
.WriteText sFormData
.Position = 0
.Type = 1 ' adTypeBinary
.Position = 3 ' skip BOM
bytFormData = .Read
.Close
End With
' Make POST XHR
With CreateObject("MSXML2.XMLHTTP")
.Open "POST", "https://www.electionsfl.org/VoterInfo/asmx/service1.asmx/FindVoter", False, "u051772", "mar4fy16"
.SetRequestHeader "Content-Length", LenB(bytFormData)
.SetRequestHeader "Content-Type", "application/json; charset=UTF-8"
.Send bytFormData
sContent = .ResponseText
End With
' Show response
Debug.Print sContent
End Sub
The response for me is {"d":"[]"}, the same as in browser, but unfortunately I can't check if it processed on the server correctly, since I have no valid voter record data.
A: This is the answer that I came up with after the (much needed) help determining that I was not really navigating to the right webpage for the form:
'creates a new internet explorer window
Dim IE As Object
Set IE = CreateObject("InternetExplorer.Application")
'opens Lee County registration check
With IE
.Visible = True
.navigate "https://www.electionsfl.org/VoterInfo/vflookup.html?county=lee"
End With
'waits until IE is loaded
Do Until IE.ReadyState = 4 And Not IE.busy
DoEvents
Loop
x = Timer + 2
Do While Timer < x
DoEvents
Loop
'sends data to the webpage
Call IE.Document.getelementbyid("NameID").setattribute("value", Last_Name.Value)
'formats DOB to correct output
Dim DOBMonth As Integer
Dim DOBDay As Integer
Dim DOBYear As Integer
DOBMonth = Month(Date_of_Birth.Value)
DOBDay = Day(Date_of_Birth.Value)
DOBYear = Year(Date_of_Birth.Value)
If DOBMonth < 10 Then
Call IE.Document.getelementbyid("BirthDate").setattribute("value", "0" & DOBMonth & "/" & DOBDay & "/" & DOBYear)
Else
Call IE.Document.getelementbyid("BirthDate").setattribute("value", DOBMonth & "/" & DOBDay & "/" & DOBYear)
End If
Call IE.Document.getelementbyid("StNumber").setattribute("value", Street_Number.Value)
'"clicks" the button to display the results
IE.Document.getelementbyid("ButtonForm").Click | unknown | |
d6817 | train | Just like this:
class Vector:
    def __init__(self, x=0, y=0, z=0):
        self.x = x
        self.y = y
        self.z = z

    def __str__(self):
        return '<{},{},{}>'.format(self.x, self.y, self.z)
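A quick check of the result:
v = Vector(1, 2, 3)
print(v)         # <1,2,3>
print(Vector())  # <0,0,0> | unknown | |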
d6818 | train | We ended up taking a different, much simpler approach. Wanted to post it here in case anyone else ever needs something similar.
exports.command = function customSetValue(selector, txt) {
txt.split('').forEach(char => {
this.setValue(selector, char);
this.pause(200); // type speed in milliseconds
});
return this;
};
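Assuming the file is saved in your configured custom_commands_path as customSetValue.js (Nightwatch derives the command name from the file name), a test could then call it like this:
module.exports = {
  'types one character at a time': function (browser) {
    browser
      .url('https://example.com')
      .customSetValue('input[name="q"]', 'hello')
      .end();
  }
}; | unknown | |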
d6819 | train | //onload event
$(document).ready(function(){
/*show alert on load time*/
alert($('[name="pet_chipped"]:checked').val());
})
// on change radio button value fire event
$(document).on('change', '[name="pet_chipped"]', function(){
//show value of radio after changed
alert($('[name="pet_chipped"]:checked').val());
})
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="chips">
<label> <input type="radio" name="pet_chipped" class="chipped" value="Yes" id="chipyes" checked> Yes</label>
<label><input type="radio" name="pet_chipped" class="chipped" value="No" id="chipno"> No</label>
</div>
A: Try this
HTML:
<label><input type="radio" name="pet_chipped" class="chipped" value="Yes"> Yes</label>
<label>
<input type="radio" name="pet_chipped" class="chipped" value="No" checked> No</label>
<input type="submit" onclick="getChecked()" />
jQuery:
<script type="text/javascript">
function getChecked(){
var checked = $("input[name='pet_chipped']:checked").val();
alert(checked);
}
</script> | unknown | |
d6820 | train | It does indeed look like a bug. It's like it's seeing the file input in front of the text and treating that as part of the word, so not seeing the "r" in "required" as the first character in need of capitalization.
Adding
label:before {
content: " ";
}
to force the space seems to work: http://jsfiddle.net/Nc27q/4/ Since this is a Chrome-specific issue, you don't have to worry about pseudo-elements not being supported... (Of course, you may want to target it a bit more tightly than I have above.)
A: I'm guessing that this is caused by the way Chrome detects what to capitalize and what not. You don't have any spaces in your code, so it simply says file<input>required. Chrome's logic would probably determine that this is one word (or sentence), causing it to intentionally ignore it.
You might be able to use label:first-letter { text-transform: uppercase; } instead.
A: Note that text-transform's capitalize value only uppercases the first letter of each word, so it isn't a reliable fix here; apply uppercase to the first letter instead, as above. It works, check it here: http://jsfiddle.net/jalbertbowdenii/Nc27q/2/
d6821 | train | posts = Array.new
posts << {:title => "title 1"}
posts << {:title => "title 2"}
Post.create(posts)
A: is this what you're trying to do?
posts = []
posts << Post.new(:title => "title 1")
posts << Post.new(:title => "title 2")
posts.each do |post|
post.save
end | unknown | |
d6822 | train | [To supplement the comment you received]
While in this case with the small code sample it's hard to say, in most scenarios you'll see non-trivial types passed around by pointer to enable modification. As an anti-example, consider this code which uses a variable of a struct type by value:
package main

import "fmt"

type S struct {
    ID int
}

// Value receiver: UpdateID gets a copy of s, so the assignment is lost.
func (s S) UpdateID(i int) {
    s.ID = i
}

func main() {
    s := S{}
    s.UpdateID(99)
    fmt.Println(s.ID) // prints 0
}
What do you think this will print? It will print 0, because methods with value receivers cannot modify the underlying type.
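For contrast, changing the method to a pointer receiver makes the same program print 99:
// Pointer receiver: s points at the caller's value, so the write sticks.
func (s *S) UpdateID(i int) {
    s.ID = i
}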
There's much information about this in Go - read about pointers, and about how methods should be written. This is a good reference: https://golang.org/doc/faq#methods_on_values_or_pointers, and also https://golang.org/doc/effective_go#pointers_vs_values
Back to your example: typically non-trivial types such as those representing a "client" for some services will be using pointers because method calls on such types should be able to modify the types themselves. | unknown | |
d6823 | train | What LDAP server is this? If it supports SSHA-256, then the same style, i.e. {SSHA-256} followed by the encoded password hash, should work.
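For illustration, the stored attribute would then look like this (placeholder value; the salted layout shown follows the usual SSHA convention and is an assumption, so check your server's documentation):
userPassword: {SSHA-256}base64(SHA-256(password + salt) + salt) | unknown | |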
d6824 | train | The code you're using looks like it's VB.NET, not VBA. The syntax is similar, but not the same. In VBA, you don't script a class; you insert a special type of code module that contains the class's code. Sub Whatever resides in that.
Insert a class module, name it "GameClass" (classes are typically proper-cased, not lower-cased). Add your methods and any properties (here is a good overview of property getters/setters) in that module:
Then you can instantiate your GameClass and call its methods from elsewhere:
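For example, from a standard module (Start is a hypothetical method name; substitute whatever Subs you added to the class):
Sub PlayGame()
    Dim game As GameClass
    Set game = New GameClass
    game.Start ' assumes GameClass defines a Public Sub Start()
End Sub | unknown | |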
d6825 | train | If you plan to debug the service application from the beginning of its execution, including its initialization code, this preparatory step is required.
http://msdn.microsoft.com/en-us/library/windows/hardware/ff553427(v=vs.85).aspx
A: When WinDbg is running as a postmortem debugger, it is launched by the process that is crashing. In the case of a service, it is launched by a process running in session 0 and has no access to the desktop.
You can configure the AeDebug registry key to launch a process that creates a crash dump, and then debug the crash dump. You can use ntsd -server and connect to the server.
A: You should be able to use WinDbg to attach or launch any service even those not run by the user: http://support.microsoft.com/kb/824344 | unknown | |
d6826 | train | Issue
My guess is that you've placed a "/" path first within the Switch component:
<PrivateRoute path="/" component={MainPage} />
The redirect to "/login" works but then the Switch matches the "/" portion and tries rendering this private route again.
Your private route is also malformed, it doesn't pass on all the Route props received.
Solution
Fix the private route component to pass on all props.
export default function PrivateRoute({ component: Component, ...rest }) {
const { currentUser } = useAuth();
return (
<Route
{...rest}
render={(props) => {
return currentUser ? <Admin {...props} /> : <Redirect to="/login" />;
}}
/>
);
}
Within the Switch component path order and specificity matter, you want to order more specific paths before less specific paths. "/" is a path prefix for all paths, so you want that after other paths. Nested routes also need to be rendered within a Switch so only a single match is returned and rendered.
<BrowserRouter>
<AuthProvider>
<Switch>
<Container
className="d-flex align-items-center justify-content-center"
style={{ minHeight: "100vh" }}
>
<div className="w-100" style={{ maxWidth: "400px" }}>
<Switch>
<Route path="/signup" component={Signup} />
<Route path="/login" component={Login} /> //The component part
<Route path="/forgot-password" component={ForgotPassword} />
</Switch>
</div>
</Container>
<PrivateRoute path={["/admin", "/"]} component={MainPage} />
</Switch>
</AuthProvider>
</BrowserRouter>
Update
I'm a bit confused by your private route though, you specify the MainPage component on the component prop, but then render an Admin component within. Typically an more agnostic PrivateRoute component may look something more like:
const PrivateRoute = props => {
const { currentUser } = useAuth();
return currentUser ? <Route {...props} /> : <Redirect to="/login" />;
}
This allows you to use all the normal Route component props and doesn't limit to using just the component prop.
Usages:
* <PrivateRoute path="/admin" component={Admin} />
* <PrivateRoute path="/admin" render={props => <Admin {...props} />} />
* <PrivateRoute path="/admin">
    <Admin />
  </PrivateRoute>
A: I also had the same problem once. This solved it for me. Wrap your three routes with a Switch.
<Switch>
<Route path="/signup" component={Signup} />
<Route path="/login" component={Login} />
<Route path="/forgot-password" component={ForgotPassword} />
</Switch>
As the first private route has the root path it will always go to the route. You can use exact for the first private route. But the best way should be placing the first private route
<PrivateRoute path={["/admin", "/"]} component={MainPage} />
at the bottom. So that when there is no match it goes there only.
A: You are not passing the path on to the Route in the ProtectedRoute.
<Route
{...rest} // spread the rest of the props (including path) here
render={(props) => {
return currentUser ? <Admin {...props} /> : <Redirect to="/login" />; // it redirects when the user does not log in.
}}
></Route>
Also switch the Route of the path as @Drew Reese mentioned . | unknown | |
d6827 | train | Alexa for Apps is a currently only available to select developers as part of a developer preview program. To use this feature, you must register for the preview.
For more information, please see the documentation here:
https://developer.amazon.com/en-US/docs/alexa/alexa-for-apps/use-developer-console.html | unknown | |
d6828 | train | Well, the [10:[1],11:[2,3]] is invalid JavaScript, but if you need something approximating it, you can use [{"10":1,"11":[2,3]}].
You don't need AngularJS or any third-party library like jQuery to build a dynamic form. You can implement it using pure JavaScript through DOM manipulation.
This is a simple demo where you can see how it works.
(function() {
var data = [{
"id": 10,
"question": "Gender?",
"type": 1,
"options": [{
"id": 1,
"name": "Male"
}, {
"id": 2,
"name": "Female"
}]
}, {
"id": 11,
"question": "Witchvideogamesdoyouhave?",
"type": 2,
"options": [{
"id": 1,
"name": "PS4"
}, {
"id": 2,
"name": "XBoxOne"
}, {
"id": 3,
"name": "Wii"
}, {
"id": 4,
"name": "SuperNintendo"
}]
}];
function buildFields(data) {
var form, count, i, j, div, label, labelOpt, field, option, content, button;
form = document.getElementById("form");
count = data.length;
for (i = 0; i < count; i++) {
div = document.createElement("div"); // Creates a DIV.
div.classList.add("question"); // Adds a css class to your new DIV.
div.setAttribute("data-id", data[i].id); // Adds data-id attribute with question id in the new DIV.
div.setAttribute("data-type", data[i].type); // Adds data-type attribute with question type in the new DIV.
label = document.createElement("label"); // Adds a label to wrap the question content.
label.innerText = data[i].id + "." + data[i].question; // Adds the question id in the label with the current question.
if (data[i].type === 1) { // Check for the question type. In this case 1 is for the select tag.
field = document.createElement("select"); // Creates a select tag.
field.id = "field_" + data[i].id; // Adds an identifier to your select tag.
field.name = "field_" + data[i].id; // Adds a name to the current select tag.
if (data[i].options.length > 0) { // Checks for the options to create an option tag for every option with the current options values.
option = document.createElement("option");
option.value = "";
option.text = ".:: Please select an option ::.";
field.appendChild(option);
for (j = 0; j < data[i].options.length; j++) {
option = document.createElement("option");
option.value = data[i].options[j].id;
option.text = data[i].options[j].name;
field.appendChild(option);
}
}
div.appendChild(field);
} else {
if (data[i].options.length > 0) {
content = document.createElement("span");
for (var k = 0; k < data[i].options.length; k++) {
labelOpt = document.createElement("label");
labelOpt.innerText = data[i].options[k].name;
field = document.createElement("input");
field.type = "checkbox";
field.value = data[i].options[k].id;
labelOpt.insertBefore(field, labelOpt.firstChild); // Inserts a field before the label.
content.appendChild(labelOpt);
}
div.appendChild(content);
}
}
div.insertBefore(label, div.firstChild);
form.appendChild(div);
}
button = document.createElement("button");
button.type = "button";
button.innerText = "Send";
button.addEventListener("click", function() {
var form, dataId, dataType, values, array, obj, i, result;
form = document.getElementById("form");
values = [];
array = [];
obj = {};
for (i = 0; i < form.children.length; i++) { // Iterates for every node.
if (form.children[i].tagName === "DIV") {
dataId = parseInt(form.children[i].getAttribute("data-id"), 10);
dataType = parseInt(form.children[i].getAttribute("data-type"), 10);
if (dataType === 1) {
obj[dataId] = parseInt(form.children[i].children[1].value, 10);
} else {
array = []; // reset for each checkbox question so answers don't bleed between questions
for (var j = 0; j < form.children[i].children[1].children.length; j++) {
if (form.children[i].children[1].children[j].children[0].checked) {
array.push(parseInt(form.children[i].children[1].children[j].children[0].value, 10));
}
}
obj[dataId] = array;
}
}
}
values.push(obj);
result = document.getElementById("result");
result.innerText = JSON.stringify(values) + "\nTotal answers from question 11: " + values[0]["11"].length + ((values[0]["11"].length === 1) ? " answer." : " answers.");
});
form.appendChild(button);
}
buildFields(data);
})()
.question {
border: solid 1px #000;
border-radius: 5px;
padding: 5px;
margin: 10px;
}
.question label {
display: block;
}
#result {
background-image: linear-gradient(#0CC, #fff);
border-radius: 10px;
padding: 10px;
}
<form id="form" name="form">
</form>
<div id="result">
</div> | unknown | |
d6829 | train | Even though you didn't specify the error I can see that you never defined "channel".
If you want to delete the channel in which the reaction was added use:
reaction.message.channel.delete(); | unknown | |
d6830 | train | You don't set the title of the DetailView with self.title when it's displayed using a UINavigationController; you need to set the UINavigationItem title property in the DetailView initializer.
e.g. in the DetailView initializer:
self.navigationItem.title = @"Hello";
You're right you shouldn't need to add the detailViewController view as a subview of the current view - you should just need the pushViewController call. I'm not sure why it's not appearing though.
Obvious questions are is everything connected OK in the nib, and what does the DetailView initializer do? | unknown | |
d6831 | train | This is one of the most commonly asked types of question here. The tools to do this are in the standard library and require only a few lines of setup code. However, the result is not 100% robust and needs to be used with care. This is probably why it's not already a high-level function.
The basic problem with running an async function from a sync function is that async functions contain await expressions. Await expressions pause the execution of the current task and allow the event loop to run other tasks. Therefore async functions (coroutines) have special properties that allow them to yield control and resume again where they left off. Sync functions cannot do this. So when your sync function calls an async function and that function encounters an await expression, what is supposed to happen? The sync function has no ability to yield and resume.
A simple solution is to run the async function in another thread, with its own event loop. The calling thread blocks until the result is available. The async function behaves like a normal function, returning a value. The downside is that the async function now runs in another thread, which can cause all the well-known problems that come with threaded programming. For many cases this may not be an issue.
This can be set up as follows. This is a complete script that can be imported anywhere in an application. The test code that runs in the if __name__ == "__main__" block is almost the same as the code in the original question.
The thread is lazily initialized so it doesn't get created until it's used. It's a daemon thread so it will not keep your program from exiting.
The solution doesn't care if there is a running event loop in the main thread.
import asyncio
import threading

_loop = asyncio.new_event_loop()
_thr = threading.Thread(target=_loop.run_forever, name="Async Runner",
                        daemon=True)

# This will block the calling thread until the coroutine is finished.
# Any exception that occurs in the coroutine is raised in the caller
def run_async(coro):  # coro is a coroutine, see example
    if not _thr.is_alive():
        _thr.start()
    future = asyncio.run_coroutine_threadsafe(coro, _loop)
    return future.result()

if __name__ == "__main__":
    async def hel():
        await asyncio.sleep(0.1)
        print("Running in thread", threading.current_thread())
        return 4

    def i():
        y = run_async(hel())
        print("Answer", y, threading.current_thread())

    async def h():
        i()

    asyncio.run(h())
Output:
Running in thread <Thread(Async Runner, started daemon 28816)>
Answer 4 <_MainThread(MainThread, started 22100)>
A: In order to call an async function from a sync method, you need to use asyncio.run. However, this is meant to be the single entry point of an async program, and asyncio refuses to start it while another event loop is already running in the same thread, so you can't do that here.
That being said, this project https://github.com/erdewit/nest_asyncio patches the asyncio event loop to do that, so after using it you should be able to just call asyncio.run in your sync function. | unknown | |
d6832 | train | Using Swift 3, here's what I have. My code is meant to have (1) a select view controller, which uses the UIImagePickerController to either use the camera or select from the camera roll, then (2) segue to an edit view controller. I stripped out the code for the buttons, as I'm not using IB.
class SelectViewController: UIViewController {
// selection and pass to editor
let picker = UIImagePickerController()
var image = UIImage()
override func viewDidLoad() {
super.viewDidLoad()
picker.delegate = self
}
}
extension SelectViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
// MARK: Camera App
func openCameraApp() {
if UIImagePickerController.availableCaptureModes(for: .rear) != nil {
picker.allowsEditing = false
picker.sourceType = UIImagePickerControllerSourceType.camera
picker.cameraCaptureMode = .photo
picker.modalPresentationStyle = .fullScreen
present(picker,
animated: true,
completion: nil)
} else {
noCamera()
}
}
func noCamera(){
let alertVC = UIAlertController(
title: "No Camera",
message: "Sorry, this device has no camera",
preferredStyle: .alert)
let okAction = UIAlertAction(
title: "OK",
style:.default,
handler: nil)
alertVC.addAction(okAction)
present(
alertVC,
animated: true,
completion: nil)
}
// MARK: Photos Albums
func showImagePicker() {
picker.allowsEditing = false
picker.sourceType = .photoLibrary
present(picker,
animated: true,
completion: nil)
picker.popoverPresentationController?.sourceView = self.view
}
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
let chosenImage = info[UIImagePickerControllerOriginalImage] as! UIImage
image = chosenImage
self.performSegue(withIdentifier: "ShowEditView", sender: self)
dismiss(animated: true, completion: nil)
}
func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
dismiss(animated: false, completion: nil)
}
// MARK: Seque to EditViewController
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
if segue.identifier == "ShowEditView" {
if let vc = segue.destination as? EditViewController {
vc.image = image
}
}
}
}
If you aren't segueing to another VC, remove the .performSegue call and the code below the final MARK: notation. (The camera/selected image is in the image var.) | unknown | |
d6833 | train | I don't want to display multiple markers using latitude and longitude, Only by addresses which are stored in mysql database.
Unfortunately, the Google Maps API requires a latitude/longitude in order to add a marker to a map. You should consider using the Geocoding API to convert your addresses into coordinates first; then you can add the markers in the traditional fashion.
If this isn't possible, you can use AJAX to retrieve the latitude/longitude in a loop (like in this answer), but you might hit the rate limit as mentioned in this question's comments.
A: You are only outputting one address in your PHP code:
var address = "<?php echo $eloc; ?>,<?php echo $ecity; ?>, PK";
You need to create an array of addresses and process that array.
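A minimal sketch of that loop using the client-side geocoder (assumes a google.maps.Map instance named map already exists, and addresses stands in for the array you would emit from PHP):
var geocoder = new google.maps.Geocoder();
var addresses = ["address 1, city, PK", "address 2, city, PK"];

addresses.forEach(function (address) {
  geocoder.geocode({ address: address }, function (results, status) {
    if (status === "OK") {
      // Drop a marker at the geocoded coordinates.
      new google.maps.Marker({ map: map, position: results[0].geometry.location });
    }
  });
});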
Proof of issues with array of more than 11 addresses
See OVER_QUERY_LIMIT in Google Maps API v3: How do I pause/delay in Javascript to slow it down? (Andrew Leach's answer) for a way to deal with the OVER_QUERY_LIMIT. | unknown | |
d6834 | train | You need to modify your adapter to set the background for checked items. For example:
@Override
public View getView(int position, View convertView, ViewGroup parent) {
// creating view
if(item.isChecked()){
view.setBackgroundResource(android.R.drawable.btn_default);
}
return view;
}
A: You actually need to modify the Adapter, and for that I will refer you to this question here:
How to set an image as background image on a click?
Check the accepted answer and do it as described and you are good to go | unknown | |
d6835 | train | git revert creates a new commit undoing the changes from a given commit. It seems that the operation you described produces the desired result.
Read also: How to undo (almost) anything with Git .
A: Git is:
*
* distributed, meaning there is more than one repository; and
* built specifically to make removing commits a little difficult: it "likes" to add commits, not remove them.
If you really do want to remove some commits, you must now convince every Git that has them to give them up. Even if/when you manage this, they won't go away immediately: in general, you get at least 30 days to change your mind and bring them back again. But we won't get into that here. Let's just look at how you convince one Git to discard some commit(s), using git reset.
We'll use git reset --hard here for reasons we won't get into properly. Be aware that git reset --hard destroys uncommitted work, so before you do that, make sure you do not have anything uncommitted that you do want to save.
As Tarek Dakhran commented, you probably wanted git reset --hard HEAD^^. You'll need more than that, though—and before you blindly run any of these commands, it is crucially important that you understand two things:
* what this kind of git reset is going to do;
* what HEAD^^ means.
To get there, and see how to reset your GitHub repository too, let's do a quick recap of some Git basics.
Git stores commits
Git is all about commits. Git is not really about branches, nor is it about files. It's about commits. The commit is your fundamental unit of storage in Git.[1] What a commit contains comes in two parts.
First, each commit holds a full snapshot of all of your files. This isn't a set of changes since a previous commit! It's just the entire set of files, as a snapshot. That's the main bulk of a typical commit, and because it is, the files that are stored in a commit like this are stored in a special, read-only, Git-only, frozen form. Only Git can actually use these files, but the up-side of this is that making hundreds or thousands of snapshots doesn't take lots of space.
Meanwhile, the rest of the commit consists of metadata, or information about the commit: who made it, when, and so on. This metadata ties each new commit to some existing commit. Like the snapshot, this metadata is frozen forever once you make the commit. Nobody and nothing can change it: not you, and not even Git itself.
The unchangeable-ness of every part of a commit is what makes commits great for archival ... and quite useless for getting any new work done. This is why Git normally extracts one commit for you to work on. The compressed, dehydrated, and frozen-for-all-time files come out and get defrosted and rehydrated and turned back into ordinary files. These files aren't in Git at all, but they let you get your work done. That's part of why a --hard reset is relatively dangerous (but not the whole story). We'll ignore these usable files here and just concentrate on the commits themselves.
Every commit has a unique hash ID. The hash ID of a commit is the commit, in a sense. That hash ID is reserved for that commit, and only that commit can ever use it. In a way, that hash ID was reserved for that commit before you made the commit, and is still reserved for that commit even after you manage to remove it. Because the ID has to be unique—and reserved for all time for that commit—it has to be a truly enormous number. That's why commit hash IDs are so big and ugly, like 51ebf55b9309824346a6589c9f3b130c6f371b8f (and even these aren't big enough any more—they only count to about 10^48—and Git is moving to bigger ones).
The uniqueness of these hash IDs means that every Git that has a clone of some repository can talk to any other Git that has a clone of the same repository, and just exchange hash IDs with the other Git. If your Git has 51ebf55b9309824346a6589c9f3b130c6f371b8f and their Git doesn't, then you have a commit they don't, and they can get it from you. Now you both have 51ebf55b9309824346a6589c9f3b130c6f371b8f.
That would be all there is to it, except for the part where we said that every commit can store the hash ID(s) of some earlier commit(s). Commit 51ebf55b9309824346a6589c9f3b130c6f371b8f stores commit hash ID f97741f6e9c46a75b4322760d77322e53c4322d7. That's its parent: the commit that comes just before it.
1A commit can be broken down into smaller units: commit objects, trees and sub-trees, and blob objects. But that's not the level on which you normally deal with Git.
This is the key to Git and branches
We now see that:
* Every commit has a big ugly hash ID, unique to that commit.
* Every commit contains the hash ID of some earlier commit, which is the parent of that commit. (A commit can contain two parent hash IDs, making the commit a merge commit, but we won't get into these here.)
When anything contains a commit hash ID, we say that thing points to the commit. So commits form backwards-pointing chains.
If we use single uppercase letters to stand in for hash IDs, we can draw this kind of chain of commits like this. Imagine we have a nearly-new repository with just three commits in it. We'll call them A, B, and C, even though in reality they have some random-looking hash IDs. Commit C is the last commit, so it points back to B:
B <-C
But B has a backwards-pointing arrow (really, a stored hash ID) pointing to A:
A <-B <-C
Commit A is special. Being the very first commit ever made in this repository, it can't point back, so it doesn't.
Adding new commits to a repository just consists of writing out the new commit such that it points back to the previous last commit. For instance, to add a new commit we'll call D, we just draw it in:
A <-B <-C <-D
No part of any existing commit can change, but none does: C doesn't point to D, D points to C. So we're good here, except for one thing. These hash IDs, in a real repository, don't just increment like this. We have big ugly things like 51ebf55b9309824346a6589c9f3b130c6f371b8f and f97741f6e9c46a75b4322760d77322e53c4322d7. There's no obvious way to put them in order. How do we know which one is last?
Given any one hash ID, we can use that commit to find the previous commit. So one option would be: extract every commit in the repository, and see which one(s) don't have anything pointing back to them. We might extract C first, then A, then B, then D, or some other order, and then we look at all of them and realize that, hey, D points to C points to B points to A, but nobody points to D, so it must be the last one.
This works, but it's really slow in a big repository. It can take multiple minutes to check all that stuff.
We could write down the hash ID of the last commit (currently D), perhaps on a scrap of paper or whiteboard. But we have a computer! Why not write it down in the computer? And that's what a branch name does: it's just a file, or a line in a file, or something like that,2 where Git has scribbled down the hash ID of the last commit in the chain. Since this has the hash ID of a commit, it points to a commit:
A--B--C--D <-- master
We can get lazy now and stop drawing the arrows between commits, since they can never change, but let's keep drawing the ones from the branch names, because they do change. To add a new commit to master, we make a new commit. It gets some random-looking hash ID that we'll call E, and it points back to D. Then we'll have Git write the hash ID of new commit E into the name master, so that master points to E now:
A--B--C--D--E <-- master
Let's create a new branch name, feature, now. It too points to existing commit E:
A--B--C--D--E <-- feature, master
Now let's create a new commit F. Which branch name gets updated? To answer that last question, Git uses the special name HEAD. It stores the name of the branch into the HEAD file.[3] We can draw this by attaching the special name HEAD to one of the two branches:
A--B--C--D--E <-- feature, master (HEAD)
Here, we're using commit E because we're on branch master, as git status would say. If we now git checkout feature, we continue using commit E, but we're now on branch feature, like this:
A--B--C--D--E <-- feature (HEAD), master
Now let's create new commit F. Git will make F point back to E, the current commit, and will then make the current branch name point to new commit F:
A--B--C--D--E <-- master
\
F <-- feature (HEAD)
We've just seen how branches grow, within one repository.
[2] Git currently uses both methods to store branch-to-hash-ID information. The information may be in its own file, or may be a line in some shared file.
3This HEAD file is currently always a file. It's a very special file: if your computer crashes, and when it recovers, it removes the file HEAD, Git will stop believing that your repository is a repository. As the HEAD file tends to be pretty active, this is actually a bit common—not that the computer crashes often, but that when it does, you have to re-create the file to make the repository function.
Git is distributed
When we use Git, we hardly ever have just one repository. In your case, for instance, you have at least two repositories:
* yours, on your laptop (or whatever computer); and
* your GitHub repository, stored over in GitHub's computers (that part of "the cloud").
Each of these repositories has its own branch names. This is what is about to cause all our problems.
To make your laptop repository, you probably ran:
git clone <url>
This creates a new, empty repository on your laptop. In this empty repository, it creates a remote, using the name origin. The remote stores the URL you put in on the command line (or entered into whatever GUI you used), so that from now on, instead of typing in https://github.com/... you can just type in origin, which is easier and shorter. Then your Git calls up that other Git, at the URL.
The other Git, at this point, lists out its branch names and the hash IDs that these branch names select. If they have master and develop, for instance, your Git gets to see that. Whatever hash IDs those names select, your Git gets to see those, too.
Your Git now checks: do I have this hash ID? Of course, your Git has nothing yet, so the answer this time is no. (In a future connection, maybe the answer will be yes.) Your Git now asks their Git to send that commit. Their Git is required to offer your Git the parent of that commit, by its hash ID. Your Git checks: do I have this commit? This goes on until they've sent the root commit—the commit A that has no parent—or they get to a commit you already have.
In this way, your Git and their Git agree on what you have already and what you need from them. They then package up everything you need—commits and all their snapshotted files, minus any files you might already have in commits you already have—and send all that over. Since this is your very first fetch operation, you have nothing at all and they send everything over, but the next time you get stuff from them, they'll only send anything new.
Let's suppose they have:
A--B--C--D--E <-- master (HEAD)
\
F <-- feature
(your Git does get to see their HEAD). Your Git will ask for and receive every commit, and know their layout. But your Git renames their branches: their master becomes your origin/master, and their feature becomes your origin/feature. These are your remote-tracking names. We'll see why your Git does this in a moment.
Your Git is now done talking with their Git, so your Git disconnects from them and creates or updates your remote-tracking names. In your repository, you end up with:
A--B--C--D--E <-- origin/master
\
F <-- origin/feature
Note that you don't have a master yet!
As the last step of git clone, your Git creates a branch name for you. You can choose which branch it should create—e.g., git clone -b feature means create feature—but by default, it looks at their HEAD and uses that name. Since their HEAD selected their master, your Git creates your master branch:
A--B--C--D--E <-- master (HEAD), origin/master
\
F <-- origin/feature
Note that the only easy way you have of finding commit F is by your remote-tracking name origin/feature. You have two easy ways to find commit E: master and origin/master.
Now, let's make a new commit on master in the usual way. This new commit will have, as its parent, commit E, and we'll call the new commit G:
G <-- master (HEAD)
/
A--B--C--D--E <-- origin/master
\
F <-- origin/feature
Note that your master, in your laptop Git, has now diverged from their master, over on GitHub, that your laptop Git remembers as origin/master.
Your master is your branch. You get to do whatever you want with it! Their master is in their repository, which they control.
fetch vs push
Let's make a couple more commits in our master now:
G--H--I <-- master (HEAD)
/
A--B--C--D--E <-- origin/master
\
F <-- origin/feature
Note that, so far, we have not sent any of these commits to the other Git. We have commits G-H-I on our master, on the laptop, or wherever our Git is, but they don't have these at all.
We can now run git push origin master. Our Git will call up their Git, like we did before, but this time, instead of getting stuff from them we will give stuff to them, starting with commit I—the commit to which our master points. We ask them: Do you have commit I? They don't, so we ask them if they have H, and G, and then E. They do have E so we'll give them the G-H-I chain of commits.
Once we've given them these three commits (and the files that they need and don't have—obviously they have E so they have its snapshot, so our Git can figure out a minimal set of files to send), our Git asks them: Please, if it's OK, set your master to point to I.
That is, we ask them to move their master branch, from the master name we used in our git push origin master. They don't have a mat1/master to keep track of your repository. They just have their branches.
When we ask them to move their master like this, if they do, they'll end up with:
A--B--C--D--E--G--H--I <-- master (HEAD)
\
F <-- feature
They won't lose any commits at all, so they'll accept our polite request. Their name master now points to commit I, just like our name master, so our Git adjusts our remote-tracking name:
G--H--I <-- master (HEAD), origin/master
/
A--B--C--D--E
\
F <-- origin/feature
(Exercise: why is our G on a line above the line with E? Why didn't we draw it like this for their repository?)
git reset makes removing commits locally easy
Now, let's suppose commits H and I are bad and you'd like to get rid of them. What we can do is:
git checkout master # if needed
git reset --hard <hash-of-G>
What this will do, in our repository, we can draw like this:
H--I <-- origin/master
/
A--B--C--D--E--G <-- master (HEAD)
\
F <-- origin/feature
Note that commits H and I are not gone. They are still there, in our repository, findable by starting with the name origin/master, which points to commit I. From there, commit I points back to commit H, which points back to commit G.
Our master, however, points to commit G. If we start from here, we'll see commit G, then commit E, then D, and so on. We won't see commits H-I at all.
What git reset does is let us point the current branch name, the one to which HEAD is attached, to any commit anywhere in our entire repository. We can just select that commit by its hash ID, which we can see by running git log. Cut and paste the hash ID from git log output into a git reset --hard command and you can move the current branch name to any commit.
If we do this, and then decide that, gosh, we want commits H-I after all, we can just git reset --hard again, with the hash ID of commit I this time, and make master point to I again. So aside from wrecking any uncommitted work, git reset --hard is sort of safe. (Of course, wrecking uncommitted work is a pretty big aside!)
We still have to get their Git to change: we need more force
Moving our master back to G only gets us updated. The other Git, over on GitHub ... they still have their master pointing to commit I.
If we run git push origin master now, we'll offer then commit G. They already have it, so they'll say so. Then we'll ask them, politely: Please, if it's OK, make your master point to commit G.
They will say no! The reason they'll say no is simple: if they did that, they'd have:
H--I [abandoned]
/
A--B--C--D--E--G <-- master (HEAD)
\
F <-- feature
Their master can no longer find commit I. In fact, they have no way to find commit I. Commit I becomes lost, and so does commit H because commit I was how they found commit H.
To convince them to do this anyway, we need a more forceful command, not just a polite request. To get that forceful command, we can use git push --force or git push --force-with-lease. This changes the last step of our git push. Instead of asking please, if it's OK, we send a command: Set your master! Or, with --force-with-lease, we send the most complicated one: I think your master points to commit I. If so, make it point to G instead! Then tell me whether I was right.
The --force-with-lease acts as a sort of safety check. If the Git over on GitHub listens not just to you, but to other Gits as well, maybe some other Git had them set their master to a commit J that's based on I. If you're the only one who sends commits to your GitHub repository, that obviously hasn't happened, and you can just use the plain --force type of push.
Again: --force allows you to tell another Git: move a name in some arbitrary way, that doesn't necessarily just add commits. A regular git push adds commits to the end of a branch, making the branch name point to a commit "further to the right" if you draw your graph-of-commits the way I have been above. A git push that removes commits makes the branch name point to an earlier commit, losing some later commit, or—in the most complicated cases—does both, removes some and adds other commits. (We haven't drawn this case and we'll leave that for other StackOverflow answers ... which already exist, actually, regarding rebasing.)
Recap
* Branch names find the last commit in a branch (by definition).
* Making new commits extends a branch, by having the new commit point back to the previous tip, and writing the new commit's big ugly random-looking hash ID into the branch name.
* git reset allows you to move branch names around however you like. This can "remove" commits from the branch (they still exist in the repository, and maybe there's another way to find them).
* git push asks some other Git to move its branch names. It defaults to only allowing branch-name-moves that *add* commits.
Hence, since you have pushed the commits you want to "remove", you'll need to git reset your own Git, but then git push --force or git push --force-with-lease to get the GitHub Git to do the same thing.
What HEAD^ etc are about
When we were looking for ways to "remove" commits with git reset, I suggested:
* Run git log (and here I'll suggest using git log --decorate --oneline --graph, which is so useful that you should probably make an alias, git dog, to do it: D.O.G. stands for Decorate Oneline Graph).
* Cut-and-paste commit hash IDs.
This works, and is a perfectly fine way to deal with things. It's a bit clumsy sometimes, though. What if there were a way to say: starting from the current commit, count back two commits (or any other number)?
That is, we have some chain of commits:
...--G--H--I--...
We can pick out one commit in the chain, such as H, by its hash ID, or any other way. Maybe it has a name pointing to it:
...--G--H <-- branchname
\
I--J <-- master (HEAD)
Using the name branchname will, in general, select commit H. If Git needs a commit hash ID, as in git reset, the name branchname will get the hash ID of commit H.
Using the name HEAD tells Git: look at the branch to which HEAD is attached, and use that name. Since HEAD is attached to master, and master points to J, the name HEAD means "commit J", in places where Git needs a commit hash ID.
This means we could run:
git reset --hard branchname
right now and get:
...--G--H <-- branchname, master (HEAD)
\
I--J [abandoned]
The name branchname selects commit H, and git reset moves the branch name to which HEAD is attached—master—to the commit we select.
Note, by the way, that:
git reset --hard HEAD
means: look at the name HEAD to figure out which commit we're using now, then do a hard reset that moves the current branch name to that commit. But since that's the commit we're using now, this "moves" the name to point to the same commit it already points to. In other words, move to where you're standing already ... ok, don't move after all. So this git reset --hard lets us throw out uncommitted work, if that's what we need to do.
In any case, adding a single caret ^ character after any branch name or hash ID means select that commit's parent. So since branchname means commit H, branchname^ means commit G.
If we have:
...--F--G--H--I <-- master (HEAD)
then master^ and HEAD^ both mean commit H. Adding another ^ tells Git to do that again: master^^ or HEAD^^ means commit G. Adding a third ^ selects commit F, and so on.
If you want to count five commits back, you can write master^^^^^ or HEAD^^^^^, but it's easier to use master~5 or HEAD~5. Once you get past "two steps back", the tilde ~ notation is shorter. Note that you can always use the tilde notation anyway: master~1 and master^ both count one commit back. You can omit the 1 here too, and write master~ or HEAD~.
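For instance, to "remove" the last two commits of the current branch without hunting for any hash IDs, you can combine the two ideas:
git reset --hard HEAD~2   # move the current branch name two commits back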
(There is a difference between ^2 and ~2. For more about this, see the gitrevisions documentation.) | unknown | |
d6836 | train | Use & (or) | operators in your filter query and enclose each statement with brackets ().
df.filter((col("dim1") == '101') | (col("dim2").isin(['302','402']))).show()
#+----+----+-------+------+------+
#|dim1|dim2| byvar|value1|value2|
#+----+----+-------+------+------+
#| 101| 201|MTD0001| 1| 10|
#| 301| 302|MTD0003| 3| 13|
#| 401| 402|MTD0004| 5| 19|
#+----+----+-------+------+------+
df.filter((col("dim1") == '101') & (col("dim2").isin(['302','402']))).show()
#+----+----+-----+------+------+
#|dim1|dim2|byvar|value1|value2|
#+----+----+-----+------+------+
#+----+----+-----+------+------+
Using expr:
Here we need to convert the list to a tuple so that the SQL-style in expression can be built from value_list
#using filter_str
value_list = ['302', '402']
filter_str = "dim1 = '101' or dim2 in {0}".format(tuple(value_list))
filter_str
#"dim1 = '101' or dim2 in ('302', '402')"
df.filter(expr(filter_str)).show()
#+----+----+-------+------+------+
#|dim1|dim2| byvar|value1|value2|
#+----+----+-------+------+------+
#| 101| 201|MTD0001| 1| 10|
#| 301| 302|MTD0003| 3| 13|
#| 401| 402|MTD0004| 5| 19|
#+----+----+-------+------+------+
filter_str = "dim1 = '101' and dim2 in {0}".format(tuple(value_list))
df.filter(expr(filter_str)).show()
#+----+----+-----+------+------+
#|dim1|dim2|byvar|value1|value2|
#+----+----+-----+------+------+
#+----+----+-----+------+------+ | unknown | |
d6837 | train | The sphereInsideFrustum function was part of the game engine, which is no longer part of Blender. If you are looking for a real-time solution, you will need to look at alternative game engines.
If you search blender.stackexchange for pixel+scene you will find several answers about associating geometry with the final rendered image. | unknown | |
d6838 | train | Make sure to add add_theme_support( 'title-tag' ); in functions.php and remove any <title></title> tags from header.php | unknown | |
d6839 | train | If you look at the default template for the Expander, you can see why none of your property setters are working:
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="20" />
<ColumnDefinition Width="*" />
</Grid.ColumnDefinitions>
<ToggleButton IsChecked="{Binding Path=IsExpanded,Mode=TwoWay,
RelativeSource={RelativeSource TemplatedParent}}"
OverridesDefaultStyle="True"
Template="{StaticResource ExpanderToggleButton}"
Background="{StaticResource NormalBrush}" />
<ContentPresenter Grid.Column="1"
Margin="4"
ContentSource="Header"
RecognizesAccessKey="True" />
</Grid>
The ToggleButton's VerticalAlignment is what you are after, and there are no setters for it.
It seems to me that there is no way to change this alignment property through the Style. You must provide a new template. | unknown | |
d6840 | train | After quite a bit of experimentation, I concluded that there is no way to directly handle the JavaScript exception from Silverlight. In order to be able to process the exception, the JavaScript code needs to be changed slightly.
Instead of throwing the error, I return it:
function MyMethod()
{
try
{
// Possible exception here
}
catch (ex)
{
return new Error(ex);
}
}
Then on the Silverlight side, I use a wrapper around ScriptObject to turn the return value into an exception again. The key here is the TryInvokeMember method:
public class ScriptObjectWrapper : DynamicObject
{
private ScriptObject _scriptObject;
public ScriptObjectWrapper(ScriptObject scriptObject)
{
_scriptObject = scriptObject;
}
public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
{
result = _scriptObject.Invoke(binder.Name, args);
ScriptObject s = result as ScriptObject;
if (s != null)
{
// The JavaScript Error object defines name and message properties.
string name = s.GetProperty("name") as string;
string message = s.GetProperty("message") as string;
if (name != null && message != null && name.EndsWith("Error"))
{
// Customize this to throw a more specific exception type
// that also exposed the name property.
throw new Exception(message);
}
}
return true;
}
public override bool TrySetMember(SetMemberBinder binder, object value)
{
try
{
_scriptObject.SetProperty(binder.Name, value);
return true;
}
catch
{
return false;
}
}
public override bool TryGetMember(GetMemberBinder binder, out object result)
{
try
{
result = _scriptObject.GetProperty(binder.Name);
return true;
}
catch
{
result = null;
return false;
}
}
}
Potentially you could improve this wrapper so it actually injects the JavaScript try-catch mechanism transparently, however in my case I had direct control over the JavaScript source code, so there was no need to do this.
Instead of using the built in JavaScript Error object, it's possible to use your custom objects, as long as the name property ends with Error.
To use the wrapper, the original code would change to:
public MyWrapper()
{
_myJSObject = new ScriptObjectWrapper(
HtmlPage.Window.CreateInstance("MyJSObject"));
} | unknown | |
d6841 | train | You may take a look at this beautiful blog post about reading/writing the registry.
Let me draw your attention to this passage of the code:
/**
* Write a value in a given key/value name
* @param hkey
* @param key
* @param valueName
* @param value
* @throws IllegalArgumentException
* @throws IllegalAccessException
* @throws InvocationTargetException
*/
public static void writeStringValue
(int hkey, String key, String valueName, String value)
throws IllegalArgumentException, IllegalAccessException,
InvocationTargetException
{
if (hkey == HKEY_LOCAL_MACHINE) {
writeStringValue(systemRoot, hkey, key, valueName, value);
}
else if (hkey == HKEY_CURRENT_USER) {
writeStringValue(userRoot, hkey, key, valueName, value);
}
else {
throw new IllegalArgumentException("hkey=" + hkey);
}
}
I think this solution manages read/write operations against the registry very elegantly.
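For instance, a call site might look like this (a sketch: the WinRegistry class name is an assumption based on that blog post's example, and the key path is made up for illustration):
try {
    WinRegistry.writeStringValue(WinRegistry.HKEY_CURRENT_USER,
            "Software\\MyCompany\\MyApp",  // hypothetical key path
            "greeting", "hello");
} catch (IllegalAccessException | InvocationTargetException e) {
    e.printStackTrace();  // reflection-based access can fail at runtime
}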
A:
How do I get it to follow the absolute path
You can use reflection to access the private methods in the java.util.prefs.Preferences class. See this answer for details: https://stackoverflow.com/a/6163701/6256127
However, I don't recommend using this approach as it may break at any time.
why is it putting extra slashes in?
Have a look at this answer: https://stackoverflow.com/a/23632932/6256127
Registry-Keys are case-preserving, but case-insensitive. For example if you have a key "Rbi" you cant make another key named "RBi". The case is saved but ignored. Sun's solution for case-sensitivity was to add slashes to the key. | unknown | |
d6842 | train | I don't think that is possible.
On Eureka the microservices get registered with their spring application name. So if you want to achieve what you are saying then you will have to create a separate microservice for each of your functionality - like addition, subtraction etc, get them registered on Eureka and then use them.
A: It is not possible. Posted the same question as an issue in github. Seems like it is not possible. Refer link | unknown | |
d6843 | train | You can use GROUP_CONCAT() to aggregate and count the distinct Column2 values:
SELECT
Column1,
GROUP_CONCAT(DISTINCT Column2),
COUNT(DISTINCT Column2)
FROM yourTable
GROUP BY Column1
Output: see the demo here:
Rextester
A: Try this.
select Column1 , group_concat(distinct column2) ,count(distinct column2)
from your_table
group by column1 | unknown | |
d6844 | train | Since it seems that your problem is only the derivative, you can get rid of it by means of partial integration (integration by parts):
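For reference, the partial-integration identity is the following sketch, where f and g are generic placeholders rather than your actual integrand:
\int_a^b f(x)\,g'(x)\,dx = \Big[ f(x)\,g(x) \Big]_a^b - \int_a^b f'(x)\,g(x)\,dx
so the derivative is shifted from one factor onto the other.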
Edit: this solution is not applicable for a lower integration bound of 0.
d6845 | train | I found the answer by experimenting and it is trivial.
def plot_sympy():
    import io
    from sympy import symbols
    from sympy.plotting import plot

    x = symbols('x')
    p1 = plot(x*x, show=False)   # show=False so no window pops up server-side
    p2 = plot(x, show=False)
    p1.append(p2[0])
    s = io.BytesIO()
    p1.save(s)                   # sympy renders via matplotlib and writes PNG bytes into the buffer
    return s.getvalue()
def myplot4():
    response.headers['Content-Type'] = 'image/png'
    return plot_sympy() | unknown |
d6846 | train | You can use the have_attributes assertion.
match_attributes = first_user.attributes.except("id", "created_at", "updated_at") # attributes has String keys
expect(second_user).to have_attributes(match_attributes)
A: You could go about it like so:
it 'creates a duplicate' do
  record = Record.find(1)
  new_record = record.dup
  new_record.save

  match_attributes = new_record.attributes.except("id", "created_at", "updated_at")
  expect(Record.where(id: [record.id, new_record.id]).where(match_attributes).count).to eq 2
end
This should quantify the duplication and persistence of both records in one shot. | unknown | |
d6847 | train | The problem is that Guard adds focus_on_failed: true by default. In the Guardfile, you have to add focus_on_failed: false. Here's what it will look like:
guard :rspec, notification: true, all_on_start: true, focus_on_failed: false, cmd: 'spring rspec' do
Solution : https://github.com/guard/guard/issues/511 | unknown | |
d6848 | train | Yes, working with "strings" in C was rather verbose, wasn't it!
Fortunately, C++ is not so limited:
const char* in = "tag1=123456789!!!tag2=111222333!!!10=240";
std::string num1{in+5, in+15};
If you can't use a std::string, or don't want to, then simply wrap the logic you have described into a function, and call that function.
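For instance, such a helper might look like this (a sketch: copy_field is a made-up name, and the caller must supply a dst buffer with room for len + 1 bytes):
#include <cstring>

void copy_field(const char* src, std::size_t offset, std::size_t len, char* dst)
{
    std::memcpy(dst, src + offset, len);  // copy the raw bytes of the field
    dst[len] = '\0';                      // null-terminate so dst is a valid C-string
}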
As below, if not explicitly appending '\0' to num1, the char array may have other characters in the later portion.
Not quite correct. There is no "later portion". The "later portion" you thought you observed was other parts of memory that you had no right to view. By failing to null-terminate your would-be C-string, your program has undefined behaviour and the computer could have done anything, like travelling back in time and murdering my great-great-grandmother. Thanks a lot, pal!
It's worth noting, then, that because it's C library functions doing that out-of-bounds memory access, if you hadn't used those library functions in that way then you didn't need to null-terminate num1. Only if you want to treat it as a C-style string later is that required. If you just consider it to be an array of 10 bytes, then everything is still fine. | unknown | |
d6849 | train | Yes. Go to Preferences -> Keyboard.
There you will find "Command Window keybindings" and "Editor/Debugger" keybindings.
These are most likely set to "Emacs" style for you -- you should change them to "Windows" style to copy and paste with Ctrl-C and Ctrl-V, respectively.
Source: http://blogs.mathworks.com/community/2007/05/11/setting-up-keybindings-for-the-command-window-and-editor/
A: In Matlab version R2020a, you can change the keyboard shortcuts by following these steps:
*
*Go to Home > Preferences > Keyboard > Shortcuts
*Change the Active Settings to Windows Default Set
*Apply the changes by clicking on Apply and then Ok
It will look like this screenshot.
Now you can use ctrl-C / ctrl-V to copy / paste as usual. | unknown | |
d6850 | train | OK, I solved it...
This code:
using <span class="skimlinks-unlinked">System.Web</span>;
using <span class="skimlinks-unlinked">System.Web.Mvc</span>;
namespace <span class="skimlinks-unlinked">AdminRole.HtmlHelpers</span>
Rewrite to:
using System.Web;
using System.Web.Mvc;
namespace AdminRole.HtmlHelpers
Now it works :) | unknown | |
d6851 | train | Wow... I just figured it out.
I was trying to add my subgroups to the parent via just assigning properties, but I should have been using FormGroup.addControl(new <FormGroup>).
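A minimal sketch of that fix (the control names here are made up for illustration):
import { FormControl, FormGroup } from '@angular/forms';

const parent = new FormGroup({});
parent.addControl('address', new FormGroup({
  street: new FormControl(''),
}));  // the subgroup is now registered, so value changes and validation include it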
Works perfectly now. | unknown | |
d6852 | train | Assuming you want to remove the "!do" then you can do the following:
set args "!do dance"
regsub -all {(!do)} $args "" output
puts $output
A: I'm not sure why you're using regexp here, and it seems like you're using eggdrop or something. You can easily use:
set prefix [lindex $args 0]
set command [lindex $args 1]
Though you should be careful with $args. It's usually used in procs to mean all the other arguments passed on to the proc aside from the already defined arguments.
% puts $prefix
!do
% puts $command
dance | unknown | |
d6853 | train | The seed is definitely missing from your model definition. A detailed documentation can be found here: https://keras.io/initializers/.
In essence your layers use random variables as their basis for their parameters. Therefore you get different outputs every time.
One example:
model.add(Dense(1, activation='linear',
                kernel_initializer=keras.initializers.RandomNormal(seed=1337),
                bias_initializer=keras.initializers.Constant(value=0.1)))
Keras themselves have a section about getting reproducible results in their FAQ (https://keras.io/getting-started/faq/#how-can-i-obtain-reproducible-results-using-keras-during-development). They have the following code snippet to produce reproducible results:
import numpy as np
import tensorflow as tf
import random as rn
# The below is necessary in Python 3.2.3 onwards to
# have reproducible behavior for certain hash-based operations.
# See these references for further details:
# https://docs.python.org/3.4/using/cmdline.html#envvar-PYTHONHASHSEED
# https://github.com/fchollet/keras/issues/2280#issuecomment-306959926
import os
os.environ['PYTHONHASHSEED'] = '0'
# The below is necessary for starting Numpy generated random numbers
# in a well-defined initial state.
np.random.seed(42)
# The below is necessary for starting core Python generated random numbers
# in a well-defined state.
rn.seed(12345)
# Force TensorFlow to use single thread.
# Multiple threads are a potential source of
# non-reproducible results.
# For further details, see: https://stackoverflow.com/questions/42022950/which-seeds-have-to-be-set-where-to-realize-100-reproducibility-of-training-res
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
from keras import backend as K
# The below tf.set_random_seed() will make random number generation
# in the TensorFlow backend have a well-defined initial state.
# For further details, see: https://www.tensorflow.org/api_docs/python/tf/set_random_seed
tf.set_random_seed(1234)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)
A: Keras + Tensorflow.
Step 1, disable GPU.
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = ""
Step 2, seed those libraries which are included in your code, say "tensorflow, numpy, random".
import tensorflow as tf
import numpy as np
import random as rn
sd = 1 # Here sd means seed.
np.random.seed(sd)
rn.seed(sd)
os.environ['PYTHONHASHSEED']=str(sd)
from keras import backend as K
config = tf.ConfigProto(intra_op_parallelism_threads=1,inter_op_parallelism_threads=1)
tf.set_random_seed(sd)
sess = tf.Session(graph=tf.get_default_graph(), config=config)
K.set_session(sess)
Make sure these two pieces of code are included at the start of your code, then the result will be reproducible.
A: I resolved this issue by adding os.environ['TF_DETERMINISTIC_OPS'] = '1'
Here an example:
import os
os.environ['TF_DETERMINISTIC_OPS'] = '1'
#rest of the code
#TensorFlow version 2.3.1 | unknown | |
d6854 | train | According to the specification, an access to a texel which doesn't exist has no effect.
See OpenGL 4.6 API Core Profile Specification - 8.26. TEXTURE IMAGE LOADS AND STORES; page 193:
If the individual texel identified for an image load, store, or atomic operation doesn’t exist, the access is treated as invalid. Invalid image loads will return zero.
Invalid image stores will have no effect. Invalid image atomics will not update any texture bound to the image unit and will return zero. An access is considered invalid if:
[...]
* the selected texel doesn't exist | unknown |
d6855 | train | There aren't conditional operators in jQuery selectors; you just need to separate the selectors with a comma.
$(oRoot).find('step person[color=red] , step person[color=black]');
More on jQuery selectors http://api.jquery.com/category/selectors/
You can easily apply an attribute using jQuery's .attr():
$('step person', oRoot).attr('foo', 'bar');
More on jQuery attr: http://api.jquery.com/attr/ | unknown | |
d6856 | train | I realize that the first convolutional layers are essential for feature extraction. I, however, have additional input parameters which could help in classification. The idea is to append additional nodes to the first fully connected layer so that I may use a feed-forward neural network for the eventual classification. Is this in any way possible with the keras API?
Yes, it's possible. Please refer to the sample code below, which adds an intermediate input to the same network using the Keras Functional API:
from keras.layers import Dense, Input, Conv2D, MaxPooling2D, Flatten, concatenate
from keras.models import Model
# feature extraction from gray scale image
inputs = Input(shape = (28,28,1))
conv1 = Conv2D(16, (3,3), activation = 'relu', padding = "SAME")(inputs)
pool1 = MaxPooling2D(pool_size = (2,2), strides = 2)(conv1)
conv2 = Conv2D(32, (3,3), activation = 'relu', padding = "SAME")(pool1)
pool2 = MaxPooling2D(pool_size = (2,2), strides = 2)(conv2)
flat_1 = Flatten()(pool2)
# feature extraction from RGB image
inputs_2 = Input(shape = (28,28,3))
conv1_2 = Conv2D(16, (3,3), activation = 'relu', padding = "SAME")(inputs_2)
pool1_2 = MaxPooling2D(pool_size = (2,2), strides = 2)(conv1_2)
conv2_2 = Conv2D(32, (3,3), activation = 'relu', padding = "SAME")(pool1_2)
pool2_2 = MaxPooling2D(pool_size = (2,2), strides = 2)(conv2_2)
flat_2 = Flatten()(pool2_2)
# concatenate both feature layers and define output layer after some dense layers
concat = concatenate([flat_1,flat_2])
dense1 = Dense(512, activation = 'relu')(concat)
dense2 = Dense(128, activation = 'relu')(dense1)
dense3 = Dense(32, activation = 'relu')(dense2)
output = Dense(10, activation = 'softmax')(dense3)
# create model with two inputs
model = Model([inputs,inputs_2], output)
I would also like to know if there is a way of pulling the output of
intermediate layer through the Sequential model architecture.
Yes. For any operation which is to be carried out on the layers of a Keras model, first we need to access the list of keras.layers.Layer objects which a model holds:
model_layers = model.layers
Each Layer object in this list has its own input and output tensors (if you're using the TensorFlow backend):
input_tensor = model.layers[ layer_index ].input
output_tensor = model.layers[ layer_index ].output
Below I am showing how to pull the output of an intermediate layer from a Sequential network:
model = Sequential()
model.add(Conv2D(8, [3,3], input_shape=(28,28,1), activation='relu', padding ='same'))
model.add(MaxPool2D([2,2], 2, padding='valid'))
model.add(Conv2D(16, [3,3], activation='relu', padding='same'))
model.add(MaxPool2D([2,2], 2, padding='valid'))
model.add(Flatten())
model.add(Dense(256))
model.add(Dense(64))
model.add(Dense(1, activation='sigmoid'))
model.summary()
To pull output of 4th Layer,
output_tensor = model.layers[3].output
print(output_tensor)
Output:
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 28, 28, 8) 80
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 14, 14, 8) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 14, 14, 16) 1168
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 7, 7, 16) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 784) 0
_________________________________________________________________
dense_1 (Dense) (None, 256) 200960
_________________________________________________________________
dense_2 (Dense) (None, 64) 16448
_________________________________________________________________
dense_3 (Dense) (None, 1) 65
=================================================================
Total params: 218,721
Trainable params: 218,721
Non-trainable params: 0
_________________________________________________________________
Tensor("max_pooling2d_2/MaxPool:0", shape=(None, 7, 7, 16), dtype=float32) | unknown | |
d6857 | train | According to this benchmark and others, Bottle performs significantly faster than some of its peers, which is worth taking into account when comparing web frameworks' performance:
1. wheezy.web........52,245 req/sec or 19 μs/req (48x)
2. Falcon............30,195 req/sec or 33 μs/req (28x)
3. Bottle............11,977 req/sec or 83 μs/req (11x)
4. webpy..............6,950 req/sec or 144 μs/req (6x)
5. Werkzeug...........6,837 req/sec or 146 μs/req (6x)
6. pyramid............4,035 req/sec or 248 μs/req (4x)
7. Flask..............3,300 req/sec or 303 μs/req (3x)
8. Pecan..............2,881 req/sec or 347 μs/req (3x)
9. django.............1,882 req/sec or 531 μs/req (2x)
10. CherryPy..........1,090 req/sec or 917 μs/req (1x)
But bear in mind that your web framework may not be your bottleneck.
A: From the creator of web2py, Massimo Di Pierro:
If you have simple app with lots of models, bottle+gluino may be
faster than web2py because models are executed only once and not at
every request.
Reference:
groups.google.com/forum/#!topic/web2py/4gB9mVPKmho | unknown | |
d6858 | train | * Bar<Foo>::mul() isn't a virtual function, so it cannot be overridden.
* Yes, if you don't use a template member function then it does not get instantiated and you don't get any errors that would result from instantiating it.
You can hide Bar<Foo>::mul() by providing a function of the same signature in a subclass, and because of 2, Bar<Foo>::mul() won't be instantiated. However this is probably not a good practice. Readers are likely to get confused about the hiding vs. overriding, and there's not much benefit to doing this over simply using a different function name and never using mul(), or providing an explicit specialization of Bar for Foo.
A:
* sure
* sure
Templates are really a kind of smart preprocessor, they're not compiled. If you don't use something, you can write complete (syntactically correct) rubbish, i.e you may inherit from
template <class T>
struct Bar
{
T x, y;
T add() const { return x + y; }
T mul() const { return x.who cares what-s in here; }
};
P.S. since your + operator is used in a const function, it should be declared as const too.
EDIT: OK, not all compilers support this, here's one that compiles with gcc:
template <class T>
struct Bar
{
T x, y;
T add() const { return x + y; }
T mul() const { T::was_brillig & T::he::slith(y.toves).WTF?!0:-0; }
}; | unknown | |
d6859 | train | This is my working solution
if (userInfo["aps"] != nil) {
if let notification = userInfo["aps"] as? NSDictionary,
let alert = notification["alert"] as? String {
let alert1 = UIAlertController(title: "Notification", message: alert, preferredStyle: .alert)
alert1.addAction(UIAlertAction(title: "Ok", style: .default, handler: { (action) -> Void in
}))
self.window?.rootViewController?.present(alert1, animated: true, completion: nil)
completionHandler(.newData)
}
} | unknown | |
d6860 | train | Add an id attribute to the form tag so that $('#trade') matches,
or change
var frm = $('#trade');
to
var frm = $('form[name="trade"]') | unknown | |
d6861 | train | This issue is known about, and still open, in Jenkins. See https://issues.jenkins-ci.org/browse/JENKINS-40564 | unknown | |
d6862 | train | You should use this:
TextField("name").fielddata(true).analyzer("ngram_analyzer")
You also need to make sure to properly create the ngram_analyzer in your index settings. | unknown | |
d6863 | train | You should always validate any user input of course, but you could in this case simply check that the current user's username matches the name being used as the filename (assuming you authenticate the users prior to allowing them to upload), and ensure they have no means to specify the filename via anything they input.
In short: authenticate, and name your files using known user info (server-side data only). | unknown |
d6864 | train | A Q object [Django-doc] can take a 2-tuple with as first item a string that specifies the "key" and as second item the "value", so you can filter with:
from django.db.models import Q
x = 'person_id'
y = 14
Membership.objects.filter(Q((x, y)))
to obtain the Memberships with person_id=14.
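If you do not actually need a Q object, unpacking a dict of keyword arguments produces the same query:
Membership.objects.filter(**{x: y})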
It however does not make much sense to use this in get_queryset in a class-based view, because that function has to respect its code contract, and adding extra parameters will not work: it simply expects a self, and an optional queryset. You can add extra optional parameters, but when the view calls the get_queryset it will not use these parameters, or at least not if you do not alter the boilerplate logic. | unknown | |
d6865 | train | What about changing
[store saveEvent:event span:EKSpanThisEvent commit:YES error:&err];
to
if (![store saveEvent:event span:EKSpanThisEvent commit:YES error:&err]) {
NSLog([NSString stringWithFormat:@"Error saving event: %@", error.localizedDescription]);
} else {
NSLog(@"Successfully saved event.");
}
You could as well do something different than writing to NSLog, like use an UIAlertView or such.
Also you may have a look a the Return Value section of the saveEvent:span:commit:error Apple documentation.
It says:
Return Value
If successful, YES; otherwise, NO. Also returns NO if event does not need to be saved because it has not been modified. | unknown | |
d6866 | train | This does not work; the And Operator cannot be used this way:
If i > 10 Then k = 54 And p = 70
If i < 11 Then k = 56 And p = 66
Change it to:
If i > 10 Then
k = 54
p = 70
Else
k = 56
p = 66
End If
A: I don't know what's in the cell that you're referencing, but based on what I can see here, I'm guessing it contains an integer. If so, then you need to access the value of the cell:
d = Worksheets(A(i)).Cells(B(j), l).Value2 + d
You'll need to do the same with your last couple of lines
If d = 100 Then Worksheets(Worksheets.Count).Cells(i, j + 1).Value2 = "Fine"
If d <> 100 Then Worksheets(Worksheets.Count).Cells(i, j + 1).Value2 = "Error" | unknown | |
d6867 | train | You can use selectors, that's correct:
var first_name = $('#'+parentForm+' input[name=first_name]').val();
alert (first_name);
Another way:
var first_name = $('input[name=first_name]', '#'+parentForm).val();
alert (first_name); | unknown | |
d6868 | train | I have a solution: the problem was with pagination and a missing authentication function. With the pagination extension posted below, everything works like a charm.
@BrandCampaignsPagination = new Meteor.Pagination Campaigns,
availableSettings:
filters: true
sort: true
perPage: 10
templateName: 'campaignPaginate'
itemTemplate: 'singleCampaign'
navShowFirst: false
navShowLast: false
maxSubscriptions: 100
divWrapper: false
auth: (skip,subscription) ->
alwaysFilters =
userId: subscription.userId
userPagination = BrandCampaignsPagination.userSettings[subscription._session.id] || {}
userFilters = userPagination.filters || {}
userSort = userPagination.sort || {}
unless _.contains _.values(CampaignStatuses), userFilters.status
userFilters.status = CampaignStatuses.PUBLISHED
filters = _.extend alwaysFilters,
status: userFilters.status
options =
sort: userSort,
skip: skip,
limit: @perPage
[filters,options] | unknown | |
d6869 | train | The information you provided is a bit lacking. From what I understood, these could be possible aggregation options.
Using date_trunc
from pyspark.sql import functions as F
df = df.groupBy(
F.date_trunc('hour', 'tpep_pickup_datetime').alias('hour'),
'PULocationID',
).count()
df.show()
# +-------------------+------------+-----+
# | hour|PULocationID|count|
# +-------------------+------------+-----+
# |2020-01-01 00:00:00| 238| 1|
# |2020-01-01 02:00:00| 238| 2|
# |2020-01-01 02:00:00| 193| 1|
# |2020-01-01 01:00:00| 238| 2|
# |2020-01-01 00:00:00| 7| 1|
# +-------------------+------------+-----+
Using window
from pyspark.sql import functions as F
df = df.groupBy(
F.window('tpep_pickup_datetime', '1 hour').alias('hour'),
'PULocationID',
).count()
df.show(truncate=0)
# +------------------------------------------+------------+-----+
# |hour |PULocationID|count|
# +------------------------------------------+------------+-----+
# |[2020-01-01 02:00:00, 2020-01-01 03:00:00]|238 |2 |
# |[2020-01-01 01:00:00, 2020-01-01 02:00:00]|238 |2 |
# |[2020-01-01 00:00:00, 2020-01-01 01:00:00]|238 |1 |
# |[2020-01-01 02:00:00, 2020-01-01 03:00:00]|193 |1 |
# |[2020-01-01 00:00:00, 2020-01-01 01:00:00]|7 |1 | | unknown | |
d6870 | train | You can work around this by using a custom MSBuild task.
Instead of adding the assembly to the lib directory, create an MSBuild .targets file named after the package id and put your xyz assembly next to it.
\build
\Net45
\MyPackage.targets
\xyz.dll
\xyz.xml
Then in the MSBuild .targets file add the reference exactly how you want it to be. Something like:
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<Reference Include="xyz">
<HintPath>$(MSBuildThisFileDirectory)\xyz.dll</HintPath>
</Reference>
</ItemGroup>
</Project>
The above shows how to specify a hint path relative to the MSBuild .targets file. You said that you do want to use a hint path so you could remove that if xyz.dll can be resolved by MSBuild somehow, such as it being in the GAC. | unknown | |
d6871 | train | Sorry this took so long, pressed for time.
The data you provided don't seem to fit your description of what the trumpet curve's suppose to represent or I'm missing something big. I would appreciate it if you could, in short, describe what needs to be done with the data.
When you manage to shape your data for output, you can stick it into the code below and it should produce a plot. I'll make it up to you to customize it to your needs.
# generate some random data
trump <- data.frame(
curve1 = rev(sort(rchisq(100, df = 2))) * rnorm(100, mean = 5, sd = 0.1) + 3,
curve2 = -rev(sort(rchisq(100, df = 2))) * rnorm(100, mean = 5, sd = 0.1) - 3
)
trump <- trump[seq(from = 1, to = 100, by = 3), ]
ggplot(trump, aes(x = 1:nrow(trump), y = curve1)) +
geom_line() +
geom_point() +
geom_line(aes(y = curve2)) +
geom_point(aes(y = curve2)) +
geom_hline(aes(yintercept = 0), linetype = "solid") + # overall percentage error
geom_hline(aes(yintercept = 5), linetype = "dashed") + # set rate
xlab("Observation windows (in minutes)") +
ylab("% error") +
annotate("text", x = 8, y = -1.5, label = "overall percentage error") +
annotate("text", x = 5, y = 3, label = "set rate") +
annotate("text", x = 10, y = -24, label = "min. error") +
annotate("text", x = 10, y = 24, label = "max. error") +
theme_bw()
A: Many thanks to Roman for his help. This is the way we ended up doing it.
flow<-read.table("iv.txt",comment.char="#",header=TRUE)
Q<-as.data.frame(c(0,(60*diff(flow$wt,1,1)/0.5))) # Q is used for flow in the standard
names(Q)<-"Q"
# Now combine them
l<-list(flow,Q)
dflow<-as.data.frame(l)
t1flow<-subset(dflow, dflow$secs>60*60 & dflow$secs<=120*60) # require the second hour of the data
t1flow$QE<-((t1flow$Q-setflow)/setflow)*100 # calculate the error
library(TTR) # use this for moving averages
#
# Avert your eyes! I know there must be a slicker way of doing this .....
#
# Calculate the moving average using a sample rate of every 4 readings (2 mins of 30sec readings)
QE2<-SMA(t1flow$QE,4)
minQE2<-min(QE2,na.rm=TRUE)
maxQE2<-max(QE2,na.rm=TRUE)
# now for the 5 minute window
QE5<-SMA(t1flow$QE,10)
minQE5<-min(QE5,na.rm=TRUE)
maxQE5<-max(QE5,na.rm=TRUE)
# Set window to 11 mins
QE11<-SMA(t1flow$QE,22)
minQE11<-min(QE11,na.rm=TRUE)
maxQE11<-max(QE11,na.rm=TRUE)
# Set window to 19 mins
QE19<-SMA(t1flow$QE,38)
minQE19<-min(QE19,na.rm=TRUE)
maxQE19<-max(QE19,na.rm=TRUE)
# Set window to 31 mins
QE31<-SMA(t1flow$QE,62)
minQE31<-min(QE31,na.rm=TRUE)
maxQE31<-max(QE31,na.rm=TRUE)
#
# OK - you can look again :-)
#
# create a data frame from this data
trump<-data.frame(c(2,5,11,19,31),c(minQE2,minQE5,minQE11,minQE19,minQE31),c(maxQE2,maxQE5,maxQE11,maxQE19,maxQE31))
names(trump)<-c("T","minE","maxE")
A<-mean(t1flow$QE) # calculate the overall mean percentage error
error_caption<-paste("overall percentage error = ",A,"%") # create the string to label the error line
# plot the graph
ggplot(trump, aes(x = T, y = minE)) +
geom_line() +
geom_point(color="red") +
geom_line(aes(y = maxE)) +
geom_point(aes(y = maxE),colour="red") +
geom_hline(aes(yintercept = 0), linetype = "dashed") + # overall percentage error
geom_hline(aes(yintercept = A), linetype = "solid") + # set rate
xlab("Observation windows (in minutes)") +
ylab("% error") +
scale_x_continuous(breaks=c(0,2,5,11,19,31),limits=c(0,32)) + # label the x axis only at the window values
annotate("text", x = 10, y = A-0.5, label = error_caption) + # add the error line label
opts(title="Trumpet curve for Test Data")
You can see the final graph [here][2].
A: I'm currently trying to solve a trumpet curve calculation error, based on IEC 60601-2-24:1998 - SECTION EIGHT – ACCURACY OF OPERATING DATA AND PROTECTION AGAINST HAZARDOUS OUTPUT. Here is my R code:
flow<-read.table("c:\\ciringe.txt",comment.char="#",header=TRUE)
#parameters
# setflow = 1
# P = 1,2,5,11,19,31
P1<-1
P2<-2
P5<-5
P11<-11
P19<- 19
P31<-31
P1m = ((60 - P1) / 0.5 ) + 1
P2m = ((60 - P2) / 0.5 ) + 1
P5m = ((60 - P5) / 0.5 ) + 1
P11m = ((60 - P11) / 0.5 ) + 1
P19m = ((60 - P19) / 0.5 ) + 1
P31m = ((60 - P31) / 0.5 ) + 1
setflow<-1
mQE1<-0.5
mQE2<-0.25
mQE5<-0.1
mQE11<-0.045
mQE19<-0.0263
mQE31<-0.0161
Q<-as.data.frame(c(0,(60*diff(flow$wt,1,1)/0.5*0.998))) # Q is used for flow in the standard
names(Q)<-"Q"
# Now combine them
l<-list(flow,Q)
dflow<-as.data.frame(l)
t1flow<-subset(dflow, dflow$secs>=3600 & dflow$secs<=7200) # require the second hour of the data
#overall
t1flow$QE<-(((t1flow$Q-setflow)/setflow)*100) # calculate the error
t1flow$QE1<-(((t1flow$Q-setflow)/setflow)*100) * mQE1 # calculate the error
t1flow$QE2<-(((t1flow$Q-setflow)/setflow)*100) * mQE2 # calculate the error
t1flow$QE5<-(((t1flow$Q-setflow)/setflow)*100) * mQE5 # calculate the error
t1flow$QE11<-(((t1flow$Q-setflow)/setflow)*100) * mQE11 # calculate the error
t1flow$QE19<-(((t1flow$Q-setflow)/setflow)*100) * mQE19 # calculate the error
t1flow$QE31<-(((t1flow$Q-setflow)/setflow)*100) * mQE31 # calculate the error
library(TTR) # use this for moving averages
#
# Avert your eyes! I know there must be a slicker way of doing this .....
#
# Calculate the moving average using a sample rate of every
# 4 readings (2 mins of 30sec readings)
# now for the 1 minute window
QE1<-SMA(t1flow$QE1,2)
minQE1<-min(QE1,na.rm=TRUE)
maxQE1<-max(QE1,na.rm=TRUE)
# now for the 2 minute window
QE2<-SMA(t1flow$QE2,4)
minQE2<-min(QE2,na.rm=TRUE)
maxQE2<-max(QE2,na.rm=TRUE)
# now for the 5 minute window
QE5<-SMA(t1flow$QE5,10)
minQE5<-min(QE5,na.rm=TRUE)
maxQE5<-max(QE5,na.rm=TRUE)
# Set window to 11 mins
QE11<-SMA(t1flow$QE11,22)
minQE11<-min(QE11,na.rm=TRUE)
maxQE11<-max(QE11,na.rm=TRUE)
# Set window to 19 mins
QE19<-SMA(t1flow$QE19,38)
minQE19<-min(QE19,na.rm=TRUE)
maxQE19<-max(QE19,na.rm=TRUE)
# Set window to 31 mins
QE31<-SMA(t1flow$QE31,62)
minQE31<-min(QE31,na.rm=TRUE)
maxQE31<-max(QE31,na.rm=TRUE)
#
# OK - you can look again :-)
#
# create a data frame from this data
trump<- data.frame(c(1,2,5,11,19,31),c(minQE1,minQE2,minQE5,minQE11,minQE19,minQE31), c(maxQE1,maxQE2,maxQE5,maxQE11,maxQE19,maxQE31))
names(trump)<-c("T","minE","maxE")
A<-mean(t1flow$QE) # calculate the overall mean percentage error
error_caption<-paste("overall percentage error = ",A,"%") # create the string to label the error line
# plot the graph
library(ggplot2)
ggplot(trump, aes(x = T, y = minE)) +
geom_line() +
geom_point(color="red") +
geom_line(aes(y = maxE)) +
geom_point(aes(y = maxE),colour="red") +
geom_hline(aes(yintercept = 0), linetype = "dashed") + # overall percentage error
geom_hline(aes(yintercept = A), linetype = "solid") + # set rate
xlab("Observation windows (in minutes)") +
ylab("% error") +
scale_x_continuous(breaks=c(0,2,5,11,19,31),limits=c(0,32)) + # label the x axis only at the window values
annotate("text", x = 10, y = A-0.5, label = error_caption) # add the error line label
Actually, I don't understand the summation series as stated in the ISO docs (percentage variation).
A: Your formulae don't match the equations in ISO 60601-2-24. The spec is a bit tedious to go through, but there are no 2-minute windows or moving averages. First, get your data formatted as 1-minute samples (per the spec). Build a data array with 1-minute intervals and columns of:
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

colTotalLiquid = 1 #total liquid (precalculated from weight)
colProgRate = 2 #prog rate
colQi = 3 # rate Q(i) from IEC60601-2-24
obsWindow = (2, 5, 11, 19, 31) # observation windows in minutes
analysisT1start = 60 # start of analysis period T1
analysisT1end = 120 # end of analysis period T1
t1err = dict.fromkeys(obsWindow, None)
progRate = data[analysisT1start][colProgRate]
A = 100 * (((data[analysisT1end][colTotalLiquid] - data[analysisT1start][colTotalLiquid]) * hours / (analysisT1end - analysisT1start)) - progRate) / progRate
t1TrumpetX = [] #mintes
t1TrumpetY1 = [] #Ep(max)
t1TrumpetY2 = [] #Ep(min)
t1TrumpetY3 = [] #mean err
t1TrumpetY4 = [] #prog Rate
for p in obsWindow:
    m = (analysisT1end - analysisT1start) - p + 1
    t1err[p] = {'m': m}
    EpMax = 0
    EpMin = 0
    for j in range(1, m + 1):
        errSum = 0
        for i in range(j, j + p):
            errSum += (1.0/p) * 100 * ( data[analysisT1start + i][colQi] - progRate ) / progRate
        if errSum > EpMax:
            EpMax = errSum
        if errSum < EpMin:
            EpMin = errSum
    t1err[p]['EpMax'] = EpMax
    t1err[p]['EpMin'] = EpMin
    t1TrumpetX.append(p)
    t1TrumpetY1.append(EpMax)
    t1TrumpetY2.append(EpMin)
    t1TrumpetY3.append(A)
    t1TrumpetY4.append(0)
tplot = PdfPages('trumpet curve.pdf')
p1 = plt.figure()
plt.plot(t1TrumpetX, t1TrumpetY1)
plt.plot(t1TrumpetX, t1TrumpetY2)
plt.plot(t1TrumpetX, t1TrumpetY3)
plt.plot(t1TrumpetX, t1TrumpetY4, '--')
plt.legend(('Ep(max)','Ep(min)', 'overall error', 'set rate'), fontsize='xx-small', loc='best')
plt.title(baseName + ': IEC60601-2-24 Trumpet Curve for Second Hour', fontsize='x-small')
plt.xlabel('Observation window (min)', fontsize='x-small')
plt.ylabel('Percentage error of flow', fontsize='x-small')
plt.ylim((-15, 15))
ax = plt.axes()
ax.set_xticks(obsWindow)
tplot.savefig(p1)
plt.close(p1)
tplot.close()
That should get you the standard's trumpet curve. There's no requirement to put a numerical label on the overall error A, but you could do that with an annotation. | unknown | |
d6872 | train | In your password validation lambda, you're calling
u.user.password_digest_changed? && !u.password.nil?
i.e., you're sending a user method to the u object, which is your User instance. That object doesn't respond to user. You probably just want
u.password_digest_changed? && !u.password.nil? | unknown | |
d6873 | train | Using apply:
df['temp'] = df['sentences'].apply(lambda x: [j for j in di.keys() if j in x])
df['shortly'] = df['temp'].apply(lambda a:','.join([di[key] for key in a]))
df.drop(['temp'],axis = 1,inplace=True)
Output:
>>> df
sentences shortly
0 btw I have to go By The Way
1 i am afk now Away From Keyboard
The above would work even if there are multiple short forms in a single sentence, only the output will be separated with , in shortly column (you can change the separator in second apply statement).
eg.
sentences temp shortly
0 btw I have to go [btw] By The Way
1 i am afk btw [btw, afk] By The Way,Away From Keyboard
If the short-forms can be a different case than in the dictionary, then just add .lower() in the first apply statement like so:
df['temp'] = df['sentences'].apply(lambda x: [j for j in di.keys() if j in x.lower()])
Keep all the short forms in the dictionary as lower case, though. | unknown |
d6874 | train | I've been writing code recently that accesses the PhotosLibrary. I did this by writing a native module that calls the PhotoKit API. If you go that direction there's going to be a steep learning curve as you'll likely be using Objective-C++ with the features and quirks of both C++ and Objective-C while trying to write something in JavaScript.
https://github.com/nodejs/node-addon-api
https://developer.apple.com/documentation/photokit?language=objc
With Apple's recent security features, you'll also need to somehow run your code inside of an app with the correct entitlements and values set in its Info.plist that will allow access to APIs.
In build/entitlements.mac.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<!-- https://github.com/electron/electron-notarize#prerequisites -->
<key>com.apple.security.cs.allow-jit</key>
<true/>
<key>com.apple.security.cs.allow-unsigned-executable-memory</key>
<true/>
<!-- https://github.com/electron-userland/electron-builder/issues/3940 -->
<key>com.apple.security.cs.disable-library-validation</key>
<true/>
<!-- Allow app to access Photos Library using PhotoKit API -->
<key>com.apple.security.personal-information.photos-library</key>
<true/>
</dict>
</plist>
The following will be needed in your App's Info.plist:
<plist version="1.0">
<dict>
...
<key>NSPhotoLibraryUsageDescription</key>
<string>This app needs access to the photos library</string>
</dict>
</plist>
This can be done using electron-builder by adding the following value to the extendInfo key of your mac build settings in package.json.
{
...
"build": {
...
"mac": {
...
"extendInfo": {
...
"NSPhotoLibraryUsageDescription": "This app needs access to the photos library"
}
}
}
}
I hope this gives you something to start with. Like I said above, this will come with a steep learning curve unless you're already familiar with JavaScript, native module development, Objective-C, C++ and Apple's APIs.
A: If you just need to read data from it for a one-off project, the photolibrary is really just a directory-like container around your photos, plus an sqlite database for the metadata (faces, places and albums/folders). Right-click the photolibrary and choose Show Package Contents; original photos are under Masters/YYYY/MM/DD/IMG_XXXX.JPG, and metadata is in database/photos.db. Some tables you can query are RKMaster (filename/uuid of master files), RKAlbum, RKFace, RKMemory, RKPlace, RKFolder, RKVersion, RKKeyword, etc.; get any free sqlite browser and you can figure out the rest.
You can also copy the .photolibrary file to Linux and scan its folder/sqlite file using pure node, you don't need any native modules to read it. Writing to it may try to fire some sqlite triggers that seem to belong to some macosx-proprietary extensions, so make backups first, try writes with extensions disabled, or just read and extract the images and metadata to some other format (raw jpeg/json files in a bucket somewhere) that are easy to collate and then (if you have to), reverse the process and write out to another photolibrary file once you get the relation between the tables and filesystem paths inside its container. | unknown | |
d6875 | train | Here we use the lifecycle method componentDidMount; this is the best place to make API calls and set up subscriptions:
export default class CountryPage extends React.Component {
constructor(props) {
super(props);
this.state = {
countries: []
}
}
componentDidMount() {
HTTP.get('https://restcountries.eu/rest/v2/all', (err, result) => {
this.setState({countries:result.data})
});
}
render() {
const {countries} = this.state;
return (
<div>
<Input type="select" name="countrySelect" id="countrySelect">
{countries.map(country => (
<option>{country.name}</option>
))}
</Input>
</div>
);
}
}
A: An HTTP request is an asynchronous task. You have to wait for the API's response, so:
export default class CountryPage extends React.Component {
constructor(props) {
super(props);
}
render() {
HTTP.get('https://restcountries.eu/rest/v2/all', (err, result) => {
const countries = result.data;
console.log('countries', countries);
return (
<div>
<Input type="select" name="countrySelect" id="countrySelect">
{countries.map(country => (
<option>{country.name}</option>
))}
</Input>
</div>
);
});
}
} | unknown | |
d6876 | train | I would create a base class Validation and just create derived classes from it if it is necessary to add new validation:
public abstract class Validation
{
public Validation(string config)
{
}
public abstract string Validate();
}
and its concrete implementations:
public class Phase1Validation : Validation
{
public Phase1Validation(string config) : base(config)
{}
public override string Validate()
{
if (true)
return null;
return "There are some errors Phase1Validation";
}
}
public class Phase2Validation : Validation
{
public Phase2Validation(string config) : base(config)
{
}
public override string Validate()
{
if (true)
return null;
return "There are some errors in Phase2Validation";
}
}
and then just create a list of validators and iterate through them to find errors:
public string Validate()
{
List<Validation> validations = new List<Validation>()
{
new Phase1Validation("config 1"),
new Phase2Validation("config 2")
};
foreach (Validation validation in validations)
{
string error = validation.Validate();
if (!string.IsNullOrEmpty(error))
return error;
}
return null; // it means that there are no errors
}
UPDATE:
I've edited my classes a little to fit your new question's requirements:
* validations should be ordered (added an Order property)
* get config from the previous validation and send it to the next validation
This approach lets you avoid writing nested calls like this:
new Phase4Validation(
new Phase3Validation(
new Phase2Validation(...).validate()
).validate()
).validate()
So you can add new classes without editing the existing validation classes, which helps keep the Open/Closed Principle of SOLID.
So the code looks like this:
Abstractions:
public abstract class Validation
{
// Order to handle your validations
public int Order { get; set; }
// Your config file
public string Config { get; set; }
public Validation(int order)
{
Order = order;
}
// "virtual" means that method can be overriden
public virtual string Validate(string config)
{
Config = config;
if (true)
return null;
return "There are some errors Phase1Validation";
}
}
And its concrete implementations:
public class Phase1Validation : Validation
{
public Phase1Validation(int order) : base(order)
{
}
}
public class Phase2Validation : Validation
{
public Phase2Validation(int order) : base(order)
{
}
}
And method to validate:
string Validate()
{
List<Validation> validations = new List<Validation>()
{
new Phase1Validation(1),
new Phase2Validation(2)
};
validations = validations.OrderBy(v => v.Order).ToList();
string config = "";
foreach (Validation validation in validations)
{
string error = validation.Validate(config);
config = validation.Config;
if (!string.IsNullOrEmpty(error))
return error;
}
return null; // it means that there are no errors
}
A: I leave here my own answer, but I'm not going to select it as correct because I think there exist better answers (besides the fact that I am not very convinced of this implementation).
A kind of Decorator design pattern allowed me to do chain validation with greater use of the dependency injection approach.
I leave here the code but only for Python (I have reduced the number of phases from 4 to 2 to simplify the example).
from __future__ import annotations
import abc
from typing import cast
from typing import Any
from typing import TypedDict
NotValidatedConfig = dict
ValidatedConfig = TypedDict("ValidatedConfig", {"foo": Any, "bar": Any})
class InvalidConfig(Exception):
    ...


# This class is abstract.
class ValidationHandler(abc.ABC):
    _handler: ValidationHandler | None

    def __init__(self, handler: ValidationHandler | None = None):
        self._handler = handler

    # This method is abstract.
    @abc.abstractmethod
    def _validate(self, not_validated_config: NotValidatedConfig):
        ...

    def _chain_validation(self, not_validated_config: NotValidatedConfig):
        if self._handler:
            self._handler._chain_validation(not_validated_config)
        self._validate(not_validated_config)

    def get_validated_config(self, not_validated_config: NotValidatedConfig) -> ValidatedConfig:
        self._chain_validation(not_validated_config)
        # Here we convert (in a forced way) the type `NotValidatedConfig` to
        # `ValidatedConfig`. We can do this because we already ran the whole
        # validation chain. Forcing a type is not a good way to deal with the
        # problem, and this is the main downside of this implementation (but
        # it works anyway).
        return cast(ValidatedConfig, not_validated_config)


class Phase1Validation(ValidationHandler):
    def _validate(self, not_validated_config: NotValidatedConfig):
        if "foo" not in not_validated_config:
            raise InvalidConfig('Config misses "foo" attr')


class Phase2Validation(ValidationHandler):
    def _validate(self, not_validated_config: NotValidatedConfig):
        if not isinstance(not_validated_config["foo"], str):
            raise InvalidConfig('"foo" must be a string')


class Validator:
    _validation_handler: ValidationHandler

    def __init__(self, validation_handler: ValidationHandler):
        self._validation_handler = validation_handler

    def validate_config(self, not_validated_config: NotValidatedConfig) -> ValidatedConfig:
        return self._validation_handler.get_validated_config(not_validated_config)


if __name__ == "__main__":
    # "Pure Dependency Injection"
    validator = Validator(Phase2Validation(Phase1Validation()))
    validator.validate_config({"foo": "1", "bar": 1})  # "foo" must be a str to pass Phase2Validation
What is the problem with this approach? The loose way in which the types are chained. In the original example, Phase1Validation generates a ValidatedPhase1Config, which is safely used by Phase2Validation. With this implementation, each decorator receives the same data type to validate, and this creates safety issues (in terms of typing): Phase1Validation gets a NotValidatedConfig, but Phase2Validation can't rely on that type to do its validation; it needs the result of Phase1Validation. | unknown |
d6877 | train | I am not able to remove the texts. Any help will be appreciated. I am
posting my code here.
To clean the content of the TextView, you could pass null to setText, e.g.:
text_heading.setText(null);
If you want to change the content every time you click on the button, you have to move
int_text = random.nextInt(array_heading.length);
into your onClick callback.
You should be aware of the fact that nextInt returns an int between [0, n). array_heading.length - 1 is necessary only if you want to exclude R.string.source_text9_explain from the possible texts you want to show. Keep in mind also that if array_heading contains more items than array_explain, you could get an ArrayIndexOutOfBoundsException.
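A sketch of that change (field names as in the question's code):
click.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        int_text = random.nextInt(array_heading.length);  // pick a fresh index on every click
        text_heading.setText(array_heading[int_text]);
        text_explain.setText(array_explain[int_text]);
    }
});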
A: You must use ArrayList for it.
ArrayList<Integer> heading = new ArrayList<>(Arrays.asList(array_heading));
ArrayList<Integer> explain = new ArrayList<>(Arrays.asList(array_explain));
Now set the text from these ArrayLists, and once an entry has been shown, remove it from the list so it cannot be shown again.
Use like this
random = new Random();
array_heading = new Integer []{R.string.source_text1, R.string.source_text2, R.string.source_text3,
R.string.source_text6, R.string.source_text5, R.string.source_text4, R.string.source_text7,
R.string.source_text8, R.string.source_text9};
array_explain = new Integer []{R.string.source_text1_explain, R.string.source_text2_explain,
R.string.source_text3_explain,
R.string.source_text4_explain, R.string.source_text5_explain, R.string.source_text6_explain,
R.string.source_text7_explain,
R.string.source_text8_explain, R.string.source_text9_explain};
ArrayList<Integer> array_headingList = new ArrayList<Integer>(Arrays.asList(array_heading));
ArrayList<Integer> array_explainList = new ArrayList<Integer>(Arrays.asList(array_explain));
int_text = random.nextInt(array_headingList.size());
click.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
text_heading.setText(array_headingList.get(int_text));
text_explain.setText(array_explainList.get(int_text));
array_headingList.remove((int) int_text); // cast so we remove by index, not by Integer object
array_explainList.remove((int) int_text);
if(array_headingList.size() == 0){
click.setEnabled(false);
Toast.makeText(getApplicationContext(),"All text finished",Toast.LENGTH_SHORT).show();
} else if(array_headingList.size() == 1){
int_text = 0;
} else {
int_text = random.nextInt(array_headingList.size());
}
}
});
A: public class MainActivity extends AppCompatActivity {
TextView text_heading, text_explain;
Button click;
Random random;
Integer[] array_heading, array_explain;
Integer int_text;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
text_heading = (TextView) findViewById(R.id.text_heading);
text_explain = (TextView) findViewById(R.id.text_explain);
click = (Button) findViewById(R.id.click);
random = new Random();
array_heading = new Integer []{R.string.source_text1, R.string.source_text2, R.string.source_text3,
R.string.source_text6, R.string.source_text5, R.string.source_text4, R.string.source_text7,
R.string.source_text8, R.string.source_text9};
array_explain = new Integer []{R.string.source_text1_explain, R.string.source_text2_explain,
R.string.source_text3_explain,
R.string.source_text4_explain, R.string.source_text5_explain, R.string.source_text6_explain,
R.string.source_text7_explain,
R.string.source_text8_explain, R.string.source_text9_explain};
final ArrayList<Integer> array_headingList = new ArrayList<Integer>(Arrays.asList(array_heading));
final ArrayList<Integer> array_explainList = new ArrayList<Integer>(Arrays.asList(array_explain));
        click.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                if (array_headingList.isEmpty()) {
                    return; // nothing left to show
                }
                // draw a fresh index on every click, based on the current list size
                int_text = random.nextInt(array_headingList.size());
                text_heading.setText(array_headingList.get(int_text));
                text_explain.setText(array_explainList.get(int_text));
                // cast to int so List.remove(int index) is called, not remove(Object)
                array_headingList.remove((int) int_text);
                array_explainList.remove((int) int_text);
            }
        });
}
}
A: I would keep the strings together in an object:
public class Item {
private final int textId;
private final int textExplanationId;
public Item(int textId, int textExplanationId){
this.textId = textId;
this.textExplanationId = textExplanationId;
}
public int getTextId(){return textId;}
public int getTextExplanationId(){return textExplanationId;}
}
Then I would store those in an ArrayList:
List<Item> items = new ArrayList<>(Arrays.asList(
    new Item(R.string.source_text1, R.string.source_text1_explain),
    new Item(R.string.source_text2, R.string.source_text2_explain),
    // etc.
));
Then I would shuffle that array once:
Collections.shuffle(items);
And read from it in order:
Item current = items.get(currentIndex++);
text_heading.setText(current.getTextId());
text_explain.setText(current.getTextExplanationId()); | unknown | |
d6878 | train | You're not actually running your get_url calls as tasks; you call them in the main thread, and pass the result to executor.submit, experiencing the concurrent.futures analog to this problem with raw threading.Thread usage. Change:
results = {executor.submit( get_url(url)) : url for url in urls}
to:
results = {executor.submit(get_url, url) : url for url in urls}
so you pass the function to call and its arguments to submit (which then runs them on worker threads for you), and the downloads should run in parallel.
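A minimal end-to-end sketch of the corrected pattern (the URL list and the body of get_url below are placeholders, not taken from the original question):
from concurrent.futures import ThreadPoolExecutor, as_completed
import urllib.request

def get_url(url):
    # stand-in fetch logic: download and return the response body
    with urllib.request.urlopen(url) as resp:
        return resp.read()

urls = ['https://example.com/a', 'https://example.com/b']
with ThreadPoolExecutor(max_workers=4) as executor:
    # submit(fn, *args) schedules fn(*args) on a worker thread
    results = {executor.submit(get_url, url): url for url in urls}
    for future in as_completed(results):
        print(results[future], len(future.result()))
Each future completes independently, so results are printed as the downloads finish. | unknown |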
d6879 | train | You need the libmysqlclient.so library to be able to install MySQLdb,
which comes with MySQL Server and client and can also be downloaded with MySQL Connector/C.
Locate the library and add the directory containing libmysqlclient.so to your DYLD_LIBRARY_PATH. | unknown |
d6880 | train | Is this what you mean?
UPDATE products
SET Product_Desc_Alt = (
    -- pick the most frequent alternate description among rows
    -- sharing the same Product_Desc
    SELECT TOP 1 Product_Desc_Alt
    FROM products P2
    WHERE P2.Product_Desc = products.Product_Desc
    GROUP BY Product_Desc_Alt
    ORDER BY COUNT(*) DESC
) | unknown |
d6881 | train | Your code isn't actually making any of the requests.
from zipfile import ZipFile
import hashlib
import requests
def md5(fname):
    hash_md5 = hashlib.md5()
    # read in chunks so large files don't have to fit in memory,
    # and close the file handle deterministically
    with open(fname, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            hash_md5.update(chunk)
    return hash_md5.hexdigest()
url_datasets = 'http://files.grouplens.org/datasets/movielens/ml-25m.zip'
datasets = 'datasets.zip'
url_checksum = 'http://files.grouplens.org/datasets/movielens/ml-25m.zip.md5'
checksum = 'datasets.zip.md5'
ds = requests.get( url_datasets, allow_redirects=True)
cs = requests.get( url_checksum, allow_redirects=True)
open( datasets, 'wb').write( ds.content )
ds_md5 = md5(datasets)
cs_md5 = cs.content.decode('utf-8').split()[0]
print( ds_md5 )
print( cs_md5 )
if ds_md5 == cs_md5:
    print("MATCH")
    with ZipFile(datasets, 'r') as zipObj:
        listOfiles = zipObj.namelist()
        for elem in listOfiles:
            print(elem)
else:
    print("Checksum fail") | unknown |
d6882 | train | How about this?
result = find(~cellfun(@isempty, regexp(strings, 'ghi')) & ...
~cellfun(@isempty, regexp(strings, 'AB')));
Or, using a single regular expression,
result = find(~cellfun(@isempty, regexp(strings, '(ghi.*AB|AB.*ghi)'))); | unknown |
d6883 | train | You are having this problem because you are adding fields after the DOM has loaded and after $(".calpicker").datepicker(); has already run, so the new fields never get a datepicker attached.
You will need to use the .live function to achieve this functionality, so have a look at this article, it might help: http://www.vancelucas.com/blog/jquery-ui-datepicker-with-ajax-and-livequery/
Extracted and modified source:
jQuery(function() {
$('input.calpicker').live('click', function() {
$(this).datepicker({showOn:'focus'}).focus();
});
});
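Note that .live() was deprecated in jQuery 1.7 and removed in 1.9; on newer jQuery versions the equivalent delegated handler would look like this (a sketch, delegating from document):
jQuery(function() {
    $(document).on('click', 'input.calpicker', function() {
        $(this).datepicker({showOn: 'focus'}).focus();
    });
}); | unknown |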
d6884 | train | This line:
be.World = GameCamera.World * Translation * modelTransforms[mesh.ParentBone.Index];
is usually arranged the other way around, and the order in which you multiply matrices changes the result, since matrix multiplication is not commutative. Try this:
be.World = modelTransforms[mesh.ParentBone.Index] * GameCamera.World * Translation; | unknown | |
d6885 | train | The default cache duration is 15 minutes and entries are stored in HttpContext.Cache; this is all managed by the System.Web.Mvc.DefaultViewLocationCache class. Since this uses standard ASP.NET caching, you could use a custom cache provider that gets its cache from the WAZ AppFabric Cache or the new caching preview (there is one on NuGet: http://nuget.org/packages/Glav.CacheAdapter). Using a shared cache makes sure that only one instance needs to do the work of resolving the view. Or you could go and build your own cache provider.
Running your application in release mode, clearing unneeded view engines, writing the exact path instead of simply calling View, ... are all ways to speed up the view lookup process. Read more about it here:
*
*http://samsaffron.com/archive/2011/08/16/Oh+view+where+are+thou+finding+views+in+ASPNET+MVC3+
*http://blogs.msdn.com/b/marcinon/archive/2011/08/16/optimizing-mvc-view-lookup-performance.aspx
You can pre-load the view locations by adding a key for each view to the cache. You should format it as follows (where this is the current VirtualPathProviderViewEngine):
string.Format((IFormatProvider) CultureInfo.InvariantCulture, ":ViewCacheEntry:{0}:{1}:{2}:{3}:{4}:", (object) this.GetType().AssemblyQualifiedName, (object) prefix, (object) name, (object) controllerName, (object) areaName);
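As a rough sketch, pre-warming the cache for a single view could look like this (the "View" prefix, the controller/view names and the virtual path are made up for illustration; engine stands for your active VirtualPathProviderViewEngine):
string cacheKey = string.Format(CultureInfo.InvariantCulture,
    ":ViewCacheEntry:{0}:{1}:{2}:{3}:{4}:",
    engine.GetType().AssemblyQualifiedName, "View", "Index", "Home", "");
// DefaultViewLocationCache stores the resolved virtual path under that key
HttpContext.Current.Cache.Insert(cacheKey, "~/Views/Home/Index.cshtml");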
I don't have any figures if MVC4 is faster, but it looks like the DefaultViewLocationCache code is the same as for MVC3.
A: To increase my cachetime to 24 hours I used the following in the Global.asax
var viewEngine = new RazorViewEngine
{ViewLocationCache = new DefaultViewLocationCache(TimeSpan.FromHours(24))};
//Only allow Razor view to improve for performance
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(viewEngine);
This article, ASP.NET MVC Performance Issues with Render Partial, was also interesting.
Will look at writing my own ViewLocationCache to take advantage of shared Azure caching. | unknown | |
d6886 | train | You could try enabling either Storage-Engine Independent Column Compression or InnoDB page compression. Both provide ways to shrink the on-disk database, which is especially useful for large text fields.
Since there's only one table with one particular field that's taking up space, trying out individual column compression seems like the easiest first step.
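For example, with InnoDB page compression the change could look like this (the table name build_logs is made up, and the exact syntax depends on your MySQL flavor and version):
ALTER TABLE build_logs COMPRESSION='zlib';
OPTIMIZE TABLE build_logs;
With column compression (e.g. on Percona Server) you would instead target the one large field:
ALTER TABLE build_logs MODIFY LogTextData LONGTEXT COLUMN_FORMAT COMPRESSED;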
A: In my opinion, you should just store the paths of the log files instead of the complete logs in the database. Using those paths you can access the files anytime you want.
It will decrease the size of the database too.
Your new table would then look like this:
LogID, BuildID, JenkinsJobName, LogFilePath. | unknown |
d6887 | train | Apparently, all AsyncTasks share one thread:
By default, yes. Use executeOnExecutor() to opt into a thread pool. In the documentation, the next paragraph after your quoted one is:
If you truly want parallel execution, you can invoke executeOnExecutor(java.util.concurrent.Executor, Object[]) with THREAD_POOL_EXECUTOR.
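For example, a hypothetical task subclass would then be started like this (MyTask and the params are placeholders):
new MyTask().executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, param1, param2);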
But if I have multiple IntentServices, does each get its own thread?
Yes. The source code to IntentService shows that it creates its own HandlerThread in onCreate(). | unknown | |
d6888 | train | Did you try running this code as a snippet using the "Code Snippets" plugin? Maybe that way the code will work fine. | unknown |
d6889 | train | So here is what I had to do to close the handle. I added the following lines after reading the data from the opened MSI file:
Marshal.FinalReleaseComObject(oRecord)
oView.Close()
Marshal.FinalReleaseComObject(oView)
Marshal.FinalReleaseComObject(oDb)
oRecord = Nothing
oView = Nothing
oDb = Nothing
So my final code looked like the following:
Function GetMsiVersion() As String
Try
Dim oInstaller As WindowsInstaller.Installer
Dim oDb As WindowsInstaller.Database
Dim oView As WindowsInstaller.View
Dim oRecord As WindowsInstaller.Record
Dim sSQL As String
Dim Version As String
oInstaller = CType(CreateObject("WindowsInstaller.Installer"), WindowsInstaller.Installer)
DownloadMsiFile()
If File.Exists(My.Computer.FileSystem.SpecialDirectories.Temp & "\ol.msi") Then
oDb = oInstaller.OpenDatabase(My.Computer.FileSystem.SpecialDirectories.Temp & "\ol.msi", 0)
sSQL = "SELECT `Value` FROM `Property` WHERE `Property`='ProductVersion'"
oView = oDb.OpenView(sSQL)
oView.Execute()
oRecord = oView.Fetch
Version = oRecord.StringData(1).ToString()
Marshal.FinalReleaseComObject(oRecord)
oView.Close()
Marshal.FinalReleaseComObject(oView)
Marshal.FinalReleaseComObject(oDb)
oRecord = Nothing
oView = Nothing
oDb = Nothing
Else
Version = Nothing
End If
Return Version
Catch ex As Exception
MessageBox.Show("File couldn't be accessed: " & ex.Message)
End Try
End Function | unknown | |
d6890 | train | UPDATED:
As per your error, here is a tested version:
Private Sub CommandButton1_Click()
Dim i As Integer
Dim j As Integer
Dim Count1 As Integer
Dim Count2 As Integer
Dim cell As Range
Count1 = Worksheets("Sheet1").Range("A1").CurrentRegion.Rows.Count
Count2 = Worksheets("Sheet2").Range("A1").CurrentRegion.Rows.Count
For i = 2 To Count1
For j = 2 To Count2
If Worksheets("Sheet1").Cells(i, 1).Value = Worksheets("Sheet2").Cells(j, 1).Value Then
Worksheets("Sheet2").Cells(j, 2).Value = Worksheets("Sheet1").Cells(i, 2).Value
Worksheets("Sheet2").Cells(j, 3).Value = 0
Exit For
End If
Next j
Next i
For Each cell In Range("C2:" & "C" & Cells(Rows.Count, "C").End(xlUp).Row)
cell.Value = cell.Value + 1
Next
End Sub
Let me know if you run into any issues.
A: Private Sub CommandButton1_Click()
Dim i As Integer
Dim j As Integer
Total1 = Worksheets("Sheet1").Range("A1").CurrentRegion.Rows.Count
Total2 = Worksheets("Sheet2").Range("A1").CurrentRegion.Rows.Count
MsgBox Total1
For i = 2 To Total1
For j = 2 To Total2
If Worksheets("Sheet1").Cells(i, 1).Value = Worksheets("Sheet2").Cells(j, 1).Value Then
Worksheets("Sheet2").Cells(j, 2).Value = Worksheets("Sheet1").Cells(i, 2).Value
Worksheets("Sheet2").Cells(j, 3).Value = 1
End If
Next j
Next i
For j = 2 To Total2
If Worksheets("Sheet1").Cells(i, 1).Value <> 1 Then
Worksheets("Sheet2").Cells(j, 3).Value = Worksheets("Sheet2").Cells(j, 3).Value + 1
End If
Next j
End Sub
This code also works fine for the solution above. | unknown |
d6891 | train | According to this thread, you can do it as follows:
Open Job activity monitor
In the left pane you can see "View refresh settings"
Click on it and you have a check box for "Auto refresh"
Enable the check box and provide the refresh interval.
Then click ok.
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/d040105c-9d09-46d3-bb37-c69e771190ab/refresh-settings-job-activity-monitor?forum=sqlkjmanageability | unknown | |
d6892 | train | You expect to receive twice as much data as you send.
print "Server says: " + s.recv(1024);
if data=="bye" or s.recv(1024)=="bye":
Each call to recv() waits for new data on the socket. Store the received message first, then work with that stored value.
msg = s.recv(1024)
print "Server says: " + msg
if data=="bye" or msg=="bye": | unknown | |
d6893 | train | The error occurs because your columns x and y are factors. You must transform each factor back to its original numeric values.
map$x <- as.numeric(gsub(",",".", map$x))
map$y <- as.numeric(gsub(",",".", map$y))
Krig(map, sigma, theta=100)
Call:
Krig(x = dat2, Y = sigma, theta = 100)
Number of Observations: 23
Number of parameters in the null space 3
Parameters for fixed spatial drift 3
Model degrees of freedom: 19.2
Residual degrees of freedom: 3.8
GCV estimate for sigma: 9.943
MLE for sigma: 8.931
MLE for rho: 8808
lambda 0.0091
User rho NA
User sigma^2 NA
I don't know how you read the map structure, but you can easily avoid the conversion if you read the values as numeric from the start, for example by declaring the decimal separator:
map <- read.table(files, dec = ",") | unknown |
d6894 | train | A changelog topic is a Kafka topic configured with log compaction. Each update to the KTable is written into the changelog topic. Because the topic is compacted, no data is ever lost, and re-reading the changelog topic makes it possible to re-create the local store.
The assumption of this optimization is, that the source topic is a compacted topic. For this case, the source topic and the corresponding changelog topic would contain the exact same data. Thus, the optimization removes the changelog topic and uses the source topic to re-create the state store during recovery.
If your input topic is not compacted but applies a retention time, you might not want to enable the optimization as this could result in data loss.
About the history: initially, Kafka Streams had this optimization hardcoded (and thus "forced" users to only read compacted topics as KTables if potential data loss was not acceptable). However, in version 1.0 a regression was introduced (via https://issues.apache.org/jira/browse/KAFKA-3856: the new StreamsBuilder behaved differently from the old KStreamBuilder and would always create a changelog topic), effectively "removing" the optimization. In version 2.0 the issue was fixed and the optimization is available again (cf. https://issues.apache.org/jira/browse/KAFKA-6874).
Note: the optimization is only available for source KTables. For KTables that are the result of a computation, such as an aggregation, the optimization is not available, and a changelog topic will be created (unless explicitly disabled, which sacrifices fault tolerance for the corresponding store).
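A sketch of opting in (the application id and topic name are made up; in Kafka Streams 2.x the optimization is enabled via the topology.optimization config, and the builder needs the properties at build time):
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// opt into topology optimizations, including source-topic changelog reuse
props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);
StreamsBuilder builder = new StreamsBuilder();
KTable<String, String> table = builder.table("compacted-input-topic");
KafkaStreams streams = new KafkaStreams(builder.build(props), props);
With this in place, the source KTable is restored from the compacted input topic instead of a dedicated changelog topic. | unknown |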
d6895 | train | Since Susy is simply a Sass/Compass library, there is usually no need to integrate Susy directly with other build tools. Use the Sass/Compass-guard setup, install Susy like you would without guard (see the docs), and it should all just work. | unknown | |
d6896 | train | The problem is resolved. I changed the default excludes in plexus-utils-2.0.5.jar/org/codehaus/plexus/util/AbstractScanner.java, which were excluding **/RCS and **/RCS/**. Commented out the RCS lines and voila, it worked.
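A less invasive alternative, if your plugin versions support it, could be turning off the default excludes instead of patching plexus-utils; for example, newer versions of the resources plugin expose an addDefaultExcludes flag:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <configuration>
        <addDefaultExcludes>false</addDefaultExcludes>
    </configuration>
</plugin>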
A: Try adding this to your pom.xml
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>2.6</version>
<configuration>
<includes>
<include>**/RCS/*</include>
</includes>
</configuration>
</plugin>
</plugins>
</build> | unknown | |
d6897 | train | function removeFeatureById(layer, id) {
    var features = layer.getSource().getFeatures();
    for (var i = 0; i < features.length; i++) {
        if (features[i].get('id') == id) {
            layer.getSource().removeFeature(features[i]);
            break;
        }
    }
}
or from @sox:
layer.getSource().removeFeature(layer.getSource().getFeatureById(id)); | unknown | |
d6898 | train | After a while checking different places, I came across the problem and was able to solve it. The problem was that $exists must be enclosed in quotation marks ("$exists"). So the code would be like this:
dtc$find('{
"payload.fields.MDI_CC_DIAG_DTC_LIST" : {
"$exists" : true
},
"payload.asset" : {
"$exists" : true
}
}') | unknown | |
d6899 | train | You need some changes. Let's start with the database-related code. Instead of mixing database-related things (MySqlConnection, MySqlCommand etc.) with presentation-layer things (SelectListItem, List<SelectListItem> etc.), and doing all that inside a Controller, you should
*
*Create a separate class for accessing the database and fetching the data.
*The method that would be called should return a List of some kind of domain/entity object that would represent the Fruit.
So, let's define initially our class, Fruit:
public class Fruit
{
public string Code { get; }
public string Name { get; }
public Fruit(string code, string name)
{
Code = code;
Name = name;
}
}
Then let's create a class that would be responsible for accessing the database and fetch the fruits:
public class FruitsRepository
{
public List<Fruits> GetAll()
{
string sql;
var fruits = new List<Fruit>();
string constr = ConfigurationManager.ConnectionStrings["cn"].ConnectionString;
using (MySqlConnection con = new MySqlConnection(constr))
{
sql = "SELECT * FROM `dotable`;";
using (MySqlCommand cmd = new MySqlCommand(sql))
{
cmd.Connection = con;
con.Open();
using (MySqlDataReader sdr = cmd.ExecuteReader())
{
while (sdr.Read())
{
var fruit = new Fruit(sdr["sCode"].ToString(), sdr["sName"].ToString());
fruits.Add(fruit);
}
}
con.Close();
}
}
return fruits;
}
}
Normally, this class should implement an interface, so that we decouple the controller from the actual class that performs the database operations, but let's not dive into this at this point.
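For illustration, such an interface could be as small as this (the name IFruitsRepository is made up):
public interface IFruitsRepository
{
    List<Fruit> GetAll();
}
The controller would then depend on IFruitsRepository instead of the concrete FruitsRepository, which makes it easy to swap the data source or mock it in tests.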
Then at your controller:
*
*We should use the above class to fetch the fruits.
*We should create a list of SelectListItem objects and provide that list to the model.
*We should change the model so that it holds information about the selected fruit (see below).
*We should change the view.
Changes in the model
public class PersonModel
{
[Required]
[Display(Name = "Fruits")]
public string SelectedFruitCode { get; set; }
public List<SelectListItem> Fruits { get; set; }
public string Namex { get; set; }
public string Codex { get; set; }
[Required]
[Display(Name = "CL")]
public string CL { get; set; }
[Required]
[Display(Name = "Ticket")]
public string Ticket { get; set; }
}
Changes in the View
@Html.DropDownListFor(model => model.SelectedFruitCode, Model.Fruits, "[ === Please select === ]", new { @Class = "textarea" })
@Html.ValidationMessageFor(model => model.SelectedFruitCode, "", new { @class = "text-danger" })
Changes in the Controller
[HttpGet]
public ActionResult Index()
{
var personModel = new PersonModel();
// THIS IS **BAD CODE**... Normally, you should create an interface that describes
// what is expected from the class that communicates with the DB for operations
// related with the Fruit and then inject the dependency in the HomeController
// Constructor.
var fruitsRepo = new FruitsRepository();
var fruits = fruitsRepo.GetAll();
var fruitsSelecteListItems = fruits.Select(fruit => new SelectListItem
{
Text = fruit.Name,
Value = fruit.Code
}).ToList();
personModel.Fruits = fruitsSelecteListItems;
return View(personModel);
}
Please check the comments in the code above thoroughly. As a starting point for what is mentioned in the comments, you could see this.
UPDATE
We also have to change the post action:
[HttpPost]
public ActionResult Index(PersonModel person)
{
// Removed the ModelState.IsValid check since it's redundant in your case.
// Usually we use it and when it is valid we perform a task, like update
// the corresponding object in the DB or doing something else. Otherwise,
// we return a view with errors to the client.
var fruitsRepo = new FruitsRepository();
var fruits = fruitsRepo.GetAll();
var fruitsSelecteListItems = fruits.Select(fruit => new SelectListItem
{
Text = fruit.Name,
Value = fruit.Code,
Selected = String.Equals(fruit.Code,
person.SelectedFruitCode,
StringComparison.InvariantCultureIgnoreCase)
}).ToList();
person.Fruits = fruitsSelecteListItems;
return View(person);
} | unknown | |
d6900 | train | #include <algorithm>
#include <opencv2/core/core.hpp>
// comparator that orders points by their y coordinate
struct myclass {
    bool operator() (cv::Point pt1, cv::Point pt2) { return (pt1.y < pt2.y); }
} myobject;
std::sort(pnt.begin(), pnt.end(), myobject);
Use this simple code, replacing pnt with your vector's name. After sorting by y, vector[0] holds the minimum value and vector[vector.size() - 1] holds the maximum value. | unknown |