Q: Golang SQL Query Syntax I'm getting a syntax error with an SQL query in Go code. What is the proper syntax for this SQL query in Go?
rows, errQuery := dbCon.Query("SELECT B.LatestDate
,A.SVRName AS ServerName
,A.DRIVE
,A.TotalSpace_GB AS TotalSpaceGB
,(ISNULL(A.TotalSpace_GB, 0) - ISNULL(A.FreeSpace_GB, 0)) AS UsedSpaceGB
,A.FreeSpace_GB AS FreeSpaceGB
,CASE
WHEN ((A.FreeSpace_GB / A.TotalSpace_GB) * 100) between 25 and 35
THEN 1
WHEN ((A.FreeSpace_GB / A.TotalSpace_GB) * 100) <= 25 THEN 2
ELSE 0
END AS WARNINGSTATUS
FROM Table_ServerDiskSpaceDetails A WITH (NOLOCK)
INNER JOIN (
SELECT SVRName
,MAX(Dt) LatestDate
FROM Table_ServerDiskSpaceDetails WITH (NOLOCK)
GROUP BY SVRName
) B ON A.Dt = B.LatestDate
AND A.SVRName = B.SVRName
ORDER BY WARNINGSTATUS DESC
,ServerName
,A.Drive")
A: Your SQL statement spans multiple lines, but you're not using the correct multi-line string syntax. The correct syntax would be:
someLongString := "Line 1 " + // Don't forget the trailing space
"Second line." // This is on the next line.
Currently you're just trying to stuff everything between a set of quotes on different lines.
EDIT: As @Kaedys says below, the following also works and may be more performant.
someLongString := `Line 1
Second line.`
A: Change both your first and last " to a backtick (`), or quote each line of the query string and add a "+" between the lines, like:
"select" +
" *" +
" from" +
" table"
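The trailing-space pitfall from the first answer is not Go-specific; it appears whenever string literals are joined across lines. A quick Python sketch (the table name is made up) shows what goes wrong:

```python
# Adjacent string literals in Python concatenate implicitly, just like
# "+"-joined lines in Go -- and the same missing-space bug appears.
broken = ("SELECT *"
          "FROM my_table")   # no trailing space: keywords fuse together
fixed = ("SELECT * "         # trailing space keeps the tokens apart
         "FROM my_table")

print(broken)  # SELECT *FROM my_table
print(fixed)   # SELECT * FROM my_table
```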
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43853506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-8"
} |
Q: Presto - How to Transform the Second Appearance in an Array Input: Array['a', 'b', 'a']
Expected output: Array['a', 'b', 'a*2']
I've tried this following code, but I couldn't figure out how to represent the current x's index.
select transform(array['a', 'b', 'a'], x -> cast(ARRAY_POSITION(array['a', 'b', 'a'], x) as varchar))
UPDATE:
It's a little complicated, but it gets the job done.
with cte as (
select ctr, ZIP_WITH(ctr, SEQUENCE(1, CARDINALITY(ctr), 1), (x, y) -> x || ':' || cast(y as varchar)) as col
from (values array['a', 'b', 'a', 'b', 'c']) as t(ctr)
)
select
array_duplicates(ctr),
col,
SUBSTR('a:3', strpos('a:3', ':') + 1, 1),
transform(col, x -> case when contains(array_duplicates(ctr), SUBSTR(x, 1, strpos(x, ':') - 1)) and SUBSTR(x, strpos(x, ':') + 1, 2) = cast(ARRAY_POSITION(ctr, SUBSTR(x, 1, strpos(x, ':') - 1), 2) as varchar)
then SUBSTR(x, 1, strpos(x, ':') - 1) || '*2' else SUBSTR(x, 1, strpos(x, ':') - 1) end)
from cte
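The logic being expressed above, counting how many times each value has appeared so far and tagging the repeats, is easier to see outside SQL. A Python sketch (not Presto) of the same transformation:

```python
from collections import Counter

def mark_repeats(items):
    """Tag the n-th appearance (n > 1) of each duplicate value with '*n'."""
    seen = Counter()
    out = []
    for x in items:
        seen[x] += 1                      # occurrences of x so far
        out.append(x if seen[x] == 1 else f"{x}*{seen[x]}")
    return out

print(mark_repeats(['a', 'b', 'a']))            # ['a', 'b', 'a*2']
print(mark_repeats(['a', 'b', 'a', 'b', 'c']))  # ['a', 'b', 'a*2', 'b*2', 'c']
```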
| {
"language": "en",
"url": "https://stackoverflow.com/questions/73707674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Copy the two most recent files to new dir I would like to write a VBScript or .bat file to move the two most recent files with the extension *.sch from directory a to a different directory.
I have experimented with $newest. How do I find the second newest?
Thanks
A: In VBScript you could do it like this:
src = "C:\source\folder"
dst = "C:\destination\folder"
Set fso = CreateObject("Scripting.FileSystemObject")
mostRecent = Array(Nothing, Nothing)
For Each f In fso.GetFolder(src).Files
If LCase(fso.GetExtensionName(f.Name)) = "sch" Then
If mostRecent(0) Is Nothing Then
Set mostRecent(0) = f
ElseIf f.DateLastModified > mostRecent(0).DateLastModified Then
Set mostRecent(1) = mostRecent(0)
Set mostRecent(0) = f
' VBScript's Or does not short-circuit, so test mostRecent(1) separately
ElseIf mostRecent(1) Is Nothing Then
Set mostRecent(1) = f
ElseIf f.DateLastModified > mostRecent(1).DateLastModified Then
Set mostRecent(1) = f
End If
End If
Next
For i = 0 To 1
If Not mostRecent(i) Is Nothing Then mostRecent(i).Copy dst & "\"
Next
Edit: The above code isn't too extensible, though. If you need more than just the most recent 2 files you may want to take a slightly different approach. Create an array the size of the number of files you want to handle, and do a sorted insert as long as you have free slots or the current file is newer than the oldest file already in the array.
src = "C:\source\folder"
dst = "C:\destination\folder"
num = 2
last = num-1
Function IsNewer(a, b)
IsNewer = False
If b Is Nothing Then
IsNewer = True
Exit Function
End If
If a.DateLastModified > b.DateLastModified Then IsNewer = True
End Function
Set fso = CreateObject("Scripting.FileSystemObject")
ReDim mostRecent(last)
For i = 0 To last
Set mostRecent(i) = Nothing
Next
For Each f In fso.GetFolder(src).Files
If LCase(fso.GetExtensionName(f.Name)) = "sch" Then
If IsNewer(f, mostRecent(last)) Then Set mostRecent(last) = Nothing
For i = last To 1 Step -1
If Not IsNewer(f, mostRecent(i-1)) Then Exit For
If Not mostRecent(i-1) Is Nothing Then
Set mostRecent(i) = mostRecent(i-1)
Set mostRecent(i-1) = Nothing
End If
Next
If mostRecent(i) Is Nothing Then Set mostRecent(i) = f
End If
Next
For i = 0 To num-1
If Not mostRecent(i) Is Nothing Then mostRecent(i).Copy dst & "\"
Next
An alternative would be shelling out to the CMD-builtin dir command and reading its output:
num = 2
Set fso = CreateObject("Scripting.FileSystemObject")
Set sh = CreateObject("WScript.Shell")
cmd = "cmd /c dir /a-d /b /o-d """ & sh.CurrentDirectory & "\*.sch"""
Set dir = sh.Exec(cmd)
Do While dir.Status = 0
WScript.Sleep 100
Loop
i = num
Do Until i = 0 Or dir.StdOut.AtEndOfStream
f = dir.StdOut.ReadLine
fso.CopyFile f, dst & "\"
i = i - 1
Loop
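For comparison, the whole "copy the N newest files" task collapses to a sort in a language with higher-level file APIs. A Python sketch of the same idea, with placeholder paths:

```python
import shutil
from pathlib import Path

def copy_newest(src, dst, ext=".sch", count=2):
    """Copy the `count` most recently modified `ext` files from src to dst."""
    files = sorted(Path(src).glob(f"*{ext}"),
                   key=lambda p: p.stat().st_mtime,
                   reverse=True)                 # newest first
    for f in files[:count]:
        shutil.copy2(f, dst)                     # copy2 preserves timestamps
    return [f.name for f in files[:count]]

# copy_newest(r"C:\source\folder", r"C:\destination\folder")
```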
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18065976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: When should the label element be used? I was learning some HTML and got confused about the use of the label element, because I found it in many places: with inputs in a form, with the optgroup tag for sections in a select element, before the textarea element, etc.
So, is there a rule for when to use it, and how to avoid using it the wrong way, especially in HTML5?
A: The <label> element should be used with form fields: most types of <input>, <select> and <textarea>. It has a for attribute that holds the id of the related element, so if you click the label, the related element is focused.
Example Usage at Jsbin
<label for="textinput">Enter data here</label>
<input id="textinput">
<input type="checkbox" id="checkbox">
<label for="checkbox">What this box does</label>
<input type="radio" id="radio_opt1" name="radiogroup">
<label for="radio_opt1">Option description</label>
<input type="radio" id="radio_opt2" name="radiogroup">
<label for="radio_opt2">Option description</label>
<label for="select">Select an option</label>
<select id="select">
<option>Some option</option>
</select>
<label for="textarea">Enter data into the textarea</label>
<textarea id="textarea"></textarea>
In <optgroup> elements, there is a label attribute, which is not the same as the label element, although its function is similar: identifying a certain group of options:
<select>
<optgroup label="First group">
<option>Some option</option>
</optgroup>
<optgroup label="Second group">
<option>Some option</option>
</optgroup>
</select>
A:
Label: This attribute explicitly associates the label being defined with another control.
So a label should be used when you want to show some text or a caption for another control, like a textbox or checkbox.
And the important thing is
When present, the value of this attribute must be the same as the value of the id attribute of some other control in the same document. When absent, the label being defined is associated with the element's contents.
Look here for the documentation.
A: No, it's not HTML5 exclusive :)
A label can be used in connection with form elements such as <input>, <select>, and <textarea>. Clicking the label automatically moves focus to the connected element.
There are two ways of connecting a label with an element:
*
*Put the element inside the label
*Add a for attribute to the label, where the for value is the id of the element to be connected
Example (taken from http://htmlbook.ru/html/label):
<!DOCTYPE HTML>
<html>
<head>
<meta charset="utf-8">
<title>LABEL</title>
</head>
<body>
<form action="handler.php">
<p><b>Lorem ipsum dolor sit amet...</b></p>
<p><input type="checkbox" id="check1"><label for="check1">Lorem</label><Br>
<input type="checkbox" id="check2"><label for="check2">Ipsum</label><Br>
<input type="checkbox" id="check3"><label for="check3">Dolor</label><Br>
<input type="checkbox" id="check4"><label for="check4">Sit amet</label></p>
</form>
</body>
</html>
A: It should only be used in forms with other elements. It can be placed before, after, or around an existing form control.
Here's an example by W3Schools.
<form action="demo_form.asp">
<label for="male">Male</label>
<input type="radio" name="sex" id="male" value="male"><br>
<label for="female">Female</label>
<input type="radio" name="sex" id="female" value="female"><br>
<input type="submit" value="Submit">
</form>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19067703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Windows Phone 7.1, Silverlight NUnit Project template and Moq: TypeLoadException It's time to do some unit testing with MVVM Light ViewModels.
Setup:
*
*Visual Studio 2010 SP 1
*Windows Phone 7.1 SDK Release Candidate
*Silverlight NUnit Project template
*Moq (4.0.10827 Final)
Steps:
*
*Create a new MvvmLight (WP7) -project
*Convert project to WP7.1
*Create a new Silverlight NUnit project
*Reference the WP7 project from the Silverlight NUnit project
*Add a dummy method to the MainViewModel (e.g. public string DoSomething())
*Add a test that instantiates MainViewModel, calls the dummy method and asserts.
*Run tests -> everything should work as expected
*Add reference to Moq
*Add a second test method with some Moq. I simply copy-pasted this demo code from Moq's site:
var mock = new Mock<ILoveThisFramework>();
// WOW! No record/replay weirdness?! :)
mock.Setup(framework => framework.DownloadExists("2.0.0.0"))
.Returns(true)
.AtMostOnce();
// Hand mock.Object as a collaborator and exercise it,
// like calling methods on it...
ILoveThisFramework lovable = mock.Object;
bool download = lovable.DownloadExists("2.0.0.0");
// Verify that the given method was indeed called with the expected value
mock.Verify(framework => framework.DownloadExists("2.0.0.0"));
*Run tests.
This is what I get via the NUnit runner at step 9:
SilverlightNUnitProject2.SilverlightTests.TestSomething:
System.TypeLoadException : Could not load type 'System.Action' from assembly 'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
Question:
What is going on here? Which part of my setup is failing and why?
Update!
I found a blog post related to this problem here. It inspired me to download and try the exact same version of Moq (3.1.416.3) used in that article. And whaddayaknow? It works.
I'm not going to put this up as an answer, because I still don't know what's going on here. The original question still stands, I think.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7315955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: iOS 16.1 breaks UITextView tokenizer for words and sentences? The built-in UITextView tokenizer function rangeEnclosingPosition(_:with:inDirection:) seems to have broken in iOS 16.1 for the word and sentence granularity.
*
*word doesn't seem to ever return a range
*sentence only works for the very last sentence in the text view
Is anyone else using the tokenizer (UITextInputTokenizer) property of UITextView to parse sentences, and is there another way?
I'm using it to select a full sentence in one tap.
Minimal reproduction
import UIKit
class ViewController: UIViewController {
let textView = UITextView()
override func viewDidLoad() {
super.viewDidLoad()
textView.translatesAutoresizingMaskIntoConstraints = false
textView.isScrollEnabled = false
textView.isEditable = false
textView.font = .preferredFont(forTextStyle: .headline)
textView.text = "Lorem ipsum dolor sit amet consectetur adipisicing elit. Odit, asperiores veniam praesentium repellat doloribus ut und. Soluta et hic velit aliquid totam aperiam ipsam ex odio, voluptatem iste saepe sit."
self.view.addSubview(textView)
NSLayoutConstraint.activate([
textView.leftAnchor.constraint(equalTo: view.leftAnchor, constant: 16),
textView.rightAnchor.constraint(equalTo: view.rightAnchor, constant: -16),
textView.centerYAnchor.constraint(equalTo: view.centerYAnchor, constant: 0),
])
let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(tappedLabel(sender:)))
textView.addGestureRecognizer(tapGestureRecognizer)
}
@objc func tappedLabel(sender: UITapGestureRecognizer) {
guard sender.state == .ended else { return }
let location = sender.location(in: textView)
let textposition = textView.closestPosition(to: location)!
/// This works to grab a text range for a tapped sentence in iOS < 16.1
/// but returns null in 16.1 for all but the final sentence.
let expandedRange = textView.tokenizer.rangeEnclosingPosition(textposition, with: .sentence, inDirection: .layout(.right))
textView.becomeFirstResponder()
textView.selectedTextRange = expandedRange
}
}
A: Try changing inDirection: to UITextWritingDirection.rightToLeft.rawValue.
This worked for me (even though it seems logically wrong to me). Hope it helps:
guard let wordRange = textView.tokenizer.rangeEnclosingPosition(tapPos, with: .word, inDirection: UITextDirection(rawValue: UITextWritingDirection.rightToLeft.rawValue) ) else {
return nil
}
return textView.text(in: wordRange)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74281213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to watch a window-scope variable from an Angular controller properly I am aware that I can wrap an out-of-scope variable in a function to somewhat achieve this:
$scope.$watch(
function () {
return $window.mArray
}, function(){
// code
}
);
But this doesn't get triggered unless something else triggers the digest cycle. Is there a more proper way to do this?
A: Here is a demo of how you should use $watch:
angular.module('myapp', []).controller('ctrl', function($scope, $window){
$scope.data = 0;
$scope.changeData = function(){
$scope.data = Math.random();
}
$scope.$watch('data', function(newValue, oldValue){
console.log(newValue);
}, true);
});
Hope this helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33516983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: JSF 2.0 URL parameter needed on larger scope I have found a few good replies to similar content so far, but never something that solves my issue. I am trying to accomplish this in the best manner possible.
Within my application (JSF 2.0 running on Glassfish), I have a list of events (let's call this the EventPage). It is possible to click on each event to then show a page of "results" (ResultPage), showing a list of people who have attended this event.
On the EventPage, the link is made as such:
<h:link value="#{event.eventName}" outcome="displayResults">
<f:param name="eventCode" value="#{event.eventCode}"/>
</h:link>
Then, on the outcome displayResult, I have code such as this in my backing bean (inspired by a similar question):
@ManagedBean
@RequestScoped
public class DisplayResults {
@ManagedProperty(value="#{param.eventCode}")
...
This works well. The results are displayed in a Datatable. Now I want the ability to sort them. So I've followed this example : http://www.mkyong.com/jsf2/jsf-2-datatable-sorting-example/.
But once I change the scope of my backing bean to something other than request, I can't use @ManagedProperty anymore, and thus I'm thinking I have to resort to something less elegant such as:
public String getPassedParameter() {
FacesContext facesContext = FacesContext.getCurrentInstance();
this.passedParameter = (String) facesContext.getExternalContext().
getRequestParameterMap().get("id");
return this.passedParameter;
}
Also, reading this forum, I share the opinion that if you have to dig down into the FacesContext, you are probably doing it wrong.
SO: 1. Is it possible to sort a DataTable without refreshing the whole view, only the datatable in question? 2. Is there another good solution to get the URL parameter (or use different means)?
Thanks!
A: Use <f:viewParam> (and <f:event>) in the target view instead of @ManagedProperty (and @PostConstruct).
<f:metadata>
<f:viewParam name="eventCode" value="#{displayResults.eventCode}" />
<f:event type="preRenderView" listener="#{displayResults.init}" />
</f:metadata>
As a bonus, this also allows for more declarative conversion and validation without the need to do it in the @PostConstruct.
See also:
*
*ViewParam vs @ManagedProperty(value = "#{param.id}")
*Communication in JSF2 - Processing GET request parameters
| {
"language": "en",
"url": "https://stackoverflow.com/questions/8933197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Instantiating with a Pool class via PoolObjectFactory interface Here is an example of using Java Pool (pool of generics) in order to instantiate TouchEvents for Android:
import java.util.ArrayList;
import java.util.List;
public class Pool<T> {
public interface PoolObjectFactory<T> {
public T createObject();
}
private final List<T> freeObjects;
private final PoolObjectFactory<T> factory;
private final int maxSize;
public Pool(PoolObjectFactory<T> factory, int maxSize) {
this.factory = factory;
this.maxSize = maxSize;
this.freeObjects = new ArrayList<T>(maxSize);
}
public T newObject() {
T object = null;
if (freeObjects.isEmpty()) {
object = factory.createObject();
} else {
object = freeObjects.remove(freeObjects.size() - 1);
}
return object;
}
public void free(T object) {
if (freeObjects.size() < maxSize) {
freeObjects.add(object);
}
}
}
However, I don't really understand how this code works:
if (freeObjects.isEmpty()) {
object = factory.createObject();
} else {
object = freeObjects.remove(freeObjects.size() - 1);
}
Let's say we have:
touchEventPool = new Pool<TouchEvent>(factory, 100);
Does this mean it is going to store an array of 100 events (and when #101 comes in, it will dispose of #1, first-in-first-out)?
I assume it's supposed to hold some maximum number of objects and then dispose of the extras. I read the book's description like 10 times and couldn't get it. Maybe someone can explain how this works?
A:
I assume it supposed to hold some maximum number of objects and then dispose the extra. I read book's description like 10 times.. and couldn't get it. Maybe someone explain how this works?
Sort of. The class keeps a cache of pre-created objects in a list called freeObjects. When you ask for a new object (via the newObject method) it will first check the pool to see if an object is available for use. If the pool is empty, it just creates an object and returns it to you. If there is an object available, it removes the last element in the pool and returns it to you.
Annotated:
if (freeObjects.isEmpty()) {
// The pool is empty, create a new object.
object = factory.createObject();
} else {
// The pool is non-empty, retrieve an object from the pool and return it.
object = freeObjects.remove(freeObjects.size() - 1);
}
And when you return an object to the cache (via the free() method), it will only be placed back into the pool if the maximum size of the pool has not been met.
Annotated:
public void free(T object) {
// If the pool is not already at its maximum size.
if (freeObjects.size() < maxSize) {
// Then put the object into the pool.
freeObjects.add(object);
}
// Otherwise, just ignore this call and let the object go out of scope.
}
If the pool's max size has already been reached, the object you are freeing is not stored and is (presumably) subject to garbage collection.
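A direct Python translation of the same mechanics may make the reuse path clearer (a sketch, not the book's code):

```python
class Pool:
    """Minimal object pool mirroring the Java example:
    a factory plus a bounded list of free objects."""

    def __init__(self, factory, max_size):
        self.factory = factory
        self.max_size = max_size
        self.free_objects = []

    def new_object(self):
        # Reuse a pooled object if one is available, else create a fresh one.
        if self.free_objects:
            return self.free_objects.pop()
        return self.factory()

    def free(self, obj):
        # Keep the object only if the pool is below its maximum size;
        # otherwise drop it and let the garbage collector reclaim it.
        if len(self.free_objects) < self.max_size:
            self.free_objects.append(obj)

pool = Pool(dict, 100)
event = pool.new_object()   # pool empty: factory creates a new dict
pool.free(event)            # returned to the pool...
reused = pool.new_object()  # ...and handed back out on the next request
print(reused is event)      # True
```

Calling free() on a full pool simply discards the object, matching the Java version's behavior.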
A: The idea of any pool is to create a controlled environment where (usually) there is no need to create new (event) instances, because free, unused instances from the pool can be re-used.
When you create
touchEventPool = new Pool<TouchEvent>(factory, 100);
you hope 100 instances will be enough at any particular moment of the program's life.
So by the time you want to get the 101st event, the program will probably have freed 5, 20, or even 99 events already, and the pool will be able to reuse any of them.
If there are no free instances then, depending on the pool policy, either a new one will be created or the requesting thread will wait for other threads to release one back to the pool. In this particular implementation, a new one is created.
A: I think the main point of an object pool is to reduce the frequency of object instantiations.
Does this mean it is going to store an array of 100 events (and when #101 comes in, it will dispose of #1, first-in-first-out)?
I don't think so. The maximum number 100 limits freeObjects, not the objects currently in use. When an object is no longer used, you should free it. The freed object won't be discarded but stored as a freeObject (the max is the number of these spared objects). The next time you need another object, you don't have to instantiate a new one; you can simply reuse one of the spared freeObjects.
Thus you can avoid costly object instantiations, which can improve performance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27803465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: SMTP send mail is not working for Office 365 Here is a peculiar problem. The intention is to send mail via SMTP for Office 365.
I have been able to consistently send mail from my local laptop.
But when deployed on our server (behind a firewall), it does not succeed. Note: The port 587 for smtp.office365.com is accessible and confirmed on the server. Here are the properties via which it successfully works from my local computer.
Properties props = new Properties();
props.put("mail.smtp.starttls.enable", "true");
props.put("mail.smtp.connectiontimeout", MAIL_TIMEOUT);
props.put("mail.smtp.timeout", MAIL_TIMEOUT);
props.put("mail.debug", true);
this.session = Session.getInstance(props);
session.setDebug(true);
Transport transport = session.getTransport();
transport.connect("smtp.office365.com", 587, email, pass);
But fails on server. Here are the server debug logs:
DEBUG: setDebug: JavaMail version 1.6.2
DEBUG: getProvider() returning javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Oracle]
DEBUG SMTP: useEhlo true, useAuth false
DEBUG SMTP: trying to connect to host "smtp.office365.com", port 587, isSSL false
220 PN1PR0101CA0017.outlook.office365.com Microsoft ESMTP MAIL Service ready at Fri, 28 Jun 2019 06:39:41 +0000
DEBUG SMTP: connected to host "smtp.office365.com", port: 587
EHLO appqa
250-PN1PR0101CA0017.outlook.office365.com Hello [182.73.191.100]
250-SIZE 157286400
250-PIPELINING
250-DSN
250-ENHANCEDSTATUSCODES
250-STARTTLS
250-8BITMIME
250-BINARYMIME
250-CHUNKING
250 SMTPUTF8
DEBUG SMTP: Found extension "SIZE", arg "157286400"
DEBUG SMTP: Found extension "PIPELINING", arg ""
DEBUG SMTP: Found extension "DSN", arg ""
DEBUG SMTP: Found extension "ENHANCEDSTATUSCODES", arg ""
DEBUG SMTP: Found extension "STARTTLS", arg ""
DEBUG SMTP: Found extension "8BITMIME", arg ""
DEBUG SMTP: Found extension "BINARYMIME", arg ""
DEBUG SMTP: Found extension "CHUNKING", arg ""
DEBUG SMTP: Found extension "SMTPUTF8", arg ""
STARTTLS
220 2.0.0 SMTP server ready
Exception in thread "main" javax.mail.MessagingException: Could not convert socket to TLS;
nested exception is:
java.net.SocketTimeoutException: Read timed out
at com.sun.mail.smtp.SMTPTransport.startTLS(SMTPTransport.java:2155)
at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:752)
at javax.mail.Service.connect(Service.java:366)
at com.company.app.MailReader.getTransport(MailReader.java:269)
at io.vavr.control.Try.of(Try.java:75)
at com.company.app.MailReader.<init>(MailReader.java:59)
at com.company.services.MailService.getNewMailReader(MailService.java:82)
at com.company.services.MailService.start(MailService.java:46)
at com.company.Main.main(Main.java:34)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593)
at sun.security.ssl.InputRecord.read(InputRecord.java:529)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)
at com.sun.mail.util.SocketFetcher.configureSSLSocket(SocketFetcher.java:626)
at com.sun.mail.util.SocketFetcher.startTLS(SocketFetcher.java:553)
at com.sun.mail.smtp.SMTPTransport.startTLS(SMTPTransport.java:2150)
... 8 more
A: Check whether the server has the same set of certificates as your local computer.
The 220 response from the server does not mean that the TLS session is already established, it just means that the client may start negotiating it:
After receiving a 220 response to a STARTTLS command, the client MUST start the TLS negotiation before giving any other SMTP commands. If, after having issued the STARTTLS command, the client finds out that some failure prevents it from actually starting a TLS handshake, then it SHOULD abort the connection.
(from RFC 3207)
At this point, a missing certificate is the most likely problem.
A: Check the JRE version on the server and compare it to the version on your local computer.
This is an environment-related issue, as the same code behaves differently on different machines. Without the full picture I cannot answer with certainty, but I hope to provide some insight for further investigation. My analysis follows:
*
*First, I don't think it's an SSL certificate issue; the root-cause error is clear:
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
...
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
...
This means the socket has been established, but the handshake phase of converting the socket to TLS has failed. If the certificate were not valid, that would be reported after the handshake. Let's look at the code from the SocketFetcher.java class:
/*
* Force the handshake to be done now so that we can report any
* errors (e.g., certificate errors) to the caller of the startTLS
* method.
*/
sslsocket.startHandshake();
/*
* Check server identity and trust.
*/
boolean idCheck = PropUtil.getBooleanProperty(props,
prefix + ".ssl.checkserveridentity", false);
if (idCheck)
checkServerIdentity(host, sslsocket);
if (sf instanceof MailSSLSocketFactory) {
MailSSLSocketFactory msf = (MailSSLSocketFactory)sf;
if (!msf.isServerTrusted(host, sslsocket)) {
throw cleanupAndThrow(sslsocket,
new IOException("Server is not trusted: " + host));
}
}
}
the socket encountered the timeout at this line: sslsocket.startHandshake(), which is before the certificate validation.
*
*Second, you have already mentioned that firewalls are disabled, and we can see that the socket is correctly established, as is the telnet connection, so I don't think it's a firewall issue either.
*It seems like a protocol issue, mostly because this happened during the handshake phase; otherwise we would see a different, more explicit error such as a certificate error or connection timeout. This is a socketRead timeout, which indicates that the client (your server) is expecting some data from the server (Office 365), but the server doesn't respond; it's as if they are not talking to each other.
*The compiled code is not the issue here, but part of this process is environment-related: the SSLSocketImpl class comes from the JRE, not from your compilation. And this is the exact (decompiled) code where the protocol is implemented:
private void performInitialHandshake() throws IOException {
Object var1 = this.handshakeLock;
synchronized(this.handshakeLock) {
if (this.getConnectionState() == 1) {
this.kickstartHandshake();
if (this.inrec == null) {
this.inrec = new InputRecord();
this.inrec.setHandshakeHash(this.input.r.getHandshakeHash());
this.inrec.setHelloVersion(this.input.r.getHelloVersion());
this.inrec.enableFormatChecks();
}
this.readRecord(this.inrec, false);
this.inrec = null;
}
}
}
The above code is from JRE 1.8.0_181; the code on your server may be different. This is why it's necessary to check your server's JRE version.
*
*Using the same code you provided at the beginning, I could correctly connect to Office 365.
A: Try adding this to your properties, and it should do the trick.
props.put("mail.smtp.ssl.trust", "smtp.office365.com");
A: The problem was one particular rule in firewall.
Deleting the rule in the firewall fixed this issue. No specific code change was needed to make it work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56802175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Why does a:hover get overriden in CSS? If I have this CSS:
a:link { color: blue; }
a:hover { color: red; }
#someID a:link { color: black; }
Links under the ID always appear black, even on hover. I'm aware that using an ID gives higher priority; however, I'm not overriding the :hover selector, only the :link selector, so shouldn't the hover display in red?
A: The :link pseudo-class applies to the link even while you are hovering over it. As the style with the ID is more specific, it overrides the others.
The only reason that the :hover style overrides the :link style at all is that it comes later in the style sheet. If you place them in this order:
a:hover { color: red; }
a:link { color: blue; }
the :link style is later in the style sheet and overrides the :hover style. The link stays blue when you hover over it.
To make the :hover style work for the black link you have to make it at least as specific as the :link style, and place it after it in the style sheet:
a:link { color: blue; }
a:hover { color: red; }
#someID a:link { color: black; }
#someID a:hover { color: red; }
A: There's an order issue, as explained in W3Schools:
Note: a:hover MUST come after a:link and a:visited in the CSS definition in order to be effective!!
Note: a:active MUST come after a:hover in the CSS definition in order to be effective!!
http://www.w3schools.com/CSS/css_pseudo_classes.asp
| {
"language": "en",
"url": "https://stackoverflow.com/questions/718226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Curl FTPS upload with username:password syntax So the curl man pages say to do this for an upload:
curl --upload-file "{file1,file2}" http://www.example.com
or
curl -T "img[1-1000].png" ftp://ftp.example.com/upload/
I know how to establish a connection...here's my command which works.
curl -k --user myusername:mypassword ftps://ftp.yourftp.com
But the man pages don't show how to establish a connection and upload a file in one command, only how to upload a file. So what's the correct syntax for connecting and uploading? How do I combine the last command, which establishes a connection, with the curl upload commands?
A: Just an update: here's what I use on macOS, for anyone else who's wondering. You can include variables too. (This is Zsh.)
curl -g -k -T ~/Documents/myfile.txt ftps://myusername:mypassword@myserver.com/mydirectorypath/
curl -g -k -T ~/Documents/$(echo "$myfile")_.txt ftps://myusername:mypassword@myserver.com/mydirectorypath/
A: curl -k -T /localfilepath/temptestfile.txt ftps://yourusername:[email protected]/somedirectory/
A: In case someone is looking the answer for the ftp protocol (I googled it and found this question):
curl -T local_file.txt -u "login:password" ftp://my-ftp-server.com/remote/ftp/path/
It is also possible to use the -n option instead of -u to have curl read credentials from a ~/.netrc file (which might be safer in scripts).
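For reference, a minimal sketch of the .netrc layout that -n reads (hostname and credentials are placeholders; the real file belongs at ~/.netrc with 0600 permissions):

```shell
# write a sample credentials file (placeholder values)
cat > netrc.sample <<'EOF'
machine my-ftp-server.com
login mylogin
password mypassword
EOF
chmod 600 netrc.sample

# with the real file installed as ~/.netrc, the upload from the answer becomes:
# curl -n -T local_file.txt ftp://my-ftp-server.com/remote/ftp/path/
cat netrc.sample
```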
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62333910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Would I be able to run Java programs without having the entire kit downloaded? I have recently started coding in Java.
I was wondering, is it mandatory to have the entire Java kit to run the program on a different computer? Because I am not using the resources from the entire API, just a few particular ones.
I wouldn't want to get the entire JDK pack for a single project. Any help would be nice,
Thank you.
A: If you want to develop your own Java applications, then yes, you need the entire JDK. However, there are several JDK alternatives available and I believe the SE JDK should be enough for you. You can find it here
A: Yes, you can; you only need the JRE (runtime). However, to develop a program you do need to have the JDK.
A: To run an already completed app in other machines, you just need the JRE (Java Runtime Environment) installed in those machines. You need the JDK (Java Development Kit) for development. The JDK already comes with the JRE.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23838326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Should extending a es2015 class copy the properties on the constructor? Apologies if this seems poorly researched, but I'm having trouble finding any mention of this behavior:
class ClassA {...}
ClassA.property = 123;
class ClassB extends ClassA {...}
ClassB.property //undefined
Is there a preferred/idiomatic way to copy these properties to the subclass's constructor?
Edit: As it turns out, this was a premature distillation of the problem which was actually caused by react-css-modules not copying over React-specific properties. A more accurate distillation:
class ClassA extends SomeClass {...}
console.log('ClassA', ClassA.defaultProps);
//ClassA Object {propA: ""}
console.log('ClassA CSSMod', CSSModules(ClassA, styles).defaultProps);
//ClassA CSSMod undefined
ClassA.defaultProps = SomeClass.defaultProps;
console.log('ClassA CssMod after explicit copy', CSSModules(ClassA, styles).defaultProps);
//ClassA CssMod after explicit copy Object {propA: ""}
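For what it's worth, a runnable sketch of the situation: in a spec-compliant engine the simple distillation above actually does inherit the static through the constructor's prototype chain, and the loss only appears when a wrapper returns a brand-new function, as the CSSModules call seems to. The hoistStatics and naiveWrap names below are hypothetical, purely for illustration (not part of react-css-modules):

```javascript
class ClassA {}
ClassA.property = 123;

class ClassB extends ClassA {}
// In a spec-compliant engine the static IS reachable through the
// constructor's prototype chain:
console.log(ClassB.property); // 123

// The loss described in the edit happens when a wrapper returns a
// brand-new function. A generic copy helper works around that:
function hoistStatics(target, source) {
  for (const key of Object.getOwnPropertyNames(source)) {
    if (!(key in target)) {
      Object.defineProperty(
        target, key, Object.getOwnPropertyDescriptor(source, key));
    }
  }
  return target;
}

function naiveWrap(Component) {
  // a stand-in for a higher-order component that forgets statics
  return function WrappedComponent(props) { return new Component(props); };
}

const Wrapped = hoistStatics(naiveWrap(ClassA), ClassA);
console.log(Wrapped.property); // 123
```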
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36993129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: arrange() not working when dealing with letters and numbers I've seen many posts related to arrange() issues, but none of them solved my situation; hopefully this is not a duplicate. I have some columns named "Q1", "Q2", "Q3" and so on. After calculating some basic descriptive stats with rstatix::get_summary_stats(), I need to arrange the new column variable in ascending order (i.e., Q1 before Q2 before Q3, etc.). I'm sure this is a silly problem, but I can't see what I'm doing wrong.
*
*the raw data looks like this:
ID Q1 Q2 Q3 Q4 Q5 Q6 Q7 Q8 Q9 Q10 Q11 Q12 Q13 Q14 Q15
1 PART1 4 1 1 5 5 5 1 5 1 1 3 5 5 1 5
2 PART2 5 4 1 5 5 4 1 5 2 1 3 5 4 1 5
3 PART3 2 4 3 5 5 4 1 5 2 1 3 5 4 1 5
so on...
*
*My attempt:
descriptive <- data %>%
rstatix::get_summary_stats(show = c("mean", "sd", "median", "iqr", "min", "max")) %>%
mutate_if(is.numeric, round, 2) %>%
dplyr::arrange(variable)
*
*The first 10 lines:
A tibble: 15 x 8
variable n mean sd median iqr min max
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Q1 63 3.94 1.03 4 2 2 5
2 Q10 63 1.84 0.88 2 2 1 3
3 Q11 63 2.62 1.31 3 3 1 5
4 Q12 63 3.98 1.01 4 2 2 5
5 Q13 63 4.33 0.8 5 1 2 5
6 Q14 63 1.91 0.88 2 2 1 4
7 Q15 63 4.25 0.95 5 1 2 5
8 Q2 63 2.86 1.58 3 3 1 5
9 Q3 63 1.97 1.06 2 2 1 4
10 Q4 63 3.98 1.04 4 2 2 5
Note: I've already tried ungroup() and across(starts_with("Q*"))), but nothing works. Any thoughts would be much appreciated, thanks in adv.
*
*data:
> dput(descriptive)[1:10, ]
structure(list(variable = c("Q1", "Q10", "Q11", "Q12", "Q13",
"Q14", "Q15", "Q2", "Q3", "Q4", "Q5", "Q6", "Q7", "Q8", "Q9"),
n = c(63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63,
63, 63), mean = c(3.94, 1.84, 2.62, 3.98, 4.33, 1.91, 4.25,
2.86, 1.97, 3.98, 4.21, 4.05, 2.38, 4.03, 2.25), sd = c(1.03,
0.88, 1.31, 1.01, 0.8, 0.88, 0.95, 1.58, 1.06, 1.04, 0.94,
1.04, 1.36, 1.05, 1.12), median = c(4, 2, 3, 4, 5, 2, 5,
3, 2, 4, 4, 4, 2, 4, 2), iqr = c(2, 2, 3, 2, 1, 2, 1, 3,
2, 2, 1, 2, 2.5, 2, 2), min = c(2, 1, 1, 2, 2, 1, 2, 1, 1,
2, 2, 1, 1, 2, 1), max = c(5, 3, 5, 5, 5, 4, 5, 5, 4, 5,
5, 5, 5, 5, 5)), row.names = c(NA, -15L), class = c("tbl_df",
"tbl", "data.frame"))
A: How about just use arrange() on the integer part of variable?
descriptive %>% arrange(as.integer(gsub("Q","",variable)))
Output:
# A tibble: 15 × 8
variable n mean sd median iqr min max
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Q1 63 3.94 1.03 4 2 2 5
2 Q2 63 2.86 1.58 3 3 1 5
3 Q3 63 1.97 1.06 2 2 1 4
4 Q4 63 3.98 1.04 4 2 2 5
5 Q5 63 4.21 0.94 4 1 2 5
6 Q6 63 4.05 1.04 4 2 1 5
7 Q7 63 2.38 1.36 2 2.5 1 5
8 Q8 63 4.03 1.05 4 2 2 5
9 Q9 63 2.25 1.12 2 2 1 5
10 Q10 63 1.84 0.88 2 2 1 3
11 Q11 63 2.62 1.31 3 3 1 5
12 Q12 63 3.98 1.01 4 2 2 5
13 Q13 63 4.33 0.8 5 1 2 5
14 Q14 63 1.91 0.88 2 2 1 4
15 Q15 63 4.25 0.95 5 1 2 5
A: We could use mixedorder which would work even if the values have different prefix
library(dplyr)
descriptive %>%
arrange(order(gtools::mixedorder(variable)))
-output
# A tibble: 15 × 8
variable n mean sd median iqr min max
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Q1 63 3.94 1.03 4 2 2 5
2 Q2 63 2.86 1.58 3 3 1 5
3 Q3 63 1.97 1.06 2 2 1 4
4 Q4 63 3.98 1.04 4 2 2 5
5 Q5 63 4.21 0.94 4 1 2 5
6 Q6 63 4.05 1.04 4 2 1 5
7 Q7 63 2.38 1.36 2 2.5 1 5
8 Q8 63 4.03 1.05 4 2 2 5
9 Q9 63 2.25 1.12 2 2 1 5
10 Q10 63 1.84 0.88 2 2 1 3
11 Q11 63 2.62 1.31 3 3 1 5
12 Q12 63 3.98 1.01 4 2 2 5
13 Q13 63 4.33 0.8 5 1 2 5
14 Q14 63 1.91 0.88 2 2 1 4
15 Q15 63 4.25 0.95 5 1 2 5
Or with parse_number
descriptive %>%
arrange(readr::parse_number(variable))
A: There are already better solutions. Just for fun:
We could split variable column with regex (?<=[A-Za-z])(?=[0-9]) and then arrange:
library(tidyr)
library(dplyr)
df %>%
separate(variable, c("quarter", "number"), sep = "(?<=[A-Za-z])(?=[0-9])", remove = FALSE) %>%
arrange(quarter, as.numeric(number)) %>%
select(-c(quarter, number))
variable n mean sd median iqr min max
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Q1 63 3.94 1.03 4 2 2 5
2 Q2 63 2.86 1.58 3 3 1 5
3 Q3 63 1.97 1.06 2 2 1 4
4 Q4 63 3.98 1.04 4 2 2 5
5 Q5 63 4.21 0.94 4 1 2 5
6 Q6 63 4.05 1.04 4 2 1 5
7 Q7 63 2.38 1.36 2 2.5 1 5
8 Q8 63 4.03 1.05 4 2 2 5
9 Q9 63 2.25 1.12 2 2 1 5
10 Q10 63 1.84 0.88 2 2 1 3
11 Q11 63 2.62 1.31 3 3 1 5
12 Q12 63 3.98 1.01 4 2 2 5
13 Q13 63 4.33 0.8 5 1 2 5
14 Q14 63 1.91 0.88 2 2 1 4
15 Q15 63 4.25 0.95 5 1 2 5
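All three answers work around the same fact: as plain strings, "Q10" sorts before "Q2" lexicographically, so you must sort on the numeric part. A quick language-agnostic illustration of that ordering (sketched in Python, not taken from the R answers):

```python
cols = ["Q1", "Q10", "Q11", "Q12", "Q2", "Q3"]

# default: lexicographic, so "Q10" lands before "Q2"
print(sorted(cols))

# key on the numeric suffix, mirroring arrange(parse_number(variable))
print(sorted(cols, key=lambda c: int(c[1:])))
```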
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74933605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I make input field and an image come in same line? I am trying to make this input field come in the same line as the image please help here is the output and the code
<img class="comment-profile-pic" src="https://drgsearch.com/wp-content/uploads/2020/01/no-photo.png" alt="">
<input type="text" class="post-comment">
CSS:
.comment-profile-pic
{
border-radius: 50%;
width: 50px;
}
Output screenshot: https://i.stack.imgur.com/Zd0ww.png
A: You can put img and input in flex div :
.comment-profile-pic {
border-radius: 50%;
width: 50px;
}
.wrapper {
display: flex;
align-items: center;
}
<div class="wrapper">
<img class="comment-profile-pic" src="https://drgsearch.com/wp-content/uploads/2020/01/no-photo.png" alt="">
<input type="text" class="post-comment">
</div>
A: Use flexbox: wrap both items in a div and then add a class with display: flex.
A: You can also align the tops of the image and the input on the same line.
<html>
<head>
<title>Example</title>
<style>
.comment-profile-pic {
width: 100px;
}
.one-line {
display: flex;
align-items: top;
}
input {
height: 25px;
}
</style>
</head>
<body>
<div class="one-line">
<img class="comment-profile-pic" src="https://drgsearch.com/wp-content/uploads/2020/01/no-photo.png" alt="">
<input type="text" class="post-comment">
</div>
</body>
</html>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/68713971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Bash loop, print current iteration? Say you have a simple loop
while read line
do
printf "${line#*//}\n"
done < text.txt
Is there an elegant way of printing the current iteration with the output? Something like
0 The
1 quick
2 brown
3 fox
I am hoping to avoid setting a variable and incrementing it on each loop.
A: To do this, you would need to increment a counter on each iteration (like you are trying to avoid).
count=0
while read -r line; do
printf '%d %s\n' "$count" "${line#*//}"
(( count++ ))
done < test.txt
EDIT: After some more thought, you can do it without a counter if you have bash version 4 or higher:
mapfile -t arr < test.txt
for i in "${!arr[@]}"; do
printf '%d %s\n' "$i" "${arr[i]}"
done
The mapfile builtin reads the entire contents of the file into the array. You can then iterate over the indices of the array, which will be the line numbers and access that element.
A: You can use a range to iterate over; it can be an array, a string, an input line, or a list.
In this example, a list of numbers [0..10] is used with an increment of 2:
#!/bin/bash
for i in {0..10..2}; do
echo " $i times"
done
The output is:
0 times
2 times
4 times
6 times
8 times
10 times
To print the index regardless of the loop range, you have to use a counter variable (COUNTER=0) and increase it on each iteration (COUNTER+1).
My solution prints the counter on each iteration: the for loop traverses an input line, increments the counter by one on each pass, and shows each word of the line:
#!/bin/bash
COUNTER=0
line="this is a sample input line"
for word in $line; do
echo "This is word number $COUNTER: $word"
COUNTER=$((COUNTER+1))
done
The output is:
This is word number 0: this
This is word number 1: is
This is word number 2: a
This is word number 3: sample
This is word number 4: input
This is word number 5: line
A: n=0
cat test.txt | while read line; do
printf "%7s %s\n" "$n" "${line#*//}"
n=$((n+1))
done
This will work in Bourne shell as well, of course.
If you really want to avoid incrementing a variable, you can pipe the output through grep or awk:
cat test.txt | while read line; do
printf " %s\n" "${line#*//}"
done | grep -n .
or
awk '{sub(/.*\/\//, ""); print NR,$0}' test.txt
A: You don't often see it, but you can have multiple commands in the condition clause of a while loop. The following still requires an explicit counter variable, but the arrangement may be more suitable or appealing for some uses.
while ((i++)); read -r line
do
echo "$i $line"
done < inputfile
The while condition is satisfied by whatever the last command returns (read in this case).
Some people prefer to include the do on the same line. This is what that would look like:
while ((i++)); read -r line; do
echo "$i $line"
done < inputfile
A: Update: Other answers posted here are better, especially those of @Graham and @DennisWilliamson.
Something very like this should suit:
tr -s ' ' '\n' <test.txt | nl -ba
You can add a -v0 flag to the nl command if you want indexing from 0.
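For the sample words from the question, that pipeline can be checked directly (file name is illustrative):

```shell
# recreate the question's word list, one word per line
printf 'The\nquick\nbrown\nfox\n' > words.txt

# number every line, starting at 0
nl -ba -v0 words.txt
```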
| {
"language": "en",
"url": "https://stackoverflow.com/questions/10942825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
} |
Q: Kubernetes Ingress Nginx Controller is Not Found Ingress Nginx controller is returning 404 Not Found for the React application. I narrowed it down to the React app because if I try to hit posts.com/posts, it actually returns the JSON list of existing posts, but for the frontend app it keeps showing
GET http://posts.com/ 404 (Not Found)
I looked to some other stackoverflow questions, to no avail unfortunately.
ingress-srv.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "use"
spec:
rules:
- host: posts.com
http:
paths:
- path: /posts/create
pathType: Prefix
backend:
service:
name: posts-clusterip-srv
port:
number: 4000
- path: /posts
pathType: Prefix
backend:
service:
name: query-srv
port:
number: 4002
- path: /posts/?(.*)/comments
pathType: Prefix
backend:
service:
name: comments-srv
port:
number: 4001
- path: /?(.*)
pathType: Prefix
backend:
service:
name: client-srv
port:
number: 3000
client-depl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: client-depl
spec:
replicas: 1
selector:
matchLabels:
app: client
template:
metadata:
labels:
app: client
spec:
containers:
- name: client
image: brachikaa/client
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: client-srv
spec:
selector:
app: client
ports:
- name: client
protocol: TCP
port: 3000
targetPort: 3000
frontend Dockerfile
FROM node:alpine
ENV CI=true
WORKDIR /app
COPY package.json ./
RUN npm install
COPY ./ ./
CMD ["npm", "start"]
Logging the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned default/client-depl-f7cf996cf-cvh6m to minikube
Normal Pulling 11m kubelet Pulling image "brachikaa/client"
Normal Pulled 11m kubelet Successfully pulled image "brachikaa/client" in 42.832431635s
Normal Created 11m kubelet Created container client
Normal Started 11m kubelet Started container client
If you need any other logs, I will gladly provide them. Thanks.
A: In your yamls, there is a path "/?..." for handling the query parameters but this path will not receive traffic from "/" path as there is no prefix match. So you have to create a path "/" with type prefix to solve the issue. Then you can ignore current "/?..." path as it will match prefix with "/" path.
Please try this:
ingress-srv.yaml
__________________
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: posts.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: client-srv
port:
number: 3000
- path: /posts/create
pathType: Prefix
backend:
service:
name: posts-clusterip-srv
port:
number: 4000
- path: /posts
pathType: Prefix
backend:
service:
name: query-srv
port:
number: 4002
- path: /posts/?(.*)/comments
pathType: Prefix
backend:
service:
name: comments-srv
port:
number: 4001
- path: /?(.*)
pathType: Prefix
backend:
service:
name: client-srv
port:
number: 3000
A: so the issue was a really dumb one - I actually set nginx.ingress.kubernetes.io/use-regex: "use" instead of nginx.ingress.kubernetes.io/use-regex: "true"... After three days of checking through the documentation, I finally found it. If anyone encounters a similar problem - there you have it.
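In other words, the annotation block in the manifest above needs:

```yaml
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
```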
A: You have to add following rules in your spec
- path: /
pathType: Prefix
backend:
service:
name: client-srv
port:
number: 3000
This matches all paths.
Reference - https://kubernetes.io/docs/concepts/services-networking/ingress/#examples
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66772611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Unable to connect to micro service through ZUUL api gateway in docker I have the following services, which work fine when deployed on localhost (through Eclipse), but I am unable to invoke the REST service when they are deployed as separate Docker containers.
I'm new to Docker and went through the tutorials to learn how this works.
Following services are running in separate docker containers and configured as follows (local environment)
Eureka
Docker IP : 172.17.0.3
Docker port mapping : 8761:8761
spring.application.name=naming-server
server.port=8761
Zuul API gate way
Docker IP : 172.17.0.4
Docker port mapping : 8765:8765
spring.application.name=gateway-server
server.port=8765
User service
Docker IP : 172.17.0.5
Docker port mapping : 8101:8101
spring.application.name=user-service
server.port=8101
Registered services info in Eureka
Application AMIs Availability Zones Status
USER-SERVICE n/a (1) (1) UP (1) - de4396a354ea:user-service:8101
API-GATEWAY n/a (1) (1) UP (1) - e5dd509065cd:api-gateway:8765
When tried to invoke the service in "User service" through gateway, it's throwing exception
com.netflix.zuul.exception.ZuulException: Forwarding error
at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.handleException(RibbonRoutingFilter.java:198) ~[spring-cloud-netflix-zuul-2.2.0.RELEASE.jar!/:2.2.0.RELEASE]
at org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.forward(RibbonRoutingFilter.java:173) ~[spring-cloud-netflix-zuul-2.2.0.RELEASE.jar!/:2.2.0.RELEASE]
Caused by: java.net.UnknownHostException: de4396a354ea: Name or service not known
at java.base/java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) ~[na:na]
at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) ~[na:na]
at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1515) ~[na:na]
Note: "de4396a354ea" is the container id for "User service"
Please guide on how to resolve this issue and also provide any links where I can get more info regarding deploying microservices in docker containers.
A: Able to resolve this by adding "eureka.instance.hostname=" property.
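For reference, a sketch of what that client-side configuration might look like in application.properties (the hostname value is illustrative, and prefer-ip-address is an alternative the answer does not mention):

```properties
# application.properties of a service registering with Eureka inside Docker
# (hostname value is illustrative)
eureka.instance.hostname=user-service
# alternatively, register by IP so other containers never see the
# unresolvable container id:
# eureka.instance.prefer-ip-address=true
```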
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64412122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: GEE do not import data into array I am having problems importing data from Google Earth Engine to a local array using Python API.
A simplified version of my code:
import ee
ee.Initialize()
#Load a collection
TERRA = ee.ImageCollection("MODIS/006/MOD09A1").select(['sur_refl_b02', 'sur_refl_b07',"StateQA"])
TERRA = TERRA.filterDate('2003-01-01', '2019-12-31')
#Extract an image
TERRA_list = TERRA.toList(TERRA.size())
Terra_img = ee.Image(TERRA_list.get(1))
#Load as array
Terra_img = Terra.get('sur_refl_b02')
np_arr_b2 = np.array(Terra_img.getInfo())
But np_arr_b2 seems to be empty
Does anybody know what I am doing wrong?
Thanks!
A: You are not far from the goal, at least to a certain extent. There's a limit to how many pixels can be transferred over such a request, namely 262144. Your image, when taken over the whole globe (like you are doing), has 3732480000 - over 10000x too many. Still, you can sample a small area and put in the numpy:
import ee
import numpy as np
import matplotlib.pyplot as plt
ee.Initialize()
#Load a collection
TERRA = ee.ImageCollection("MODIS/006/MOD09A1").select(['sur_refl_b02', 'sur_refl_b07',"StateQA"])
TERRA = TERRA.filterDate('2003-01-01', '2019-12-31')
#Extract an image
TERRA_list = TERRA.toList(TERRA.size())
Terra_img = ee.Image(TERRA_list.get(1))
img = Terra_img.select('sur_refl_b02')
sample = img.sampleRectangle()
numpy_array = np.array(sample.get('sur_refl_b02').getInfo())
It's an area over Wroclaw, Poland, and looks like this when passed to matplotlib via imshow:
What if you really need the whole image? That's where Export.image.toDrive comes into play. Here's how you'd download the image to the Google Drive:
bbox = img.getInfo()['properties']['system:footprint']['coordinates']
task = ee.batch.Export.image.toDrive(img,
scale=10000,
description='MOD09A1',
fileFormat='GeoTIFF',
region=bbox)
task.start()
After the task is completed (which you can monitor also from Python), you can download your image from Drive and access it like any other GeoTIFF (see this GIS Stack Exchange post).
A: It seems like you want to download data from earth engine to then use them with numpy. You are doing two things wrong here:
*
*You are treating Google Earth Engine as a download service. This is not the purpose of Earth Engine. If you want to download big amounts of data (like in your case a year of Terra Surface Reflectance) you should download them directly from the service providers. The only thing you should download from Earth Engine are the end-results of your analysis which you conducted within Earth Engine.
*.getInfo does not get you the satellite data, it will only get you the metadata of your ImageCollection in the form of a JSON Object. If you want to the actual raster data you would have to export it (which, as said in 1, you shouldn't do for this amount of data).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/60580995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Samba Share Over OpenVPN, Split Tunneling? I would like my PC to connect to a server running OpenVPN + Samba + file management software. How would I set this up so that the PC only connects to the server, without tunneling all internet traffic through the VPN? I would like to keep the Samba share connection encrypted.
A: You shouldn't push the default route from your OpenVPN server - you push only routes to the network you want to access. For example I have OpenVPN running on internal network, so in OpenVPN server.conf I have this:
push "route 10.10.2.0 255.255.255.0"
push "route 172.16.2.0 255.255.255.0"
This will cause Windows OpenVPN client to add only routes for these 2 networks after connect, so it won't affect the default route and internet traffic.
One caveat is that at least Windows 7 recognizes different networks by their gateways. If the network doesn't have a gateway, Windows is unable to recognize the network and you are unable to choose if is it Home/Work/Public network (which would deny samba access if using Windows Firewall).
The workaround I use is to add a default gateway route with big metric (999), so that it is never used for routing by Windows. I have this in the clients config file, but probably it can be put also to the server's config.
# dummy default gateway because of win7 network identity
route 0.0.0.0 0.0.0.0 vpn_gateway 999
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15078250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: memcached dead but subsys locked service memcached restart yields:
stopping memcached: [failed]
starting memcached: [ ok ]
service memcached status yields:
memcached dead but subsys locked
ls inside /var/lock/subsys/ shows a file named memcached
ls inside /var/run/ shows no pid file named memcached
there is another folder named memcached in here but there is nothing in that folder.
rm /var/lock/subsys/memcached gets rid of the memcached lock file
service restart memcached yeilds:
stopping memcached: [failed]
starting memcached: [ ok ]
service memcached status yields:
memcached dead but subsys locked
what am I doing wrong?
EDIT: I'd like to add that I've searched for this before posting and I'm either already doing the steps listed in said post or that post is years old.
A: Is there another process binding to TCP/11211?
Perhaps you tried to start the memcached service as a non-privileged user and it failed with:
$ service memcached start
Starting memcached: [ OK ]
touch: cannot touch ‘/var/lock/subsys/memcached’: Permission denied
After that, service memcached status will falsely report that memcached is not running:
$ service memcached status
memcached dead but subsys locked
But it is, and it is binding to port 11211, in order to check for this you can use:
$ fuser -n tcp 11211
11211/tcp: 4439
Or:
$ pgrep -l memcached
4439 memcached
Memcached will fail to start because it cannot bind to 11211, as the running instance is already bound to it. Unfortunately there are some systems (I'm looking at you, CENTOS) where it may not leave any useful hint at /var/log/messages or /var/log/syslog. That is why many of the previous answers to this question that fiddle with the binding address will look like they solved the problem.
How do you fix it?
Since service stop memcached will not work, you have to kill it:
$ pkill memcached
Or this (where 4439 is the pid you found in the previous step):
$ kill 4439
Then you can do it right, using sudo:
$ sudo service memcached start
Starting memcached: [ OK ]
$ service memcached status
memcached (pid 6643) is running...
A: Solved this problem by typing the following commands in terminal:
1) su (becoming root).
2) killall -9 memcached (killing memcached).
3) /etc/init.d/memcached start (starting memcached by hands).
Alternatively: 3) service memcached start.
A: check /etc/sysconfig/memcached
make sure the OPTIONS="-l 127.0.0.1" is correct
A: Remove -l from OPTION.
e.g., Instead of
OPTION="-l 2.2.2.2"
try using
OPTION="2.2.2.2"
This worked for me.
A: To resolve this problem, run the following script as root
rm /var/run/memcached/memcached.pid
rm /var/lock/subsys/memcached
service memcached start
A: Removing and reinstalling memcached is what worked for me:
[acool@acool super-confidential-dir]$ sudo yum remove memcached
...
[acool@acool super-confidential-dir]$ sudo yum install memcached
After the above commands and starting it I got:
[acool@acool super-confidential-dir]$ sudo service memcached status
memcached dead but pid file exists
At that point I killed it and removed the pid file:
[acool@acool super-confidential-dir]$ sudo killall -s 9 memcached
...
[acool@acool super-confidential-dir]$ sudo rm /var/run/memcached/memcached.pid
And finally started it and checked its status:
[acool@acool super-confidential-dir]$ sudo service memcached start
...
[acool@acool super-confidential-dir]$ sudo service memcached status
memcached (pid 13804) is running...
And then I was happy again.
Good luck.
A: In my case I wanted to use memcache through the socket with
OPTIONS="-t 8 -s /run/memcached/memcached.sock -a 0777 -U 0"
copied from another OS, and got the same problem.
Then I realized I had simply forgotten that /run/ doesn't exist in my OS. That's it. Just check your path, hah
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22207420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Replace Space (%20) with dashes in url using .htaccess I am facing an issue with URL rewriting in .htaccess. My URL data comes from a database and includes some spaces. I want to remove the spaces from my URLs and replace them with dashes.
Currently, this is what I am getting with my current .htaccess:
http://www.xyz.com/detail-10-Event%20Tickets.html
I want it to be replaced with
http://www.xyz.com/detail-43-61-Event-Tickets.html (This is what i want.)
Please find the code for .htaccess below and suggest what changes I should make to solve this issue.
Options +FollowSymLinks
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule uploadPRODUCT-(.*)-(.*)-(.*).html$ uploadPRODUCT.php?cid=$1&aid=$2&tid=$3
RewriteRule ab-(.*)-(.*).html$ products.php?cid=$1&cname=$2
RewriteRule detail-(.*)-(.*)-(.*).html$ productDETAILS.php?cid=$1&aid=$2&pname=$3
RewriteRule (.*)-(.*).html$ cms.php?name=$1&cmsid=$2
errorDocument 404 http://www.xyz.com/notfound.php
errorDocument 500 http://www.xyz.com/500error.html
RewriteCond %{http_host} ^xyz.com.com [NC]
RewriteRule ^(.*)$ http://www.xyz.com/$1 [R=301,L]
</IfModule>
A: Since you create the URLs from database entries, I would replace the spaces at URL creation time. You can use str_replace for this
$url = str_replace(' ', '-', $db_column);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16132767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What is the difference between GNU, GCC and MinGW... aren't they the same? I was informed that GCC is not only a compiler for C but also for many other languages. Is that true? If it is, how is it done?
A: GNU is not a compiler. It is an Operating System and a collection of free software made to be "Unix like" without using Unix.
(GNU stands for "GNU's not Unix!")
GCC stands for "GNU Compiler Collection" and is a piece of GNU software that includes a compiler with frontends for multiple languages:
The standard compiler releases since 4.6 include front ends for C
(gcc), C++ (g++), Objective-C, Objective-C++, Fortran (gfortran), Java
(gcj), Ada (GNAT), and Go (gccgo).
MinGW stands for "Minimalist GNU for Windows" It is essentially a tool set that includes some GNU software, including a port of GCC.
In summary, MinGW contains GCC which is in the collection of GNU free software.
Further Reading Below:
GNU - https://en.wikipedia.org/wiki/GNU
GCC - https://en.wikipedia.org/wiki/GNU_Compiler_Collection#cite_note-39
MinGW - http://www.mingw.org/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38252370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Data distribution in a large cluster with many indices (Elasticsearch) I have a cluster with 30 nodes and a lot of indices with a small number of primary shards. Let's say 800 indices; most indices only have 1 or 2 primary shards.
I want to know how an Elasticsearch cluster distributes data across the cluster with so many small indices.
Do all nodes in the cluster receive data evenly, or nearly evenly?
Thanks,
Sun Chanras
A:
The cluster reorganizes itself to spread the data evenly.
You can read it here
For your specific case you cane use kopf, A greate plugin that visualize the location of all shards in each node.
I think that there are more similar plugins but this is the only one that i worked with.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45891616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Get documents using find() method in couchdb-nano Since CouchDB doesn't have any collections, I added a custom type property to my entities. Now I want to filter all entities on that property, e.g. get all users by {type:'user'}. In the CouchDB docs I found a method called 'find()', which is also implemented in the nano typings but lacks documentation in couchdb-nano. According to the definition, I wrote the following code:
class UserModel {
type: string = 'User';
name: string = '';
mail: string = '';
}
let db = <nano.DocumentScope<UserModel>>nano("http://localhost:5984/testdb");
let query: nano.MangoQuery = { selector: { type: "User" } };
db.find(query, (cb:nano.Callback<nano.MangoResponse<UserModel>>) => {
// How to get the results here? cb is a callback, but this doesn't make sense
});
It doesn't make sense to me that I get a callback. How can I get the results?
Tried using some kind of callback:
db.find(query, (users: nano.MangoResponse<UserModel>) => {
console.log(users);
});
But users is undefined, although the filter { selector: { type: "User" } } works well in Project Fauxton.
A: As mentioned in the nano documentation:
In nano the callback function receives always three arguments:
*
*err - The error, if any.
*body - The HTTP response body from CouchDB, if no error. JSON parsed body, binary for non JSON responses.
*header - The HTTP response header from CouchDB, if no error.
Therefore, in the case of db.find you will have:
db.find(query, (err, body, header) => {
if (err) {
console.log('Error thrown: ', err.message);
return;
}
console.log('HTTP header received: ', header)
console.log('HTTP body received: ', body)
});
I haven't worked with TypeScript; however, I think you can do the same there.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49917323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: how to show results of data augmentation before and after keras.preprocessing.image.ImageDataGenerator
I am currently training a CNN with the ASL dataset https://www.kaggle.com/datamunge/sign-language-mnist. To optimize my accuracy I used the ImageDataGenerator from Keras. I wanted to print out the results of the data augmentation (the image before and after augmentation). But I don't understand how to plot the results from datagen. This is my code:
datagen = keras.preprocessing.image.ImageDataGenerator(
featurewise_center=False, samplewise_center=False,
featurewise_std_normalization=False,
samplewise_std_normalization=False,
zca_whitening=False, rotation_range=10,
zoom_range=0.1, width_shift_range=0.1,
height_shift_range=0.1, horizontal_flip=False,
vertical_flip=False)
datagen.fit(train_data)
result_data = datagen.flow(train_data, train_label, batch_size=128)
print(result_data)
train_data is a numpy array of shape (20, 28, 28, 1) and train_label (20, 1), as there are 20 images with 28*28 pixels and the third dimension is for usage in a CNN. I would like to plot it with matplotlib, but am also happy with anything else (an np array of the pixels). If someone could also tell me how I can print the amount of data the datagen generated, that would be awesome. Thank you in advance for your help.
A: First, you can create a default ImageDataGenerator to plot the original images easily
datagenOrj = keras.preprocessing.image.ImageDataGenerator()
You can flow a small sample, like the first five images, into your 'datagen'. This generator gets images randomly. To make a proper comparison, a small and fixed sample can be good for a large dataset.
result_data = datagen.flow(train_data[0:5], train_label[0:5], batch_size=128)
result_data_orj = datagenOrj.flow(train_data[0:5], train_label[0:5], batch_size=128)
When you call the next() function, your data generator loads your first batch. The result should contain both train data and train label. You can access them by index.
import matplotlib.pyplot as plt  # needed for the plotting below

def getSamplesFromDataGen(resultData):
    x = resultData.next()  # fetch the first batch
    a = x[0]  # train data
    b = x[1]  # train label
    for i in range(0, 5):
        plt.imshow(a[i])
        plt.title(b[i])
        plt.show()
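To see concretely what one of these augmentations does to pixel data, here is a pure-Python sketch of a fixed width shift (Keras applies shifts with a random magnitude per image; the helper below is illustrative only, not the Keras implementation):

```python
# Pure-Python sketch of a width shift on a single-channel 2D image:
# every row is pushed right by `pixels`, padding on the left with `fill`.
def shift_right(image, pixels, fill=0):
    return [[fill] * pixels + row[:len(row) - pixels] for row in image]

image = [
    [1, 2, 3],
    [4, 5, 6],
]
print(shift_right(image, 1))  # [[0, 1, 2], [0, 4, 5]]
```

Plotting the original and the shifted array side by side with plt.imshow shows exactly the kind of before/after pair the generator produces.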
Be careful about plotting. You may need to rescale your data. If your data type is float, you need to scale it between 0 and 1, and if your data type is integer, you should scale it between 0 and 255. To do this you can use the rescale property.
datagenOrj = keras.preprocessing.image.ImageDataGenerator(rescale=1.0/255.0)
I tried on my own dataset and it is working.
Best
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62217528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Correct Way for Threads in Java I am beginning to learn threads in Java and I am a bit confused now.
I hope this is the right place to ask.
I made a little program for testing:
public class GetInfo implements Runnable {
//private static boolean stopPointerInfo = false;
private static Thread tn;
public static void StartPointerInfo() {
//stopPointerInfo = false;
tn = new Thread(new GetInfo());
tn.start();
System.out.println("After start1");
}
public static void StopPointerInfo() {
//stopPointerInfo = true;
tn.interrupt();
}
@Override
public void run() {
// while (stopPointerInfo == false) {
while (! tn.isInterrupted() ) {
try {
Thread.sleep(500);
} catch (InterruptedException e) {
tn.interrupt();
}
.
.
doSomething
}
}
}
Because the run() method must know which thread is interrupted, must I use the global definition of the thread tn?
Should I use the interrupt() method like I did, or should I use the boolean variable like in the comments?
With this method I can't use interrupt and must use the boolean version, because Runnable r doesn't know the thread here?
public class GetInfo {
private static boolean stopPointerInfo = false;
public static void StartPointerInfo() {
stopPointerInfo = false;
getPointerInfo();
}
public static void StopPointerInfo() {
stopPointerInfo = true;
}
public static void getPointerInfo() {
Runnable r = new Runnable() {
@Override
public void run() {
while (stopPointerInfo == false) {
try {
Thread.sleep(500);
} catch (InterruptedException e) { }
.
.
doSomething
}
}
};
new Thread(r).start();
}
}
A: The boolean flag solution is fragile, as it is not guaranteed that updates will be visible across different threads. To fix this problem you may declare it as volatile, but setting the boolean flag still doesn't interrupt the sleep call like in the first version. Thus using interrupts is preferred.
I see no reason to declare Thread tn as static. You may use:
public class GetInfo implements Runnable {
private final Thread tn;
private GetInfo() {
this.tn = new Thread(this);
this.tn.start();
}
public static GetInfo StartPointerInfo() {
GetInfo info = new GetInfo();
System.out.println("After start1");
return info;
}
public void StopPointerInfo() {
tn.interrupt();
}
...
}
And use it like this:
GetInfo infoWorker = GetInfo.StartPointerInfo();
...
infoWorker.StopPointerInfo();
A: You don't need to use a static Thread.
Within the run() method, you need to test whether the current thread has been interrupted. You don't need access to tn for that. You can do this like this:
Thread.currentThread().isInterrupted();
or
Thread.interrupted();
(Note that the behavior of these two is different. One clears the interrupt flag and the other doesn't.)
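A minimal, self-contained sketch of that approach (class name and timings are illustrative): the Runnable checks its own thread's interrupt status, so no shared static Thread field is needed.

```java
public class InterruptDemo {
    static String runWorker() {
        StringBuilder log = new StringBuilder();
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(50); // doSomething() would go here
                } catch (InterruptedException e) {
                    // sleep() clears the flag when it throws; restore it
                    Thread.currentThread().interrupt();
                }
            }
            log.append("worker stopped");
        });
        worker.start();
        try {
            Thread.sleep(120);
            worker.interrupt(); // ask the worker to stop
            worker.join();      // join() guarantees we see the worker's write
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(runWorker()); // prints "worker stopped"
    }
}
```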
Alternatively, you could just change the static field to an instance field, and make all of the GetInfo methods instance methods. That means you can instantiate multiple instances of GetInfo which each create their own Thread and hold the reference in the instance field.
Two more points:
*
*The way that you are creating and then disposing of threads is rather inefficient. (Starting a thread is expensive.) A better way is to restructure the code so that it can use a thread pool.
*Calling your methods StartPointerInfo and StopPointerInfo is BAD STYLE. Java method names should always start with a lowercase letter.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/32695472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How can I filter instances of a model in django, with a queryset value of filter field? I am creating an application in django, and I have the next question:
I want to filet objects of modelA, by the field att1, and I have a queryset of values of the field att1. I mean, my models are:
class modelA(models.Model):
att1 = models.ForeignKey(modelB)
...
class modelB(models.Model):
...
I got a queryset ot objects of modelB, and I want to get all objects of modelA which has as value of att1, any of the values of the queryset of modelB.
How can I do it?
Thank you so much!
A: Nothing magic
ModelA.objects.filter(att1=queryset of modelB)
A: say you have object B with fields att2 and att3
class modelA(models.Model):
att1 = models.ForeignKey(modelB)
class modelB(models.Model):
att2 = models.CharField(max_length=255)
att3 = models.CharField(max_length=255)
then you filter by doing:
results = modelA.objects.filter(att1__att2='foo')
hope this helps
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30952364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: window.location.reload(); is losing viewstate in IE8 On Click of a button, I need to make sure window control changes to new new element(tab). Therefore, I am using something like this in html:
<input id="back" type="button" value="Back to Form" onclick="backTo();">
and the corresponding JS code is:-
function backTo(){
window.location.href='#fragment-1';
window.location.reload();
}
On executing the above in Firefox, control goes back to the element (fragment-1) with its fields retaining the previously entered text data. In IE8, control also goes back to the fragment-1 element, but all previously entered text data is lost.
Further, I have added the Cache-Control parameter set to public and tried to see whether this would help the viewstate be loaded from cache whenever control goes back to the given div/element.
Anything missing?
A: well.. it depends upon how a particular browser saves the state of the page..
also try using history.go() method http://www.w3schools.com/jsref/met_his_go.asp and see if the problem is solved.
A: How about resubmitting the form instead of reloading:
document.forms[0].submit(); // assumes there is only one form in your page
UPDATE: This should do it:
Assuming that div below is the element you like to transfer the control to, use the scrollIntoView function:
<div id="fragment-1" name="fragment-1">
....
</div>
document.getElementById('fragment-1').scrollIntoView();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9274920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: git commit fix to project owner I forked binux/pyspider and made some changes. Today I found a bug and want to submit a fix to the owner.
I have 4 commits for now; the last commit contains two files (one file to fix the bug, one file with my own edit).
And I found the easiest-to-understand approach to be the one by John Naegle.
But I got an error here:
mithril@KASIM /E/Project/pyspider (master)
$ git checkout pullrequest
Switched to branch 'pullrequest'
mithril@KASIM /E/Project/pyspider (pullrequest)
$ git pull https://github.com/binux/pyspider.git
From https://github.com/binux/pyspider
* branch HEAD -> FETCH_HEAD
Already up-to-date.
mithril@KASIM /E/Project/pyspider (pullrequest)
$ git branch
master
* pullrequest
$ git cherry-pick 6b8fc09133b11ff8f243cdcf90fa559ee9cf4f26
error: could not apply 6b8fc09... fix pymongo dump error
hint: after resolving the conflicts, mark the corrected paths
hint: with 'git add <paths>' or 'git rm <paths>'
hint: and commit the result with 'git commit'
mithril@KASIM /E/Project/pyspider (pullrequest|CHERRY-PICKING)
$ git diff
diff --cc pyspider/scheduler/scheduler.py
index 48a7888,a2f5aaf..0000000
--- a/pyspider/scheduler/scheduler.py
+++ b/pyspider/scheduler/scheduler.py
I ran the commands in my pyspider clone folder; is that wrong?
Does that mean I have to revert the changes to scheduler.py and add them back after I switch to the master branch?
Could I just add pyspider/webui/result.py to the pullrequest branch without affecting the master branch?
Should I go to a new folder to create and get this new branch?
I am not very familiar with git, and I fear I would do something wrong.
A: Now I know Git is very powerful:
1. Creating a new branch and making changes there does not affect other branches.
2. I can create a branch from a SHA key (every commit has a unique key).
I have made my first pull request; it feels good.
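That branch-from-a-SHA workflow can be reproduced end-to-end in a throwaway repository (all file names, identities, and commit messages below are invented for the sketch):

```shell
# Sketch: make a base commit, commit a fix on a feature branch, then
# start a clean "pullrequest" branch from the base SHA and cherry-pick
# only the fix onto it.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
G="git -c user.email=me@example.com -c user.name=me"
$G commit -q --allow-empty -m "base"
base_sha=$(git rev-parse HEAD)
git checkout -q -b feature
echo "fix pymongo dump error" > result.py
git add result.py
$G commit -q -m "add result.py fix"
fix_sha=$(git rev-parse HEAD)
# Every commit has a unique SHA, so the new branch can start from it:
git checkout -q -b pullrequest "$base_sha"
$G cherry-pick "$fix_sha"
git log --oneline
```

The pullrequest branch now contains only the base commit plus the picked fix, leaving the feature branch untouched.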
| {
"language": "en",
"url": "https://stackoverflow.com/questions/32085489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: OpenMP reduction into array of C++ template-specified size causes undefined behaviour I'm new to OpenMP, but am trying to use it to accelerate some operations on entries of a 2D array with a large number of rows and a small number of columns. At the same time, I am using a reduction to calculate the sum of all the array values in each column. The code looks something like this (I will explain the weird bits in a moment):
template <unsigned int NColumns>
void Function(int n_rows, double** X, double* Y)
{
#pragma omp parallel for reduction(+:Y[:NColumns])
for (int r = 0; r < n_rows; ++r)
{
for (int c = 0; c < NColumns; ++c)
{
X[r][c] = some_complicated_stuff(X[r], X[r][c]);
Y[c] += X[r][c];
}
}
}
To clarify, X is a n_rows x NColumns-sized 2D array allocated on the heap, and Y is a NColumns-sized 1D array. some_complicated_stuff isn't actually implemented as a separate function, but what I do to X[r][c] in that line only depends on X[r][c] and other values in the 1D array X[r].
The reason that NColumns is passed in as a template parameter rather than as a regular argument (like n_rows) is that when NColumns is known at compile-time, the compiler can more aggressively optimise the inner loop in the above function. I know that NColumns is going to be one of a small number of values when the program runs, so later on I have something like this code:
cin >> n_cols;
double** X;
double Y[n_cols];
// initialise X and Y, etc. . .
for (int i = 0; i < n_iterations; ++i)
{
switch (n_cols)
{
case 2: Function< 2>(n_rows, X, Y); break;
case 10: Function<10>(n_rows, X, Y); break;
case 60: Function<60>(n_rows, X, Y); break;
default: throw "Unsupported n_cols."; break;
}
// . . .
Report(i, Y); // see GDB output below
}
Through testing, I have found that having this NColumns "argument" to Update as a template parameter rather than a normal function parameter actually makes for an appreciable performance increase. However, I have also found that, once in a blue moon (say, about every 10^7 calls to Function), the program hangs—and even worse, that its behaviour sometimes changes from one run of the program to the next. This happens rarely enough that I have been having a lot of trouble isolating the bug, but I'm now wondering whether it's because I am using this NColumns template parameter in my OpenMP reduction.
I note that a similar StackOverflow question asks about using template types in reductions, which apparently causes unspecified behaviour - the OpenMP 3.0 spec says
If a variable referenced in a data-sharing attribute clause has a type
derived from a template, and there are no other references to that
variable in the program, then any behavior related to that variable is
unspecified.
In this case, it's not a template type per se that is being used, but I'm sort of in the same ballpark. Have I messed up here, or is the bug more likely to be in some other part of the code?
I am using GCC 6.3.0.
If it is more helpful, here's the real code from inside Function. X is actually a flattened 2D array; ww and min_x are defined elsewhere:
#pragma omp parallel for reduction(+:Y[:NColumns])
for (int i = 0; i < NColumns * n_rows; i += NColumns)
{
double total = 0;
for (int c = 0; c < NColumns; ++c)
if (X[i + c] > 0)
total += X[i + c] *= ww[c];
if (total > 0)
for (int c = 0; c < NColumns; ++c)
if (X[i + c] > 0)
Y[c] += X[i + c] = (X[i + c] < min_x * total ? 0 : X[i + c] / total);
}
Just to thicken the plot a bit, I attached gdb to a running process of the program which hanged, and here's what the backtrace shows me:
#0 0x00007fff8f62a136 in __psynch_cvwait () from /usr/lib/system/libsystem_kernel.dylib
#1 0x00007fff8e65b560 in _pthread_cond_wait () from /usr/lib/system/libsystem_pthread.dylib
#2 0x000000010a4caafb in omp_get_num_procs () from /opt/local/lib/libgcc/libgomp.1.dylib
#3 0x000000010a4cad05 in omp_get_num_procs () from /opt/local/lib/libgcc/libgomp.1.dylib
#4 0x000000010a4ca2a7 in omp_in_final () from /opt/local/lib/libgcc/libgomp.1.dylib
#5 0x000000010a31b4e9 in Report(int, double*) ()
#6 0x3030303030323100 in ?? ()
[snipped traces 7-129, which are all ?? ()]
#130 0x0000000000000000 in ?? ()
Report() is a function that gets called inside the program's main loop but not within Function() (I've added it to the middle code snippet above), and Report() does not contain any OpenMP pragmas. Does this illuminate what's happening at all?
Note that the executable changed between when the process started running and when I attached GDB to it, which required referring to the new (changed) executable. So that could mean that the symbol table is messed up.
A: I have managed to partly work this out.
One of the problems was with the program behaving nondeterministically. This is just because (1) OpenMP performs reductions in thread-completion order, which is non-deterministic, and (2) floating-point addition is non-associative. I assumed that the reductions would be performed in thread-number order, but this is not the case. So any OpenMP for construct that reduces using floating-point operations will be potentially non-deterministic even if the number of threads is the same from one run to the next, so long as the number of threads is greater than 2. Some relevant StackOverflow questions on this matter are here and here.
The other problem was that the program occasionally hangs. I have not been able to resolve this issue. Running gdb on the hung process always yields __psynch_cvwait () at the top of the stack trace. It hangs around every 10^8 executions of the parallelised for loop.
Hope this helps a little.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49986327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: getting an extra 0.001 in my calculation When I enter 0.2 as the volume I should get 2260.00, but I'm getting 2260.001 and I can't understand why. Even after adjusting how many decimals I want, there is always 0.1 at the end...
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
int option = 0; /* initialised so the while test below is defined */
float volume;
float density = 11300;
float mass;
while (option != 3)
{
printf("1: Calculate Mass\n");
printf("2: Calculate Volume\n");
printf("3: Exit\n");
printf("Input here: ");
scanf("%d", &option);
if (option == 1)
{
printf("Input Volume: ");
scanf("%f", &volume);
mass = volume*density;
printf("%.2f", mass);
}
}
return 0;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/40266333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Changing Mysql Column Names I want to change various column names using:
ALTER TABLE tablename CHANGE COLUMN oldname newname datatype(length);
This is easy except for the last part: datatype(length). It seems silly to need to specify that since I don't want to change the column type or length, only its name, but from what I've read, specifying that is mandatory. I need automated code, NOT a command that merely displays the table from which datatype(length) is displayed on a screen; I want to put those values into PHP variable(s) so they can be manipulated by other PHP code. Thus I'd appreciate code that gives me $datatype and $length, if the latter is applicable.
A: You can select that information from the INFORMATION_SCHEMA.COLUMNS table.
select
DATA_TYPE,
CHARACTER_MAXIMUM_LENGTH,
IS_NULLABLE,
NUMERIC_SCALE,
NUMERIC_PRECISION
-- And many other properties
from
INFORMATION_SCHEMA.COLUMNS
where
TABLE_NAME = 'tablename' and
COLUMN_NAME = 'yourcolumn'
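The string-building step on the application side can be sketched language-neutrally; here in Python, with hypothetical column values standing in for what the INFORMATION_SCHEMA query would return (in PHP the same interpolation applies to $datatype and $length):

```python
# Rebuild the datatype(length) part and the full ALTER statement from
# the fetched DATA_TYPE / CHARACTER_MAXIMUM_LENGTH values.
def build_alter(table, old, new, data_type, max_length=None):
    type_part = f"{data_type}({max_length})" if max_length else data_type
    return f"ALTER TABLE {table} CHANGE COLUMN {old} {new} {type_part};"

# Hypothetical values, e.g. DATA_TYPE='varchar', CHARACTER_MAXIMUM_LENGTH=255:
print(build_alter("users", "uname", "username", "varchar", 255))
# ALTER TABLE users CHANGE COLUMN uname username varchar(255);
print(build_alter("users", "created", "created_at", "datetime"))
# ALTER TABLE users CHANGE COLUMN created created_at datetime;
```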
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24683533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is calling not thread-safe method properly possible? I need to parallelize this code in C#. listTipoGraficos is a ListView object (I'm locking its access), but the method this.claseChartPadre.CargarGrafico is not thread-safe. It only modifies the input graficos[i], but it has a great many local variables, so modifying it would be a horror... Can I do anything without adapting it?
System.Collections.Concurrent.ConcurrentDictionary<int, Grafico> graficos = new System.Collections.Concurrent.ConcurrentDictionary<int,Grafico>();
for (int i = 0; i < listTipoGraficos.CheckedItems.Count; i++)
graficos[i] = (Grafico)listTipoGraficos.CheckedItems[i].Tag;
System.Threading.Tasks.Parallel.For(0,listTipoGraficos.CheckedItems.Count, i =>
{
//Controlamos que no lo tenemos ya cargado
if (graficos[i].EstaPintado == false || graficos[i].TipoFecha != fechaSelecionada || graficos[i].FechaIni != fechaIniSelec || graficos[i].FechaFin != fechaFinSelec)
{
graficos[i].TipoFecha = fechaSelecionada;
graficos[i].FechaIni = fechaIniSelec;
graficos[i].FechaFin = fechaFinSelec;
//Resetea el contenido del grafico
graficos[i].reiniciarContenido();
this.claseChartPadre.CargarGrafico(graficos[i]); // A lot of local variables inside
}
});
I can't post very much code because it isn't mine.
public bool CargarGrafico(Grafico gf)
{
bool leido_ok = true;
// Reading from DB
ClaseComandoSql comando = new ClaseComandoSql();
comando.NombreComando = this.procedimiento;
comando.TipoComando = System.Data.CommandType.StoredProcedure;
comando.AñadirParametro("@CodGrafico", gf.CodGrafico);
comando.AñadirParametro("@Codigo", this.codigoObjeto);
comando.AñadirParametro("@CodOpcion", gf.TipoFecha);
comando.AñadirParametro("@FecInicial", gf.FechaIni);
comando.AñadirParametro("@FecFinal", gf.FechaFin);
System.Data.DataSet lector = comando.EjecutarLector(); // This is the operation I need to parallelize
bool primeraPasada = true;
bool codigoNulo = false;
int totalLeyendas = 0;
int totalAgrupaciones = 0;
String LeyendaAnterior = "";
int i = 0;
// Calculus and formatting
}
Basically the code create an sql command, executes it and formats a graph to draw it.
EDIT: I took this approach and seems that works well, but I think it's ugly. sync is an instance variable inside the class.
public bool CargarGrafico(Grafico gf)
{
    // 'comando' is declared outside the lock so it is still in scope
    // for the EjecutarLector() call below.
    ClaseComandoSql comando;
    bool leido_ok = true;
    lock (sync)
    {
        // Reading from DB
        comando = new ClaseComandoSql();
        comando.NombreComando = this.procedimiento;
        comando.TipoComando = System.Data.CommandType.StoredProcedure;
        comando.AñadirParametro("@CodGrafico", gf.CodGrafico);
        comando.AñadirParametro("@Codigo", this.codigoObjeto);
        comando.AñadirParametro("@CodOpcion", gf.TipoFecha);
        comando.AñadirParametro("@FecInicial", gf.FechaIni);
        comando.AñadirParametro("@FecFinal", gf.FechaFin);
    }
    System.Data.DataSet lector = comando.EjecutarLector(); // This is the operation I need to parallelize
    lock (sync)
    {
        bool primeraPasada = true;
        bool codigoNulo = false;
        int totalLeyendas = 0;
        int totalAgrupaciones = 0;
        String LeyendaAnterior = "";
        int i = 0;
        // Calculus and formatting
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34331030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Getting the exit code within a __del__ function I'd like to do a clean shutdown of a class that does one task when exiting normally, and not under something like exit(1). How can I determine the current exit code? Example:
import sys
import atexit
def myexit():
print("atexit")
atexit.register(myexit)
class Test(object):
def __init__(self):
pass
def __del__(self):
print("here")
# print(sys.exit_code) # How do I get this???
x = Test()
exit(1)
which produces:
atexit
here
But in neither of those places do I know how to get the exit code passed to sys.exit().
There was another answer, but it doesn't seem wise to implement a forced wrapper from another reusable module.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52318117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to add column values from one Data Frame to the other column in second Data Frame based on conditions in Python?
df1 = pd.DataFrame(zip(l1), columns =['l1'])
df1.l1.value_counts()
df2 = pd.DataFrame(zip(l2), columns =['l2'])
df2.l2.value_counts()
I want to add the column values from l2 to l1 depending on the value count in df1. For example
*
*if value count of 'bb_#2' < value count of 'bb_#1' in df1 then all of 'bb_#3' in df2 should be added in 'bb_#2' in df1 also changing their name to 'bb_#2' as well and
*same logic as described above for 'aa_#3'
*'cc_#2' & 'cc_#3' in df2 should be combined and added into 'cc_#1' in df1.
Conditions should be checked in df1, and if a condition is met then values from l2 in df2 should be added to the l1 column of df1.
The expected output is given here as well:
l1=['aa_#1', 'bb_#1', 'bb_#2', 'aa_#1', 'aa_#1', 'aa_#1', 'bb_#1','aa_#2','bb_#2','aa_#2','bb_#1','bb_#1','bb_#1','bb_#2','bb_#2','cc_#1','bb_#2','bb_#2', 'bb_#2','aa_#2','aa_#2','aa_#2', 'cc_#1','cc_#1','cc_#1','cc_#1']
Please let me know if there is a way to do this in Python. I have 10,000 rows like this to add from l2 to l1 and I don't know how to even begin with it. I am really new to Python.
A: This is a method that doesn't use pandas. The .count() method returns the value count of an item in a list. The .extend() method appends another to the end of an existing list. Lastly, multiplying a list by an integer duplicates and concats it that many times. ['a'] * 3 == ['a', 'a', 'a']
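Those three list operations in isolation:

```python
l = ['a', 'b', 'a']
print(l.count('a'))   # 2 - occurrences of 'a'
print(['c'] * 2)      # ['c', 'c'] - integer multiplication duplicates the list
l.extend(['c'] * 2)   # append every element of the other list
print(l)              # ['a', 'b', 'a', 'c', 'c']
```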
def extend_list(l1, l2, prefixes, final_prefixes, suffix_1, suffix_2, suffix_3):
    for prefix in prefixes:
        if l1.count(f'{prefix}_{suffix_2}') < l1.count(f'{prefix}_{suffix_1}'):
            l1.extend([f'{prefix}_{suffix_2}'] * l2.count(f'{prefix}_{suffix_3}'))
    for final_prefix in final_prefixes:
        l1.extend([f'{final_prefix}_{suffix_1}'] *
                  (l2.count(f'{final_prefix}_{suffix_2}') + l2.count(f'{final_prefix}_{suffix_3}')))
    return l1  # the list is extended in place; returning it allows the reassignment below
l1 = ['aa_#1','bb_#1','bb_#2','aa_#1','aa_#1','aa_#1','bb_#1','aa_#2','bb_#2','aa_#2','bb_#1','bb_#1','bb_#1','bb_#2','bb_#2','cc_#1']
l2 = ['aa_#3','aa_#3','aa_#3','bb_#3','bb_#3','bb_#3','cc_#2','cc_#2','cc_#3','cc_#3']
l1 = extend_list(l1, l2, ["aa", "bb"], ["cc", "dd"], "#1", "#2", "#3")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/68087669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Hibernate criteria of Projection I have 2 classes with a relation @OneToOne: User and Player.
User contain a player:
@Entity
@Table(name = "user")
public class User {
@Column(name = "nickname")
private String nickname;
@OneToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
@JoinColumn(name = "id_player")
private Player player;
//getters and setters...
}
I want to query user with only nickname and player, and with player I don't want all attributes, I only want 2.
This is what I have now:
//Projections for the class User
ProjectionList projectionList = Projections.projectionList();
projectionList.add(Projections.property("id"), "id");
projectionList.add(Projections.property("nickname"), "nickname");
projectionList.add(Projections.property("player"), "player");
Criteria criteria = session.createCriteria(User.class);
criteria.setFirstResult(player.getPvpRank() - 5);
criteria.setMaxResults(11);
criteria.createAlias("player", "p");
criteria.addOrder(Order.asc("p.pvpRank"));
criteria.setProjection(projectionList);
criteria.setResultTransformer(Transformers.aliasToBean(User.class));
I am getting only nickname, id and the player, but how can I set projections to the player to only get player.level and not all the attributes?
A: I think it is exactly as you suggest:
projectionList.add(Projections.property("player.level"), "player.level");
A: First create an alias for the joined Player table in User, and then refer to it in your projectionList.
Criteria criteria = session.createCriteria(User.class);
criteria.createAlias("player", "p");
projectionList.add(Projections.property("p.level"), "player");
...
criteria.setProjection(projectionList);
criteria.setResultTransformer(Transformers.aliasToBean(User.class));
I hope this helps you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30096486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Apostrophe replacement in mysql query I have names in a column (notes) such as:
Notes:
John's Table
Smith's Window
The column entries contain an apostrophe. I am trying to run 2 queries:
*
*A select query to get the column entries that have an apostrophe.
*An update query replacing the apostrophe with an empty string, i.e. John's Table becomes John Table.
select notes from table1 where notes like ' '% ';
I get a syntax error , any help will be great.
A: Escape the apostrophe with a backslash;
select notes from table1 where notes like '%\'%';
And to update, use REPLACE()
UPDATE table1 SET notes = REPLACE(notes, '\'', '');
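The same SELECT/REPLACE pair can be demonstrated end-to-end with Python's built-in sqlite3 (note SQLite escapes an apostrophe inside a string literal by doubling it, '', rather than with a backslash, but the REPLACE() idea is identical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (notes TEXT)")
con.executemany("INSERT INTO table1 VALUES (?)",
                [("John's Table",), ("Smith's Window",), ("Plain",)])

# '%''%' is the pattern %'% - rows whose notes contain an apostrophe.
with_apostrophe = sorted(r[0] for r in
                         con.execute("SELECT notes FROM table1 WHERE notes LIKE '%''%'"))
print(with_apostrophe)  # ["John's Table", "Smith's Window"]

# Strip the apostrophes in place.
con.execute("UPDATE table1 SET notes = REPLACE(notes, '''', '')")
print(sorted(r[0] for r in con.execute("SELECT notes FROM table1")))
# ['Johns Table', 'Plain', 'Smiths Window']
```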
A: Did you try:
select notes from table1 where notes like "'%";
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7895748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to change elements in a navbar in html I made a navigation bar in HTML and am trying to place its elements on the right side. The elements do end up on the right, but in the wrong order: Kontakt, Über uns, Klimawandel, Home. The order I want is the reverse of this.
This is the CSS and HTML I wrote for the navigation bar:
nav {
list-style-type: none;
margin: 0;
padding: 0;
overflow: hidden;
background-color: #96CB49;
}
li {
float: right;
border-right: 1px solid black;
}
.active {
background-color: #254a01;
}
li:last-child {
border-right: none;
}
li a:hover:not(.active) {
background-color: #254a01;
}
li a,
.dropbtn {
display: inline-block;
color: #F9FCEA;
text-align: center;
padding: 14px 16px;
text-decoration: none;
}
li a:hover,
.dropdown:hover .dropbtn {
background-color: #254a01;
}
li.dropdown {
display: inline-block;
}
.dropdown-content {
display: none;
position: absolute;
background-color: #96CB49;
min-width: 160px;
box-shadow: 0px 8px 16px 0px rgba(0, 0, 0, 0.2);
z-index: 1;
}
.dropdown-content a {
color: #F9FCEA;
padding: 12px 16px;
text-decoration: none;
display: block;
text-align: left;
}
.dropdown-content a:hover {
background-color: #254a01;
}
.dropdown:hover .dropdown-content {
display: block;
}
<nav>
<li><a class="active" href="#home">Home</a></li>
<li class="dropdown">
<a href="javascript:void(0)" class="dropbtn">Klimawandel</a>
<div class="dropdown-content">
<a href="Seiten/der_klimawandel.html">Der Klimawandel</a>
<a href="Seiten/ursachen.html">Die Ursachen des Klimawandels</a>
<a href="Seiten/auswirkungen.html">Die Auswirkungen des Klimawandels</a>
</div>
</li>
<li><a href="Seiten/über_uns.html">Über uns</a></li>
<li><a href="Seiten/kontakt.html">Kontakt</a></li>
<li style="float:left"><a>Logo</a></li>
</nav>
A: The way to approach this is to make the container float right, and the items inside of it to float left, if you want to use floats for this purpose.
And since you are using float which will cause the element width to depend on its content, you will need to add a wrapper to your <nav> element, that will have the same background color, so that you achieve full width background visually. In my example below, I wrapped the list elements in the <ul>, and made <nav> be the top level container to add the background.
nav {
background-color: #96CB49;
overflow: hidden;
}
ul {
float: right;
list-style-type: none;
margin: 0;
padding: 0;
}
li {
float: left;
border-right: 1px solid black;
}
.active {
background-color: #254a01;
}
li:last-child {
border-right: none;
}
li a:hover:not(.active) {
background-color: #254a01;
}
li a,
.dropbtn {
display: inline-block;
color: #F9FCEA;
text-align: center;
padding: 14px 16px;
text-decoration: none;
}
li a:hover,
.dropdown:hover .dropbtn {
background-color: #254a01;
}
li.dropdown {
display: inline-block;
}
.dropdown-content {
display: none;
position: absolute;
background-color: #96CB49;
min-width: 160px;
box-shadow: 0px 8px 16px 0px rgba(0, 0, 0, 0.2);
z-index: 1;
}
.dropdown-content a {
color: #F9FCEA;
padding: 12px 16px;
text-decoration: none;
display: block;
text-align: left;
}
.dropdown-content a:hover {
background-color: #254a01;
}
.dropdown:hover .dropdown-content {
display: block;
}
<nav>
<ul>
<li><a class="active" href="#home">Home</a></li>
<li class="dropdown">
<a href="javascript:void(0)" class="dropbtn">Klimawandel</a>
<div class="dropdown-content">
<a href="Seiten/der_klimawandel.html">Der Klimawandel</a>
<a href="Seiten/ursachen.html">Die Ursachen des Klimawandels</a>
<a href="Seiten/auswirkungen.html">Die Auswirkungen des Klimawandels</a>
</div>
</li>
<li><a href="Seiten/über_uns.html">Über uns</a></li>
<li><a href="Seiten/kontakt.html">Kontakt</a></li>
<li style="float:left"><a>Logo</a></li>
</ul>
</nav>
However, today there are more effective approaches to doing layout in CSS, such as flex-box which would make your task very easy:
ul {
background: green;
display: flex;
list-style: none;
justify-content: flex-end;
}
<ul>
<li>One</li>
<li>Two</li>
<li>Three</li>
</ul>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74799148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: AngularJS - Disable ng-click if element has specific class I have a list of li items with functions attached to them. I also have an event listener which attaches a class of "current" to whichever li item is being clicked.
HTML
<ul class="cf" id="filter">
<li ng-click="getRadius(5)" class="current"><a href="#">5 km</a></li>
<li ng-click="getRadius(10)"><a href="#">10 km</a></li>
<li ng-click="getRadius(25)"><a href="#">25 km</a></li>
<li ng-click="getRadius(50)"><a href="#">50 km</a></li>
<li ng-click="getRadius(100)"><a href="#">100 km</a></li>
</ul>
Is there a way to disable the ng-click event if that specific li item has a class of "current" or is there a better way to go about this?
A: You cannot disable a list item, because it is not an interactive element, but you can use ngClass to apply a specific class when it is disabled, to make it appear disabled:
<li ng-class="{'disabled': condition}" ng-click="getRadius(5)">item</li>
You can use ng-if to remove those items completely from the list:
<li ng-if="!condition" ng-click="getRadius(5)">item</li>
<li ng-if="condition" >item</li>
A: If you use the button tag as a trigger instead of li, you can use the ng-disabled directive to permit clicks conditionally. For example:
<ul class="cf" id="filter">
    <li><button ng-click="getRadius(5)" ng-disabled="current==5">5 km</button></li>
    <li><button ng-click="getRadius(10)" ng-disabled="current==10">10 km</button></li>
    <li><button ng-click="getRadius(25)" ng-disabled="current==25">25 km</button></li>
    <li><button ng-click="getRadius(50)" ng-disabled="current==50">50 km</button></li>
    <li><button ng-click="getRadius(100)" ng-disabled="current==100">100 km</button></li>
</ul>
If you want, you can customize button style and you can show it like a standart text element.
.cf button {
border: none;
background: none;
}
A: // template
<ul class="cf" id="filter">
<li ng-click="!clickLi[5] && getRadius(5)" class="current"><a href="#">5 km</a></li>
<li ng-click="!clickLi[10] && getRadius(10)"><a href="#">10 km</a></li>
<li ng-click="!clickLi[25] && getRadius(25)"><a href="#">25 km</a></li>
<li ng-click="!clickLi[50] && getRadius(50)"><a href="#">50 km</a></li>
<li ng-click="!clickLi[100] && getRadius(100)"><a href="#">100 km</a></li>
</ul>
// controller
function MyController($scope) {
$scope.clickLi = {
5: true
};
$scope.getRadius = function(li_id) {
$scope.clickLi[li_id] = true;
console.log(li_id);
};
}
A demo on JSFiddle.
A: Possible workaround:
If you need a single selection you can add variable into the scope specifying which row is selected and generate your list with the ng-repeat, then you can add lazy checking on ng-click if current $index is equal to selected index, you can use the same condition to apply current class with ng-class.
For example:
app.js
$scope.selected = 0;
$scope.distances = [5, 10, 25, 50, 100];
app.html
<ul class="cf" id="filter">
<li ng-repeat = "distance in distances" ng-click="($index == selected) || getRadius(distance)" ng-class="{'current':($index == selected)}"><a href="#">{{distance}} km</a></li>
</ul>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31744817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: bash shell append to variable frustration So I have a relatively simple bash script:
#!/bin/bash
declare ALL="";
while IFS="" read -r line || [ -n "$line" ]
do
if [[ $line == 'ENV ms'* ]]; then
words=( $line )
if [[ ${#ALL} > 0 ]]; then
ALL="$ALL;${words[1]}=${words[2]}"
else
ALL="${words[1]}=${words[2]}"
fi
if [[ ${#ALL} > 0 ]]; then
printf '%s\n' "$ALL"
echo "${#ALL}"
fi
fi
done < Dockerfile
echo "$ALL"
echo "${#ALL}"
parsing a Dockerfile that looks like this:
#
# Configuration Stage
#
FROM maven:3.6.1-jdk-12 AS build
ENV HOME=/usr/local/ms-cards
RUN mkdir -p $HOME
WORKDIR $HOME
COPY maven-settings.xml /root/.m2/settings.xml
COPY pom.xml $HOME
RUN mvn -Dmaven.test.skip=true clean verify --fail-never
COPY . $HOME
RUN mvn -Dmaven.test.skip=true clean package
#
# Package stage
#
FROM openjdk:12
COPY --from=build /usr/local/ms-cards/target/ms-cards-1.0-exec.jar /usr/local/lib/ms-cards.jar
ENV ms_oauth_ip ms-oauth
ENV ms_oauth_port 48001
ENV ms_cards_client_id clientapp
ENV ms_cards_client_secret 123456
ENV ms_cards_port 48002
ENV ms_connection_port 48003
ENV ms_connection_ip ms-connection
EXPOSE 48002
ENTRYPOINT ["java","-jar","/usr/local/lib/ms-cards.jar"]
and it gives me this output:
ms_oauth_ip=ms-oauth
21
;ms_oauth_port=48001
42
;ms_cards_client_id=clientapp
72
;ms_cards_client_secret=123456
103
;ms_cards_port=48002ret=123456
124
;ms_connection_port=4800323456
150
;ms_connection_ip=ms-connection
182
;ms_connection_ip=ms-connection
182
So I can see that my ALL variable is growing in length...but when printf'ing it never seems to be growing...Anyone know what I'm doing wrong?
A: The Dockerfile has Windows \r\n line endings. Each \r that is printed causes the cursor to jump back to the beginning of the line and overwrite the previous setting.
Debugging tip: Use declare -p var to see exactly what's in a variable.
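For readers less familiar with carriage returns, here is a minimal Python illustration of the same symptom (my own sketch, not part of the answer): the string really does keep growing, but the `\r` it contains sends the terminal cursor back to column 0 so each print appears to overwrite the last.

```python
# A minimal demonstration of why the variable keeps growing while the
# printed line appears frozen. With `IFS="" read -r line`, the \n is
# consumed but the \r from a CRLF file stays in the variable.
line = "ENV ms_oauth_ip ms-oauth\r"

# The trailing "\r" is part of the string, so concatenations grow...
assert line.endswith("\r")

# ...but on a terminal it moves the cursor to column 0, so the next
# chunk printed overwrites this one. Stripping it fixes both symptoms:
cleaned = line.rstrip("\r")
assert cleaned == "ENV ms_oauth_ip ms-oauth"
```

In the bash script itself, the equivalent fix is to convert the Dockerfile with `dos2unix` (or strip `\r` with `line=${line%$'\r'}` after each read).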
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58721263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can i change Mysql character_set_system UTF8 to utf8bin When I dump MySQL data out, some data gets changed because of character_set_system, which is utf8. The server, client, and connection character sets are utf8mb4.
I guess the problem is system character set and server character set differences.
I am trying to change the system character set from utf8 to utf8mb4 with this:
Change MySQL default character set to UTF-8 in my.cnf?
But I cannot.
A: The title is incorrectly phrased.
"utf8" is a "character set"
"utf8_bin" is a "collation" for the character set utf8.
You cannot change character_set... to collation. You may be able to set some of the collation_% entries to utf8_bin.
But none of that is a valid solution for the problem you proceed to discuss.
Probably you can find more tips here: Trouble with UTF-8 characters; what I see is not what I stored
To help you further, we need to see the symptoms that got you started down this wrong path.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69735429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Never ending while() loop I need to add a while loop to my script: while ($points < 17) { draw_cards() }.
Maybe you can guess: it's a card game. I wish it were that simple, but it won't work. It gets stuck in an endless while loop.
if(FORM_stand("Stand")){
while ($total_dealer < 17){
list_dealer_hand();
draw_dealer_card();
}
}
If i run my script it goes on forever. Showing me for example 7, 7, 3, 7, 3, 9, 7, 3, 9, 2, 7, 3, 9, 2, 6, 7, 3, 9, 2, 6, 10, 7, 3, 9, 2, 6, 10, Ace, 7, 3, 9, 2, 6, 10, Ace, 5, 7, 3, 9, 2, 6, 10, Ace, 5, Queen, 7, 3, 9, 2, 6, 10, Ace, 5, Queen, Jack, 7, 3, 9, 2, 6, 10, Ace, 5, Queen, Jack, King, 7, 3, 9, 2, 6, 10, Ace, 5, Queen, Jack, King, 4, 7, 3, 9, 2, 6, 10, Ace, 5, Queen, Jack, King, 4, 8,
before eventually repeating
Notice: Undefined index
But i if i were to use:
if(FORM_stand("Stand")){
list_dealer_hand();
if ($total_dealer < 17){
draw_dealer_card();
}
}
I need to press Stand manually a couple of times because it's an if, but this way it will keep drawing cards until his points are 17 or higher, meaning the if works but the while never ends.
I don't know if you need any more information; in case you do, please ask away. I've been stuck on this while loop for 2 days now, and no one seems to be able to help me.
Thanks in advance!
PS: If i run the while loop and press control + f5 after all the errors are shown, it shows me this: 3, 10, 7, 9, 6, King, 8, Queen, Jack, 4, 2, Ace, 5, , and in the point section: 85
Busted!
I know all points together are 95, but I used a case for my Ace so that if points are 11 or more it counts as a 1 instead of an 11. Maybe this bit will help you!
list_dealer_hand()
function list_dealer_hand() {
foreach($_SESSION["dealer_hand"] as $dealer_card=>$points) {
echo $dealer_card;
echo ', ';
}
}
and draw_dealer_card()
function draw_dealer_card() {
$dealer_card = array_rand($_SESSION["dealer_pile"]);
$_SESSION["dealer_hand"][$dealer_card] = $_SESSION["dealer_pile"][$dealer_card];
unset($_SESSION["dealer_pile"][$dealer_card]);
}
My case system for points looks as follows:
$total_dealer = 0;
$text_dealer = '';
foreach($_SESSION["dealer_hand"] as $dealer_card=>$dealer_points) {
switch($dealer_card)
{
case "King":
case "Queen":
case "Jack":
case "10":
$total_dealer += 10;
break;
case "Ace":
if($total_dealer >= 11)
{
$total_dealer += 1;
}else{
$total_dealer += 11;
}
break;
case "9":
$total_dealer += 9;
break;
case "8":
$total_dealer += 8;
break;
case "7":
$total_dealer += 7;
break;
case "6":
$total_dealer += 6;
break;
case "5":
$total_dealer += 5;
break;
case "4":
$total_dealer += 4;
break;
case "3":
$total_dealer += 3;
break;
case "2":
$total_dealer += 2;
break;
}
}
EDIT: Session dealer_pile
if(!isset($_SESSION["dealer_pile"])) $_SESSION["dealer_pile"] = array(
2 => 2,
3 => 3,
4 => 4,
5 => 5,
6 => 6,
7 => 7,
8 => 8,
9 => 9,
10 => 10,
'Jack' => 10,
'Queen' => 10,
'King' => 10,
'Ace' => 11 );
A: draw_dealer_card() needs to increase $total_dealer; otherwise the loop will go on forever.
A more elaborate answer
You only calculate the total once and never again inside the while loop; that is why the dealer's total will never increase and therefore will never reach 17.
Put the code that converts a card to its value in its own function, so you can use it anywhere
<?php
/**
* return the value of the card for the current total
* @param string $card the card to convert to count
* @param int $current_total the current total of the player/dealer
* @return int the value of $card
*/
function get_card_value($card, $current_total) {
switch($card) {
case "King":
case "Queen":
case "Jack":
return 10;
case "Ace":
return ($current_total > 10) ? 1 : 11;
case "10":
case "9":
case "8":
case "7":
case "6":
case "5":
case "4":
case "3":
case "2":
return (int) $card;
}
return 0; // this should not happen probably abort here
}
From here it is easy, edit your while loop like this:
<?php
while ($total_dealer < 17){
list_dealer_hand();
draw_dealer_card();
/* this is bad code using end(),
* which might not always get the last drawn card.
* Also calculation of total is wrong this way:
* What happens if dealer draws Ace, Ace, Ace, King?
* Should be 1+1+1+10 = 13 but will result in 11+1+1+10=23
*/
$total_dealer += get_card_value(end($_SESSION['dealer_hand']), $total_dealer);
}
Correct calculation of total
To make your code more robust add a function calc_total(array $cards) which calculates the total of an array of cards and use it instead in the while loop to recalculate the dealers total. A function like this could look like this
<?php
function calc_total(array $cards) {
//this is a little tricky since aces must be counted last
$total = 0;
$aces = array();
foreach($cards as $card) {
if($card === 'Ace') {
$aces[] = $card;
continue; // next $card
}
$total += get_card_value($card, $total);
}
// add aces values
if (($total + 10 + count($aces)) > 21) {
//all aces must count 1 or 21 will be exceeded
return $total + count($aces);
}
foreach($aces as $card) {
$total += get_card_value($card, $total);
}
return $total;
}
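The ace handling is the easiest part to get wrong, so here is a quick Python port of the same logic (my own sketch for sanity-checking, not part of the PHP answer), verifying the Ace, Ace, Ace, King case mentioned in the code comments:

```python
# A Python sanity-check port of the PHP ace logic, assuming the same
# card names and blackjack rules as the answer above.
def card_value(card, current_total):
    if card in ("King", "Queen", "Jack"):
        return 10
    if card == "Ace":
        return 1 if current_total > 10 else 11
    return int(card)  # "2".."10"

def calc_total(cards):
    total = 0
    aces = [c for c in cards if c == "Ace"]
    for c in cards:
        if c != "Ace":
            total += card_value(c, total)
    # If counting one ace as 11 and the rest as 1 would exceed 21,
    # every ace must count as 1:
    if total + 10 + len(aces) > 21:
        return total + len(aces)
    for c in aces:
        total += card_value(c, total)
    return total

# The case from the comment: 1 + 1 + 1 + 10, not 11 + 1 + 1 + 10.
assert calc_total(["Ace", "Ace", "Ace", "King"]) == 13
```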
Now your while loop could look like this
<?php
while ($total_dealer < 17){
list_dealer_hand();
draw_dealer_card();
// recalculate the dealers total
$total_dealer = calc_total($_SESSION['dealer_hand']);
}
Setting up the pile
Mixing number and string keys is perfectly valid PHP, but it is misleading most of the time. In your pile you only need the cards; the values are not important, since you can get a card's value at any time by calling get_card_value($card, 0). So set up the pile like this:
<?php
if(!isset($_SESSION["dealer_pile"])) $_SESSION["dealer_pile"] = array(
'Jack', 'Queen', 'King', 'Ace', '10', '9', '8', '7', '6', '5', '4', '3', '2'
);
Also change the draw_dealer_card function
<?php
function draw_dealer_card() {
//get a key
$key = array_rand($_SESSION["dealer_pile"]);
// add the card to the hand
$_SESSION["dealer_hand"][] = $_SESSION["dealer_pile"][$key];
/*
* why are you removing it from pile, the pile might
* contain multiple cards of each type
*/
// unset($_SESSION["dealer_pile"][$dealer_card]);
}
Notice how the $_SESSION['dealer_hand'] is no longer associative. Take this into account whenever you are adding cards to it, just use, $_SESSION["dealer_hand"][] = $the_new_card
A: Your current code computes a static value of $total_dealer and checks it in the while loop without ever updating it, which results in an infinite loop. So try putting the foreach{} loop inside the while loop, which allows $total_dealer to be recalculated after each selection.
if(FORM_stand("Stand")){
$total_dealer = 0;
while ($total_dealer < 17){
list_dealer_hand();
draw_dealer_card();
$text_dealer = '';
foreach($_SESSION["dealer_hand"] as $dealer_card=>$dealer_points) {
switch($dealer_card)
{
case "King":
case "Queen":
case "Jack":
case "10":
$total_dealer += 10;
break;
case "Ace":
if($total_dealer >= 11)
{
$total_dealer += 1;
}else{
$total_dealer += 11;
}
break;
case "9":
$total_dealer += 9;
break;
case "8":
$total_dealer += 8;
break;
case "7":
$total_dealer += 7;
break;
case "6":
$total_dealer += 6;
break;
case "5":
$total_dealer += 5;
break;
case "4":
$total_dealer += 4;
break;
case "3":
$total_dealer += 3;
break;
case "2":
$total_dealer += 2;
break;
}
}
}
}
A: try this
$total_dealer=0;
if(FORM_stand("Stand")){
while ($total_dealer < 17){
list_dealer_hand();
draw_dealer_card();
$total_dealer =$total_dealer+1;
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21625443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Why did the folder not clone too So I have Visual Studio 2019 and Azure Devops as my repository.
I went to VS to clone the repo. Everything came with it except one folder. It was an authorization filter folder to filter authorization of a policy based authorization.
Why did that not get cloned?
A: Visual Studio is just calling the git clone command to clone the repo.
I suggest you try the git command directly, such as the following:
git clone https://dev.azure.com/fabrikam/DefaultCollection/_git/Fabrikam C:\Repos\FabrikamFiber
If you still get the same result, I'm afraid these .xxx files are all ignored in Git by default. You need to check the .gitignore file.
Either manually add them or override this for particular folders in your .gitignore file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61550296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I match zero or more brackets in python regex I want a python regex to capture either a bracket or an empty string. Trying the usual approach is not working. I need to escape something somewhere but I've tried everything I know.
one = "this is the first string [with brackets]"
two = "this is the second string without brackets"
# This captures the bracket on the first but throws
# an exception on the second because no group(1) was captured
re.search('(\[)', one).group(1)
re.search('(\[)', two).group(1)
# Adding a "?" for match zero or one occurrence ends up capturing an
# empty string on both
re.search('(\[?)', one).group(1)
re.search('(\[?)', two).group(1)
# Also tried this but same behavior
re.search('([[])', one).group(1)
re.search('([[])', two).group(1)
# This one replicates the first solution's behavior
re.search("(\[+?)", one).group(1) # captures the bracket
re.search("(\[+?)", two).group(1) # throws exception
Is the only solution for me to check that the search returned None?
A: The answer is simple:
(\[+|$)
Because the only empty string you need to capture is the one at the end of the string.
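A quick check of this pattern against the question's two example strings (my own demonstration):

```python
import re

one = "this is the first string [with brackets]"
two = "this is the second string without brackets"

# "\[+" matches a run of brackets; "$" matches the empty string at the
# end of input, so group(1) always exists and no exception is raised.
assert re.search(r"(\[+|$)", one).group(1) == "["
assert re.search(r"(\[+|$)", two).group(1) == ""
```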
A: Here's a different approach.
import re
def ismatch(match):
return '' if match is None else match.group()
one = 'this is the first string [with brackets]'
two = 'this is the second string without brackets'
ismatch(re.search('\[', one)) # Returns the bracket '['
ismatch(re.search('\[', two)) # Returns empty string ''
A: Ultimately, the thing I wanted to do is to take a string and, if I find any square or curly brackets, remove the brackets and their contents from the string.
I was trying to isolate the strings that needed fixing first by finding a match and then fixing the resulting list in a second step, when all I needed to do was do both at the same time, as follows:
re.sub ("\[.*\]|\{.*\}", "", one)
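This works on the example string, with one caveat worth noting (my own addition): .* is greedy, so with multiple bracket pairs on one line it removes everything from the first opening bracket to the last closing one, while the non-greedy .*? keeps the text in between.

```python
import re

one = "this is the first string [with brackets]"
assert re.sub(r"\[.*\]|\{.*\}", "", one) == "this is the first string "

# Greedy vs non-greedy with two bracket pairs on one line:
assert re.sub(r"\[.*\]", "", "a [x] b [y] c") == "a  c"    # eats "b" too
assert re.sub(r"\[.*?\]", "", "a [x] b [y] c") == "a  b  c"  # keeps "b"
```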
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23554088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Voice not transferred/received (Twilio VOIP) after cellular call ends on iOS
If I am on a Twilio video call (VOIP) and at the same time I receive a cellular call on my device, I accept the call, speak with the person, and end the call. When I resume the Twilio video call, everything works except that my voice is not getting transferred to the other end, and the other end's voice is not received at my end.
iOS is using 'TwilioVideo', '~> 1.2.1' SDK's
Android
In Android, we are having a different issue, Audio is not getting transferred to a cellular call if the user is already on a Twilio VOIP call.
Android is using build.gradle dependencies - compile 'com.twilio:video-android:1.2.0'
So basically what I want to do is, even if I am on a Twilio VOIP call, I want the user to be able to successfully do a cellular call from the application in both the platform, Android, and iOS.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45096559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to find out what caused a power mode change VB.NET has an event that fires when a computer's power mode is changed (SystemEvents.PowerModeChanged).
I need my program to find out what caused the power mode change, specifically, if there was a power button pressed, or some other reason.
How can I program this in VB.NET?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30903922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Android: Result of TextView.setText() overridden by fragment I'm a bit new to using fragments, and am having an issue with setting the text of a TextView within a fragment.
Right now, I have a single Activity with a common set of six buttons, along with two fragments which are displayed in a LinearLayout. I am able to use the buttons to replace the fragment in the LinearLayout successfully. However, I'm noticing some strange behavior related to changing a TextView in those two fragments.
I have a method in the fragments' class, called setTimer(), which attempts to change the text in the TextView. The strange thing is, the method works successfully. However, a split second later, the text reverts back to the default text in the TextView contained in the fragment's layout .xml file (which is a blank string).
I've tried calling the setTimer() method both before and after I replace the fragment in the LinearLayout, and the results are the same either way. How can I change this TextView's contents without having them overridden moments later? Thank you!
MainActivity.java's onCreate() method:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
if (savedInstanceState == null) {
FocusFragment initialFragment = new FocusFragment();
initialFragment.setFragmentType( Constants.FRAGMENT_WORK );
initialFragment.setTimer( 25 );
getFragmentManager().beginTransaction()
.add( R.id.linearLayout_fragmentHolder, initialFragment, "fragment_work" )
.commit();
}
}
FocusFragment.java:
public class FocusFragment extends Fragment {
int fragment_type;
static TextView timer;
final String TAG = "FocusFragment";
public FocusFragment() {
}
public void setFragmentType( int fragment_type ) {
this.fragment_type = fragment_type;
}
public int getFragmentType() {
return this.fragment_type;
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View rootView;
if ( getFragmentType() == Constants.FRAGMENT_WORK ) {
rootView = inflater.inflate( R.layout.fragment_work, container, false );
} else {
rootView = inflater.inflate( R.layout.fragment_break, container, false );
}
timer = ( TextView ) rootView.findViewById( R.id.textView_timer );
return rootView;
}
public boolean setTimer( int timerValue ) {
if ( timer != null ) {
if ( timerValue < 10 ) {
timer.setText( "0" + timerValue + ":00" );
} else {
timer.setText( timerValue + ":00" );
}
Log.d( TAG, "Timer text set successfully." );
return true;
}
Log.w( TAG, "WARNING: setTimer() couldn't find the timer TextView!" );
return false;
}
}
Finally, the changeFragment() method, which is called when the buttons are pressed:
public void changeFragment( String fragmentName, int timerValue ){
FocusFragment fragment = new FocusFragment();
if ( fragmentName == "fragment_work" ) {
fragment.setFragmentType( Constants.FRAGMENT_WORK );
} else {
fragment.setFragmentType( Constants.FRAGMENT_BREAK );
}
fragment.setTimer( timerValue );
getFragmentManager().beginTransaction()
.replace( R.id.linearLayout_fragmentHolder, fragment, fragmentName )
.commit();
}
A: The problem is that the OnCreateView() method of the fragment is called after the setTimer() is called.
An easy way to solve this is to first call fragment.setTimerValue(value) when you create the fragment.
void setTimerValue(int value){
this.timerValue = value;
}
Then at the end of the onCreateView() method do:
onCreateView(){
...
setTimer(timerValue);
return rootView;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24553926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Passing Angular form data I'm attempting to pass the data of an Angular form to an external service; however, the base form is returning a null object, which prevents any further processing. I haven't been able to identify the issue. I was able to accomplish an insert form almost identical to this one.
Thanks in advance for any suggestions.
My form
<div class="col-md-6">
<form #currentItem="ngForm" (ngSubmit)="updateItem(currentItem.value)" autocomplete="off" novalidate>
<div class="form-group" [ngClass]="{'error': currentItem.controls.name?.invalid && currentItem.controls.name?.touched}">
<em *ngIf="currentItem.controls.name?.invalid && (currentItem.controls.name?.touched)">*</em>
<label for="name">Item name:</label>
<input [formControl]="name" (ngModel)="currentItem.name" name="name" required id="name" type="text" class="form-control" placeholder="name" />
</div>
<div class="form-group" [ngClass]="{'error': currentItem.controls.description?.invalid && currentItem.controls.description?.touched}">
<em *ngIf="currentItem.controls.description?.invalid && (currentItem.controls.description?.touched)">*</em>
<label for="description">Item Description:</label>
<input [formControl]="description" (ngModel)="currentItem.description" name="description" required id="description" type="text" class="form-control" placeholder="description" />
</div>
<div class="form-group" [ngClass]="{'error': currentItem.controls.price?.invalid && currentItem.controls.price?.touched}">
<em *ngIf="currentItem.controls.price?.invalid && (currentItem.controls.price?.touched)">*</em>
<label for="price">Item Price:</label>
<input [formControl]="price" (ngModel)="currentItem.price" name="price" required id="price" type="text" class="form-control" placeholder="price" />
</div>
<div class="form-group" [ngClass]="{'error': currentItem.controls.inventory?.invalid && currentItem.controls.inventory?.touched}">
<em *ngIf="currentItem.controls.inventory?.invalid && (currentItem.controls.inventory?.touched)">*</em>
<label for="inventory">Item Inventory:</label>
<input [formControl]="inventory" (ngModel)="currentItem.inventory" name="inventory" required id="inventory" type="text" class="form-control" placeholder="inventory" />
</div>
<div class="form-group" [ngClass]="{'error': currentItem.controls.category?.invalid && currentItem.controls.category?.touched}">
<em *ngIf="currentItem.controls.category?.invalid && (currentItem.controls.category?.touched)">*</em>
<label for="category">Item Category:</label>
<input [formControl]="category" (ngModel)="currentItem.category" name="category" required id="category" type="text" class="form-control" placeholder="category" />
</div>
<div class="form-group" [ngClass]="{'error': currentItem.controls.image_url?.invalid && currentItem.controls.image_url?.touched}">
<em *ngIf="currentItem.controls.image_url?.invalid && currentItem.controls.image_url?.touched && currentItem.controls.image_url?.errors.required">*</em>
<label for="image_url">Image:</label>
<input [formControl]="image_url" (ngModel)="currentItem.image_url" name="image_url" required pattern=".*\/.*.(png|jpg)" id="image_url" type="text" class="form-control" placeholder="preview.png" />
<em *ngIf="currentItem.controls.image_url?.invalid && currentItem.controls.image_url?.touched && currentItem.controls.image_url?.errors.pattern">Must be a png or jpg url</em>
<img [src]="currentItem.controls.image_url.value" *ngIf="currentItem.controls.image_url?.valid" />
</div>
<div class="form-group">
<button type="submit" class="btn btn-primary">Update</button>
<button type="button" [disabled]="currentItem.invalid" class="btn btn-default" (click)="cancel()">Cancel</button>
</div>
</form>
</div>
Component
import { Component, OnInit } from '@angular/core';
import { ActivatedRoute, Router, Params } from '@angular/router';
import { IItem } from './../../models/index';
import { ItemsService } from './../../services/index';
import { FormControl } from '@angular/forms';
import { JsonPipe } from '@angular/common';
@Component({
selector: 'app-update-item',
templateUrl: './update-item.component.html',
styleUrls: ['./update-item.component.scss']
})
export class UpdateItemComponent implements OnInit {
existingObject:any;
currentItem: IItem;
isDirty:boolean = true
// Form Controls
name = new FormControl('name');
description = new FormControl('description');
price = new FormControl('price');
inventory = new FormControl('inventory');
category = new FormControl('category');
image_url = new FormControl('image_url');
constructor(private itemsService: ItemsService, private route:ActivatedRoute, private router: Router) { }
updateItem(formValues) {
console.log(formValues); // temporary
console.log(this.currentItem); // temporary
this.itemsService.updateItem(formValues).subscribe(() => {
this.router.navigate(['/items'])
});
}
cancel() {
this.router.navigate(['/items'])
}
ngOnInit() {
this.route.params.forEach((params: Params) => {
this.itemsService.getItem(+params['id']).subscribe((res: any) => {
this.existingObject = res;
this.name.setValue(this.existingObject.name);
this.description.setValue(this.existingObject.description);
this.price.setValue(this.existingObject.price);
this.inventory.setValue(this.existingObject.inventory);
this.category.setValue(this.existingObject.category);
this.image_url.setValue(this.existingObject.image_url);
})
});
}
}
Service
updateItem(item) {
let options = { headers: new HttpHeaders({'Content-Type': 'application/json'})};
return this.http.post<IItem>(this.server_url + '/backend/items/update.php', item, options);
}
A: You should put all your formcontrols in a formGroup
myFormGroup: FormGroup = this.fb.group({
name: new FormControl('name'),
description: new FormControl('description'),
price: new FormControl('price'),
inventory: new FormControl('inventory'),
category: new FormControl('category'),
image_url: new FormControl('image_url'),
});
In order to do so, you need to be able to make a formGroup, thus have the FormBuilder dependency injected.
constructor(..., private fb: FormBuilder) {}
The above code for grouping form controls can be written more concisely like this:
myFormGroup: FormGroup = this.fb.group({
name: [''],
description: [''],
price: [''],
  inventory: [''],
category: [''],
image_url: [''],
});
You will need to add the formGroup to your HTML template as well
<form [formGroup]="myFormGroup" (ngSubmit)="updateItem()" autocomplete="off" novalidate>
...
</form>
Lastly, you can print the values in your submit function like this
updateItem() {
if (this.myFormGroup.valid) //Not necessary since you don't use validators
console.log(this.myFormGroup.value)
}
All this information and more can be found in the Angular Reactive Forms documentation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63515967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Operational Transformation in Meteor.js? Does Meteor.js support Operational Transformation yet?
I'm working on a project which is somewhat related to Etherpad, for which I thought of using Meteor.js (which I think is very well suited for this kind of project). Operational transformation is very important for my project if I want to make it scalable. My current knowledge suggests that Meteor doesn't support operational transformation out of the box (correct me if I am wrong here).
So basically my question is how to implement operational transformation in meteor.js?
I tried using this library, google-diff-match-patch, by Neil Fraser, but had problems while applying patches (though it worked outside Meteor.js quite easily).
So any suggestions?
A: After seeing several Meteor projects make use of OT (i.e. http://cocodojo.meteor.com/), I decided to go for a proper integration.
I've created a smart package to integrate ShareJS into meteor. Please come check it out and add your pull requests: https://github.com/mizzao/meteor-sharejs
Demo App: http://documents.meteor.com
A: An in-browser collaborative text editor has two major components: the text area itself, which must behave well in coordinating the user's typing with other edits that are received from the server; and the data model for sending, receiving, and combining these edits.
Meteor today doesn't provide special help for either of these things specifically, but it does provide real-time data transport, and a way to move data automatically between the client and server.
If I were to implement EtherPad on Meteor, I've always imagined I would use a collection as an "operation log". User changes would be sent to the server, where they would be appended to the official log of operations (basically diffs) which would automatically stream to all clients. The client would have the work of applying diffs that come in and reconciling them with typing that hasn't been acknowledged by the server yet.
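To make the operation-log idea concrete, here is a toy Python sketch (my own illustration of the concept, not Meteor or ShareJS code): the server's append order is the official order, and every client replays ops in that order. What this deliberately omits is the hard part described above: transforming an incoming op against local unacknowledged edits so positions stay valid.

```python
log = []  # the server-side "operation log" collection

def server_append(op):
    log.append(op)        # official order = append order
    return len(log) - 1   # sequence number streamed to clients

def apply_ops(doc, ops):
    # each op is (position, delete_count, inserted_text)
    for pos, delete_count, inserted in ops:
        doc = doc[:pos] + inserted + doc[pos + delete_count:]
    return doc

server_append((6, 0, "big "))   # insert "big " before "world"
server_append((0, 5, "Howdy"))  # replace "hello" with "Howdy"
result = apply_ops("hello world", log)
assert result == "Howdy big world"
```

Note that the second op's position is only valid because it happens to come before the first op's insertion point; in general, concurrent edits shift each other's positions, which is exactly the reconciliation problem OT solves.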
It's a tough implementation challenge. Good luck!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11594043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: JSON object with multiple arrays At the moment I have multiple arrays inside an object containing different pieces of data.
The issue is that the JSON is invalid and I am not sure how to correct it.
Here is my current code:
{
"cars": {
[{
"model": 'test'
}],
[{
"model": 'test2'
}]
}
};
Any help would be appreciated!
A: Use key:value pairs and remove that semicolon.
{
"cars": [
{
"model": "test"
},
{
"model": "test2"
}
]
}
Then once you parse your JSON and assign it to a variable, e.g. jsonVar, you can loop over the array jsonVar.cars to get each dictionary, which has a model property.
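For illustration, the same parse-and-loop in Python (the question doesn't state a language, so this is just one way to sketch the idea; the single quotes and the trailing semicolon in the original would make any strict JSON parser fail):

```python
import json

fixed = '{"cars": [{"model": "test"}, {"model": "test2"}]}'
data = json.loads(fixed)  # the original, invalid JSON would raise here
models = [car["model"] for car in data["cars"]]
assert models == ["test", "test2"]
```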
More examples of correctly formatted JSON.
Finally, this JSON validator can provide helpful hints on incorrectly formatted JSON.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53037764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I declare "Member Fields" in Java? This question probably reveals my total lack of knowledge in Java. But let me first show you what I thought was the correct way to declare a "member field":
public class NoteEdit extends Activity {
private Object mTitleText;
private Object mBodyText;
I'm following a google's notepad tutorial for android (here) and they simply said: "Note that mTitleText and mBodyText are member fields (you need to declare them at the top of the class definition)." I thought I got it and then realized that this little snippet of code wasn't working.
if (title != null) {
mTitleText.setText(title);
}
if (body != null) {
mBodyText.setText(body);
}
So either I didn't set the "member fields" correctly (I thought all that was needed was to declare them as private Objects at the top of the NoteEdit class), or I'm missing something else. Thanks in advance for any help.
UPDATE
I was asked to show where these fields were being initialized. Here is another code snippet; hope that it's helpful...
@Override
protected void onCreate(Bundle savedInstanceState) {
// TODO Auto-generated method stub
super.onCreate(savedInstanceState);
setContentView(R.layout.note_edit);
Long mRowId;
mTitleText = (EditText) findViewById(R.id.title);
mBodyText = (EditText) findViewById(R.id.body);
So basically the error that is showing up is coming from eclipse:
"The method setText(String) is undefined for the type Object"
A: When you declare fields and variables, it's usually helpful to give them a more specific static type than Object. Because you have declared mTitleText as an Object, the compiler only knows how to invoke methods on the general Object class definition. setText is not such a method, so it's not legal to call it without a cast or other trickery.
What you should do is figure out the type that your field should be. I don't know Android, but I presume that there is a text label class which defines your setText method. If you change your fields to be defined as that,
private EditText mTitleText;
you will find that things should work much better :-)
A: Unless you set those two fields somewhere else, they are not being initialized. So when you use them later, they are null and are causing exceptions. Java reference types initialize to null. Basic data types, which are not nullable, initialize to 0.
A: Your mBodyText field needs to be typed to allow access to the setText method.
e.g:
private BodyText mBodyText;
A: its not working because your objects not initialized.
mTitleText text should be a TextView
then you need to initialize it
mTitleText = findViewById( R.id.yourviewid );
and then do
mTitleText.setText(title);
A: They are marked as private, so you can't access them outside of your class. If you want access outside of your class, you have to use the keyword public in front of your field. BUT this is not recommended; use properties instead.
A: In this case you need to assign values to the two member variables. I believe the NoteEdit class has a layout xml file associated with it. You need to assign the text field objects from that layout to the objects before you try to reference their properties.
mTitleText = ( TextView ) findViewById( R.id.name_of_the_field_in_the_layout_file )
The answer above is also correct. You should assign types to those variables at the top of your class, not just make them Objects
A: Declaring the name and type of a reference is one thing; initializing it to point to a valid memory location on the heap is another. You need to initialize data members in a constructor for your class.
The default constructor initializes references to null by default. If you haven't written a constructor to initialize your data member references, that could be an explanation why you're having trouble.
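Pulling the answers together in a minimal, non-Android sketch: the EditText class below is only a stand-in for android.widget.EditText, to show why the field needs a concrete type and must be initialized before setText can be called.

```java
// Stand-in for android.widget.EditText, for illustration only.
class EditText {
    private String text = "";
    void setText(String t) { text = t; }
    String getText() { return text; }
}

class NoteEdit {
    // Typed member field (not Object), declared at the top of the class.
    private EditText mTitleText;

    void onCreate() {
        // Initialize before use; on Android this is findViewById(...).
        mTitleText = new EditText();
        mTitleText.setText("My note title");
    }

    String getTitle() {
        return mTitleText.getText();
    }
}
```

Had mTitleText been declared as Object, the setText call would not compile; had it never been assigned, the call would throw a NullPointerException at runtime.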
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2806045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: There is no "used in actions" in Media-type I'm using Zabbix 3.4.
I want to know how to link Action with media-type.
I tried all the ways I knew, but it was useless.
Below is the steps i did.
1. Create a media-type
2. Create a user for newly created media-type.
3. Create Action.
3.1 Add Operations on the acknowledgement operations tab.
3.3 New - Send message to users.
==> But there is no "used in actions" in the Media-type lists.
If someone has a solution, let me know.
Thanks in advance.
A: That column is populated for media types that are used directly in actions - that is, explicitly selected in the dropdown in action properties.
By default actions do not limit to any media types and all media types will be used as per the user media entries and various filters.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52921069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Powershell Silent Install Exe w/o GUI I am trying to write a silent installation in Powershell that installs an exe using the following command:
$pathargs = {$exePath /s /v /qn}
Invoke-Command -ScriptBlock $pathargs
When I run this however, I'm prompted with this gui:
Does anyone know how to avoid having this page popup and start the installation?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/42491611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Create multiple rects for one data row using join I have the following data structure:
{
{date: <dd-mm-yyyy>,
uname: frank,
x: 3,
nr: 45},{
date: <dd-mm-yyyy>,
uname: john,
x: 4,
nr: 40},
...
}
I'd like to create a rect for every integer that constitutes nr. So in Frank's case: 45 rects. In John's case 40.
Currently I am creating a rect for every row as such:
var blocks = contentGroup
.selectAll("rect")
.data(data)
.join("rect")
.attr("x", d => d.x)
.attr("y", function(d) { return y(d.uname)})
.attr("width", 5)
.attr("height", 5)
.attr("fill", d => z(d.date))
I noticed that d3 associates one DOM element with one datum. I thought about creating a new dataset with 45 entries for Frank and 40 entries for John, but that seems off. Equally, I thought about brute forcing the rect generation, simply by iterating through the nr (45/40 times respectively), but my fear here is that I lose the power of join and enter.
Any other suggestions on how to create <nr> of rects with this setup?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74747792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: React Redux - Manage dependencies between reducers I want to know how to manage dependencies between reducers.
Let's say you fire some action, and there are two reducers "listening" to this action, but you want one of them to run before the other one does.
Example:
You have reducer of ingredients and reducer of meals (meals consist of ingredients).
Every reducer takes the fetched data from the server, makes cast objects out of it and saves them in the store (for instance, the ingredients reducer makes an array of Ingredient objects and returns it as the new state).
Every ingredient has a unique id.
A Meal gets a list of ingredient ids (the ingredients it contains) in the constructor, then fetches the relevant Ingredient objects from the store and adds them as an attribute of Meal.
For reasons of efficiency you fetch all the data (the ingredients and the meals) together from the server (one GET request).
When you fire FETCH_ALL_DATA_FROM_SERVER (with the fetched data as payload) - you want both the ingredients and the meals reducers to "listen" to this action:
*
*The ingredients reducer should parse the ingredients raw info that
was fetched from the server into cast Ingredient objects, and store
them in the Store.
*The meals reducer should parse the meals raw info that was fetched
from the server into cast Mael objects, and store them in the
Store.
But here is where it gets tricky - what if the meals reducer tries to create a Meal object that contains ingredients that are not in the store yet (the ingredients reducer hasn't load them to the store yet)?
How can I solve this problem?
Thanks in advance :)
UPDATED
Code example:
The fetching action looks something like that:
export function fetchAllDataFromDB() {
return dispatch => {
axios.get('http://rest.api.url.com')
.then(response => {
dispatch({
type: FETCH_ALL_DATA_FROM_DB,
payload: response.data
})
})
.catch(error => {
console.log(error)
})
}
}
The ingredients reducer looks something like that:
export default function reducer(state={}, action) {
switch (action.type) {
case FETCH_ALL_DATA_FROM_DB: {
// Create the Ingredient objects from the payload
// They look like Ingredient(id, name, amount)
}
}
return state
}
The meals reducer looks something like that:
export default function reducer(state={}, action) {
switch (action.type) {
case FETCH_ALL_DATA_FROM_DB: {
// Create the Meal objects from the payload
// They look like Meal(id, name, price, ingredients_ids)
// It will try to fetch the ingredients (of ingredients_ids) from the store - and fail, because they are not there yet.
}
}
return state
}
The problem is - you can't create the Meal objects before the ingredients reducer is done loading the relevant ingredients into the store.
UPDATED
I haven't really solved the problem, but what I did is:
I changed the Meal constructor so it won't fetch the Ingredient objects from the store, but will stick with the ingredients_ids list.
I also added a getter that fetches the Meal objects from store.
It might be better this way, because now I can change the ingredients a meal is consist of dynamically (not that I want to...).
It's less efficient though...
If you find a better solution I'd really like to know.
Thanks for the help :)
A: I think you could read more on the topic of asynchronous JavaScript.
In general, if you wish to wait for 2 or more actions to complete before running another method, you could use Promise.all(), nest the callbacks (callback hell), or use the async.series() library. Back to Redux: make sure that your whole queue has finished running before rendering (dispatch()).
Also, my recommendation is to learn Promises instead of the async library.
example case:
fetch().then( res => dispatch({type: "I_GET_THE_DATA", payload: res.data}))
you could even chain fetch if you want.
fetch(mealURL).then( res1 =>{
fetch(ingredientsURL).then(res2 => {
var meal_and_ingredients = {
meal: res1,
ingredients: res2
}
dispatch({type: "GET_TWO_DATA", payload: meal_and_ingredients})
})
})
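A sketch of the Promise.all variant mentioned above; fetchFn is injected so the example stays self-contained, and the URLs are placeholders:

```javascript
// Fetch meals and ingredients in parallel, then build a single action
// so neither reducer runs before the other's data is available.
function fetchBoth(fetchFn) {
  return Promise.all([
    fetchFn("http://rest.api.url.com/meals"),
    fetchFn("http://rest.api.url.com/ingredients")
  ]).then(function (results) {
    return {
      type: "GET_TWO_DATA",
      payload: { meals: results[0], ingredients: results[1] }
    };
  });
}
```

In the real action creator you would pass fetch (or an axios wrapper) as fetchFn and dispatch the resulting action.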
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39379539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to test some conditions by code then validate or invalidate and show error message? I want to test whether the customer name is unique in the database.
If the customer name was added before, then send a validation error message.
In the past we tested this by adding a new error to ModelState, like this:
ModelState.AddModelError("Name", "Some message");
How do I do this in Blazor?
A: I assume since you reference ModelState you want to know how forms and validation works in Blazor. Have you looked at the documentation?
https://learn.microsoft.com/en-us/aspnet/core/blazor/forms-validation?view=aspnetcore-3.0
This explains how to use a validation process to show errors in a form. As well as built-in validations ( [Required] etc) you can also create custom validations, e.g. How to create Custom Data Annotation Validators
Alternatively you can use a more powerful library such as Fluent Validation - see these articles for more help integrating this with Blazor:
https://blog.stevensanderson.com/2019/09/04/blazor-fluentvalidation/
https://chrissainty.com/using-fluentvalidation-for-forms-validation-in-razor-components/
A: I used FluentValidation with Steve Sanderson's implementation https://gist.github.com/SteveSandersonMS/090145d7511c5190f62a409752c60d00#file-fluentvalidator-cs
I then made a few modifications to it.
First I added a new interface based on IValidator.
public interface IMyValidator : IValidator
{
/// <summary>
/// This should be the objects primary key.
/// </summary>
object ObjectId { get; set; }
}
Then I changed the FluentValidator to implement IMyValidator and added a new parameter.
public class FluentValidator<TValidator> : ComponentBase where TValidator: IMyValidator,new()
{
/// <summary>
/// This should be the objects primary key.
/// </summary>
[Parameter] public object ObjectId { get; set; }
... continue with the rest of Steve Sanderson's code
}
For my FluentValidation AbstractValidator I did the following.
public class InvestigatorValidator:AbstractValidator<IAccidentInvestigator>,IMyValidator
{
public object ObjectId { get; set; }
public InvestigatorValidator()
{
RuleFor(user=>user.LogonName).NotEmpty().NotNull().MaximumLength(100);
RuleFor(user=>user.Email).NotEmpty().NotNull().MaximumLength(256);
RuleFor(user=>user.FullName).NotEmpty().NotNull().MaximumLength(100);
RuleFor(user=>user.RadioId).MaximumLength(25);
RuleFor(user=>user.LogonName).MustAsync(async (userName, cancellation)=>
{
var exists = await GetUserNameExists(userName);
return !exists;
}).WithMessage("UserName must be unique.");
RuleFor(user=>user.Email).MustAsync(async (email, cancellation)=>
{
var exists = await GetEmailExists(email);
return !exists;
}).WithMessage("Email must be unique.");
}
private async Task<bool> GetUserNameExists(string userName)
{
if(ObjectId is int badge)
{
await using var db = MyDbContext;
var retVal = await db.AccidentInvestigators.AnyAsync(a=>a.Badge != badge && string.Equals(a.LogonName.ToLower(), userName.ToLower()));
return retVal;
}
return false;
}
private async Task<bool> GetEmailExists(string email)
{
if(ObjectId is int badge)
{
await using var db = DbContext;
var retVal = await db.AccidentInvestigators.AnyAsync(a=>a.Badge != badge && string.Equals(a.Email.ToLower(), email.ToLower()));
return retVal;
}
return false;
}
}
Then in my Razor Component Form I changed the FluentValidator to set the ObjectId.
<EditForm Model="_investigator" OnValidSubmit="Save">
<FluentValidator TValidator="InvestigatorValidator" ObjectId="@_investigator.Badge"/>
... put the rest of your layout here
</EditForm>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/59030664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Locate wp-config.php file after Word Press Click To Deploy on Google Compute Engine For Migration I have created a virtual machine on Google Cloud Platform that is running Wordpress, which was setup using Wordpress click-to-deploy: https://console.cloud.google.com/marketplace/details/click-to-deploy-images/wordpress?pli=1&_ga=2.72191571.-1784804083.1544815132&_gac=1.153409866.1545571350.Cj0KCQiAgf3gBRDtARIsABgdL3mwngvHYtz5GvkiA6vsknZDGdM8JIDPByT7v2O4m0tkvXXibVI0trAaAi37EALw_wcB
I am trying to migrate my website over to GCP, and to export everything I used All-in-One WP Migration. I have used this to export the website data, but when I go over to my new host and try to import the data, it says that there is a file size limit of 100 MB for imports. I found that I need to increase the limit in my wp-config.php file (option 2): https://help.servmask.com/2018/10/27/how-to-increase-maximum-upload-file-size-in-wordpress/
However, I cannot find my wp-config.php anywhere on GCP or in the WordPress dashboard platform. How do I access this in order to increase my limit so that I can import the new file?
A: I figured it out using this:
Go search here: /var/www/html/
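A quick way to confirm the location from an SSH session (a sketch; /var/www is where the click-to-deploy image typically puts the web root, but the path may vary):

```shell
# Search the web root for wp-config.php; ignore permission errors:
find /var/www -name wp-config.php 2>/dev/null || true
```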
A: I solved this by using the Duplicator plugin for wordpress, instead of the All-in-One WP migration.
The Duplicator plugin will produce two files; an archive file and an installer file. This link here will tell you how to upload these:
https://snapcreek.com/duplicator/docs/quick-start/?utm_source=duplicator_free&utm_medium=wordpress_plugin&utm_content=package_built_install_help&utm_campaign=duplicator_free#quick-040-q
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54162599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Insert/update to sql_ascii encoding postgreSQL Here is a PostgreSQL database with server encoding SQL_ASCII. When I get data, I must use the function convert_to(column1, 'SQL_ASCII') in the select, and then use new String(value1, 'GBK') in Java to get the right value.
But when I send data by insert/update, the value in the DB is always wrong. Can anyone tell me how to send SQL including Chinese or other characters from Java?
Apache DBCP config:
driverClassName=org.postgresql.Driver
url=jdbc:postgresql://127.0.0.1:5432/fxk_db_sql_ascii
username=test
password=test
initialSize=10
maxTotal=10
maxIdle=10
minIdle=5
maxWaitMillis=1000
removeAbandonedOnMaintenance=true
removeAbandonedOnBorrow=true
removeAbandonedTimeout=1
connectionProperties=useUnicode=true;characterEncoding=SQL_ASCII;allowEncodingChanges=true
SQL query in java:
String sql = "select user_id, first_name as first_name, convert_to(first_name, 'sql_ascii') as first_name1, last_name as last_name, convert_to(last_name, 'sql_ascii') as last_name1 from public.tbl_users";
ResultSet rs = stmt.executeQuery(sql);
List<Map<String, Object>> list = new ArrayList<Map<String, Object>>();
ResultSetMetaData md = rs.getMetaData();
int columnCount = md.getColumnCount();
while (rs.next()) {
Map<String, Object> rowData = new HashMap<String, Object>();
for (int i = 1; i <= columnCount; i++) {
rowData.put(md.getColumnName(i), rs.getObject(i)==null?"":new String(rs.getBytes(i),"GBK"));
}
list.add(rowData);
}
rs.close();
But what should I do for insert/update?
A: Avoid SQL_ASCII
You should be using a server encoding of UTF8 rather than SQL_ASCII.
The documentation is quite clear about this matter, and even includes a warning to not do what you are doing. To quote (emphasis mine):
The SQL_ASCII setting behaves considerably differently from the other settings. When the server character set is SQL_ASCII, the server interprets byte values 0-127 according to the ASCII standard, while byte values 128-255 are taken as uninterpreted characters. No encoding conversion will be done when the setting is SQL_ASCII. Thus, this setting is not so much a declaration that a specific encoding is in use, as a declaration of ignorance about the encoding. In most cases, if you are working with any non-ASCII data, it is unwise to use the SQL_ASCII setting because PostgreSQL will be unable to help you by converting or validating non-ASCII characters.
Use UTF8
Use an encoding of UTF8, meaning UTF-8. This can handle the characters for any language including Chinese.
And the UTF8 encoding allows Postgres to make use of the new support for International Components for Unicode (ICU) in Postgres 10 and later.
Java also uses Unicode encoding. Just let your JDBC driver handle the marshaling of text between Java and the database.
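If you can recreate the database, a minimal sketch of setting the encoding at creation time (the database name is illustrative; template and locale options depend on your environment):

```sql
-- Create a database that stores text as UTF-8 instead of SQL_ASCII:
CREATE DATABASE fxk_db_utf8 WITH ENCODING 'UTF8' TEMPLATE template0;
```

Existing SQL_ASCII data would then typically be dumped and re-imported with a correctly declared client encoding.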
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49713964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Angular2/Ionic2 circular dependent modules because of navigation between pages I was refactoring my project to match the Angular2 styleguide at https://angular.io/guide/styleguide At first I had one module so there was no problem. Now while refactoring and with splitting into modules I got circular dependencies because of the navigation between pages of different modules.
Simplified, I have three modules, each having components:
*
*Shared
*
*BookListItemComponent
*AjaxSpinnerComponent
*FormHelperComponent
*...
*Books
*
*BookComponent
*Shops
*
*ShopListComponent
Modules Books and Shops each import Shared. The BookListItemComponent shows a book title, and when tapped navigates to the BookComponent which show the book's details.
The ShopListComponent shows a list of books of a certain shop.
Since the Books module imports the Shared module to use the spinner etc. this creates a circular dependency. How are we supposed to solve this?
In an app you navigate between pages of different modules. I don't see a way to avoid having these pointing at each other. Especially with the BookListItemComponent which is used all over the app to list books.
I have also looked at:
*
*https://forum.ionicframework.com/t/ionic2-navigation-circular-depencies/41123/6
*Circular dependency injection angular 2
But couldn't really map this to my problem.
A: So I feel your pain in refactoring all of your code. We went through the same thing with one of our apps at work and it was a pain. On the backside of it though, well worth it! The way ionic suggests you to organize your files is not sustainable. I have a couple thoughts and ideas for you based on going through the same thing that might help you out.
First off, putting everything back in the same module is not the best idea, because then you end back where you started with the disorganized code as ionic would have you do it, rather than the organized code as Angular suggests. In general, Angular suggests some great patterns, and I think it's worth the struggle to get your code in line with their suggestions.
Secondly, and this is the main point, are you using deep links in your app? If you are, there is a fantastic, barely documented feature you get with deeplinks to avoid circular dependency within pages. Suppose you have a page with the following deep link config:
{
component: MyCoolPage, // the page component class
name: 'MyCoolPage', // I use the name of the class but can be any sting you want
segment: 'cool-page' // optional, not related to the problem,
// but it's probably best to use this field as well
}
Whenever you want to navigate to MyCoolPage, instead of doing navCtrl.push(MyCoolPage), you can now do navCtrl.push('MyCoolPage') // or whatever string name you gave the page. So now you're navigating to pages via string names, which eliminates the need for importing pages whenever you want to navigate to it. This feature has existed since ionic 2, although I did not realize you could do this until updating to ionic 3.
Thirdly, more of a design consideration than anything else, you might want to reconsider navigating to pages from within components. Generally what we do is emit events up to the parent page components, and then have the page component handle pushing or popping the nav stack. Your BookListItemComponent shouldn't be causing you problems. If that is something in the shared module, used throughout the app, it shouldn't be depending on other modules. Your shared module shouldn't depend on anything else besides the ionic and angular modules you need to import.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44912488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How does the efficiency compare in these two SQL statements? I'm updating a stored function that fetches a few columns from a table and does some other operations on it before returning it. Depending on an argument passed to the function, one value might be a special, hard-coded string, but 99% of the time, it's just going to be the column value instead.
I want this to be a future-proof solution, and can either handle it in a CASE statement inside the SELECT, or in procedural logic after the SELECT.
Which one is likely to be more efficient?
--Option 1
CREATE FUNCTION WeWantTheFunc(@arg1 NVARCHAR(40))
RETURNS NVARCHAR(255)
BEGIN
DECLARE @my_variable NVARCHAR(255);
SELECT @my_variable = CASE
WHEN Column1 IS NULL AND @arg1 = 'Special Value'
THEN 'Something special'
ELSE Column1
END
FROM my_table;
RETURN @my_variable;
END
--Option 2
CREATE FUNCTION WeWantTheFunc(@arg1 NVARCHAR(40))
RETURNS NVARCHAR(255)
BEGIN
DECLARE @my_variable NVARCHAR(255);
SELECT @my_variable = column1
FROM my_table;
IF @my_variable IS NULL AND @arg1 = 'Special Value'
SET @my_variable = 'Something special';
RETURN @my_variable;
END
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74380342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Remove old marker and add a new one after calling an ajax method every x seconds with OpenLayers I want to change the marker position every x seconds using an ajax method, but I am facing one issue: the new marker is added to OpenLayers, but the old marker is not removed. I want to remove the old marker first and then add the new marker in the updated place.
function ShowCurrentTime() {
var obj = {};
obj.device_id = $.trim($("[id*=txtdevice_id]").val());
var marker;
var mapOptions;
$.ajax({
url: "TRACKING.aspx/GetData",
data: JSON.stringify(obj),
type: "POST",
dataType: "json",
contentType: "application/json; charset=utf-8",
success: function(data) {
if (data.d != '') {
var lat = data.d[1];
var lng = data.d[2];
}
$.each(data, function(index, value) {
var zoom = 13;
var marker = new OpenLayers.Layer.Markers("Markers");
var lonLat = new OpenLayers.LonLat(lng, lat).transform(new OpenLayers.Projection("EPSG:4326"), map.getProjectionObject());
map.addLayer(marker);
map.removeLayer(marker);
marker.addMarker(new OpenLayers.Marker(lonLat));
map.setCenter(lonLat, zoom);
});
}
});
}
A: You are initializing the marker again inside the ajax callback; remove it first and then initialize it again.
this should work
function ShowCurrentTime() {
var obj = {};
obj.device_id = $.trim($("[id*=txtdevice_id]").val());
var marker = null;
var mapOptions;
$.ajax({
url: "TRACKING.aspx/GetData",
data: JSON.stringify(obj),
type: "POST",
dataType: "json",
contentType: "application/json; charset=utf-8",
success: function(data) {
if (data.d != '') {
var lat = data.d[1];
var lng = data.d[2];
}
$.each(data, function(index, value) {
var zoom = 13;
if (marker != null) map.removeLayer(marker);
marker = new OpenLayers.Layer.Markers("Markers");
var lonLat = new OpenLayers.LonLat(lng, lat).transform(new OpenLayers.Projection("EPSG:4326"), map.getProjectionObject());
map.addLayer(marker);
marker.addMarker(new OpenLayers.Marker(lonLat));
map.setCenter(lonLat, zoom);
});
}
});
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51096648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Get parent directory in Ansible? Is there a way to evaluate a relative path in Ansible?
tasks:
- name: Run docker containers
include: tasks/dockerup.yml src_code='..'
Essentially I am interested in passing the source code path to my task. It happens that the source code is the parent path of {{ansible_inventory}} but there doesn't seem to be anything to accomplish that out of the box.
---- further info ----
Project structure:
myproj
app
deploy
deploy.yml
So I am trying to access app from deploy.yml.
A: You can use the dirname filter:
{{ inventory_dir | dirname }}
For reference, see Managing file names and path names in the docs.
A: You can use {{playbook_dir}} for the absolute path to your current playbook run.
For me that's the best way, because you normally know where your playbook is located.
A: OK, a workaround is to use a separate task just for this:
tasks:
- name: Get source code absolute path
shell: dirname '{{inventory_dir}}'
register: dirname
- name: Run docker containers
include: tasks/dockerup.yml src_code={{dirname.stdout}}
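For reference, the shell command that the registered task runs behaves like this (the path is illustrative):

```shell
# dirname strips the last path component, yielding the parent directory:
dirname /home/user/myproj/deploy   # prints /home/user/myproj
```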
Thanks to udondan for hinting me on inventory_dir.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35271368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: How to wrap 2 divs into another on on the fly? I have 2 divs that I am creating from an array:
$.each(data, function(i,item) {
$('<img/>').attr("src", item.media_path).wrap('<div class="friend_pic' + item.id + '"></div>').appendTo('.sex');
$('<div class="friends-name' + item.id + '" id="fname_' + item.id + '" />').html(item.fname).appendTo('.sex');
});
And I want to wrap them all in a div, like this:
<div class="sex">... the divs from the each function ...</div>
<div class="sex">... the divs from the each function ...</div>
<div class="sex">... the divs from the each function ...</div>
I am using: $('<div class="sex"></div>').appendTo('.mutual_row'); or:
$('.mutual_friends, friends-name1').wrap('<div class="sex"></div>')
But they give the wrong result, wrapping them all in one div called sex
A: $.each(data, function(i,item) {
var container = $('<div class="sex" />');
$('<img/>').attr("src", item.media_path).wrap('<div class="friend_pic' + item.id + '"></div>').appendTo(container);
$('<div class="friends-name' + item.id + '" id="fname_' + item.id + '" />').html(item.fname).appendTo(container);
container.appendTo('.mutual_row');
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5545842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: explain analyze - cost to actual time relation Usually when improving my queries I see a corresponding improvement in both cost and actual time when running an explain analyze on both the before and after queries.
However, in one case, the before query reports
"Hash Join (cost=134.06..1333.57 rows=231 width=70)
(actual time=115.349..115.650 rows=231 loops=1)"
<cut...>
"Planning time: 4.060 ms"
"Execution time: 115.787 ms"
and the after reports
"Hash Join (cost=4.63..1202.61 rows=77 width=70)
(actual time=0.249..0.481 rows=231 loops=1)"
<cut...>
"Planning time: 2.079 ms"
"Execution time: 0.556 ms"
So as you can see, the costs are similar but the actual execution times are vastly different, regardless of the order in which I run the tests.
Using Postgres 8.4.
Can anyone clear up my understanding as to why the cost does not show an improvement?
A: There isn't much information available in the details given in the question, but a few pointers may help others who come here searching on the topic.
*
*The cost is a numerical estimate based on table statistics that are calculated when analyze is run on the tables that are involved in the query. If the table has never been analyzed then the plan and the cost may be way sub optimal. The query plan is affected by the table statistics.
*The actual time is the actual time taken to run the query. Again this may not correlate properly to the cost depending on how fresh the table statistics are. The plan may be arrived upon depending on the current table statistics, but the actual execution may find real data conditions different from what the table statistics tell, resulting in a skewed execution time.
Point to note here is that, table statistics affect the plan and the cost estimate, where as the plan and actual data conditions affect the actual time. So, as a best practice, before working on query optimization, always run analyze on the tables.
A few notes:
*
*analyze <table> - updates the statistics of the table.
*vacuum analyze <table> - removes stale versions of the updated records from the table and then updates the statistics of the table.
*explain <query> - only generates a plan for the query using statistics of the tables involved in the query.
*explain (analyze) <query> - generates a plan for the query using existing statistics of the tables involved in the query, and also runs the query collecting actual run time data. Since the query is actually run, if the query is a DML query, then care should be taken to enclose it in begin and rollback if the changes are not intended to be persisted.
A: *
*Cost meaning
*
*The costs are in an arbitrary unit. A common misunderstanding is that they are in milliseconds or some other unit of time, but that’s not the case.
*The cost units are anchored (by default) to a single sequential page read costing 1.0 units (seq_page_cost).
*
*Each row processed adds 0.01 (cpu_tuple_cost)
*Each non-sequential page read adds 4.0 (random_page_cost).
*There are many more constants like this, all of which are configurable.
*Startup cost
*
*The first numbers you see after cost= is known as the “startup cost”. This is an estimate of how long it will take to fetch the first row.
*The startup cost of an operation includes the cost of its children.
*Total cost
*
*After the startup cost and the two dots, is known as the “total cost”. This estimates how long it will take to return all the rows.
*example
QUERY PLAN |
--------------------------------------------------------------+
Sort (cost=66.83..69.33 rows=1000 width=17) |
Sort Key: username |
-> Seq Scan on users (cost=0.00..17.00 rows=1000 width=17)|
*
*We can see that the total cost of the Seq Scan operation is 17.00, and the startup cost of the Seq Scan is 0.00. For the Sort operation, the total cost is 69.33, which is not much more than its startup cost (66.83).
*Actual time meaning
*
*The “actual time” values are in milliseconds of real time, it is the result of EXPLAIN's ANALYZE. Note: the EXPLAIN ANALYZE option performs the query (be careful with UPDATE and DELETE)
*EXPLAIN ANALYZE could be used to compare the estimated number of rows with the actual rows returned by each operation.
*Helping the planner estimate more accurately
*
*Gather better statistics
*
*Tables also change over time, so tuning the autovacuum settings to make sure it runs frequently enough for your workload can be very helpful.
*If you’re having trouble with bad estimates for a column with a skewed distribution, you may benefit from increasing the amount of information Postgres gathers by using the ALTER TABLE SET STATISTICS command, or even the default_statistics_target for the whole database.
*Another common cause of bad estimates is that, by default, Postgres will assume that two columns are independent. You can fix this by asking it to gather correlation data on two columns from the same table via extended statistics.
*Tune the constants it uses for the calculations
*
*Assuming you’re running on SSDs, you’ll likely at minimum want to tune your setting of random_page_cost. This defaults to 4, which is 4x more expensive than the seq_page_cost we looked at earlier. This ratio made sense on spinning disks, but on SSDs it tends to penalize random I/O too much.
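The suggestions above boil down to a few statements (the table and column names are illustrative):

```sql
-- Gather more detailed statistics on a skewed column,
-- plus cross-column statistics for correlated columns
ALTER TABLE users ALTER COLUMN country SET STATISTICS 500;
CREATE STATISTICS users_city_country (dependencies) ON city, country FROM users;
ANALYZE users;

-- On SSDs, bring random reads closer to the cost of sequential reads
SET random_page_cost = 1.1;
```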
Source:
*
*PG doc - using explain
*Postgres explain cost
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56387883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: I can't connect to a MySQL DB in symfony 1.0 I use symfony 1.0.22 on localhost, on MAMP.
While studying with askeet, I got stuck...
I can't connect to the MySQL DB…
It says "no such file or directory".
I can't understand what it is saying!
The log is below. Please tell me what I can do…
> symfony propel-insert-sql
> >> schema converting "/Applications/MAMP/..._fail/config/schema.yml" to XML
> >> schema putting /Applications/MAMP/htdo...ail/config/generated-schema.xml Buildfile:
> /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build.xml
> [resolvepath] Resolved /Applications/MAMP/htdocs/hatch/sf_fail/config
> to /Applications/MAMP/htdocs/hatch/sf_fail/config
>
> propel-project-builder > check-project-or-dir-set:
>
> propel-project-builder > check-project-set:
>
> propel-project-builder > set-project-dir:
>
> propel-project-builder > check-buildprops-exists:
>
> propel-project-builder > check-buildprops-for-propel-gen:
>
> propel-project-builder > check-buildprops:
>
> propel-project-builder > configure:
> [echo] Loading project-specific props from /Applications/MAMP/htdocs/hatch/sf_fail/config/propel.ini [property]
> Loading /Applications/MAMP/htdocs/hatch/sf_fail/config/propel.ini
>
> propel-project-builder > insert-sql:
> [phing] Calling Buildfile '/Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build-propel.xml'
> with target 'insert-sql' [property] Loading
> /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/./default.properties
>
> propel > insert-sql: [propel-sql-exec] Executing statements in file:
> /Applications/MAMP/htdocs/hatch/sf_fail/data/sql/lib.model.schema.sql
> [propel-sql-exec] Our new url -> mysql://root:root@localhost/iii
> Execution of target "insert-sql" failed for the following reason:
> /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build-propel.xml:296:1:
> [wrapped: connect failed [Native Error: No such file or directory]
> [User Info: Array]]
> [phing] /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build-propel.xml:296:1:
> [wrapped: connect failed [Native Error: No such file or directory]
> [User Info: Array]]
>
> BUILD FINISHED
>
> Total time: 0.3183 seconds
> >> file- /Applications/MAMP/htdocs/hatch...ail/config/generated-schema.xml
P.S. and propel.ini is below.
> propel.targetPackage = lib.model
> propel.packageObjectModel = true
> propel.project = sf_fail
> propel.database = mysql
> #propel.database.createUrl = mysql://root:root@localhost/
> propel.database.url = mysql://root:root@localhost/sf_fail
>
> propel.addGenericAccessors = true
> propel.addGenericMutators = true
> propel.addTimeStamp = false
>
> propel.schema.validate = false
> propel.mysql.tableType = InnoDB
>
> ; directories
> propel.home = .
> propel.output.dir = /var/www/symfony/sf_fail
> propel.schema.dir = ${propel.output.dir}/config
> propel.conf.dir = ${propel.output.dir}/config
> propel.phpconf.dir = ${propel.output.dir}/config
> propel.sql.dir = ${propel.output.dir}/data/sql
> propel.runtime.conf.file = runtime-conf.xml
> propel.php.dir = ${propel.output.dir}
> propel.default.schema.basename = schema
> propel.datadump.mapper.from = *schema.xml
> propel.datadump.mapper.to = *data.xml
>
> ; builder settings
> propel.builder.peer.class = addon.propel.builder.SfPeerBuilder
> propel.builder.object.class = addon.propel.builder.SfObjectBuilder
>
> propel.builder.objectstub.class = addon.propel.builder.SfExtensionObjectBuilder
> propel.builder.peerstub.class = addon.propel.builder.SfExtensionPeerBuilder
> propel.builder.objectmultiextend.class = addon.propel.builder.SfMultiExtendObjectBuilder
> propel.builder.mapbuilder.class = addon.propel.builder.SfMapBuilderBuilder
> propel.builder.interface.class = propel.engine.builder.om.php5.PHP5InterfaceBuilder
> propel.builder.node.class = propel.engine.builder.om.php5.PHP5NodeBuilder
> propel.builder.nodepeer.class = propel.engine.builder.om.php5.PHP5NodePeerBuilder
> propel.builder.nodestub.class = propel.engine.builder.om.php5.PHP5ExtensionNodeBuilder
> propel.builder.nodepeerstub.class = propel.engine.builder.om.php5.PHP5ExtensionNodePeerBuilder
>
> propel.builder.addIncludes = false
> propel.builder.addComments = false
>
> propel.builder.addBehaviors = false
What I did before I encountered this problem
install symfony1.0.22 in /Applications/MAMP/bin/php5/lib/php/ by pear
$ sudo ln -s /Applications/MAMP/bin/php5/bin/pear /usr/bin/pear
$ pear upgrade PEAR
$ pear channel-discover pear.symfony-project.com
$ pear install symfony-1.0.22
$ sudo ln -s -f /Applications/MAMP/bin/php5/bin/symfony /usr/bin/symfony
and I made dir named 'sf_fail' in /Application/MAMP
$cd /Application/MAMP/sf_fail
$symfony init-project sf_fail
$symfony init-app frontend
Next, I edited schema.yml, database.yml, and propel.ini to create the DB, and I made a DB named 'sf_fail' with MAMP's phpMyAdmin.
$symfony propel-build-model
$symfony propel-build-sql
I think there is no problem until the command below.
$symfony propel-insert-sql
This one doesn't work, hmm...
I just added the result of "propel-build-all" for more information.
> symfony propel-build-all
> >> schema converting "/Applications/MAMP/..._fail/config/schema.yml" to XML
> >> schema putting /Applications/MAMP/htdo...ail/config/generated-schema.xml
> Buildfile: /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build.xml
> [resolvepath] Resolved /Applications/MAMP/htdocs/hatch/sf_fail/config to
> /Applications/MAMP/htdocs/hatch/sf_fail/config
>
> propel-project-builder > check-project-or-dir-set:
>
> propel-project-builder > check-project-set:
>
> propel-project-builder > set-project-dir:
>
> propel-project-builder > check-buildprops-exists:
>
> propel-project-builder > check-buildprops-for-propel-gen:
>
> propel-project-builder > check-buildprops:
>
> propel-project-builder > configure:
> [echo] Loading project-specific props from /Applications/MAMP/htdocs/hatch/sf_fail/config/propel.ini
> [property] Loading /Applications/MAMP/htdocs/hatch/sf_fail/config/propel.ini
>
> propel-project-builder > om:
> [phing] Calling Buildfile '/Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build-propel.xml'
> with target 'om'
> [property] Loading /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/./default.properties
>
> propel > check-run-only-on-schema-change:
>
> propel > om-check:
>
> propel > om:
> [echo] +------------------------------------------+
> [echo] | |
> [echo] | Generating Peer-based Object Model for |
> [echo] | YOUR Propel project! (NEW OM BUILDERS)! |
> [echo] | |
> [echo] +------------------------------------------+
> [phingcall] Calling Buildfile '/Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build-propel.xml'
> with target 'om-template'
> [property] Loading /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/./default.properties
>
> propel > om-template:
> [PHP Error] strftime(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting
> or the date_default_timezone_set() function. In case you used any of
> those methods and you are still getting this warning, you most likely
> misspelled the timezone identifier. We selected 'Asia/Tokyo' for
> 'JST/9.0/no DST' instead [line 539 of
> /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/classes/propel/phing/AbstractPropelDataModelTask.php]
> [propel-om] Target database type: mysql
> [propel-om] Target package: lib.model
> [propel-om] Using template path: /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/templates
> [propel-om] Output directory: /Applications/MAMP/htdocs/hatch/sf_fail
> [propel-om] Processing: generated-schema.xml
> [propel-om] Processing Datamodel : JoinedDataModel
> [propel-om] - processing database : propel
> [propel-om] + user
> [propel-om] -> BaseUserPeer [builder: SfPeerBuilder]
> [propel-om] -> BaseUser [builder: SfObjectBuilder]
> [propel-om] -> UserMapBuilder [builder: SfMapBuilderBuilder]
> [propel-om] -> (exists) UserPeer
> [propel-om] -> (exists) User
> [propel-om] + tag
> [propel-om] -> BaseTagPeer [builder: SfPeerBuilder]
> [propel-om] -> BaseTag [builder: SfObjectBuilder]
> [propel-om] -> TagMapBuilder [builder: SfMapBuilderBuilder]
> [propel-om] -> (exists) TagPeer
> [propel-om] -> (exists) Tag
> [propel-om] + photo
> [propel-om] -> BasePhotoPeer [builder: SfPeerBuilder]
> [propel-om] -> BasePhoto [builder: SfObjectBuilder]
> [propel-om] -> PhotoMapBuilder [builder: SfMapBuilderBuilder]
> [propel-om] -> (exists) PhotoPeer
> [propel-om] -> (exists) Photo
> [propel-om] + idle
> [propel-om] -> BaseIdlePeer [builder: SfPeerBuilder]
> [propel-om] -> BaseIdle [builder: SfObjectBuilder]
> [propel-om] -> IdleMapBuilder [builder: SfMapBuilderBuilder]
> [propel-om] -> (exists) IdlePeer
> [propel-om] -> (exists) Idle
>
> BUILD FINISHED
>
> Total time: 1.5223 second
> >> file- /Applications/MAMP/htdocs/hatch...ail/config/generated-schema.xml
> >> schema converting "/Applications/MAMP/..._fail/config/schema.yml" to XML
> >> schema putting /Applications/MAMP/htdo...ail/config/generated-schema.xml
> Buildfile: /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build.xml
> [resolvepath] Resolved /Applications/MAMP/htdocs/hatch/sf_fail/config to
> /Applications/MAMP/htdocs/hatch/sf_fail/config
>
> propel-project-builder > check-project-or-dir-set:
>
> propel-project-builder > check-project-set:
>
> propel-project-builder > set-project-dir:
>
> propel-project-builder > check-buildprops-exists:
>
> propel-project-builder > check-buildprops-for-propel-gen:
>
> propel-project-builder > check-buildprops:
>
> propel-project-builder > configure:
> [echo] Loading project-specific props from /Applications/MAMP/htdocs/hatch/sf_fail/config/propel.ini
> [property] Loading /Applications/MAMP/htdocs/hatch/sf_fail/config/propel.ini
>
> propel-project-builder > sql:
> [phing] Calling Buildfile '/Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build-propel.xml'
> with target 'sql'
> [property] Loading /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/./default.properties
>
> propel > check-run-only-on-schema-change:
>
> propel > sql-check:
>
> propel > pgsql-quoting-check:
>
> propel > sql:
> [echo] +------------------------------------------+
> [echo] | |
> [echo] | Generating SQL for YOUR Propel project! |
> [echo] | |
> [echo] +------------------------------------------+
> [phingcall] Calling Buildfile '/Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build-propel.xml'
> with target 'sql-template'
> [property] Loading /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/./default.properties
>
> propel > sql-template:
> [propel-sql] Processing: generated-schema.xml
> [propel-sql] Writing to SQL file: /Applications/MAMP/htdocs/hatch/sf_fail/data/sql/lib.model.schema.sql
> [propel-sql] + user [builder: MysqlDDLBuilder]
> [propel-sql] + tag [builder: MysqlDDLBuilder]
> [propel-sql] + photo [builder: MysqlDDLBuilder]
> [propel-sql] + idle [builder: MysqlDDLBuilder]
>
> BUILD FINISHED
>
> Total time: 0.3328 seconds
> >> file- /Applications/MAMP/htdocs/hatch...ail/config/generated-schema.xml
> >> schema converting "/Applications/MAMP/..._fail/config/schema.yml" to XML
> >> schema putting /Applications/MAMP/htdo...ail/config/generated-schema.xml
> Buildfile: /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build.xml
> [resolvepath] Resolved /Applications/MAMP/htdocs/hatch/sf_fail/config to
> /Applications/MAMP/htdocs/hatch/sf_fail/config
>
> propel-project-builder > check-project-or-dir-set:
>
> propel-project-builder > check-project-set:
>
> propel-project-builder > set-project-dir:
>
> propel-project-builder > check-buildprops-exists:
>
> propel-project-builder > check-buildprops-for-propel-gen:
>
> propel-project-builder > check-buildprops:
>
> propel-project-builder > configure:
> [echo] Loading project-specific props from /Applications/MAMP/htdocs/hatch/sf_fail/config/propel.ini
> [property] Loading /Applications/MAMP/htdocs/hatch/sf_fail/config/propel.ini
>
> propel-project-builder > insert-sql:
> [phing] Calling Buildfile '/Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build-propel.xml'
> with target 'insert-sql'
> [property] Loading /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/./default.properties
>
> propel > insert-sql:
> [propel-sql-exec] Executing statements in file: /Applications/MAMP/htdocs/hatch/sf_fail/data/sql/lib.model.schema.sql
> [propel-sql-exec] Our new url -> mysql://root:root@localhost/sf_fail
> Execution of target "insert-sql" failed for the following reason:
> /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build-propel.xml:296:1:
> [wrapped: connect failed [Native Error: No such file or directory]
> [User Info: Array]]
> [phing] /Applications/MAMP/bin/php5/lib/php/symfony/vendor/propel-generator/build-propel.xml:296:1:
> [wrapped: connect failed [Native Error: No such file or directory]
> [User Info: Array]]
>
> BUILD FINISHED
>
> Total time: 0.1648 seconds
> >> file- /Applications/MAMP/htdocs/hatch...ail/config/generated-schema.xml
A: Some of the error messages in Propel 1.2 (the bundled version) could do with being a bit more helpful, that's for sure. It also doesn't use PDO, and so is much slower than recent versions. I'd recommend that you bump up to at least Symfony 1.3, which has a much better version of Propel installed.
@richsage recommends Symfony 1.4, but the forms system that it enforces - in my humble opinion - was vastly over-complicated. Symfony 1.3 has it also, but at least there you can switch to 1.0 compatibility mode (I forget exactly what it is called) - and this lets you return to the component helpers approach. Also, you can upgrade your version of Propel right up to 1.6 in Symfony 1.3 (or 1.4) by adding a plugin.
A: Similar issue
[wrapped: connect failed [Native Error: No such file or directory] [User Info: Array]]
Having had exactly the same issue with an old Symfony1 project at work, the fix that worked for me was to change the databases.yml from:-
mysql://root:@localhost/database_name_here
to
mysql://root:@127.0.0.1/database_name_here
The underlying issue for this was that the original dev environment was set up on Windows and my dev environment is on a Mac. (On a Mac, the MySQL client treats localhost as a request to connect through a unix socket file, and when that socket path doesn't exist you get the "No such file or directory" error; 127.0.0.1 forces a TCP connection instead.) This may not be the same situation for you, but hopefully it will help someone out searching for this error.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9554490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: jedi-vim return to old position after jump with goto assignments/definitions I'm using vim with jedi/jedi-vim when I develop python code, and I use <Leader>d (goto definitions) and/or <Leader>g (goto assignments) extensively. I can use '' to return to the line before the jump, but only within the same file.
Is there a way to have the same behaviour when jumping between different files?
A: I'm using Ctrl + O all the time to jump back (not only for Jedi jumps; it works across files, too).
Also, with Ctrl + I you can do the opposite: jump forward.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38362597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Reflecting texture register from a HLSL shader by name I know how to reflect constant buffers, but how do I reflect textures? Here's my shader:
cbuffer buffer : register(b0)
{
column_major matrix viewProjectionMatrix;
column_major matrix modelMatrix;
float4 texScaleOffset;
float4 tint;
}
struct VS_INPUT
{
float4 Pos : POSITION;
float2 Tex : TEXCOORD0;
};
struct PS_INPUT
{
float4 Pos : SV_POSITION;
float2 Tex : TEXCOORD0;
};
PS_INPUT VS( VS_INPUT input )
{
PS_INPUT output = (PS_INPUT)0;
output.Pos = mul( viewProjectionMatrix, mul( modelMatrix, input.Pos ) );
output.Tex = input.Tex * texScaleOffset.xy + texScaleOffset.zw;
return output;
}
Texture2D textureMap : register(t0);
SamplerState SampleType : register(s0);
float4 PS( PS_INPUT input ) : SV_Target
{
return textureMap.Sample( SampleType, input.Tex );
}
So how do I query textureMap's register number from C++ if I know its name ("textureMap")? My use case is an engine that allows users to write their own shaders so I can't hardcode any values.
A: In a very similar way to how you reflect constant buffers:
ID3D11ShaderReflection* reflectionInterface;
D3DReflect(bytecode, bytecodeLength, IID_ID3D11ShaderReflection, (void**)&reflectionInterface);
D3D11_SHADER_INPUT_BIND_DESC bindDesc;
reflectionInterface->GetResourceBindingDescByName("textureMap", &bindDesc);
bindDesc.BindPoint is the index of the slot the texture is bound to.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17772130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Foundation Section (Tabs) not reflowing properly I have two Tab Sections on my site that are being added after page load. I am "reflowing" them through backbone.js like this:
this.$el.foundation('section', 'reflow');
One of them shows up as expected. The HTML generated is:
<div class="section-container auto" data-section="" data-section-resized="true" style="min-height: 48px;">
The other one gets different data-sections applied to it, and does not display properly.
<div class="section-container auto" data-section="" data-section-small-style="true">
Here is a screenshot of the incorrect behaviour: http://imgur.com/9ozNvNC
All of the tabs have width: 100% applied to them and overlap (hence why you can only see the 'Help' tab there) and the top of the 'Preview' image is covered by the tabs.
The strange thing is, the HTML is exactly the same, in a Reveal Modal in both cases. The same JS is being applied to each. Does anyone know why one of my sections would get data-section-resized while the other gets data-section-small-style?
Edit: Two things.
*
*I forgot to mention, this is Zurb Foundation 4.3.2
*If I resize the window, it automatically shows up correctly... So I guess if I can run the 'window resize' Zurb code, that would solve my issue.
A: I was able to solve my problem. After poring through the issues in Zurb's Github, I found a semi-related issue that was fixed in a recent pull request.
On a whim I merged it into my code, and it fixed the issue.
See here: https://github.com/seantimm/foundation/commit/7af78ddbcc5a516eafed588e7c17d90bee115567
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19406039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: UI-select Focus remains I am using ui-select (Version: 0.8.3, angularjs) library in order to display a drop down list.
I have a situation where, when I click on a text area, JavaScript changes the height of the div to a larger height so it can be scrolled up.
The problem is that when this occurs the dropdown remains open and does not close.
After a little debugging I saw that OnDocumentClick is not being called once the height is too big.
If I don't change the height of the div, it works correctly.
Is there any other solution possible?
Thanks.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28208858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: All parts of array becoming the same I am creating a program that adds a new entry to an array that is one greater than the previous entry. The wanted output is: [[13],[12],[11],[10]] but the output from this program is [[13],[13],[13],[13]]. Do you know how I can fix the code to show the wanted output? The program is below
var test = [[10]]
intervalID = setInterval(function(){
var test_first = test[0];
test_first[0] += 1;
test.unshift(test_first);
console.log(test);
},1000);
A: You're adding the same array object to the array repeatedly. Instead clone:
var test_first = [ ...test[0] ];
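Applied to the original snippet (with a plain loop in place of setInterval so the result shows immediately), the clone keeps each entry independent:

```javascript
let test = [[10]];

for (let step = 0; step < 3; step++) {
  // Clone the current first element before mutating it, so the
  // entries already in `test` are left untouched.
  const test_first = [...test[0]];
  test_first[0] += 1;
  test.unshift(test_first);
}

console.log(JSON.stringify(test)); // [[13],[12],[11],[10]]
```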
A: You are mutating test_first in place, and accessing test[0] gives you a reference to the inner array (an array holding a single number), not a copy of it - so unshift keeps re-inserting the very same array object. The code that produces your requirement would be
const test = [[10]]
const intervalID = setInterval(() =>{
const new_val = test[0][0] + 1;
test.unshift([new_val]);
console.log(test);
},1000);
I'm pretty sure you're looking for an array of numbers like [13, 12, 11, 10]. That's produced by below code
const test = [10]
const intervalID = setInterval(() =>{
const new_val = test[0] + 1;
test.unshift(new_val);
console.log(test);
},1000);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61942269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: playing slow motion, fast forward, rewind in a flash video player We want to build a flash video player to play FLV videos. In addition to basic video controls, the client also wants the features below for that video player
*
*Slow Motion
*Fast Forward
*Rewind
We are using ffmpeg (from a PHP script) to convert videos to FLV. The flash player has to perform these operations on this video. We were told that it is not possible to do these features with FLV and that we need to convert the FLV to SWF to develop them. If that is the case, we have to do one more conversion, from FLV to SWF, and I think that conversion process is going to take very long.
Is there any way to achieve these features in flash action script without converting flv videos to swf?
A: Fast Forward and Rewind are easy enough to do, though not in the conventional sense.
Both involve timers wherein you simply seek to a previous or future point on an interval. This is not playing the video at increased speed forward and backwards.
As for slow motion... you are in a much tighter fix there. There are 2 (theoretical) ways I know of to achieve slow motion in a flash video player. As you will see, neither of these are desirable solutions. (I have coded 3x full featured flash players + recorders and dealt with this very same rabbit hole):
1) You do not play via rtmp steaming but rather http progressive download. Once you have loaded the data into flash for the video you run it through an algorithm that either removes or duplicates p-frames. Thus increasing or decreasing video time. Audio syncing would be a nightmare even if you pull this off.
2) You encode a second video at whatever speed they wish for "slow motion" to be. You load the two videos simultaneously, and swap between them at appropriate times when the button is pressed/released.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6261681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to find which row has the biggest sum in 2D array? I have a 2D array, and I need to write a function to find which row has the biggest sum; if more than one row shares that sum, I need to print "No unique max." This is what I wrote so far:
int find_max_sum(int b[N][N])
{
int row_sum = 0;
int row_max = -1;
int i,j,k;
int counter=0;
for(int i =0;i<N;i++)
{
row_sum = 0;
for (j = 0; j < N; ++j)
{
row_sum += b[i][j] ;
}
if(row_max < row_sum)
{
row_max = row_sum;
}
}
for (i = 0; i < N; i++)
{
for (j= 0;j< N;j++)
{
if(k=row_max);
counter++;
}
}
if (counter>1)
return(printf("No unique max.\n"));
else
return row_max;
}
Now I need help with the counter thing, and if the function returns an int, how can it also return printed output? Is that possible?
A: Here's an example.
#include <stdio.h>
#include <stdbool.h>
#define N 2
#define NO_UNIQUE -1
int find_max_sum(int b[][N])
{
int row_sum, i, j;
int row_max = -1;
bool unique = false;
for (i = 0; i < N; ++i) {
row_sum = 0;
for (j = 0; j < N; ++j)
row_sum += b[i][j];
if (row_max < row_sum) {
row_max = row_sum;
unique = true;
} else if (row_max == row_sum)
unique = false;
}
if (unique)
return row_max;
else {
printf("No unique max.\n");
return NO_UNIQUE;
}
}
int main(void)
{
    int b[N][N] = {{1, 2}, {3, 4}};
printf("Max sum is %d\n", find_max_sum(b));
return 0;
}
A: I suggest you to use a third variable (let's call it rowsWithMaxCount) to store the amount of rows with the current max value such that:
*
*if you find a row with a new maximum then rowsWithMaxCount = 1
*if you find a row such that row_max == row_sum then ++rowsWithMaxCount
*otherwise rowsWithMaxCount is unaffected
This will save you from looping the bidimensional array, which is a waste of code since you can obtain all the information you need with a single traversal of the array.
"returning a printf" doesn't make any sense and it's not possible, if you declare the function to return an int then you must return an int. Consider using a special value to signal the caller that there is no unique maximum value. Eg, assuming values are always positive:
static const int NO_UNIQUE_MAX = -1;
int find_max_sum(int b[N][N]) {
...
if (counter > 1)
return NO_UNIQUE_MAX;
...
}
But this will prevent you from returning the not-unique maximum value. If you need to return both then you could declare a new type, for example
struct MaxRowStatus {
int value;
int count;
};
So that you can precisely return both values from the function.
A: You may be over-thinking the function, if I understand what you want correctly. If you simply want to return the row index for the row containing a unique max sum, or print no unique max. if the max sum is non-unique, then you only need a single iteration through the array using a single set of nested loops.
You can even pass a pointer as a parameter to the function to make the max sum available back in your calling function (main() here) along with the index of the row in which it occurs. The easiest way to track the uniqueness is to keep a toggle (0, 1) tracking the state of the sum.
An example would be:
int maxrow (int (*a)[NCOL], size_t n, long *msum)
{
long max = 0;
size_t i, j, idx = 0, u = 1;
for (i = 0; i < n; i++) { /* for each row */
long sum = 0;
for (j = 0; j < NCOL; j++) /* compute row sum */
sum += a[i][j];
if (sum == max) u = 0; /* if dup, unique 0 */
if (sum > max) /* if new max, save idx, u = 1 */
max = sum, idx = i, u = 1;
}
if (u) { /* if unique, update msum, return index */
if (msum) *msum = max;
return idx;
}
fprintf (stderr, "no unique max.\n");
return -1; /* return -1 if non-unique */
}
(note: if you don't care about having the max sum available back in the caller, simply pass NULL for the msum parameter)
A short test program could be the following. Simply uncomment the second row to test the behavior of the function for a non-unique max sum:
#include <stdio.h>
#include <stdlib.h>
enum { NCOL = 7 };
int maxrow (int (*a)[NCOL], size_t n, long *msum)
{
long max = 0;
size_t i, j, idx = 0, u = 1;
for (i = 0; i < n; i++) { /* for each row */
long sum = 0;
for (j = 0; j < NCOL; j++) /* compute row sum */
sum += a[i][j];
if (sum == max) u = 0; /* if dup, unique 0 */
if (sum > max) /* if new max, save idx, u = 1 */
max = sum, idx = i, u = 1;
}
if (u) { /* if unique, update msum, return index */
if (msum) *msum = max;
return idx;
}
fprintf (stderr, "no unique max.\n");
return -1; /* return -1 if non-unique */
}
int main (void) {
int a[][7] = {{ 0, 9, 3, 6, 4, 8, 3 },
/* { 3, 9, 2, 7, 9, 1, 6 }, uncomment for test */
{ 6, 1, 5, 2, 6, 3, 4 },
{ 4, 3, 3, 8, 1, 2, 5 },
{ 3, 9, 2, 7, 9, 1, 6 }},
maxidx;
long sum = 0;
size_t nrow = sizeof a/sizeof *a;
if ((maxidx = maxrow (a, nrow, &sum)) != -1)
printf (" max sum '%ld' occurs at row : %d (0 - indexed).\n",
sum, maxidx);
return 0;
}
Example Use/Output
For the unique sum case:
$ ./array2Drow
max sum '37' occurs at row : 3 (0 - indexed).
non-unique case:
$ ./array2Drow
no unique max.
Look it over and let me know if you have any questions, or if I misinterpreted your needs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37780332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I recover a Subversion repository from a git-svn copy? I am pretty sure I have tried everything, figured I would ask all the gurus on here.
Background:
I had an SVN repository on an old linux box.
I accessed this SVN repo with git-svn.
The system's hard drive crashed and the SVN repo was lost.
Question:
Since I have an entire backup of the SVN repository on my local machine thanks to using Git, I would like to figure out how to publish everything, including previous commits, from my local machine to the new SVN server (that is now on a RAID 5 array).
Currently the projects still have the old SVN information in them, so I need to figure out how to get rid of that as well as migrating the GIT repo to the new SVN repo I set up.
I have contemplated setting up a remote GIT repo, but none of my co-programmers know/want to learn how to use GIT because currently they use the SVN plugin for eclipse and it is ultra easy, even though I am the one who saved everyone by using GIT.
A: It looks like Git::SVNReplay might fit the bill.
A: One approach might be to push your Git repository up to a private repo at GitHub, where you can use Git and everybody else can use Subversion to access the same repository.
A: Maybe Pushing an existing git repository to SVN solves your problem.
Use
svn switch --relocate file:///tmp/repos file:///tmp/newlocation .
to connect existing checkouts to the new svn repository.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3782112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Compress a folder to .zip on scala How can I compress an entire folder, including its contents, to a .zip or .gz in Scala?
Example:
Path: C:\Users\Documents\temp (temp folder with contents)
after the path: C:\Users\Documents\temp.zip(.gz) or Path: C:\Users\Documents\temp\temp.zip(.gz)
A: I've implemented this kind of thing and I'm satisfied with Apache Compress. Their examples helped enough to implement combination of tar and gzip. After you've tried to implement it with their examples you can come back to SO for further questions.
A: Check out: https://github.com/zeroturnaround/zt-zip
Pack a complete folder:
ZipUtil.pack(new File("C:\\somewhere\\folder"), new File("C:\\somewhere\\folder.zip"))
and there is unpack.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/68271745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How to enable or disable a task in build pipeline using powershell script I am using this code to update a build pipeline task: it updates the JSON via the line $TaskDetail.enabled = "false" and then posts the updated JSON:
$BuildName = "Test-demo"
$buildTaskName = 'Print Hello World'
$BuildDefinitions = Invoke-WebRequest -Headers $headers -Uri ("{0}/_apis/build/definitions?api-version=5.0" -f $TFSProjectURL)
$BuildDefinitionsDetail = convertFrom-JSON $BuildDefinitions.Content
foreach($BuildDetail in $BuildDefinitionsDetail.value){
if($BuildDetail.name -eq $BuildName)
{
$Id = $BuildDetail.id
$name = $BuildDetail.name
$Project = $BuildDetail.project.name
$BuildTask = Invoke-WebRequest -Headers $headers -Uri ("{0}/_apis/build/definitions/$($Id)?api-version=5.0" -f $TFSProjectURL)
$BuildTaskDetails = convertFrom-JSON $BuildTask.Content
foreach($TaskDetail in $BuildTaskDetails.process.phases.steps){
if($TaskDetail.displayName -eq $buildTaskName)
{
$taskName = $TaskDetail.displayName
$TaskDetail.enabled = "false"
}
}
Write-Host $BuildTaskDetails
$Updatedbuilddef = ConvertTo-Json $BuildTaskDetails
buildUri = "$TFSProjectURL//_apis/build/definitions/$Id?api-version=5."
$buildResponse =Invoke-WebRequest -Headers $headers -Uri $buildUri -Method Put -ContentType "application/json" -Body $Updatedbuilddef
}
}
But I am getting this error:
Invoke-RestMethod : {"$id":"1","innerException":null,"message":"Processing of the HTTP request resulted in an exception. Please see the HTTP response returned by the 'Response' property of this exception for
details.","typeName":"System.Web.Http.HttpResponseException, System.Web.Http","typeKey":"HttpResponseException","errorCode":0,"eventId":0}
At C:\Users\Z004APNA\Desktop\BuildPipelineScript\BuildDefinition_edit.ps1:145 char:26
+ ... dResponse = Invoke-RestMethod -Uri $buildUri -Method Post -Headers $h ...
+                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-RestMethod], WebException
    + FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeRestMethodCommand
A: I tested your code sample and it has some issues.
1. Your code is missing authentication credentials. You can create a PAT (personal access token) and use it for authentication.
2. When you use ConvertTo-Json, you need to add the -Depth parameter to fully expand the JSON body.
3. For the build URL, you need to fix the id format in the URL:
$TFSProjectURL/_apis/build/definitions/$($Id)?api-version=5.0
Here is the example:
$token = "PAT"
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($token)"))
$BuildName = "Test-demo"
$buildTaskName = 'Print Hello World'
$BuildDefinitions = Invoke-WebRequest -Headers @{Authorization = "Basic $token"} -Uri ("{0}/_apis/build/definitions?api-version=5.0" -f $TFSProjectURL)
$BuildDefinitionsDetail = convertFrom-JSON $BuildDefinitions.Content
foreach($BuildDetail in $BuildDefinitionsDetail.value){
if($BuildDetail.name -eq $BuildName)
{
$Id = $BuildDetail.id
$name = $BuildDetail.name
$Project = $BuildDetail.project.name
$BuildTask = Invoke-WebRequest -Headers @{Authorization = "Basic $token"} -Uri ("{0}/_apis/build/definitions/$($Id)?api-version=5.0" -f $TFSProjectURL)
$BuildTaskDetails = convertFrom-JSON $BuildTask.Content
foreach($TaskDetail in $BuildTaskDetails.process.phases.steps){
if($TaskDetail.displayName -eq $buildTaskName)
{
$taskName = $TaskDetail.displayName
$TaskDetail.enabled = "false"
}
}
Write-Host $BuildTaskDetails
$Updatedbuilddef = ConvertTo-Json $BuildTaskDetails -Depth 99
$buildUri = "$TFSProjectURL/_apis/build/definitions/$($Id)?api-version=5.0"
$buildResponse = Invoke-WebRequest -Headers @{Authorization = "Basic $token"} -Uri $buildUri -Method Put -ContentType "application/json" -Body $Updatedbuilddef
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72782560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Error: Could not connect to logcat in Android player xamarin visual studio 2015 I am using the Xamarin trial version and Visual Studio 2015 Community edition.
I started with a blank Android application. It was running fine before, but the next day when I tried to run it, the emulator would not start.
It says: Could not connect to logcat, GetProcessId returned: 0
I am attaching a screenshot of the errors I have in my logcat.
Note: I have disabled Fast Deployment in my settings, but it's not working.
Please help, and thanks in advance.
A: Clearing the emulator logs by running adb logcat -c did the trick for me!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35601207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: python 2.7 + flask-sqlalchemy, flask marshmallow + DB relations throws error foreignkeyconstraint and primary join I have an existing DB dataset that already has these tables:
owners table that has ownerid as primary_key
another table owndnis whose primary_key is the same ownerid
one other table whose primary_key is also the same ownerid as that of owners
I want to define a relation which looks like this
owners
having
{owndnis} and
{application_parameters}
My model and route file contents are given below
model.py
from marshmallow import fields
from flask import jsonify
class owners(db.Model):
__tablename__ = 'owners'
ownerid = db.Column('ownerid',db.String(60), nullable=False)
name = db.Column('ownerdomainname', db.String(60),primary_key=True, nullable=False)
spownerid = db.Column('spownerid', db.String(60))
ownerid = db.Column(db.String(), db.ForeignKey('owndnis.ownerid'))
dnis = db.relationship("owndnis", uselist=False, backref="owners")
# ownerid = db.Column(db.String(), db.ForeignKey('application_parameters.ownerid'))
# app_params = db.relationship("application_parameters", backref="owners")
class owndnis(db.Model):
__tablename__ = 'owndnis'
ownerid = db.Column('ownerid',db.String(60),primary_key=True)
dnisstart = db.Column('dnisstart', db.String(20), nullable=False)
dnisend = db.Column('dnisend', db.String(20))
class application_parameters(db.Model):
__tablename__ = 'application_parameters'
ownerid = db.Column('ownerid',db.String(60),primary_key=True)
applicationid = db.Column('applicationid', db.String(60), nullable=False)
key = db.Column('key', db.String(128), nullable=False)
value = db.Column('value', db.String(1024), nullable=False)
###### SCHEMAS #####
class owndnis_schema(ma.ModelSchema):
dnisstart = fields.String()
dnisend = fields.String()
class app_params_schema(ma.ModelSchema):
key = fields.String()
value = fields.String()
class owners_schema(ma.ModelSchema):
ownerid = fields.String()
ownerdomainname = fields.String()
spownerid = fields.String()
ownerdescription = fields.String()
dnis = fields.Nested(owndnis_schema)
app_params = fields.Nested(app_params_schema)
routes.py
---------
from model import owners, owndnis, application_parameters,owners_schema,owndnis_schema, app_params_schema
@mod.route('/api/sp/<spdomainname>', methods=['GET'])
def findSp(spdomainname):
ownerArr = []
owner = owners.query.get(spdomainname)
owner_schema = owners_schema()
if owner:
owners_sm_result = owner_schema.dump(owner).data
return jsonify({'owner': owners_sm_result})
I get the output like this
{
"owner": {
"spownerid": "SYSTEM",
"ownerid": "NEWSP~ZryOZB9BGb",
"dnis": {
"dnisend": "199999",
"dnisstart": "100000"
}
}
}
If I uncomment the commented lines in model.py(owners) to include another table that has foreign key same as owndnis table
but I get this run time error
File "/home/holly/python_ws/new_project_blue/blue/lib/python2.7/site-packages/sqlalchemy/orm/relationships.py", line 2383, in _determine_joins
"specify a 'primaryjoin' expression." % self.prop
sqlalchemy.exc.NoForeignKeysError: Could not determine join condition between parent/child tables on relationship owners.dnis - there are no foreign keys linking these tables. Ensure that referencing columns are associated with a ForeignKey or ForeignKeyConstraint, or specify a 'primaryjoin' expression.
[pid: 18308|app: 0|req: 1/1] 10.133.0.31 () {34 vars in 620 bytes} [Tue May 14 07:22:14 2019] GET /api/sp/NEW-SP => generated 0 bytes in 25 msecs (HTTP/1.1 500) 0 headers in 0 bytes (0 switches on core 0)
The requirement is to have the output like this
I get the output like this
{
"owner": {
"spownerid": "SYSTEM",
"ownerid": "NEWSP~ZryOZB9BGb",
"dnis": {
"dnisend": "199999",
"dnisstart": "100000"
},
"app_params": {
"key":"xxxxx",
"value":"yyyy"
}
}
}
A: I would closely follow the relationship patterns in the documentation:
https://docs.sqlalchemy.org/en/13/orm/basic_relationships.html
As an example assuming you wanted a one-to-one relationship between owners and owndnis...
One to One
class Parent(Base):
__tablename__ = 'parent'
id = Column(Integer, primary_key=True)
child = relationship("Child", uselist=False, back_populates="parent")
class Child(Base):
__tablename__ = 'child'
id = Column(Integer, primary_key=True)
parent_id = Column(Integer, ForeignKey('parent.id'))
parent = relationship("Parent", back_populates="child")
Untested but following this pattern and in your case treating owners as the parent:
# Treated as Parent of One to One
class owners(db.Model):
    __tablename__ = 'owners'
    ownerid = db.Column('ownerid', db.String(60), primary_key=True)  # <--- changed primary key
    name = db.Column('ownerdomainname', db.String(60), nullable=False)
    spownerid = db.Column('spownerid', db.String(60))
    dnis = db.relationship("owndnis", uselist=False, back_populates="owner")  # <--- back_populates names the attribute on the other class
    # child = relationship("Child", uselist=False, back_populates="parent")

# Treated as Child of One to One
class owndnis(db.Model):
    __tablename__ = 'owndnis'
    ownerid = db.Column('ownerid', db.String(60),
                        db.ForeignKey('owners.ownerid'), primary_key=True)  # <-- both a PK and FK (FK must come before keyword args)
    dnisstart = db.Column('dnisstart', db.String(20), nullable=False)
    dnisend = db.Column('dnisend', db.String(20))
    owner = db.relationship("owners", back_populates="dnis")  # <--- added
    # parent = relationship("Parent", back_populates="child")
I've used back_populates but per the docs:
As always, the relationship.backref and backref() functions may be used in lieu of the relationship.back_populates approach; to specify uselist on a backref, use the backref() function:
class Parent(Base):
__tablename__ = 'parent'
id = Column(Integer, primary_key=True)
child_id = Column(Integer, ForeignKey('child.id'))
child = relationship("Child", backref=backref("parent", uselist=False))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56129675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to fit the right amount of characters on one mobile page (page and font sizes will change on different mobiles) The question I'm going to ask is hard to express, so please correct me if you think my wording doesn't reflect what I want to ask. Thanks in advance.
I am working on a eBook mobile app. Things I need to take into consideration are :
1. screen size
2. font size
3. how many paragraphs I should put into one page according to current screen size and font size.
I'd like to use a bigger font size on a bigger screen, so I don't know how to decide how many paragraphs should be put into one page.
I have the data as a whole article but have no idea when and where to break it into pages.
Any suggestions or ideas are welcome.
Update:
Okay, let me make it more detailed.
Say I have an article:
var arti = "Stack Overflow is a website, the flagship site of the Stack Exchange Network,[2][3] created in 2008 by Jeff Atwood and Joel Spolsky,[4][5] as a more open alternative to earlier Q&A sites such as Experts Exchange. The name for the website was chosen by voting in April 2008 by readers of Coding Horror, Atwood's popular programming blog.[6]
It features questions and answers on a wide range of topics in computer programming.[7][8][9] The website serves as a platform for users to ask and answer questions, and, through membership and active participation, to vote questions and answers up or down and edit questions and answers in a fashion similar to a wiki or digg.[10] Users of Stack Overflow can earn reputation points and "badges"; for example, a person is awarded 10 reputation points for receiving an "up" vote on an answer given to a question, and can receive badges for their valued contributions,[11] which represents a kind of gamification of the traditional Q&A site or forum. All user-generated content is licensed under a Creative Commons Attribute-ShareAlike license.[12]
As of August 2013, Stack Overflow has over 1,900,000 registered users and more than 5,500,000 questions.[13][14] Based on the type of tags assigned to questions, the top eight most discussed topics on the site are: C#, Java, PHP, JavaScript, Android, jQuery, C++ and Python.[15]";
and the screen size of the mobile I'm using to test my app is 1280*760, which can only display around X characters on one screen.
So my program should break the content of arti at the end of the X-th, 2X-th, 3X-th, etc. characters.
So the question is:
I don't know how to calculate the X.
A: I suggest bigger chars for smaller screens!
A: Just to expand on my comment. ( not an answer as subjective )
Using ems for width can tell us how many font characters wide a containing element is.
consider
<style>
body { font-size: 0.8em; } /* roughly 13 px with the default 16 px base */
.container { width: 30em; } /* em here is relative to the container's own font size */
</style>
we now know ( or as close as possible ) , that the <div class="container"> .. </div> can hold text with lines of ( close to ) 30 characters.
The good part here is that if we change the font size :
<style>
body { font-size: 1.2em; }
</style>
The container width also changes and we retain our set number of characters per line.
Because of this we have a connection between layout and typography and have control over "fixing" paragraph lengths and spacing within our designs.
Quite useful when we need to calculate heights, widths and our designed 'page breaks' of layout elements based on layouts that adapt to different sizes and are dealing with dynamic text and font sizes.
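As a very rough starting point for computing the X from the question, you can estimate an average glyph width as a fraction of the font size and divide the screen into a character grid. This is a sketch; the 0.5 width ratio and 1.4 line-height factor are assumptions for a typical proportional font, and for real pagination you should measure the rendered text (e.g. canvas measureText) instead:

```javascript
// Estimate how many characters fit on one screen, then split the
// article at those boundaries. The ratios below are assumptions.
function charsPerScreen(screenW, screenH, fontSize) {
  const avgCharW = fontSize * 0.5;   // assumed average glyph width
  const lineH = fontSize * 1.4;      // assumed line height
  const charsPerLine = Math.floor(screenW / avgCharW);
  const linesPerScreen = Math.floor(screenH / lineH);
  return charsPerLine * linesPerScreen;
}

function paginate(text, screenW, screenH, fontSize) {
  const x = charsPerScreen(screenW, screenH, fontSize);
  const pages = [];
  for (let i = 0; i < text.length; i += x) {
    pages.push(text.slice(i, i + x));
  }
  return pages;
}
```

A refinement would be to break at the nearest word or paragraph boundary before each cut rather than mid-word.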
A: One possible solution is to adjust a div to the size of the viewport of the device and use https://github.com/olegskl/fitText
| {
"language": "en",
"url": "https://stackoverflow.com/questions/20870031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to center oversize font inside Text's bounding box? I'm trying to create a simple Floating Action Bar button with a plus icon in it, and have had trouble true-centering the "plus" in some edge cases. I was just using '\uFF0B' in a <Text>, but tried to switch to react-native-vector-icons, only to discover that they too were using a font and not an image to back the <Icon> instances, and that my problems seem to persist.
Things are fine on most screens and devices but in some cases users are reporting the plus icon is not perfectly centered. I have a hypothesis that it may involve users' accessibility options increasing the font size in the app beyond size of the parent View. At any rate I can reproduce something like the screenshots folks are sharing with me by setting the fontSize greater than the lineHeight. Assuming that is the issue -
How do you center a single glyph within the view area of a <Text> (or <Icon>, since that derives from <Text>), even when the fontSize may be much larger than the <Text>'s lineHeight or even overall height?
In the below example, the "+" font size is exactly double the line-height, so the plus is centered smack dab on the upper-right corner of the view area, as though it were expecting to be in a box that was 112dp x 112dp; but I want it centered dead-center of the 56dp x 56dp box instead, with the arms of the plus cropped. No combination of style attributes seems to effect it, but rather just controls where the <Icon> positions within its parent.
Currently:
Normally:
For oversized font:
Code:
<View style={s.fabStyle}>
<TouchableOpacity onPress={()=>{this.onPlus()}}>
<Icon name="plus" style={s.fabText} />
</TouchableOpacity>
</View>
...
const s = StyleSheet.create({
fabStyle: {
position: 'absolute',
right: 16,
bottom: 16,
borderRadius: 28,
width: 56,
height: 56,
backgroundColor: styleConstants.color.primary,
display: 'flex',
justifyContent: 'center',
alignItems: 'center',
textAlign: 'center',
},
fabText: {
position: 'relative',
left: 0,
top: 0,
right: 0,
bottom: 0,
fontSize: 112,
color: '#fff',
textAlign: 'center',
lineHeight: 56,
width: 56,
height: 56,
},
});
A: This isn't an answer to the question itself, which still stands, but an answer to the underlying issue, in case somebody arrives here by Google search with a similar issue. In my case it was indeed the case that accessibility settings were causing font to be bigger than it was designed to be, thus triggering the above scenario. While I still don't know how to center the text adequately in this case, in my case the issue could be circumvented by making sure allowFontScaling=false for relevant Views holding text.
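For reference, the workaround is a single prop on the relevant Text (a fragment, not the original poster's exact code):

```jsx
{/* Keep the glyph at its designed size even when the OS accessibility
    font scale is increased */}
<Text allowFontScaling={false} style={s.fabText}>{'\uFF0B'}</Text>
```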
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55368190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to redirect to previous page? or How to get title from ForeignKey? PART 1
I have a problem with redirecting to the same page after a click.
For example, I was at /memes, and after the click I am at the page /add_to_cart.
OR PART 2
I have another solution for my problem
PART 1
in view i have
def add_to_cart(request, **kwargs):
return redirect(reverse('meme:meme_list'))
and my html looks like this
<a href="{% url 'shoppingcart:add_to_cart' post.id %}" class="col-2">
But if I am at /videos and want to add to cart, I will be redirected to /memes. I've found request.path_info, but it only shows the current path (add_to_cart/1).
PART 2
I have category in my models for product
class product(models.Model):
category = models.ForeignKey(Category, on_delete=models.PROTECT)
class Category(models.Model):
title = models.CharField(max_length=32)
So I can just build my previous path as '/' + the category title, but I can't do it because:
'ForwardManyToOneDescriptor' object has no attribute 'title'
How do I get the title from my category?
A: request.META.get('HTTP_REFERER','/')
This is how you get the previous page's URL.
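For completeness, here is a minimal sketch of both parts, written without Django imports so the logic stands alone (the class names mirror the question but are illustrative, not the real ORM). previous_url mirrors request.META.get('HTTP_REFERER', '/'), and category_path shows that title must be read through a model instance's category attribute; reading it on the model class itself is what produces the ForwardManyToOneDescriptor error in Part 2:

```python
# Sketch only: stand-ins for request.META and the Django models.

def previous_url(meta):
    """Return the page the request came from, or '/' if unknown."""
    return meta.get('HTTP_REFERER', '/')

class Category:                      # stand-in for the Category model
    def __init__(self, title):
        self.title = title

class Product:                       # stand-in for the product model
    def __init__(self, category):
        self.category = category

def category_path(product):
    # Works on an *instance*; product.__class__.category.title would fail.
    return '/' + product.category.title
```

In the real view you would then `return redirect(previous_url(request.META))` or `return redirect(category_path(my_product))`.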
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56091991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Should my equals/hashCode method examine more than the object IDs? In my application, I have model classes of the following form:
class Book
{
private int ID;
private String title;
//other code
}
Now my question is two part:
*
*Is the following a good implementation of the equals() method?
public boolean equals(Object o)
{
if(o == null)
{
return false;
}
if(!(o instanceof Book))
{
return false;
}
Book other = (Book)o;
if(o.getID() == ID)
{
return true;
}
return false;
}
I know that equals() implementation largely depends on my
application business logic. But If two Books have the same ID then
they ideally must be the same Book. Hence I am confused as to
whether I should check for equality for other value fields as well
[title, price etc].
*Is this a good implementation of the hashCode() method:
public int hashCode()
{
return ID;
}
My thinking is that different books will have different IDs and if
two books have the same ID they they are equal. Hence the above
implementation will ensure a good distribution of the hashcode in
context of my application.
A: Just wanted to add some coment over the previous answers. The contract of equals mentions that it must be symmetric. This means that a.equals(b) iff b.equals(a).
That's the reason why instanceof is usually not used in equals (unless the class is final). Indeed, if some subclass of Book (ComicsBook for example) overrides equals to test that the other object is also an instance of ComicsBook, you'll be in a situation where a Book instance is equal to a ComicsBook instance, but a ComicsBook instance is not equal to a Book instance.
You should thus (except when the class is final or in some other rare cases) rather compare the classes of the two objects:
if (this.getClass() != o.getClass()) {
return false;
}
BTW, that's what Eclipse does when it generates hashCode and equals methods.
A: Don't do this:
if(o.getID() == ID)
ID is an Integer object, not a primitive. Comparing two different but equal Integer objects using == will return false.
Use this:
if(o.getID().equals(ID))
You'd also need to check for ID being null.
Other than that, your logic is fine - you're sticking with the contract that says two equal objects must have the same hashcode, and you've made the business-logic decision as to what equality means, a decision that only you can make (there's no one correct answer).
A: If you use Hibernate then you have to consider some Hibernate related concerns.
Hibernate create Proxies for Lazy Loading.
*
*always use getter to access the properties of the other object (you already done that)
*even if it is correct to use if (!this.getClass().equals(o.getClass())) { return false;} in a normal application, it WILL fail for a Hibernate proxy (and all other proxies). The reason is that if one of the two is a proxy, their classes will never be equal. So the test if(!(o instanceof Book)){return false;} is what you need.
If you want to do it in the symmetric way than have a look at org.hibernate.proxy.HibernateProxyHelper.getClassWithoutInitializingProxy() with help of this class you can implement:
if (!HibernateProxyHelper.getClassWithoutInitializingProxy(this)
.equals(HibernateProxyHelper.getClassWithoutInitializingProxy(o))) {
return false;
}
*
*Another problem may be the ID -- when you assign the id not at creation of a new object but later on while storing it, you can run into trouble. Assume this scenario: You create a new Book with id 0, then put the Book in a HashSet (it will be assigned to a bucket in the HashSet depending on the hashcode); later on you store the Book in the database and the id is set to, let's say, 1. But this changes the hashcode, and you will have problems finding the entity in the set again. -- Whether this is a problem for you strongly depends on your application, architecture and how you use Hibernate.
A: It is no good idea to compare the two Integers like
if(o.getID() == ID) ...
This tests for identity. What you want is a test for equality:
if(ID!=null && ID.equals(o.getID())) ...
A: Just a note: Your equals method
public boolean equals(Object o)
{
if(o == null)
{
return false;
}
if(!(o instanceof Book))
{
return false;
}
Book other = (Book)o;
if(other.getID() == ID)
{
return true;
}
return false;
}
can be (equivalently) shorter written like this:
public boolean equals(Object o) {
return (o instanceof Book) &&
((Book)o).getID == ID;
}
Other than this, if your IDs are all different for different books (and same for same books), this is a good implementation.
(But note JB Nizet's remark: To make sure this stays symmetric, make equals (or the whole class) final.)
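Pulling the points from these answers together, a conventional version might look like the sketch below. It assumes ID is a primitive int as declared in the question (so == is fine), and the class is made final so the instanceof test stays symmetric, per the earlier answer:

```java
public final class Book {                    // final: instanceof stays symmetric
    private final int id;
    private final String title;

    public Book(int id, String title) {
        this.id = id;
        this.title = title;
    }

    public int getId() { return id; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;          // cheap fast path
        if (!(o instanceof Book)) return false;  // also rejects null
        return ((Book) o).id == id;          // identity is defined by id alone
    }

    @Override
    public int hashCode() {
        return Integer.hashCode(id);         // equal ids -> equal hash codes
    }
}
```

Note the contract is preserved: equal objects always produce equal hash codes, while the title plays no part in either method.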
A: That depends!
I think this kind of implementation is OK if you can handle the following condition:
you create two new books that have the same title; they are in fact the same book, but you
haven't saved them to the database yet, so the two books have no IDs, and equals will fail when you compare them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7132649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Refresh window.rootViewController.view contents I am building an app in which I use the MBProgressHUD library as an activity indicator. I attach the HUD to self.view.window.rootViewController.view. Everything works fine, but when I rotate the device, the HUD (UIActivityIndicator) is overlapped by the other screen items and it looks as if the HUD is in the background of the view. Does anyone have any idea how to deal with this issue? Has anyone used the MBProgressHUD library and faced similar issues?
P.S I have to attach HUD to self.view.window.rootViewController.view only.
A: You should remove your view. Add this code;
[self.view removeFromSuperview];
| {
"language": "en",
"url": "https://stackoverflow.com/questions/25180124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: dll for face recognition I started creating a face recognition application, but nothing works well. Is any DLL available for this so that I can examine the source and implement it? Any reference or source will be greatly appreciated.
A: You can try with OpenCV. And this list may help you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24423120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Returning Specific Columns from Entity Framework Query While Utilizing MVC I’m new to using both Entity Framework and MVC, but not new to programming. I am working with a large table and I want to return selected columns of the query back to the view and list them. The basic query is as follows:
public class EmployeeController : Controller
{
private ADPEntities db = new ADPEntities();
// GET: /Employee/
public ActionResult Index()
{
var tblEmployeeADPs = db.tblEmployeeADPs
.Where(p => p.Status == "Active")
.Select(p => new UserViewModel.USerViewModelADP
{
Status = p.Status,
FirstName = p.FirstName,
LastName = p.LastName,
SSN = p.SSN
});
return View(tblEmployeeADPs.ToList());
}
}
I created a basic C# class to strongly type the results (i.e. UserViewModel) and I still get an error:
The model item passed into the dictionary is of type
'System.Collections.Generic.List`1[UserViewModel.USerViewModelADP]',
but this dictionary requires a model item of type
'System.Collections.Generic.IEnumerable`1[DysonADPTest.Models.tblEmployeeADP]'.
when I execute the page.
I’m not sure what I’m missing; from cobbling together what I’ve read, I was sure this would be the answer.
A: Please try the following in your view:
@model IEnumerable<DysonADPTest.UserViewModel.USerViewModelADP>
Your problem lies in using the .Select() method which changes the type your controller action is returning from
IEnumerable<DysonADPTest.Models.tblEmployeeADP>
which your view is also expecting to something entirely different. For this to work, the type your controller action returns and your view uses should match. It's either not use the .Select() method in your controller action or change the type your view uses.
A: You are creating a list with a new object type (USerViewModelADP).
You can filter, but just keep the same type (the entity objects):
public ActionResult Index()
{
var tblEmployeeADPs = db.tblEmployeeADPs
.Where(p => p.Status == "Active")
.Select(p => p);
return View(tblEmployeeADPs.ToList());
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/42956643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Multiple login in Shiro I have a web app which has two parts: a front part and a backend part. Both parts need a login page for users.
Members need to use /signin to log in to the front part.
Admins need to use /admin/signin to log in to the backend part.
public ShiroFilterFactoryBean shiroFilter(WebSecurityManager securityManager){
ShiroFilterFactoryBean shiroFilter = new ShiroFilterFactoryBean();
shiroFilter.setSecurityManager(securityManager);
Map<String, Filter> filterMap = Maps.newLinkedHashMap();
FormAuthenticationFilter adminAuthc = new FormAuthenticationFilter();
adminAuthc.setLoginUrl("/admin/signin");
adminAuthc.setSuccessUrl("/admin/course");
filterMap.put("adminAuthc", adminAuthc);
FormAuthenticationFilter authc = new FormAuthenticationFilter();
authc.setLoginUrl("/signin");
authc.setSuccessUrl("/");
filterMap.put("authc", authc);
shiroFilter.setFilters(filterMap);
Map<String, String> filterChainDefinitionMap = Maps.newLinkedHashMap();
filterChainDefinitionMap.put("/signout", "logout");
filterChainDefinitionMap.put("/style/**", "anon");
filterChainDefinitionMap.put("/javascript/**", "anon");
filterChainDefinitionMap.put("/image/**", "anon");
filterChainDefinitionMap.put("/flash/**", "anon");
filterChainDefinitionMap.put("/favicon.ico", "anon");
filterChainDefinitionMap.put("/captcha/**", "anon");
filterChainDefinitionMap.put("/course", "anon");
filterChainDefinitionMap.put("/course/**", "anon");
filterChainDefinitionMap.put("/course/*/collect", "authc");
filterChainDefinitionMap.put("/course/*/uncollect", "authc");
filterChainDefinitionMap.put("/course/study/record", "authc");
filterChainDefinitionMap.put("/**", "authc");
filterChainDefinitionMap.put("/admin/**", "adminAuthc");
shiroFilter.setFilterChainDefinitionMap(filterChainDefinitionMap);
return shiroFilter;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31821277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Mapping between Childs' List in Same Class The question looks easy, but I really can't get the trick, and at the same time I don't want to go through a workaround. I have this XML file:
<Master>
<Parent1>
<child1/>
<child1/>
<child1/>
</Parent1>
<Parent2>
<child2/>
<child2/>
<child2/>
</Parent2>
</Master>
I deserialize it in the current class structure
Class Master
{
List<child1> Parent1;
List<child2> Parent2;
}
Child2 is displayed on the UI; whenever it changes, it shall change the Child1 that has the same ID as that Child2. The problem is that from Child2 I have no access to Parent1, so how can I go backward to change Child1? I need a good solution, not a workaround.
Edit: Child1 and Child2 are two different classes, but there is a certain mapping between their properties.
A: You can let each parent know which child was added/removed.
Each parent will receive the event and handle it if it's a matching child (or relevant due to other logic).
Two public objects which are used for communication between parent and master:
public enum ChangeType { Added, Removed }
public delegate void ChildChangeEvent(Parent parent, Child child, ChangeType changeType);
The parent class has two main roles- raising an event if it has a child that was added/removed, and handling this event if raised by other parents and it's relevant for them.
class Parent
{
public event ChildChangeEvent ChildChangeEvent;
private readonly List<Child> _children = new List<Child>();
public void Add(Child child)
{
_children.Add(child);
ChildChangeEvent?.Invoke(this, child, ChangeType.Added); // null-safe raise
}
public void Remove(Child child)
{
_children.Remove(child);
ChildChangeEvent?.Invoke(this, child, ChangeType.Removed); // null-safe raise
}
public void NotifyChange(Parent parent, Child child, ChangeType changeType)
{
//do where you want with this info.. change local id or whatever logic you need
}
}
The Master class should notify all parents about the child added/removed
event that was raised by one of the parents:
class Master
{
private readonly List<Parent> _parents = new List<Parent>();
public void AddParent(Parent parent)
{
_parents.Add(parent);
parent.ChildChangeEvent += parent_ChildChangeEvent;
}
void parent_ChildChangeEvent(Parent parent, Child child, ChangeType changeType)
{
foreach (var notifyParent in _parents)
{
notifyParent.NotifyChange(parent, child, changeType);
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/32553594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Controller not making associations in Rails because of wrong format I need to create new associations for the Modification model if it saves successfully. These associations need to be the same as those the related Entity model has. But I'm getting this error:
When assigning attributes, you must pass a hash as an argument.
ModificationController.rb
def create
@modification = Modification.new(change_params)
respond_to do |format|
if @modification.save
@modification.entity.boxes.each do |d|
@modification.boxes.new(d)
end
flash[:success] = "Success"
format.html { redirect_to @modification }
format.json { render :show, status: :created, location: @modification }
else
format.html { render :new }
format.json { render json: @modification.errors, status: :unprocessable_entity }
end
end
end
More info:
Each Modification belongs_to Entity
Both Modifications and Entities has_many Boxes.
A: So you want to create a new box association using an existing Box. We can grab the attributes of the existing box to create the new one. However, an existing box will already have an id, so we need to exclude that from the attributes.
Following the above logic, the following should work:
def create
@modification = Modification.new(change_params)
respond_to do |format|
if @modification.save
@modification.entity.boxes.each do |d|
@modification.boxes << d.dup
end
flash[:success] = "Success"
format.html { redirect_to @modification }
format.json { render :show, status: :created, location: @modification }
else
format.html { render :new }
format.json { render json: @modification.errors, status: :unprocessable_entity }
end
end
end
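The key point is that ActiveRecord's #dup copies a record's attributes but clears its id, so the copy is saved as a new row. A rough plain-Ruby sketch of that idea (the hash stands in for a Box record; the values are illustrative, and no Rails is required to run it):

```ruby
# A hash standing in for an existing Box row (illustrative values).
existing = { id: 6789, keyword: "apple" }

# A "dup"-like copy: keep every attribute except the primary key,
# so saving it creates a new row instead of clashing with the old id.
copy = existing.reject { |key, _| key == :id }

puts copy[:keyword] # apple
puts copy.key?(:id) # false
```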
A: When you declare a has_many association, the declaring class automatically gains 16 methods related to the association, as mentioned in the Ruby on Rails Guide on has_many associations.
def create
@modification = Modification.new(change_params)
respond_to do |format|
if @modification.save
@modification.entity.boxes.each do |d|
      @modification.boxes << d # add `if d.present?` here if your model has validations that could reject blank boxes
end
flash[:success] = "Success"
format.html { redirect_to @modification }
format.json { render :show, status: :created, location: @modification }
else
format.html { render :new }
format.json { render json: @modification.errors, status: :unprocessable_entity }
end
end
end
Hope this helps!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34290074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Z-fighting. Inaccurate coordinates of faces / shakes When approaching the surface of a sphere that models the Earth closely enough, I get inaccurate vertex coordinates. Because of this, noticeable shaking occurs when the camera moves.
How do I get rid of it? On the Internet I found solutions that suggest inverting zNear and zFar, but I have no idea how to do that in three.js.
A: THREE.PerspectiveCamera has near and far parameters. These define the distance of the near and far clipping plane.
You have to choose the clipping planes depending on your scene. For example, if you have a large scene and the near plane is very small, it can cause the artifacts you experienced.
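As a rough rule of thumb, trouble starts when the ratio between the far and near plane gets very large, because depth-buffer precision is spent mostly near the camera. A small sketch (the camera values here are illustrative assumptions, not from the question):

```javascript
// In three.js the clipping planes are the 3rd/4th constructor arguments:
//   const camera = new THREE.PerspectiveCamera(45, aspect, near, far);
// Raising `near` usually helps much more than lowering `far`, because
// depth precision falls off hyperbolically with distance.

function farToNearRatio(near, far) {
  if (near <= 0) throw new RangeError("near must be > 0");
  return far / near;
}

// near = 0.5, far = 8000 -> ratio 16000: z-fighting likely on a planet-sized scene
// near = 10,  far = 8000 -> ratio 800: far more depth precision to go around
console.log(farToNearRatio(0.5, 8000)); // 16000
console.log(farToNearRatio(10, 8000)); // 800
```

After changing `camera.near` or `camera.far` at runtime, remember to call `camera.updateProjectionMatrix()`.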
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28371418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: error to run function code in javascript has undefined I made a table containing an editable information field.
I get an "undefined" error when running the function code.
Please help me fix this error.
"info" is the id of the table in the HTML.
function submit ()
{
var table = document.getElementById("info");
var td1 = document.createElement("td")
var td2 = document.createElement("td");
var row = document.createElement("tr");
td1.innerHTML = document.getElementById("p-name").value;
td2.innerHTML = document.getElementById("p-id").value;
row.appendChild(td1);
row.appendChild(td2);
table.children[0].appendChild(row);
Creating the button in the script:
var bedit = document.createElement("BUTTON");
var bename = document.createTextNode("Edit");
bedit.appendChild(bename);
bedit.onclick = function () { edit_row(event) }
td6.appendChild(bedit);}
The function called when the button created in submit is clicked:
function edit_row()
{
bedit.style.display = "none";
bsave.style.display = "block";
var input = document.createElement("INPUT");
input.setAttribute("type", "text");
var string = td1.textContent;
td1.innerHTML = "";
td1.appendChild(input);
input.value = string;
}
A: Ok, try this. I added some comments.
function submit () {
var table = document.getElementById("info");
var td1 = document.createElement("td")
var td2 = document.createElement("td");
td1.innerHTML = document.getElementById("p-name").value;
td2.innerHTML = document.getElementById("p-id").value;
// create a table row
var row = document.createElement("tr");
row.appendChild(td1);
row.appendChild(td2);
// instead of your append
// table.children[0].appendChild(row);
// use this append
table.appendChild(row);
var bedit = document.createElement("BUTTON");
var bename = document.createTextNode("Edit");
bedit.appendChild(bename);
bedit.onclick = edit_row.bind(null, bedit, td1);
// createtd6 before because it is not known by your previous code
// I use this td2 instead of td6
td2.appendChild(bedit);
}
function edit_row(bedit, td1) {
bedit.style.display = "none";
// create bsave button before because it is not known by your previous code
// bsave.style.display = "block";
var input = document.createElement("INPUT");
input.setAttribute("type", "text");
var string = td1.textContent;
td1.innerHTML = "";
td1.appendChild(input);
input.value = string;
}
submit();
<table id="info"></table>
<hr/>
<input id="p-name" type="text" value="name_value" />
<input id="p-id" type="text" value="id_value" />
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53608533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Setting a cookie from a previous request when using axios in a Lambda function I am using Axios in my NodeJs application to do HTTP requests.
I am logging in using a post request that does not require a cookie set in the header.
const instance = axios.create({ baseURL: 'https://some_url.com' , withCredentials: true});
const response = await instance.post('auth/login', data);
This returns a set-cookie in its header that I need to use in all subsequent API call. This is code I have tried for this.
const getResponse = await instance.get('/getStuff?$top=10', { withCredentials: true });
This always returns a "Not logged in error". I do not have access to the server, but I am assuming this is because my get request did not send the cookie in its header.
Running all of this in a lambda, not sure if that makes a difference.
Question: How do I get the cookie from my first post request and use it in my get request?
A: The withCredentials option is for the browser version of axios, and relies on the browser storing the cookies for your current site.
Since you are using it in Node, you will have to handle the storage yourself.
TL;DR
After the login request, save the cookie somewhere. Before sending other requests, make sure you include that cookie.
To read the cookie, check response.headers object, which should have a set-cookie header (which is all cookies really are - headers with a bit of special convention that has evolved into some sort of standard).
To include the cookie in your HTTP request, set a cookie header.
General example
You could also look for some "cookie-handling" libraries if you need something better than "save this one simple cookie I know I'll be getting".
// 1. Get your axios instance ready
function createAxios() {
const axios = require('axios');
return axios.create({withCredentials: true});
}
const axiosInstance = createAxios();
// 2. Make sure you save the cookie after login.
// I'm using an object so that the reference to the cookie is always the same.
const cookieJar = {
myCookies: undefined,
};
async function login() {
const response = await axiosInstance.post('http://localhost:3003/auth', {});
cookieJar.myCookies = response.headers['set-cookie'];
}
// 3. Add the saved cookie to the request.
async function request() {
// read the cookie and set it in the headers
const response = await axiosInstance.get('http://localhost:3003',
{
headers: {
cookie: cookieJar.myCookies,
},
});
console.log(response.status);
}
login()
.then(() => request());
You could also use axios.defaults to enforce the cookie on all requests once you get it:
async function login() {
const response = await axios.post('http://localhost:3003/auth', {});
axios.defaults.headers.cookie = response.headers['set-cookie']
}
async function request() {
const response = await axios.get('http://localhost:3003');
}
As long as you can guarantee that you call login before request, you will be fine.
You can also explore other axios features, such as interceptors. This may help with keeping all "axios config"-related code in one place (instead of fiddling with defaults in your login function or tweaking cookies in both login and request).
Lambda
AWS Lambda can potentially spawn a new instance for every request it gets, so you might need to pay attention to some instance lifecycle details.
Your options are:
*
*Do Nothing: You don't care about sending a "login request" for every lambda run. It doesn't affect your response time much, and the other api doesn't mind you sending multiple login requests. Also, the other api has no problem with you having potentially multiple simultaneous cookies (e.g. if 10 lambda instances login at the same time).
*Cache within lambda instance: You have a single lambda instance that gets used every once in a while, but generally you don't have more than one instance running at any time. You only want to cache the cookie for performance reasons. If multiple lambda instances are running, they will each get a cookie. Beware the other api not allowing multiple logins.
If this is what you need, make sure you put the axios config into a separate module and export a configured instance. It will be cached between runs of that one lambda instance. This option goes well with interceptors usage.
const instance = axios.create({...});
instance.interceptors.response.use(() => {}); /* persist the cookie */
instance.interceptors.request.use(() => {}); /* set the cookie if you have one */
export default instance;
*Cache between lambda instances: This is slightly more complicated. You will want to cache the cookie externally. You could store it in a database (key-value store, relational, document-oriented - doesn't matter) or you could try using shared disk space (I believe lambda instances share some directories like /tmp, but not 100% sure).
You might have to handle the case where your lambda gets hit by multiple requests at the same time and they all think they don't have the cookie, so they all attempt to login at the same time. Basically, the usual distributed systems / caching problems.
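The interceptor skeleton from option 2 could be fleshed out roughly like this (an in-memory jar; the names are illustrative assumptions, not an axios-prescribed pattern). The function only depends on the `interceptors` API shape, so it works on any axios instance you pass in:

```javascript
const cookieJar = { cookie: undefined };

// Wire both interceptors onto an axios instance: remember any
// set-cookie header we receive, and attach it to later requests.
function attachCookieHandling(instance) {
  instance.interceptors.response.use((response) => {
    const setCookie = response.headers && response.headers['set-cookie'];
    if (setCookie) cookieJar.cookie = setCookie;
    return response;
  });

  instance.interceptors.request.use((config) => {
    if (cookieJar.cookie) {
      config.headers = { ...config.headers, cookie: cookieJar.cookie };
    }
    return config;
  });

  return instance;
}
```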
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66219323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Elegant Reflection in Python? I have a small number of different Python classes (each standing for a different kind of strategy) and numerous configurations (e.g. parameters) for them. Currently I have organised the configurations in json files, which may look like this:
{
"script": "xyz.strategy.strategyA",
"datafile": "//datasrv10//data$//data//bloom.csv",
"assets": ["B EQUITY","A EQUITY"],
...
}
So, I have to use some sort of reflection to initialize the correct class. I found importlib for that purpose. I managed to import modules. Now each script is its own module and each module has a build function with the same interface:
def build(configuration, data):
return Risk(configuration, data)
class Risk(Strategy):
All of this looks like bad old-school Java to me. Can anyone show me the light in Python?
A: If you need some sort of configuration for this setup then there is no silver bullet. You can use json, yaml or maybe a relational database to store the configuration. Some improvement could come from allowing python code to be used for config, but this creates security issues if configuration can be provided externally.
The second step is translating the configuration into actual Python class instances, assigning the correct parameters, etc. importlib serves this purpose well, and there isn't much to be improved upon here. You need some kind of factory for your classes; just try not to abstract too much (that is very Java-like and not Pythonic). Maybe one global method that is able to create objects based solely on a configuration fragment?
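One global factory method along those lines might look like this (a sketch; the stand-in strategy module is registered inline so the example is runnable as-is, whereas in real use config["script"] would name one of your strategy modules):

```python
import importlib
import sys
import types


def build_strategy(config, data):
    """Import the module named in the config and call its build() factory."""
    module = importlib.import_module(config["script"])
    return module.build(config, data)


# Self-contained demo: register a stand-in strategy module so this runs as-is.
demo = types.ModuleType("demo_strategy")
demo.build = lambda config, data: ("built", config["assets"], data)
sys.modules["demo_strategy"] = demo

config = {"script": "demo_strategy", "assets": ["B EQUITY", "A EQUITY"]}
print(build_strategy(config, data=[1, 2, 3]))
# ('built', ['B EQUITY', 'A EQUITY'], [1, 2, 3])
```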
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22126602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: General architecture for backend? We are trying to be forward-looking in our architecture choice on some of the new systems we are designing. Pretty much we want to architect a back-end system such that it works no matter what interface we decide to use (WinForms, Silverlight, MVC, WebForms, WPF, iOS (iPad/iPhone), etc.), which I believe just screams REST. Our organization generally will only use Microsoft APIs, but since I have no idea when WCF Web API will be released and we want to get started soon, it looks like we have no other choice.
We want to take baby steps here to increase the chances of buy off. So we don't want to have to set up another server with IIS.
In the foreseeable future we will only be using WinForms & WebForms. What I was thinking is that we could use Nancy on the local machine but communicate with it in a RESTful way. That way, in the future it should be as simple as setting up a server and redirecting all the clients to it rather than to the local machine.
I've never used either NancyFX or OpenRasta, but, from what I've heard, it sounded like a good fit.
So the questions are:
*
*Is the way I'm thinking of approaching this a good approach?
*Does it sound like NancyFX or OpenRasta would be a better fit?
*Any reason why we should wait for WCF Web API, and if so, does anyone have an approximate release date?
A: OpenRasta was built for resource-oriented scenarios. You can achieve the same thing with any other frameworks (with more or less pain). OpenRasta gives you a fully-composited, IoC friendly environment that completely decouples handlers and whatever renders them (which makes it different from MVC frameworks like nancy and MVC).
I'd add that we have a very strong community, a stable codebase and we've been in this for quite a few years, we're building 2.1 and 3.0 and our featureset is still above and beyond what you can get from most other systems. Compare this to most of the frameworks you've highlighted, where none have reached 1.0.
Professional support is also available, if that's a deciding factor for your company.
But to answer your question fully, depending on your scenario and what you want to achieve, you can make anything fits, given enough work. I'd suggest reformulating your question in terms of architecture rather than in terms of frameworks.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7905667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Typegoose - ObjectId that references a Sub-Document I'm trying to pull a ref from a subdocument. The documents look like this.
Sample Categories Doc:
{
"_id": 12345
"keyword": "smart phones",
"subCategories": [
{
"_id": 6789,
"keyword": "apple"
},
{
"_id": 101123,
"keyword": "samsung"
}
]
}
Sample Dictionary Doc:
{
"_id": 12345
"keyword": "iPhone",
"category": 12345,
"subCategory": 6789
}
Here's what I've tried to do on Typegoose model definition:
For Dictionary (notice the subCategory prop, I'm not sure if that's the right way to reference a subdocument):
export class Dictionary extends Typegoose {
_id!: Schema.Types.ObjectId;
@prop({
default: 1
})
type!: IDictionaryInput['type'];
@prop()
mainKeyword!: string;
@arrayProp({ items: Synonym })
synonyms!: Synonym[];
@prop({ ref: Category })
category!: Ref<Category>;
@prop({ ref: CategoryModel.subCategories })
subCategory!: Ref<Subcategory>;
@prop({
default: true
})
status!: IDictionaryInput['status'];
}
For Categories:
export class Category extends Typegoose {
_id!: Schema.Types.ObjectId;
@prop()
keyword!: ICategoryInput['keyword'];
@prop({
default: 1
})
type!: ICategoryInput['type'];
@arrayProp({ items: Subcategory })
subCategories!: Subcategory[];
@prop({
default: true
})
status!: ICategoryInput['status'];
@prop()
insertTimestamp!: ICategoryInput['insertTimestamp'];
}
Then I tried to populate the references by doing:
DictionaryModel.findOne({ _id: id })
.populate({
path: 'category',
model: CategoryModel
})
.populate({
path: 'subCategory',
model: CategoryModel.subCategories
});
I can successfully populate the ref from category but not on subCategory.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56659251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Is the HttpServletRequest-object unique for each request? I have implemented a servlet with the doGet & doPost methods below:
public void doGet(HttpServletRequest request, HttpServletResponse response){
// code
}
public void doPost(HttpServletRequest request, HttpServletResponse response){
// code
}
My question: Is the HttpServletRequest object in these methods unique for every request?
I am using the request.setAttribute("att", "value") method to save some values.
I want to know: if I save an attribute in the first request, will it be present in the next request object? (Provided both requests are received at almost the same time.)
A: No - it's a new request every time - any attributes set in one request will not be there when the next request comes in.
If you want to set attributes that are persistent across requests, you can use:
request.getServletContext().setAttribute("att","value");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65392146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: MySQL returns zero rows(Empty set) without limit because of broken/outdated index I work on a large table with around 1.5k entries,
CREATE TABLE `crawler` (
`id` int(11) NOT NULL AUTO_INCREMENT,
...
`provider_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `crawler_provider_id` (`provider_id`),
...
) ENGINE=MyISAM ...
provider_id is used to link this table to another table 'providers', which was cleared and repopulated with new data. I was trying to recreate the connections from 'crawler' to 'providers' (which should not really matter in the case of MyISAM), but for some reason in my script MySQL returns zero rows if I don't provide a limit.
mysql> SELECT `crawler`.`id` FROM `crawler` WHERE `crawler`.`provider_id` > 1371;
Empty set (0.40 sec)
but
mysql> SELECT COUNT(*) FROM `crawler` WHERE `crawler`.`provider_id` > 1371;
+----------+
| COUNT(*) |
+----------+
|   346999 |
+----------+
and
mysql> SELECT `crawler`.`id` FROM `crawler` WHERE `crawler`.`provider_id` > 1371 LIMIT 10;
10 rows in set (0.01 sec)
If I select some data from the table and check it myself, I can see values greater than 1371.
I was able to fix this by deleting indexes (and recreating later), but I am extremely confused. I've never seen indexes going out of sync with table data (and I was unaware that they can affect values of returned rows). Unfortunately I haven't performed "CHECK TABLE" before deleting indexes, but it has "status=ok" right now, I can't see anything wrong in logs, and "REPAIR TABLE" shows no problems.
So, is this a common problem? What can be the reason? This server had some low RAM problems before, could it be the issue here as well?
A: Your query is almost certainly related to table corruption in MyISAM.
I did
root@localhost [kris]> create table crawler (
id integer not null auto_increment primary key,
provider_id int(11) DEFAULT NULL,
PRIMARY KEY (id),
KEY crawler_provider_id (provider_id)
) engine = myisam;
root@localhost [kris]> insert into crawler ( id, provider_id ) values ( NULL, 1 );
and then repeated
root@localhost [kris]> insert into crawler ( id, provider_id)
select NULL, rand() * 120000 from crawler;
until I had
root@localhost [kris]> select count(*) from crawler;
+----------+
| count(*) |
+----------+
| 524288 |
+----------+
1 row in set (0.00 sec)
I now have
root@localhost [kris]> SELECT COUNT(*) FROM `crawler` WHERE `crawler`.`provider_id` > 1371;
+----------+
| COUNT(*) |
+----------+
| 518389 |
+----------+
1 row in set (0.27 sec)
which is somewhat comparable in size to what you gave in your example above. I do get two different plans for the query with and without a LIMIT clause.
Without a LIMIT clause I get a full table scan (ALL) not using any index:
root@localhost [kris]> explain SELECT `crawler`.`id` FROM `crawler` WHERE `crawler`.`provider_id` > 1371\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: crawler
type: ALL
possible_keys: crawler_provider_id
key: NULL
key_len: NULL
ref: NULL
rows: 524288
Extra: Using where
1 row in set (0.00 sec)
With the LIMIT clause, the INDEX is used for a RANGE access
root@localhost [kris]> explain SELECT `crawler`.`id` FROM `crawler` WHERE `crawler`.`provider_id` > 1371 LIMIT 10\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: crawler
type: range
possible_keys: crawler_provider_id
key: crawler_provider_id
key_len: 5
ref: NULL
rows: 518136
Extra: Using where
1 row in set (0.00 sec)
In your example, without the LIMIT clause (full table scan) you get no data, but with the LIMIT clause (range access using index) you get data. That points to a corrupted MYD file.
ALTER TABLE, like REPAIR TABLE or OPTIMIZE TABLE, will normally copy the data and the kept indexes from the source table to a hidden new version of the table in a new format. When completed, the hidden new table will replace the old version of the table (which will be renamed to a hidden name, and then dropped).
That is, by dropping the indexes you effectively repaired the table.
A: Maybe you can delete and recreate the index, and after that repair or optimize the table so all indices get rebuilt. That may help you. And look at your configuration to see if the memory settings are appropriate.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4918091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to prevent access to object properties that do not yet exist Often I have this situation:
var obj = { field: 2 };
and in the code:
do_something( obj.fiel ); // note: field spelt wrong
i.e. the property name is mistyped. I want to detect these bugs as early as possible.
I wanted to seal the object, ie
obj = Object.seal(obj);
But that only seems to prevent errors like obj.fiel = 2; and does not throw errors when the field is simply read.
Is there any way to lock the object down so any read access to missing properties is detected and thrown?
thanks,
Paul
EDIT: Further info about the situation
*
*Plain javascript, no compilers.
*Libraries used by inexperienced programmers for math calc purposes. I want to limit their errors. I can spot a misspelt variable but they can't.
*Want to detect as many errors as possible, as early as possible. ie at compile time (best), at runtime with thrown error as soon as wrong spelling encountered (ok), or when analysing results and finding incorrect calculation outputs (very very bad).
*Unit tests are not really an option as the purpose of the math is to discover new knowledge, thus the answers to the math is often not known in advance. And again, inexperienced programmers so hard to teach them unit testing.
A: I don't think this will work in Javascript since objects are written like JSON and thus properties will be undefined or null but not throw an error.
The solution will be writing a native getter/setter.
var obj = {
vars: {},
set: function(index, value) {
obj.vars[index] = value;
},
get: function(index) {
        if (typeof(obj.vars[index]) == "undefined") {
throw "Undefined property " + index;
}
        return obj.vars[index];
}
};
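For reference, here is a self-contained version of the getter/setter idea with a quick usage check (note that both lookups go through obj.vars; the values are illustrative):

```javascript
var obj = {
  vars: {},
  set: function (index, value) { obj.vars[index] = value; },
  get: function (index) {
    // Reading a name that was never set throws instead of returning undefined.
    if (typeof obj.vars[index] == "undefined") {
      throw "Undefined property " + index;
    }
    return obj.vars[index];
  }
};

obj.set("field", 2);
console.log(obj.get("field")); // 2
// obj.get("fiel") -> throws "Undefined property fiel"
```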
A: There are no reliable getters/setters in JavaScript right now. You have to check it manually. Here's a way to do that:
if (!('key' in object)) {
// throw exception
}
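If your environment supports ES2015's Proxy, reads of missing properties can be trapped directly, which is closest to what the question asks for. A minimal sketch (the wrapper name is an illustrative assumption):

```javascript
// Wrap an object so reading an unknown property throws instead of
// silently returning undefined.
function strictReads(target) {
  return new Proxy(target, {
    get(obj, prop) {
      if (!(prop in obj)) {
        throw new ReferenceError("Unknown property: " + String(prop));
      }
      return obj[prop];
    }
  });
}

var obj = strictReads({ field: 2 });
console.log(obj.field); // 2
// obj.fiel -> throws ReferenceError: Unknown property: fiel
```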
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13757631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Can't resolve jar library in IntelliJ when adding to libraries in SBT project I want to add the Apache Commons library to my Scala project. Before moving to SBT it worked by just adding it to the library or global library settings within my IntelliJ project. Now that I moved to SBT it doesn't resolve anymore and I'm getting errors.
How do I add a local jar library as an SBT dependency within IntelliJ?
A: You need to make sure SBT is able to find that dependency. Follow a standard way of adding unmanaged dependencies to your project as described here. Citing that reference:
Unmanaged dependencies
Most people use managed dependencies instead of unmanaged. But
unmanaged can be simpler when starting out.
Unmanaged dependencies work like this: add jars to lib and they will
be placed on the project classpath. Not much else to it!
You can place test jars such as ScalaCheck, specs, and ScalaTest in
lib as well.
Dependencies in lib go on all the classpaths (for compile, test, run,
and console). If you wanted to change the classpath for just one of
those, you would adjust dependencyClasspath in Compile or
dependencyClasspath in Runtime for example.
There’s nothing to add to build.sbt to use unmanaged dependencies,
though you could change the unmanagedBase key if you’d like to use a
different directory rather than lib.
To use custom_lib instead of lib:
unmanagedBase := baseDirectory.value / "custom_lib"
baseDirectory is the project’s root directory, so here you’re changing
unmanagedBase depending on baseDirectory using the special value
method as explained in more kinds of setting.
There’s also an unmanagedJars task which lists the jars from the
unmanagedBase directory. If you wanted to use multiple directories or
do something else complex, you might need to replace the whole
unmanagedJars task with one that does something else.
To test if it works well just run SBT externally (outside of IntelliJ from cmd) and execute update or compile tasks. If your library is used in the code and you get no errors then SBT is happy. Afterwards simply use "Import Project" in IntelliJ and select "Use auto-import" option in one of the wizard steps.
A: Add this to your build.sbt:
libraryDependencies += "org.apache.commons" % "commons-math3" % "3.5"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24351972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to handle relative paths in a bash setup script? I often run into the situation where I would like to provide some kind of 'setup' or 'install' script, for something like a git repository, although it could also be used in other situations.
Just to be clear: with 'setup script' I mean a script which places some files, checks some things, creates certain dependencies and so on.
The problem is that if I want to use resources relative to the script or want to create links that target files in the repository I somehow need to be able to reference resources relative to the repository root or build absolute paths.
Currently I always go with this:
SCRIPT_DIR=$(cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd)
ROOT=$(realpath "$SCRIPT_DIR/..")
But this seems really bad, as it hard to understand and basically replicated in every repo or even file.
Is there a better way to do this? Are scripts like this unwanted?
A:
Is there a better way to do this?
No.
Are scripts like this unwanted?
No. That's normal.
Is there another way to go about this?
My fingers are used to typing "$(dirname "$(readlink -f "$0")")", but that's not better.
Also do not use UPPER CASE VARIABLES. Not only do they shout, they are conventionally reserved for environment variables, like PWD LINES COLUMNS DISPLAY UID IFS USER etc. Prefer lower-case variables in your scripts. (I would say that ROOT is a very common and thus bad variable name.)
A: In a git repo, you might use git rev-parse --show-toplevel to find the root of the worktree, and then go from there.
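For example, a small helper along those lines (a sketch; it assumes git is on the PATH and the script is run from inside a worktree, and the config path in the usage comment is purely illustrative):

```shell
# Print the absolute path of the repository root, or fail loudly.
find_repo_root() {
  git rev-parse --show-toplevel 2>/dev/null || {
    echo "error: not inside a git repository" >&2
    return 1
  }
}

# Typical use inside a setup script:
#   root="$(find_repo_root)" || exit 1
#   cp "$root/config/defaults.conf" "$HOME/.myapp.conf"
```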
In general, it is a somewhat hard problem. There are too many ways to invoke a script that can alter what $0 actually means, so you can't really rely on it. In my opinion, the best you can do there is to establish a standard (for example, by expecting certain values in the environment or insisting the script be executed as ./path/from/root/to/script) and try to exit with good error messages if not.
A: Can this improve script management and readability?
#!/usr/bin/env bash
source $HOME/common-tools/include.sh "${BASH_SOURCE[0]}"
echo $ROOT
in $HOME/common-tools/include.sh :
SCRIPT_DIR=$(cd "$( dirname "$1" )" >/dev/null 2>&1 && pwd)
ROOT=$(realpath "$SCRIPT_DIR/..")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64357906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |