Title: QNetworkAccessManager issue getting the source of a web page containing Japanese characters.
Tags: qt
Question: In my Qt GUI application I need to get a web page's source using QNetworkAccessManager.
My problem is that when I try to get the page source of pages from Japan containing Japanese words, the Japanese words come back in some undefined format.
How can I get that page source as-is, with Japanese characters, and save it into a QString object?
Example Page Url is :http://www.amazon.co.jp/BUFFALO-外付けハードディスク-Regza-HD-LB2-0TU2-フラストレーションフリーパッケージ/dp/B0052VIGXA/ref=sr_1_1?s=electronics&ie=UTF8&qid=1366439116&sr=1-1
Here is another answer: What do you mean by 'undefined' format, and how did you analyse the content of the QString you have? QNetworkAccessManager returns the raw data received over HTTP, so after you do something like
```QByteArray data = reply->readAll();
```
you should analyse the received headers for the encoding and make the appropriate conversion.
Comment for this answer: check QTextCodec documentation
Comment for this answer: Yes, I am receiving the reply using the same QByteArray data = reply->readAll();
I also analysed the received header, which is {Content-Type: text/html; charset=Shift_JIS}. But I don't understand how to make the conversion.
Undefined format means something like the following:
" BUFFALO �O�t���n�[�h�f�B�X�N PC/�Ɠd�Ή� egza[���O�U]/Aquos[�A�N�I�X]) 2TB HD-LB2.0TU2/N [�t���X�g���[�V�����t���[�p�b�P�[�W(FFP)] "
|
Title: How can I replace text that is not part of an anchor tag in Perl?
Tags: regex;perl;text;anchor
Question: What is a Perl regex that can replace select text that is not part of an anchor tag? For example I would like to replace only the last "text" in the following code.
```blah <a href="http://www.text.com"> blah text blah </a> blah text blah.
```
Thanks.
Comment: @Jay: Presumably he's doing `s/text/replacement/g`, so the blahs don't match. But this is not a job for a regex (alone).
Comment: Aren't the first and last two "blahs" also "not part of an anchor tag?"
Comment: Ah... got it. Yes, refer to the seminal text on the subject: http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454
Comment: It is said that in Ulthar, which lies beyond the river Skai, no man may parse html with a regex.
Comment: gulp. Regex and html. goes to hide...
Comment: @Jay - I assume the OP wants to `magic_replace(html, 'text', 'link still ok')`
Here is the accepted answer: I have temporarily prevailed:
```$html =~ s|(text)([^<>]*?<)(?!\/a>)|replacement$2|is;
```
but I was dispirited, dismayed, and enervated by the seminal text; and so shall pursue Treebuilder in subsequent endeavors.
Comment for this answer: Your regex will also replace the "text" inside `text`, because it only looks at the first end tag.
Comment for this answer: Use of regex html parsers will cause you to wind up like Charles Dexter Ward.
Comment for this answer: it depends on what you're parsing - if they are small, regular lines of HTML output by another process for example, then a regex might be appropriate. if they are actual full HTML pages, then a proper HTML parser makes sense...
Here is another answer: You don't want to try to parse HTML with a regex. Try HTML::TreeBuilder instead.
```use HTML::TreeBuilder;
my $html = HTML::TreeBuilder->new_from_file('file.html');
# or some other method, depending on where your HTML is
doReplace($html);
sub doReplace
{
my $elt = shift;
foreach my $node ($elt->content_refs_list) {
if (ref $$node) {
doReplace($$node) unless $$node->tag eq 'a';
} else {
$$node =~ s/text/replacement/g;
} # end else this is a text node
} # end foreach $node
} # end doReplace
```
Here is another answer: Don't use regexps for this kind of stuff. Use a proper HTML parser, and use plain regexps only on the parts of the HTML that you're interested in.
|
Title: Rails / Rspec - writing spec involving custom validation and belongs_to associations
Tags: ruby-on-rails;factory-bot;rspec-rails
Question: I have the following AR has_many, belongs_to relationships:
League --> Conference --> Division --> Team
I have an Event model that looks like this:
```class Event < ActiveRecord::Base
belongs_to :league
belongs_to :home_team, :class_name => 'Team', :foreign_key => :home_team_id
belongs_to :away_team, :class_name => 'Team', :foreign_key => :away_team_id
validate :same_league
def same_league
return if home_team.blank? || away_team.blank?
errors.add :base, "teams must be in the same league" if home_team.league != away_team.league
end
end
```
And some factories:
```FactoryGirl.define do
factory :league do
name 'NFL'
end
end
Factory.define :conference do |f|
f.name 'NFC'
f.association :league
end
Factory.define :division do |f|
f.name 'North'
f.association :conference
end
Factory.define :team do |f|
f.name 'Packers'
f.locale 'Green Bay'
f.association :division
end
FactoryGirl.define do
factory :event do
association :league
association :home_team, :factory => :team
association :away_team, :factory => :team
end
end
```
So with all that, how would I go about writing a spec for the same_league validation method?
```describe Event do
pending 'should not allow home_team and away_team to be from two different leagues'
end
```
My issue is knowing the simplest way to create two teams in different leagues and associate one with home_team and the other with away_team in the event model.
Here is the accepted answer: You can store instances you generate with factories and then explicitly use their IDs to fill in the foreign keys for subsequent factories.
Here I'm creating two leagues, then setting up two tests: one where the event has two teams in the same league, and another with two teams in different leagues. This way I can test whether the event object is validating properly:
```describe Event do
before(:each) do
@first_league = Factory(:league)
@second_league = Factory(:league)
end
it "should allow the home_team and away_team to be from the same league" do
home_team = Factory(:team, :league_id => @first_league.id)
away_team = Factory(:team, :league_id => @first_league.id)
event = Factory(:event, :home_team_id => home_team.id, :away_team_id => away_team.id)
event.valid?.should == true
end
it "should not allow the home_team and away_team to be from two different leagues" do
home_team = Factory(:team, :league_id => @first_league.id)
away_team = Factory(:team, :league_id => @second_league.id)
event = Factory(:event, :home_team_id => home_team.id, :away_team_id => away_team.id)
event.valid?.should == false
end
end
```
|
Title: how to merge two files in unix
Tags: shell;unix
Question: I want to merge two files in Unix. How can I do this?
eg file1 contains:
```host1:90:/users:user1
host2:90:/users:user1
host3:90:/users:user1
host4:90:/users:user1
host5:90:/users:user1
host6:90:/users:user1
host7:90:/users:user1
```
file2 contains:
```host1:owner_name
host2:owner_name
host3:owner_name
host4:owner_name
host5:owner_name
host6:owner_name
host7:owner_name
```
output result:
```host1:90:/users:user1:owner_name
host2:90:/users:user1:owner_name
host3:90:/users:user1:owner_name
host4:90:/users:user1:owner_name
host5:90:/users:user1:owner_name
host6:90:/users:user1:owner_name
host7:90:/users:user1:owner_name
```
I have used this command ```paste -d ':' file1 file2 >merged_file```, but this is what I am getting:
```host1:90:/users:user1:host1:owner_name
host2:90:/users:user1:host2:owner_name
host3:90:/users:user1:host3:owner_name
host4:90:/users:user1:host4:owner_name
host5:90:/users:user1:host5:owner_name
host6:90:/users:user1:host6:owner_name
host7:90:/users:user1:host7:owner_name
```
Here is the accepted answer: Use ```join``` instead:
```% join -t':' file1 file2
host1:90:/users:user1:owner_name
host2:90:/users:user1:owner_name
host3:90:/users:user1:owner_name
host4:90:/users:user1:owner_name
host5:90:/users:user1:owner_name
host6:90:/users:user1:owner_name
host7:90:/users:user1:owner_name
```
Comment for this answer: @pratik That can be fixed by sorting around the first field before joining. The true question is: are you trying to just merge the files linewise or are you actually trying to join them around the first column?
Comment for this answer: `join` works, but if file2 is not in sorted order then it gives an error. Can we use sed or awk?
Comment for this answer: I also want to merge the files on the first column.
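Following up on the comments: ```join``` needs both inputs sorted on the join field, which bash process substitution can do on the fly, and ```awk``` can do a lookup-based merge on the first column without any sorting, keeping file1's original order. A sketch using the file names from the question:

```shell
# Sort both files on the first field before joining (process substitution: bash/ksh/zsh).
join -t':' <(sort -t':' -k1,1 file1) <(sort -t':' -k1,1 file2)

# Or merge on the first column with awk; no sorting required, file1's order is kept.
awk -F':' 'NR==FNR { owner[$1] = $2; next } { print $0 ":" owner[$1] }' file2 file1
```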
|
Title: Changing navbar text color
Tags: html;css;twitter-bootstrap;colors
Question: I'm working with the Bootstrap navbar and I want to change the colour of one of the links. I can easily change the colour of all links with the code below, but I need to change just one.
```.navbar-custom .navbar-nav .nav-link {
color: red;
}
```
and HTML
```<nav class="navbar navbar-expand-md navbar-custom fixed-top navbar-light bg-light">
<div class="container-fluid">
<ul class="navbar-nav mb-2 mb-lg-0">
<li class="nav-item">
<a class="nav-link active" aria-current="page" href="#">Home</a>
</li>
<li class="nav-item">
<a class="nav-link" href="/about">About</a>
</li>
<li class="nav-item">
<a class="nav-link" href="/login">Login</a>
</li>
</ul>
</div>
</nav>
```
Let's say I want to change the colour of the Login link to red. How do I do this without changing the About link colour at the same time, and what are the rules for styling one element instead of all of them in nested elements like the lists in a navbar?
Here is another answer: Simply add the ```text-danger``` class to the tag whose colour you want to change.
Bootstrap provides a number of theme colours through classes; go through the documentation here.
You need this for ```login```, hence ```text-danger``` is added to the respective ```<a href='/login'>Login</a>``` tag.
```<nav class="navbar navbar-expand-md navbar-custom fixed-top navbar-light bg-light">
<div class="container-fluid">
<ul class="navbar-nav mb-2 mb-lg-0">
<li class="nav-item">
<a class="nav-link active" aria-current="page" href="#">Home</a>
</li>
<li class="nav-item">
<a class="nav-link" href="/about">About</a>
</li>
<li class="nav-item">
<a class="nav-link text-danger" href="/login">Login</a>
</li>
</ul>
</div>
</nav>
```
Using ```:nth-of-type()```
If you have similar tags, as here, and want to change only selected ones, you can use this method. See nth-of-type(), nth-child(), nth-last-of-type()
```li:nth-of-type(3) a { /* selects the third li's a element */
color: red;
}
li:nth-of-type(2n) a { /* selects even li a elements: 2nd, 4th, ... */
color: red;
}
li:nth-of-type(2n+1) a { /* selects odd li a elements: 1st, 3rd, ... */
color: red;
}
```
Comment for this answer: That works, but what if I want to change another style property separately? The attribute selector suggested by Sebastian doesn't work, or I'm doing something wrong.
Comment for this answer: @JustasG There are many ways and it depends on situation. You can also use `li:nth-of-type(3) a{ color: red; }` in above example.
Comment for this answer: @JustasG updated the answer. I hope it will clear your solution
Here is another answer: One option would be to use an Attribute Selector:
```a[href="/login"] {
color: red;
}```
```<nav class="navbar navbar-expand-md navbar-custom fixed-top navbar-light bg-light">
<div class="container-fluid">
<ul class="navbar-nav mb-2 mb-lg-0">
<li class="nav-item">
<a class="nav-link active" aria-current="page" href="#">Home</a>
</li>
<li class="nav-item">
<a class="nav-link" href="/about">About</a>
</li>
<li class="nav-item">
<a class="nav-link" href="/login">Login</a>
</li>
</ul>
</div>
</nav>```
|
Title: VBA - passing objects between functions
Tags: arrays;vba;class;object;ms-access-2007
Question: First question here. I've been looking for answers to this question for days, and have worked out MOST of what I've been trying to do myself. I'd like to share what I found, and hope that someone can suggest a better way to make this work.
I have a function that uses several instances of one class, and then needs to return this information in an array for use elsewhere. The code that uses this needs to break the information back down and reuse it. The reason it's not all one block of code is that the function is called many times without asking for the return values.
The part of the Class module that we are concerned about is below. This is the ONLY way I have been able to get it to work. Mind the parentheses and count the parameters.
```Option Compare Database
Option Explicit
Private p_intaryAgentNames() As Integer
Private p_strFilter As String
Public Property Get AgentNames() As Variant
AgentNames = p_intaryAgentNames()
End Property
Public Property Let AgentNames(IncomingArray As Variant)
If IsArray(IncomingArray) Then
If UBound(p_intaryAgentNames) <> UBound(IncomingArray) Then ReDim Preserve p_intaryAgentNames(UBound(IncomingArray))
p_intaryAgentNames() = IncomingArray
Else
MsgBox ("Invalid information passed to AgentNames array")
End If
End Property
Public Property Get Filter() As String
Filter = p_strFilter
End Property
Public Property Let Filter(value As String)
p_strFilter = value
End Property
Private Sub Class_Initialize()
ReDim p_intaryAgentNames(0)
End Sub
```
When finishing my function, this is how I use my object members, put them into an array, then pass it to the calling code:
```Function CalculateRecords() As Variant
Const CONSTANTSTRINGZEROLENGTH As String = ""
Dim objRec1 As cAgent
Dim objRec2 As cAgent
Dim objRec3 As cAgent
Set objRec1 = New cAgent
Set objRec2 = New cAgent
Set objRec3 = New cAgent
Dim objAgents(2) As cAgent
'some things happen
Set objAgents(0) = objRec1
Set objAgents(1) = objRec2
Set objAgents(2) = objRec3
CalculateRecords = objAgents
End Function
```
This is how I plan to unpack the data, but it doesn't seem like I'm using it right:
```Private Sub cmdAssignRecords_Click()
Dim cAgentInfo As Variant
Dim objRec1 As cAgent
Set objRec1 = New cAgent
Dim objRec2 As cAgent
Set objRec2 = New cAgent
Dim objRec3 As cAgent
Set objRec3 = New cAgent
cAgentInfo = CalculateRecords() 'A variant to catch a variant
Set objRec1 = cAgentInfo(0)
Set objRec2 = cAgentInfo(1)
Set objRec3 = cAgentInfo(2)
```
What I would rather do is use cAgentInfo() directly, and this works for other members of the class. What I cannot do is access cAgentInfo(0).AgentNames(0), while objRec1.AgentNames(0) and cAgentInfo(0).Filter work just fine.
I'm sure I'm just missing something here, or maybe I'm just throwing myself against a wall. Any suggestions as to what I'm missing, or how I can improve are well-appreciated. I feel like doing it this way will improve readability, but isn't it also wasting namespace?
Here is the accepted answer: I can't follow your whole design, so I'll try suggesting
```Private Sub cmdAssignRecords_Click()
'Dim cAgentInfo As Variant
Dim cAgentInfo() As cAgent
...
```
and
```'Function CalculateRecords() As Variant
Function CalculateRecords() As cAgent()
```
That way you can use cAgentInfo(0).AgentNames(0) on the array returned by ```CalculateRecords```.
Comment for this answer: This is what worked. I was not aware I could declare the function as an array of the class I created. I thought I could only pass arrays as a variant.
Here is another answer: Try declaring
```Dim cAgentInfo As Variant
```
as an array like this:
```Dim cAgentInfo() As Variant
```
Comment for this answer: I tried that, as it seemed to be the most straightforward approach. Unfortunately, I could not assign the variant array returned by the function to a new variant array. Paraphrasing, I got an error stating that I could not assign to an array. It works though when I change cAgentInfo to a cAgent data type.
|
Title: Can we delete the cloudformation file after migrating to the cdk?
Tags: aws-cdk
Question: We're looking at migrating our toolchain to using the CDK. We currently use CloudFormation files to generate stacks for our various apps.
For a given app, we currently have the following structure:
```|- cloudformation/
|- cloudformation.json
|- src/
```
We want to be able to use the same CloudFormation stack but remove the ```cloudformation.json``` file.
Looking at this: https://aws.amazon.com/blogs/developer/migrating-cloudformation-templates-to-the-aws-cloud-development-kit/
It seems that the preferred way to migrate an app to the CDK is to keep the CloudFormation file and import it into the CDK. Any new changes are then made in the CDK stack.
I tried the instructions in the blog post, but when I went to delete the old CloudFormation file, it (obviously) wouldn't work.
Is there a way to delete the CloudFormation file? Or is the expectation when migrating to the CDK that the CloudFormation file sticks around?
Here is another answer: The suggestion seems to be to abandon your old git project and maintain things in a new one. Is that hard for your use case?
Comment for this answer: Yeah, we have multiple projects that are years old. Right now we're looking at the trade-offs between keeping around the "old" CloudFormation stack (just importing cloudformation.json into the CDK) and creating a new stack that uses just the CDK.
|
Title: perform segue after admob ad is dismissed
Tags: ios;swift;admob;segue
Question: Any idea how to perform this segue? Once users sign up and are authenticated, they are shown an interstitial ad; once the ad is done or dismissed, the segue to the next view controller should be performed. I'm not exactly sure what I'm missing in my code:
```@IBAction func signUpBtn_TouchUpInside(_ sender: Any) {
view.endEditing(true)
ProgressHUD.show("Waiting...", interaction: false)
if let profileImg = self.selectedImage, let imageData = UIImageJPEGRepresentation(profileImg, 0.1) {
AuthService.signUp(username: usernameTextField.text!, email: emailTextField.text!, password: passwordTextField.text!, imageData: imageData, onSuccess: {
ProgressHUD.showSuccess("Success")
if self.interstitial.isReady {
self.interstitial.present(fromRootViewController: self)
} else {
print("Ad wasn't ready")
self.performSegue(withIdentifier: "signUpToTabbarVC", sender: nil)
}
self.performSegue(withIdentifier: "signUpToTabbarVC", sender: nil)
}, onError: { (errorString) in
ProgressHUD.showError(errorString!)
})
} else {
ProgressHUD.showError("Profile Image can't be empty")
}
}
```
Any help or feedback is always greatly appreciated.
Comment: what is the error ?
Here is another answer: You should make your ```ViewController``` conform to the ```interstitial``` ```delegate``` and perform the ```segue``` when the ```interstitial``` is dismissed:
```extension ViewController: GADInterstitialDelegate {
func interstitialDidDismissScreen(_ ad: GADInterstitial) {
self.performSegue(withIdentifier: "signUpToTabbarVC", sender: nil)
}
}
```
And update the ```signUpBtn_TouchUpInside``` method as below:
```@IBAction func signUpBtn_TouchUpInside(_ sender: Any) {
view.endEditing(true)
ProgressHUD.show("Waiting...", interaction: false)
if let profileImg = self.selectedImage, let imageData = UIImageJPEGRepresentation(profileImg, 0.1) {
AuthService.signUp(username: usernameTextField.text!, email: emailTextField.text!, password: passwordTextField.text!, imageData: imageData, onSuccess: {
self.handleSignupSuccess()
}, onError: { (errorString) in
ProgressHUD.showError(errorString!)
})
} else {
ProgressHUD.showError("Profile Image can't be empty")
}
}
private func handleSignupSuccess() {
ProgressHUD.showSuccess("Success")
if self.interstitial.isReady {
self.interstitial.delegate = self
self.interstitial.present(fromRootViewController: self)
} else {
print("Ad wasn't ready")
self.performSegue(withIdentifier: "signUpToTabbarVC", sender: nil)
}
}
```
|
Title: Running setup.py install for Twisted … error ModuleNotFoundError: No module named 'twisted'
Tags: python;twisted
Question: I am trying to install the Twisted package from the source code. I have cloned the git repository and ran ```python3 setup.py build```, but it resulted in the error ```ModuleNotFoundError: No module named 'twisted'```. How do I install the latest code? Pip install is not suitable, as it has compatibility issues between Python 3 and the names module, as mentioned in this post - Python Twisted pip package not compatible with Python3.
Comment: Hi, I am using Ubuntu 18 and trying to install twisted. I can make pip work if I use Python2, but Twisted has moved to Python3 in its latest source code but somehow did not update the pip module from 20.3 to whichever one supporting python 3. Their git repository tells to use pip3, but pip3 does not work for named module as told in the above link. So, I am trying to install the [latest source code](https://github.com/twisted/twisted/tree/twisted-20.11.0.dev5) by cloning the repository which is python3 compatible and using `python3 setup.py` but ran into the above error.
Comment: Hello. Welcome to Stack Overflow. There is not enough information in your question to provide an answer. Please provide a complete description of your environment and a complete transcript of your interaction. Also, consider making pip work because pip is the correct way to install Python software and "python setup.py" is mostly the incorrect way.
Comment: There are many versions of pip. "pip" and "pip3" do not clearly identify any particular version. If you are using the Ubuntu 18-packaged version of pip (whether it is called "pip" or "pip3") then you probably need to upgrade. I guess I can make an answer out of that.
Here is another answer: First create a "virtualenv":
```virtualenv ~/playing-around-environment
```
Then activate it for your current shell:
```. ~/playing-around-environment
```
Then upgrade pip to get a version that deals with Python 2/3 distinctions better:
```pip install --upgrade pip
```
Then install Twisted into the virtualenv:
```pip install twisted
```
If you want to use a different version of Python, tell the virtualenv command at the beginning about it. For example:
```virtualenv --python=python27 ~/playing-around-environment
```
or
```virtualenv --python=python38 ~/playing-around-environment
```
```python27``` or ```python38``` should give the name of a Python interpreter executable that's installed on your system. The rest of the steps remain the same.
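As a side note, on Python 3.3+ the standard-library ```venv``` module can stand in for the separate ```virtualenv``` tool; a sketch of the equivalent steps (the directory name is just an example):

```shell
# Create and activate a virtual environment with the stdlib venv module.
python3 -m venv ~/playing-around-environment
. ~/playing-around-environment/bin/activate

# Upgrade pip inside it, then install Twisted.
pip install --upgrade pip
pip install twisted
```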
Comment for this answer: What I am trying to say is not about any issue with Pip or Python but about Twisted. On [Twisted GitHub webpage](https://github.com/twisted/twisted), they claim that it supports Python 3.5+, but that is not entirely true. I tried running `twistd -n dns --pyzone example-domain.com` after installing Twisted with Pip latest version on Ubuntu bionic, 20.2.4. That command gave the error `AttributeError: 'dict' object has no attribute 'iterkeys'` because Pip installed Twisted 20.3. So, how to install the latest Twisted version (20.11.5) by cloning their Git repository is my question?
Comment for this answer: Did you try installing into a virtualenv?
|
Title: HTTP and Java - modified headers in gzip encoding
Tags: java;http
Question: I've got a strange issue with my HTTP communication. I wrote the following very simple HTTP server and client.
Http server:
```public class SimpleHttpServer {
public void run() throws IOException {
final ServerSocket serverSocket = new ServerSocket(8080);
while(true)
{
final Socket connectionSocket = serverSocket.accept();
new Thread(new Runnable() {
@Override
public void run() {
try {
try(BufferedReader in = new BufferedReader(new InputStreamReader(connectionSocket.getInputStream()));
DataOutputStream out = new DataOutputStream(connectionSocket.getOutputStream())){
System.out.println(in.readLine());
String line = null;
while((line = in.readLine()) != null && !line.trim().isEmpty()){
System.out.println(line);
}
byte[] message = loadFile(); //load some gzipped jpg image from disk
out.write(ascii("HTTP/1.1 200 OK\n"));
out.write(ascii("Content-Type: image/jpg\n"));
out.write(ascii(String.format("Content-Length: %s\n", message.length)));
out.write(ascii("content-encoding: gzip\n"));
out.write(ascii("\n"));
out.write(message);
}
}catch(Exception e){
e.printStackTrace();
}
}
}).start();
}
}
}
```
Http client:
```public class SimpleHttpClient {
public void run() throws IOException {
String host = "localhost";
int port = 8080;
Socket socket = new Socket(host, port);
try (OutputStream out = socket.getOutputStream()) {
out.write(ascii("GET / HTTP/1.1\n"));
out.write(ascii("User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/681.350.8737 Safari/537.36\n"));
out.write(ascii("Accept-Encoding: gzip, deflate, sdch\n"));
out.write(ascii("Accept-Language: pl-PL,pl;q=0.8,en-US;q=0.6,en;q=0.4\n"));
out.write(ascii("\n"));
try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
String line = null;
while ((line = in.readLine()) != null && !line.trim().isEmpty()) {
System.out.println(line);
}
}
}
}
}
```
So, when I remove the line with the User-Agent header, I get the result I expect:
```HTTP/1.1 200 OK
Content-Type: image/jpg
Content-Length: 378632
content-encoding: gzip
```
And this is clear and obvious.
But when this header is present, the answer is different:
```HTTP/1.1 200 OK
Content-Type: image/jpg
Transfer-Encoding: chunked
```
I didn't change anything on the server side. I send the same response for every request. The user agent was copied from the Chrome browser.
Both implementations use plain Java sockets, no third-party libraries.
So my question is: how does this happen? Was something cached? But where?
Please don't look at the code quality. This is just an example; I don't want to use it in production :)
EDIT 1:
```private byte[] ascii(String value) throws UnsupportedEncodingException {
return value.getBytes(StandardCharsets.US_ASCII);
}
```
EDIT 2:
I've done some research. It happens when the user agent contains "AppleWebKit", the protocol is HTTP, and the port is 80 or 8080.
Moreover, it happens on Windows 10 (I checked on two different computers). On Linux (Ubuntu 16.04 and Debian 8) everything is fine.
Maybe I should change the title of the question? But how can I explain the problem in one short sentence?
Comment: Seems impossible. Your client receives a 'Transfer-Encoding' header that is not produced by the server code. I suspect you don't connect to the right server. Make sure your server is the only process listening on port 8080 - maybe another process is already listening there. Use a step debugger to verify that your server code is executed.
Comment: check if there is some exception may be
Comment: Do not use general Exception class implement the catch for every type of exception your code can throw
Comment: post the ascii method
Comment: How `ascii` method is defined?
Comment: I added the implementation of the `ascii` method.
@Cristian - Of course you're right. I shouldn't use the general Exception class, but this is a simple example, only for testing :)
Comment: @blafasel I thought the same :) I even changed the content type header to `Content-Type: image/impossible` on the server side and reran the client. Everything is correct; I'm connecting to the right server. Maybe this is my computer's configuration, or something was cached because a Netty HTTP server was running on this port earlier? But where and why? Previously I saw the issue in the Chrome debug console.
|
Title: UnicodeDecodeError: Converting type string to unicode
Tags: python;python-2.7;unicode
Question: I am trying to replace text. Unfortunately, the main string is stored as type unicode, but the string which describes the text to be replaced is stored as type str. Below is a reproducible example:
```mystring = u'Bunch of text with non-standard character in the name Rubén'
old = 'Rubén'
new = u'newtext'
mystring.replace(old, new)
```
This throws an error:
```UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 3: ordinal not in range(128)```
I get the same error when I try to convert ```old``` to unicode with ```unicode(old)```. Several answers solve the problem for specific characters, but I cannot find a generic solution.
Here is the accepted answer: You need to convert the ```old``` value to Unicode with an explicit codec. What that codec is depends entirely on how you sourced ```old```.
If it is a string literal in the source code, use the source code encoding. Python won't accept non-ASCII bytes in your source file unless you specify a valid codec in a comment at the top; see PEP 263.
Pasting your ```old``` definition into a terminal will use your terminal codec (the terminal sends Python encoded bytes as you paste).
If the data is sourced from anywhere else, you'll need to determine the encoding from that source. For HTTP data, check the ```Content-Type``` header for a ```charset``` parameter, for example.
Then decode:
```old = old.decode(encoding)
```
When you use ```unicode(old)``` without an explicit codec, or try to use the bytestring in ```unicode.replace()```, Python uses the default codec, ASCII.
Demo in my terminal, configured to use UTF-8:
```>>> import sys
>>> sys.stdin.encoding # reflects the detected terminal codec
'UTF-8'
>>> old = 'Rubén'
>>> old # shows encoded data in python string literal form
'Rub\xc3\xa9n'
>>> old.decode('utf8') # unicode string literal form
u'Rub\xe9n'
>>> print old.decode('utf8') # string value written to the terminal
Rubén
>>> mystring = u'Bunch of text with non-standard character in the name Rubén'
>>> new = u'newtext'
>>> mystring.replace(old, new)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3: ordinal not in range(128)
>>> mystring.replace(old.decode('utf8'), new)
u'Bunch of text with non-standard character in the name newtext'
```
Generally speaking, you want to decode early, encode late; make your data flow a Unicode Sandwich. As soon as your receive text, decode it all to Unicode values, and don't encode again until the data is leaving your program.
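The same decode-early rule is easier to see in Python 3, where text (```str```) and raw data (```bytes```) are separate types and mixing them is a hard error rather than an implicit ASCII decode. A small sketch:

```python
# Python 3: text is str, raw data is bytes; decode early with an explicit codec.
mystring = 'Bunch of text with non-standard character in the name Rubén'
old_bytes = b'Rub\xc3\xa9n'      # UTF-8 encoded data, as received
old = old_bytes.decode('utf8')   # decode as soon as it enters the program
print(mystring.replace(old, 'newtext'))
```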
Comment for this answer: @Michael: People are used to using ASCII only. It is all plain text, really.
Comment for this answer: Thanks! The key piece of info missing from answers to other questions is how to figure out what the codec was: `sys.stdin.encoding`. Unfortunately, plain text is not plain text but (in my case) cp1252.
|
Title: iPhone Core Data - Filtering NSFetchedResultsController?
Tags: iphone;core-data;nsfetchedresultscontroller;nspredicate
Question: I was given a framework written by other programmers to access Core Data.
In this situation I receive a pre-loaded NSFetchedResultsController which I need to filter, to display a part of its data.
Here is what I tried:
```NSPredicate *predicate = [NSPredicate predicateWithFormat:@"category==%@", @"current"];
[NSFetchedResultsController deleteCacheWithName:@"Root"];
[myResultController.fetchRequest setPredicate:predicate];
myResultController.fetchedObjects = [myResultController.fetchedObjects filteredArrayUsingPredicate:predicate];
```
And I get an error saying that the object cannot be set: either the setter method is missing, or the object is read-only.
So what's the best way to filter an NSFetchedResultsController which is already loaded, without having to store the filtered data in another array?
Here is the accepted answer: ```fetchedObjects``` is read-only. You cannot set it manually.
What you need to do is to perform a new fetch with your ```myResultController```.
```NSPredicate *predicate = [NSPredicate predicateWithFormat:@"category==%@", @"current"];
[NSFetchedResultsController deleteCacheWithName:@"Root"];
[myResultController.fetchRequest setPredicate:predicate];
//myResultController.fetchedObjects = [myResultController.fetchedObjects filteredArrayUsingPredicate:predicate];
[myResultController performFetch:nil];
```
|
Title: Dummy Soundcard for Amazon linux server
Tags: linux;amazon-web-services;amazon-ec2
Question: I need to use an application which needs a soundcard on an Amazon EC2 instance with the default Ubuntu 16.04 installed on it. The problem is that there's no soundcard available. I've tried everything Google offers on how to create a dummy soundcard so the program runs without problems, but nothing helped because it was outdated. This is what ```lspci``` returns:
```00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 01)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
```
lsmod | grep snd does not return ANYTHING, which makes me think that I might be missing all the modules, not just ```snd-dummy```.
I've been trying to set up a dummy by using the command ```sudo modprobe snd-dummy```, which returns the following error:
```modprobe: FATAL: Module snd-dummy not found in directory /lib/modules/4.4.0-1013-aws
```
Any clues?
Comment: have you tried this already? https://superuser.com/questions/344760/how-to-create-a-dummy-sound-card-device-in-linux-server
Comment: Hey, thanks for the response! unfortunately yes, I've tried it. The problem is that when I execute modprobe, it doesnt find the dummy module :(
Here is another answer: If anyone still has the same issue:
You are probably using an AMI that comes with a kernel compiled without the ```snd-dummy``` module. The ALSA wiki suggests building this module from source (```alsa-driver```), but that advice is out of date.
I was able to run an application that needs a soundcard on EC2 by installing PulseAudio:
```sudo apt install pulseaudio
pulseaudio --start
```
After which I get:
```$ aplay test.wav
Playing WAVE 'test.wav' : Unsigned 8 bit, Rate 22257 Hz, Mono
```
And the soundcard-needing application runs normally. If this is not enough for you, you might need to enable the default sink:
```pactl load-module module-null-sink sink_name=auto_null
pactl set-default-sink auto_null
```
More details here: Linux application fails with "Invalid CTL" and "Unknown PCM"
|
Title: Resize svg image as background in chrome browser
Tags: html;css;google-chrome;svg
Question: I am using an SVG image as a background and stretching it via background-size. I want it to stretch only width-wise. It works perfectly fine in Firefox and IE9+, but not in Chrome. Please suggest how I can achieve this.
```.homecallouts ul li {
background-image: url('blue_arow_callout.svg');
background-size: 100% 100%;
width: 21%;
height: 42px;
}
see the jsbin code
http://jsbin.com/uvijuc/4/
When I resize in Firefox only the width stretches, but in Chrome both width and height stretch. I want only the width to stretch.
Comment: possible duplicate of [background-size:100% 100%; doesn't work properly in Chrome](http://stackoverflow.com/questions/9334095/background-size100-100-doesnt-work-properly-in-chrome)
Here is another answer: Maybe adding preserveAspectRatio="none" to the opening tag in the SVG file could help?
```<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 16.0.4, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
width="282.05px" height="61.974px" viewBox="286.287 26.514 282.05 61.974" enable-background="new 286.287 26.514 282.05 61.974"
xml:space="preserve" preserveAspectRatio="none">
<polygon fill="#0063AF" points="538.337,26.514 286.287,26.514 316.287,57.5 286.287,88.488 538.337,88.488 568.337,57.5 "/>
</svg>
```
JSBin example
Comment for this answer: Ilya Streltsyn I love you... seriously, you probably just saved my life, thank you so much!!!
Here is another answer: It's a Chrome bug. Regression, in fact.
http://code.google.com/p/chromium/issues/detail?id=113414
Comment for this answer: In my case, though, setting `preserveAspectRatio="none"` did help.
Here is another answer: I do not have enough reputation to upvote or comment, so only answering it again will work. I solved a similar case by just adding preserveAspectRatio="none".
```<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="591.42859"
height="250.24918"
id="svg2"
version="1.1"
preserveAspectRatio="none"
inkscape:version="0.48.2 r9819"
sodipodi:docname="background.svg">
```
Here is another answer: I think technically chrome is correct on this one, you need to adjust your background-size values to what you actually want. Keeping them at 100% is forcing the aspect ratio to remain constant.
Comment for this answer: Would be helpful if your answer contained the code to make it the way the OP wants.
Comment for this answer: I have specified a 42px height and the background-size height is also 100%, so why is the height stretching? On the other hand, Firefox does what I want.
Here is another answer: Don't use background-size. What you need to do is have the following values for width, height and preserveAspectRatio in your SVG file.
```<svg width="100%" height="100%" preserveAspectRatio="xMidYMid slice" viewBox="..." />
```
Note that in order for this to work, your SVG needs to have a valid viewBox as well. Which it does appear to do.
|
Title: What is the difference between using triple double quotes (""") and the hash sign (#) for comments in Python?
Tags: python;python-3.x
Question: I am currently learning Python. Before this I worked with Java, where comments were nothing more than the following:
``` //For a single line of code in Java
```
``` /* To write a comment
spanning several lines in Java*/
```
A while ago I used ```Python``` to build an exam project; I created a web system (following tutorials) with the help of ```Django```. In the source files there were comments with 3 double quotes (```"""```), as shown below:
``` """
Django settings for libreria project.
Generated by 'django-admin startproject' using Django 2.1.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.1/ref/settings/
"""
```
```
I took the above directly from a file in my system
```
And on the other hand:
```#For a single-line comment in `Python`
```
Now I find that to write a comment spanning several lines
it is:
```#To write a comment
#spanning several lines; when you place a hash
#at the end of the phrase the IDE generates the left-hand
#hash automatically#
```
Where only hashes are used for comments, whether they span a single line or more.
So:
Do both triple double quotes and hashes work for multi-line comments?
Is there any difference between using one or the other when commenting code?
Are there rules for using one or the other (does it depend on something)?
Here is the accepted answer: Comments, strictly speaking, are made exclusively with the hash sign.
They can be block comments:
```# This is a block comment in Python
# that spans several lines.
#
# This is another paragraph of the block comment
if foo == 8:
    pass
```
By convention they follow these rules:
They apply to some or all of the code that follows them (block).
They are indented at the same level as the code they comment on.
Each line of a block comment starts with a # and a space.
Paragraphs inside a block comment are separated by a line containing only a ```#```
Inline comments:
```foo = 4  # I am an inline comment
```
By convention:
They are defined on the same line as the statement they comment on.
They must be separated from the statement they comment on by at least two spaces.
They must start with a ```#``` followed by a space.
Triple quotes (both double ```"""``` and single ```'''```) are a way to create string literals, which can also be multi-line:
```cad = """I am a string
with several
lines
"""
print(cad)
```
```
```I am a string
with several
lines
```
```
We can also create string literals with just a single double or single quote in Python:
```cad = "Hola"
cad = 'Hola'
```
There is no difference at all between the two forms, but if one is used to delimit the literal, the other can be used inside the literal without escaping:
```cad = "Hola has an 'h' and an 'l'"
```
Although it is common to see "comments" in Python code written as triple-quoted literals not assigned to any variable, they are not real comments. What is true is that the (non-interactive) interpreter ignores such lines (without assignment) when generating and interpreting the bytecode, so they virtually become comments, without really being comments and without being the correct way to do it.
There is one exception to keep in mind: if such a literal is declared on the first line after defining a function, method, or class, it has special functionality; it is what is known as a docstring, or documentation string. These are strings that serve as documentation and a usage guide for that object. The conventions for docstrings are defined in PEP 257.
Summarizing:
The first line must be a brief summary of the object's purpose; it must not explicitly state the object's name or type, and it must always start with a capital letter and end with a period.
If only that line exists, no blank lines should be added before or after it. The quotes must be closed on the same line.
It must not be the function's signature; that is obtained through introspection and would be redundant. The signature should only be specified if the function is written in C/C++ (C/C++ API), where introspection cannot reach.
If there are more lines, the second line must be blank, visually separating the summary from the rest of the description.
The extra lines provide information about the object's calling conventions, its side effects, its return value, etc.
When there is more than one line, the closing triple quote must be alone on the final line, preferably preceded by a blank line.
Although a string literal declared with single or double quotes can be used, by convention (and because docstrings usually span more than one line) triple quotes are used even when they are a single line.
```def sin(x: float, unidad: str = "radian") -> float:
    """Return the sine of x (in radians).

    Keyword arguments:
    unidad -- radian or degree (radian by default)
    """
    pass

def sqrt(x: float) -> float:
    """Return the square root of x."""
    pass
```
This string (besides helping the humans who read the code understand it) can be accessed through the object's special ```__doc__``` attribute. It is used by the builtin ```help```, when the ```-h```/```--help``` argument is passed on invoking a script in the terminal, by IDEs and code editors to show pop-up help while you type, and in general by any other documentation generator or parser:
```
```>>> help(sin)
Help on function sin in module __main__:

sin(x: float, unidad: str = 'radian') -> float
    Return the sine of x (in radians).

    Keyword arguments:
    unidad -- radian or degree (radian by default)
```
```
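As a quick aside (an illustrative snippet added here, not part of the original answer), the docstring is simply the ```__doc__``` attribute of the object, readable without ```help```:

```python
def sqrt(x: float) -> float:
    """Return the square root of x."""
    return x ** 0.5

# The docstring is just an attribute of the function object:
print(sqrt.__doc__)  # Return the square root of x.
print(sqrt(9.0))     # 3.0
```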
Every public package, script, module, method, class, and function should have its docstring defined. They are not necessary for non-public methods, but it does not hurt to have a comment describing what they do.
To finish clarifying, in case anyone who has read this far wonders why Django uses triple quotes for comments... The example you show is not a comment; it is in fact the docstring of the modules that Django generates automatically. In this specific case it is the default docstring for ```settings.py```. Just as with functions, methods, etc., documentation parsers make use of it, among them ```help```, the ```-h```/```--help``` command-line argument, etc.:
```
```>>> import settings
>>> help(settings)
Help on module settings:
NAME
settings - Django settings for ExampleApp project.
DESCRIPTION
Generated by 'django-admin startproject' using Django 1.9.6.
For more information on this file, see
https://docs.djangoproject.com/en/1.9/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.9/ref/settings/
DATA
ALLOWED_HOSTS = []
AUTH_PASSWORD_VALIDATORS = [{'NAME': 'django.contrib.auth.password_val...
....
```
```
Comment for this answer: Thank you very much! Excellent explanation. I did not know about docstrings; can this be used on functions or classes? Is this similar to the information IDEs show when you use a method or function?
Comment for this answer: Docstrings can be defined on classes, functions/methods, modules, scripts (executable modules), and packages. And yes, indeed, IDEs and editors use docstrings (in addition to introspection, type hints, etc., depending on the IDE) to show help information; if you put the two example functions above into an editor like VS Code with the Python extension installed, typing `sin(` will show the docstring information.
Comment for this answer: I just tried it and I'm blown away! Haha, this is very useful when you want to use functions you created yourself. I thought the question was a bit silly, but I've found really valuable information! Thank you very much for your time writing such an excellent answer! :)
|
Title: Verifying if an email contains an S/MIME signature in Python
Tags: python;smime
Question: Is it possible, using Python without OS calls, to detect if an incoming email message is signed? (I don't care about the certificate validity; I just want to know if the message contains signed content.)
The corresponding openssl command I'd like to reproduce is the following:
```openssl smime -verify -in /tmp/mails/d4fa5d0f-2250-4acd-8d3d-14c4e9743392```
I can run ```os.system('openssl smime -verify -in /tmp/mails/d4fa5d0f-2250-4acd-8d3d-14c4e9743392')``` but I would like to use a Python library instead if possible.
All the examples I have found imply that I have a public key to decode the certificate; I only want to know if the email contains signed content or not.
Comment: What about https://pypi.python.org/pypi/pyOpenSSL
Comment: It is possible without calling `openssl`. Just use a openssl library inside python.
Here is another answer: ```import email
msg = email.message_from_file(open('/home/mailuser/testmail.mail'))
if 'signed' in msg.get_content_type():
<your code here>
```
...works for me
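For what it's worth, the ```get_content_type()``` check above only looks at the top-level type and can miss opaque-signed mail (```application/pkcs7-mime```). A slightly broader stdlib-only sketch (my own illustration, still doing no signature validation) walks the whole MIME tree:

```python
# Illustrative sketch: detect S/MIME signature parts with the stdlib
# `email` package. Handles clear-signed (multipart/signed) and
# opaque-signed (application/pkcs7-mime) mail; does NOT verify anything.
import email

SIGNED_TYPES = {
    "multipart/signed",
    "application/pkcs7-mime",
    "application/pkcs7-signature",
    "application/x-pkcs7-signature",
}

def is_smime_signed(raw_message: str) -> bool:
    msg = email.message_from_string(raw_message)
    return any(part.get_content_type() in SIGNED_TYPES for part in msg.walk())
```

To check a file on disk, parse it with ```email.message_from_binary_file``` instead and run it through the same test.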
|
Title: discord.py bot won't play longer music in vc: OSError: [Errno 9] Bad file descriptor
Tags: python;discord;discord.py
Question: I'm trying to make my bot play music in a voice channel. Playing shorter videos works, but when the videos get longer (~5 min), the bot doesn't play them and prints the following error: ```OSError: [Errno 9] Bad file descriptor```. This is my code:
```elif message.content.casefold().startswith("rachel play"):
os.chdir(r"/home/pi/rachel")
song_there = os.path.isfile("song.mp3")
try:
if song_there:
os.remove("song.mp3")
except PermissionError:
await message.send("Warte")
voice = discord.utils.get(self.bot.voice_clients, guild=message.guild)
parts = message.content.split()
url = parts[-1]
channel = message.author.voice.channel
ydl_opts = {
'format': 'bestaudio/best',
'postprocessors': [{
'key': 'FFmpegExtractAudio',
'preferredcodec': 'mp3',
'preferredquality': '192',
}]
}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
ydl.download([url])
for file in os.listdir("./"):
if file.endswith(".mp3"):
os.rename(file, "song.mp3")
voice.play(discord.FFmpegPCMAudio("/home/pi/rachel/song.mp3"))
```
Thanks in advance for any help!
Comment: sure can^^
`Exception in voice thread Thread-5
Traceback (most recent call last):
File "/home/pi/.local/lib/python3.7/site-packages/discord/player.py", line 603, in run
self._do_run()
File "/home/pi/.local/lib/python3.7/site-packages/discord/player.py", line 596, in _do_run
play_audio(data, encode=not self.source.is_opus())
File "/home/pi/.local/lib/python3.7/site-packages/discord/voice_client.py", line 638, in send_audio_packet
self.socket.sendto(packet, (self.endpoint_ip, self.voice_port))
OSError: [Errno 9] Bad file descriptor`
I hope this helps! @Awesomepotato29
Comment: it is stable. I've looked at it and I would also get a notification if it would be unstable. You can also add me on discord if you want (would make responding easier: Spleens#0002) @Awesomepotato29
Comment: @Awesomepotato29 should I maybe try to get a better connection on my raspberry pi then? I don't know, how I could do that, but I think I will figure something out if necessary
Comment: I already rebooted many times, tried different types of code which all lead to the same problem. Do you by chance know anybody with experience on Pis and also with discord.py? Or do you know a community I could ask? I already asked on many servers and no one could really help me. This is really bugging me because I can't understand what really is causing this error.. @Awesomepotato29
Comment: Could you provide the full stack trace, not just the error type?
Comment: Hmm, the error seems to be network-related. Have you made sure your internet connection is stable?
Comment: It seems to me that discord.py is trying to write a packet with the sound you are playing to a socket. For some reason, this returns a bad file descriptor. I suspect this is due to network-related problems, but I'm not entirely clear on the inner workings of discord.py.
Comment: I have no experience with Raspberry Pis, sorry. The best I can do is recommend a reboot.
|
Title: jQuery dynamic content calculate
Tags: javascript;jquery
Question: My code works perfectly with the static part, but when I add a new row it won't calculate the field. What am I doing wrong?
It should also calculate the dynamic fields which are added via the Add Row button.
Live DEMO
```<div class="container">
<table id="t1" class="table table-hover">
<tr>
<th class="text-center">Start Time</th>
<th class="text-center">End Time</th>
<th class="text-center">Stunden</th>
<th> <button type="button" class="addRow">Add Row</button></th>
</tr>
<tr id="row1" class="item">
<td><input name="starts[]" class="starts form-control" ></td>
<td><input name="ends[]" class="ends form-control" ></td>
<td><input name="stunden[]" class="stunden form-control" readonly="readonly" ></td>
</tr>
</table>
</div>
```
js
```$(document).ready(function(){
$('.item').keyup(function(){
var starts = $(this).find(".starts").val();
var ends = $(this).find(".ends").val();
var stunden;
s = starts.split(':');
e = ends.split(':');
min = e[1]-s[1];
hour_carry = 0;
if(min < 0){
min += 60;
hour_carry += 1;
}
hour = e[0]-s[0]-hour_carry;
min = ((min/60)*100).toString()
stunden = hour + "." + min.substring(0,2);
$(this).find(".stunden").val(stunden);
});
// function for adding a new row
var r = 1;
$('.addRow').click(function () {
if(r<10){
r++;
$('#t1').append('<tr id="row'+ r +'" class="item"><td><input name="starts[]" class="starts form-control" ></td><td><input name="ends[]" class="ends form-control" ></td><td><input name="stunden[]" class="stunden form-control" readonly="readonly" ></td></tr>');
}
});
// remove row when X is clicked
$(document).on("click", ".btn_remove", function () {
r--;
var button_id = $(this).attr("id");
$("#row" + button_id + '').remove();
});
});
```
Comment: i have deleted that second static row and still doesn't work! It seems to me as it doesn't see this new dynamic rows which are being added.
Comment: first off, if you already have 2 rows, you should be initialising r at 3, not 1, ids should be unique
Here is the accepted answer: The best thing would be to use the .on() event which is used to attach one or more event handlers to the element:
```$(document).on('keyup', '.item', function(){
    //your code
});
```
Here is another answer: When you dynamically add a new row to your table, the "keyup" event wont automatically be bound to it. Essentially you need to wrap the "keyup" event binding into a function, then call it after you've added the new row on. Something along the lines of:
```function rebindKeyup(){
    $('.item').keyup(function(){
        // Key up logic
    });
}
```
Comment for this answer: Agramer has the best answer for me! But Thank you!
Comment for this answer: No problem, glad you're sorted!
|
Title: How to get Postback Data on LinkButton Click Event?
Tags: asp.net;.net
Question: I have a LinkButton in an aspx page.
```<asp:TextBox ID="textBoxNote" runat="server" />
<asp:LinkButton ID="linkButtonUpdateNote" Text="Update" OnClick="ButtonUpdateNoteClicked" runat="server" />
```
the click event handler has the following code
``` protected void ButtonUpdateNoteClicked(object sender, EventArgs e)
{
var note = textBoxNote.Text;
}
```
On postback, textBoxNote.Text is empty. I don't get the posted value. How can I get it?
Comment: I set the value on the textbox only if it is not a postback.
Comment: Does the `TextBox` gets an initial value somewhere? Remember to do that only on the first load and not on postbacks. Otherwise you would override the value always. The initial value does not need to be set on postbacks if you have enabled `ViewState`(default).
Here is the accepted answer: It seems like you are possibly resetting the value in your ```Page_Load```.
Check that you are using ```IsPostback``` check in the ```Page_Load``` function. see - http://msdn.microsoft.com/en-us/library/system.web.ui.page.ispostback.aspx
```private void Page_Load()
{
if (!IsPostBack)
{
DoThisOnce();
}
DoThisOnEachPostback();
}
```
|
Title: Image Matching for homogeneous surfaces
Tags: image;opencv;image-processing;sift;surf
Question: I have built an image-matching code via OpenCV, and the feature-detection method that I am using currently is SIFT. The main problem that I am experiencing is that if the image has a homogeneous surface, the matches will either fail or give a false positive.
Is there another way that I can match images that are homogeneous in nature (i.e. little or no features identified)?
Comment: Can you add those images as examples?
Comment: Hi, I am sorry for the late response. I am new to this forum and did not see your comment. So i am referring to something such as a ceramic tile for example. If I take two images of the same ceramic tile, the algorithm fails to find enough feature points to match the images. I was wondering if there is a way that I can increase those feature points
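When feature detectors starve on low-texture surfaces such as a plain ceramic tile, a common fallback is dense (area-based) matching rather than sparse keypoints. Below is a rough, pure-NumPy sketch of normalized cross-correlation template matching, purely illustrative; in practice OpenCV's ```cv2.matchTemplate``` with ```cv2.TM_CCOEFF_NORMED``` does the same thing much faster:

```python
# Sketch: locate `template` inside `image` via normalized cross-correlation.
# Works even when there are no distinctive keypoints, as long as there is
# some pixel-level variation. Brute-force and slow; for illustration only.
import numpy as np

def match_template_ncc(image, template):
    """Return ((row, col), score) of the best match of template in image."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            # Zero denominator means a perfectly flat window: no correlation.
            score = (wz * t).sum() / denom if denom else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

Scores near 1.0 indicate a confident match; on a truly featureless surface even NCC flattens out, at which point changing the capture setup (e.g. projecting texture or structured light onto the surface) is often the only reliable fix.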
|
Title: Cannot format flash drive due to write protection
Tags: 12.04;usb;windows;windows-7;windows-xp
Question: Okay, just the other day, I was trying to install Ubuntu on an 8GB flash drive. Near the end of the installation process, the install failed. The flash drive is now entirely dysfunctional.
At first, the flash drive would mount on Windows, but not on Ubuntu. In Ubuntu, by Disk Utility, GParted, and Terminal, I was not able to format the flash drive, it only returned errors. When I took it to Windows, it will mount it, but I can't move any files because it will return an error that the flash drive is write protected. Same thing happens when I try to format it, both by the formatting tool and Command Prompt. I tried another format tool I found online after searching elsewhere for this problem, but the program crashed, and now Windows won't even mount the flash drive, saying that I need to format it.
Comment: http://askubuntu.com/questions/502488/usb-turn-write-protection-off-forced/559071#559071
Comment: Sounds as if the flash drive failed, not sure there is a fix.
Here is another answer: See if there is a switch on it. If there is try moving it. If not continue reading.
I had two flash drives that this happened to. I've tried everything available, and nothing worked. Formatting with Linux, adjusting registry settings in Windows, nothing. Until I came across OnBelay.
OnBelay: download, install, and use it to try a low-level format of the drive. If it works, great; otherwise your USB flash drive, well... OnBelay is a Windows application, since you mentioned that you're running Windows as well.
If it doesn't work, I would recommend just going out and purchasing a new USB drive.
Note: It only worked on one of my drives, and I had to buy another one. Hope it will help you.
Comment for this answer: suggestion:Tried software/firmware from manufacturers?
|
Title: Cannot install OpsHub Visual Studio Online Migration Utility
Tags: opshub
Question: Installing the opshub migration tool while logged in as user 'Administrator' fails during setup with error message
```ops-003: You are not running the installer with appropriate settings! Please verify the user running the installer is an Admin user.
```
I searched for this error message in google and looked in the Q&A Visual Studio Gallery site for the migration tool but found nothing concerning this error.
I am trying to install the utility on a virtual server logged in as Administrator.
Download version is:
OVSMU-V51.243.225.55
Operating system:
Windows 2008 R2 Datacenter with Service Pack 1 (This is the machine where TFS 2012 is installed).
Thanks for any help.
Comment: We are having the same issue today. Any resolution to this?
Comment: Of course: I start the executable and after confirming the security warning about running files from the internet, the OPS-003 message appears and the setup stops. Even running the setup explicitly via the context menu "Run as Administrator" (which should be the same, as I'm logged in as "Administrator"), I get the same result. There is no welcome dialog or any other setup wizard appearing, just the 003 error.
Comment: I have solved this issue by installing OpsHub on a separate machine, which follows the OpsHub guideline to not install OpsHub on the same machine where TFS is running. This way, no installation issues occurred and I was able to successfully transfer the data to Visual Studio Online.
Comment: Hi, are you facing this during the installation process? Can you divulge the details about the step where you get this error?
Comment: can you send screen shot to [email protected], support will verify and update proper answer/resolution here.
|
Title: Merging a long JSON with another using jq
Tags: json;bash;jq
Question: I need to do some CURLs from which I'm building one output JSON. This is how I managed to perform a merge using a function:
```...
ADDITIONALJSONDATA="{\"$DATATYPE\" : "$DATA"}"
MERGEDENTRY=$(echo $SOURCE | jq --argjson json "$ADDITIONALJSONDATA" '. += $json' | tr -d '\r\n')
...
```
It seems that when the JSON inside ```$DATA``` is big enough, I get an ```Argument list too long``` error. Is it possible (in a nice way) to treat the JSON to merge as a single argument here?
Comment: As an aside -- all-caps variable names are in a reserved namespace; see http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap08.html for POSIX guidelines, specifying that all-caps names are used for shell and environment variable names with meaning to the shell and POSIX-defined utilities, whereas lowercase names are reserved for applications and guaranteed not to conflict.
Comment: Also, much better to use `jq -c` to tell jq not to put extra whitespace in to begin with rather than trying to use `tr` to take it out after-the-fact. You might also want the `-j` argument to remove trailing whitespace, though this isn't always pertinent in a command substitution since those remove a trailing newline if one exists anyhow.
Here is another answer: You could use process substitution and ```--slurpfile``` option to solve your issue:
```MERGEDENTRY=$(echo "$SOURCE" | jq --slurpfile json <(printf '%s\n' "$ADDITIONALJSONDATA") '. += $json[0]' | tr -d '\r\n')
```
As per Charles' suggestion, we can simplify further by using ```<<<``` instead of ```echo "$SOURCE" | ...```:
```MERGEDENTRY=$(jq --slurpfile json <(printf '%s\n' "$ADDITIONALJSONDATA") '. += $json[0]' <<< "$SOURCE" | tr -d '\r\n')
```
Here is another answer:
If, as seems to be the case here, you already have $DATA and $DATATYPE, there is no need for ADDITIONALJSONDATA
In general, using ```tr -d '\r\n'``` here is very bad practice, e.g. because it could scrunch 1 and 2 together to make 12
Hopefully the following will meet your requirements:
```MERGEDENTRY=$(jq -c --arg TYPE "$DATATYPE" --slurpfile A <(printf '%s\n' "$DATA") '
. += {($TYPE): $A[0]}' <<< "$SOURCE")
```
(We need to use ```$A[0]``` here because "slurping" puts $DATA into an array.)
Comment for this answer: Can you test whether process substitution is working without using jq? Have you tried putting $DATA into a file so you can check the jq command without using process substitution?
Comment for this answer: I'm getting `Bad JSON in --slurpfile A /proc/5992/fd/63: Could not open /proc/5992/fd/63: No such file or directory` Can MINGW64 and WinOS be a reason for that?
This is the SOURCE:
`{
"id": "devlocal",
"name": "devlocal",
...}`
This is the DATA:
`[{
"username": "52352",
"enabled": true,
},
..]`
Here is another answer: I managed to do it far more easily:
```MERGEDENTRY=${source::-1}","\"$data_type\"":"$data","```
And then just keep in mind that we need to terminate the JSON payload properly by replacing ```'``` with ```}```.
I can do this because I'm always adding the new JSON payload at the end of the existing one.
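If the shell quoting and ARG_MAX limits keep getting in the way, an alternative (my own sketch, not from the thread) is to do the merge with Python's stdlib ```json``` module, so the large payload can be read from a file instead of being passed as an argument:

```python
# Illustrative alternative to the jq merge: parse both JSON documents in
# Python and attach the payload under the given key. Reading $DATA from a
# file (rather than an argument) sidesteps "Argument list too long" entirely.
import json

def merge_json(source_text: str, data_type: str, data_text: str) -> str:
    """Return source with {data_type: data} merged into its top level."""
    merged = json.loads(source_text)
    merged[data_type] = json.loads(data_text)
    # Compact separators mirror the single-line output the script expects.
    return json.dumps(merged, separators=(",", ":"))
```

Wrapped in a small script that takes file paths, this keeps the JSON out of the argument list altogether.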
|
Title: Is this php code safe and how to hide the javascript link?
Tags: php;javascript;security
Question: People will be able to see the JavaScript confirm link in the status bar of the browser. So, is the code below secure enough, and how can I hide the JavaScript so it does NOT show this:
```javascript:a = confirm('Are you sure you want to purchase this reward?'); if (a) { location.href='./?page=vote&act=rewards&id=8'} else void(0)```
```Script:```
```if ($_SESSION['nVotePoints'] >= $data['nCost']) {
$url = './?page=vote&act=rewards&id=' . $data['id'];
    $confirm = "javascript:a = confirm('Are you sure you want to purchase this reward?'); if (a) { location.href='{$url}'} else void(0)";
$data['URL'] = $confirm;
}
else
    $data['URL'] = 'javascript: alert(\'' . stripslashes(Template::GetLangVar('VOTE_NEED_VP')) . '\');';
$column[$i++] = Template::Load('vote-reward-column', $data);
```
Kind Regards.
Comment: You're using the HTTP GET method to change state of the server (="purchasing reward") and that is a bad practice. State-changing actions should always be called via POST, PUT or DELETE. GET actions can be easily called by bots, crawlers, they are stored in browser history etc.
Comment: In that case it's still vulnerable with CSRF.
Comment: Exposing the javascript does not seem to be a problem. What do you think is insecure with exposing it? What could the user exploit? I think the end point, the script the user will arrive at when clicking the link will do the deduction from an account. In that case all they can "exploit" is pay for something on a guessed link if they manipulate the url. Which will cost them. So, I don't really see the insecurity of this?
Comment: You can obfuscate it and make it look weird and unreadable with a glance but you cannot completely hide a client side script i.e. JS from the client
Comment: Go ahead and screw it up, that will be best learning exercise when you know it screwed up and you have to fix it. Try it
Comment: If I leave it like that is it secure enough ? I don't care if they will be able to see the actual link to the reward but I care if people won't able to exploit something that's why. So ?
Comment: Is it possible to show me how to edit my code, so that I wont screw it up ? I will be really grateful if you can help me with it. I really want to avoid screwing it up.
Comment: I can't since I really don't have a clue and will be completely useless. If you can do it for me. It will be really appreciated. If not, it's okay.
Comment: I know but thats how it is made. Also, only registered users can access the link for purchase reward. I just need to secure the javascript mainly in my script ...
Comment: So, what I should do to prevent this ?
Comment: You should put your javascript in a function, and add it with an event handler to the link, instead of putting the javascript directly in the href attribute. This way it will not show up in the status bar of the browser when you hover over the link.
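A minimal sketch of that suggestion (the `.reward-link` class and `data-url` attribute are assumptions, not part of the original markup; the `typeof document` guard only lets the snippet load outside a browser):

```javascript
// Hypothetical sketch: keep the confirm logic in a named function and
// attach it with an event handler, instead of embedding it in href.
function confirmPurchase(url) {
  if (confirm('Are you sure you want to purchase this reward?')) {
    location.href = url;
  }
}

// Assumes markup like:
// <a class="reward-link" data-url="./?page=vote&act=rewards&id=8">...</a>
if (typeof document !== 'undefined') {
  document.querySelector('.reward-link').addEventListener('click', function (e) {
    e.preventDefault();
    confirmPurchase(this.dataset.url);
  });
}
```

Note this only keeps the URL out of the status bar; it is cosmetic and does not by itself secure the purchase.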
Here is another answer: Keep in mind that even if you could hide the JavaScript, this would not be a secure system. Someone can fire up WireShark, IE's F12 Developer Tools, Firefox' Firebug, or Chrome's Developer Tools and see exactly which page things go to, or debug any call that touches DOM, even if your code is complete gibberish.
If you want to secure things like this you can't trust the client, you need to do it on the server. Otherwise someone can write their own code that calls your service, runs no JavaScript at all, and completely bypasses your validation logic.
|
Title: Cannot convert MotorCursor object to list in async function using standard method
Tags: python;pyqt5;python-asyncio;tornado-motor
Question: I have been trying to have an async function get data from a Motor database in Python and get the list of data corresponding to the search. Here is the function to get the data and the one to print it:
```async def do_find_by_run_name(run_name):
"""
:param run_name: the name of the run being searched for (string)
:return: MotorCursor for run found
"""
cursor = db.data_collection.find({"run_name":run_name})
loop = asyncio.get_event_loop()
return cursor
async def print_arr(cursor):
for d in await cursor.to_list(length=2):
pprint.pprint(d)
```
I have a PyQt slot which I want to use to call the find_by_run_name function on a button press. Here is the code for that slot:
```@pyqtSlot()
def on_find_runs_button_clicked(self):
try:
new_loop = asyncio.get_event_loop()
d = new_loop.run_until_complete(server.do_find_by_run_name("default"))
print(d)
new_loop = asyncio.get_event_loop()
v = new_loop.run_until_complete(server.print_arr(d))
except Exception as err:
try:
raise TypeError("Again !?!")
except:
pass
traceback.print_exc()
```
When I press the button corresponding to this slot I see the following in my terminal:
```AsyncIOMotorCursor(<pymongo.cursor.Cursor object at 0x06F51EB0>)
Traceback (most recent call last):
File "C:/Users/Rohan Doshi/Documents/websockets/server\GUI.py", line 93, in on_find_runs_button_clicked
v = new_loop.run_until_complete(server.print_arr(d))
File "C:\Users\Rohan Doshi\AppData\Local\Programs\Python\Python36-32\lib\asyncio\base_events.py", line 468, in run_until_complete
return future.result()
File "C:/Users/Rohan Doshi/Documents/websockets/server\server.py", line 130, in print_arr
for d in await cursor.to_list(length=2):
RuntimeError: Task <Task pending coro=<print_arr() running at C:/Users/Rohan Doshi/Documents/websockets/server\server.py:130> cb=[_run_until_complete_cb() at C:\Users\Rohan Doshi\AppData\Local\Programs\Python\Python36-32\lib\asyncio\base_events.py:177]> got Future <Future pending> attached to a different loop
```
This indicates to me that the do_find_by_run_name function ran properly but that there is an issue with running the print_arr function.
In an attempt to fix this issue, do_find_by_run_name was changed to:
```async def do_find_by_run_name(run_name):
"""
:param run_name: the name of the run being searched for (string)
:return: MotorCursor for run found
"""
cursor = db.data_collection.find({"run_name":run_name})
print(cursor)
for d in await cursor.to_list(length=2):
pprint.pprint(d)
```
and I changed my PyQt slot to:
```@pyqtSlot()
def on_find_runs_button_clicked(self):
try:
new_loop = asyncio.get_event_loop()
future = asyncio.run_coroutine_threadsafe(
server.do_find_by_run_name("default"),
new_loop
)
assert future.result(timeout=10)
except Exception as err:
try:
raise TypeError("Again !?!")
except:
pass
traceback.print_exc()
```
When this change is made, I don't see anything printed. It seems like the do_find_by_run_name coroutine is never executed.
Comment: What exactly do you mean by "nothing happens"? If you add a `print("x")` after the `async for` loop in `do_find_by_run_name`, does it get executed? Perhaps an exception is being raised and hidden by some outer try/except block, or propagated to the top-level of a background task.
Comment: Wrap the loop in `try ... except Exception: import traceback; traceback.print_exc()`. This will show you the error that you're getting.
Comment: The new code completely removes the `async for` from `do_find_by_run_name` and reveals what is possibly a different issue. It seems that you have two event loops. If you have a call to `new_event_loop` or something like that, remove it. Instead, look into using [`asyncio.run_coroutine_threadsafe`](https://docs.python.org/3/library/asyncio-task.html#asyncio.run_coroutine_threadsafe) to pass a coroutine to an event loop already running in another thread.
Comment: You need to call the `result()` method on the object returned by `run_coroutine_threadsafe` to actually wait for the result to become available. Please refer to examples in the [documentation](https://docs.python.org/3/library/asyncio-task.html#asyncio.run_coroutine_threadsafe) I linked the first time. Also, you don't need a `try/raise TypeError/except: pass` block inside each `try/except`.
Comment: You are passing the wrong event loop to `run_coroutine_threadsafe` - you should be passing it the actual event loop used by asyncio in another thread, not the one set up for this thread.
Comment: It does, but the `RuntimeError` is telling you that the event loop is running in a *different* thread. That's the event loop you need to submit the coroutine to. It's really hard to help you without understanding the structure of your application and how you use asyncio.
Comment: Nice to hear! Good luck with asyncio.
Comment: I added a print statement before and after the for loop. The print statement before loop gets executed, and I see the text in my terminal. The statement after the loop does not get executed, and my program hangs. If I remove the for loop entirely the do_find_by_run_name function runs completely and returns a MotorCursor object, and the program does not hang.
Comment: I should also mention that I am calling this function from a PyQt5 slot
Comment: I followed your advice and added this statement. I have edited my post with a more clear issue.
Comment: I used the run_coroutine_threadsafe function as suggested, but it seems like the coroutine never gets executed. I put these changes in another edit to the post.
Comment: I looked at the documentation and changed my try block (see post edits)
No matter what I change timeout to I always get a concurrent.futures._base.TimeoutError, which means that my coroutine never executes.
Comment: Could you please elaborate I don't fully understand. Does asyncio.get_event_loop() not get the event loop in the current thread?
Comment: Thanks! I finally found the issue. There was another event loop called in another function where I was starting a Motor Client and a tornado web application, and I was able to use that loop in the run_coroutine_threadsafe function.
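The pattern from that last comment can be sketched as follows. This is a minimal stand-alone illustration, with the Motor query replaced by a stub coroutine and `background_loop` standing in for the loop already started for the Motor client / Tornado app:

```python
import asyncio
import threading

# One event loop runs forever in a background thread (as it would for the
# Motor client / Tornado app in the real program).
background_loop = asyncio.new_event_loop()

def _run_loop(loop):
    asyncio.set_event_loop(loop)
    loop.run_forever()

threading.Thread(target=_run_loop, args=(background_loop,), daemon=True).start()

async def do_find_by_run_name(run_name):
    # Stand-in for the Motor query; the real code would await
    # db.data_collection.find(...).to_list(length=2) here.
    return [{"run_name": run_name}]

def on_find_runs_button_clicked():
    # Submit the coroutine to the loop that is already running in the
    # background thread -- not to a freshly created loop.
    future = asyncio.run_coroutine_threadsafe(
        do_find_by_run_name("default"), background_loop)
    return future.result(timeout=10)

print(on_find_runs_button_clicked())  # [{'run_name': 'default'}]
```

The key point is that `run_coroutine_threadsafe` must be given the loop that is actually running (in its own thread), not a loop freshly obtained in the Qt thread.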
|
Title: VSTO Template UI doesn't appear when 2 separate DLLs are in the project
Tags: c#;user-interface;vsto
Question: I have been trying to get a VSTO template for Excel to install, but I can't see the VSTO ribbon buttons or task pane when I open the template after the install. At first I didn't think it was [email protected]. The project works fine in Visual Studio.
I followed the article here http://msdn.microsoft.com/en-us/library/ff937654.aspx & as above the UI elements were not visible. I then followed the same article for a very basic template project & it worked fine.
I removed all the code from the UI of my template project, recreated the install package & it installed & I could see the UI. I uncommented the code till I found the lines which were preventing me from seeing the UI.
There are 2 separate DLLs that the template uses; they are both in the references of the template project, they are both showing as detected dependencies in the install project, & they are both placed in the install directory of the template. The problem is the UI doesn't appear if I use the DLLs in my template's code. I just have to attempt to create an instance of one of the 2 DLLs' classes and the UI stops appearing.
Does anyone know why this may be happening?
Here is another answer: Probably you have not included the following -
```using System.Runtime.InteropServices
```
|
Title: Error when importing mysql database (about 80mb's) using BigDump
Tags: php;mysql;phpmyadmin;mysqldump
Question: So I have this mysql database that is way too big to import via phpMyAdmin. When I set it up with BigDump I get this error right off the bat
```
Stopped at the line 339.
```
At this place the current query includes more than 300 dump lines. That can happen if your dump file was created by some tool which doesn't place a semicolon followed by a linebreak at the end of each query, or if your dump contains extended inserts.
So after 300 lines with no break it crashes. I went in and pasted this string in line 200 just to make sure that much was correct:
```INSERT DELAYED INTO `invites_statistic`
(`user_id`,`purchaseid`,`prodid`,`reg_length`,`invites_count`,`used`,`code`)
VALUES
```
I tried changing the length of the "Maximum length of created query" in phpmyadmin but then I was getting even more errors telling me I couldn't have that string in certain places.
I jumped around the SQL file adding the previous string a few lines before every time it broke, and that was working, but this is a HUGE file (400k lines). Anyone know a good solution? Am I doomed?
Comment: Try redoing the dump with `--skip-extended-insert`, so each row gets its own insert statement, rather than multiple rows for each insert. It'll make the dump file much larger, but will vastly decrease the per-insert size.
Comment: Only if something else is using the database at the time you're loading the dump. Otherwise it's useless.
Comment: Should I still use delayed inserts?
Comment: Thanks for the help! I will post the solution now.
Here is another answer: I solved this problem by changing the ```$max_query_lines``` value from 300 to a higher number.
```// How many lines may be considered to be one query (except text lines)
$max_query_lines = 10000;
```
|
Title: GLEW _First was nullptr
Tags: c++;glfw;glew
Question: So I have been working on a game in C++ with GLFW and GLEW, and everything was going fine until I implemented GLEW. Now I get an error that I have never encountered before, and I don't have the foggiest idea what to do; I have googled it and done some research, but to no avail. I have noticed that in the stack frame section it says something to do with this line of code in my main.cpp
```std::cout << glGetString(GL_VERSION) << std::endl;
```
Also it says something to do with memory. I'll leave the rest of my code down below and if there's any info you need just ask and I will try my best to provide it
So I just discovered that if I take out
```std::cout << glGetString(GL_VERSION) << std::endl;
```
then it works; however, the window isn't created.
Where do I go from here?
Any idea?
``` #include "src\graphics\window.h"
int main() {
using namespace benji;
using namespace graphics;
Window window("Benji Engine", 1280, 720);
glClearColor(0.2f, 0.3f, 0.8f, 1.0f);
std::cout << glGetString(GL_VERSION) << std::endl;
while (!window.closed()) {
std::cout << window.getWidth() << ", " << window.getHeight() << std::endl;
window.clear();
glBegin(GL_TRIANGLES);
glVertex2f(-0.5f, -0.5f);
glVertex2f(0.0f, 0.5f);
glVertex2f(0.5f, -0.5f);
glEnd();
window.update();
}
return 0;
}
```
main.h
``` #pragma once
class main
{
public:
main();
~main();
};
```
window.cpp
```#include "window.h"
namespace benji { namespace graphics {
void windowResize(GLFWwindow *window, int width, int height);
Window::Window(const char *title, int width, int height) {
m_Title = title;
m_Width = width;
m_Height = height;
if (!init()) {
glfwTerminate();
}
}
Window::~Window() {
glfwTerminate();
}
bool Window::init() {
if (!glfwInit()) {
std::cout << "Failed to initialize GLFW!" << std::endl;
return false;
}
m_Window = glfwCreateWindow(m_Width, m_Height, m_Title, NULL, NULL);
if (!m_Window) {
std::cout << "Failed to create GLFW window!" << std::endl;
return false;
}
glfwMakeContextCurrent(m_Window);
glfwSetWindowSizeCallback(m_Window, windowResize);
if (glewInit != GLEW_OK) {
std::cout << "GLEW FAILED!" << std::endl;
return false;
}
return true;
}
void Window::clear() const {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
void Window::update(){
glfwPollEvents();
glfwSwapBuffers(m_Window);
}
bool Window::closed() const {
return glfwWindowShouldClose(m_Window) == 1;
}
void windowResize(GLFWwindow *window, int width, int height) {
glViewport(0, 0, width, height);
}
}}
```
window.h
``` #pragma once
#include <iostream>
#include <GL\glew.h>
#include <GLFW\glfw3.h>
namespace benji {
namespace graphics {
class Window {
private:
const char *m_Title;
int m_Width, m_Height;
GLFWwindow *m_Window;
bool m_Closed;
public:
Window(const char *title, int width, int height);
~Window();
bool closed() const;
void update();
void clear() const;
int getWidth() const {
return m_Width;
}
int getHeight() const { return m_Height; }
private:
bool init();
};
}
}
```
Comment: What is the full error? And do you have a debug callstack or any other information?
Comment: If `glGetString` is indeed crashing, it is because your OpenGL has not been properly initialized. Are you getting any error outputs during your `Window::init`? This is also supported by the fact that when you remove the call to `glGetString` it does not crash, but no window is opened. See also "[Why Could glGetString(GL_VERSION) Be Causing a Seg Fault?](http://stackoverflow.com/questions/6288759/why-could-glgetstringgl-version-be-causing-a-seg-fault)"
Comment: The only other error I can see is Exception thrown: read access violation.
_First was nullptr.
Comment: I have been following a tutorial to initialize OpenGL as it's the first time that I have done it and well it seems to be right to me,,, so im not sure.
Here is another answer: In Window.cpp:
```if (glewInit != GLEW_OK) {
std::cout << "GLEW FAILED!" << std::endl;
return false;
}
```
```glewInit()``` is a function, not a variable. You need to call it as a function. I'm surprised this even compiled.
All other OpenGL functions introduced after version 1.1 will throw errors to the effect of ```ACCESS_VIOLATION READING ADDRESS 0x00000000``` or some similar error, because if ```glewInit()``` is not properly called, none of the function macros provided by GLEW will point to valid function pointers.
Comment for this answer: `I'm surprised this even compiled`, so am I if this wasn't just an error in copying the code over to the question. But good catch, I managed to look right past it multiple times.
Comment for this answer: @ssell I stopped using GLEW a while ago (switched to glBinding) and can't remember if GLEW makes everything decay to function pointers, or just the `gl*` functions themselves. If it does it for everything, that would at least explain why the program compiles (but doesn't run correctly).
Comment for this answer: @RyanMidgley I'm not going to make recommendations on that front. I personally like the interface for glBinding more than I like the interface for GLEW, but not everyone is going to feel the same way.
Comment for this answer: So would you recommend not using GLEW?
Comment for this answer: Just gonna let you know how I fixed it, I forgot the parenthesis after glewInit. Something really insignificant but that's the issue haha
|
Title: NetBeans C++: Linker Can't Find External Libraries Specified in Linker Options
Tags: c++;mingw;linker-errors;netbeans-8
Question: I've been trying to learn how to use NetBeans as a C++ development environment. I installed NetBeans 8.2, installed MinGW, and compiled a simple Hello World program to make sure everything works. I then decided to try to compile an old OpenGL project (based on this tutorial) that I had up-and-running in Visual Studio. Unfortunately, I keep getting errors saying the linker can't find the glew32 or glut32 library files:
```c:/mingw/bin/../lib/gcc/mingw32/5.3.0/../../../../mingw32/bin/ld.exe: cannot find -lglew32
c:/mingw/bin/../lib/gcc/mingw32/5.3.0/../../../../mingw32/bin/ld.exe: cannot find -lglut32
```
As best as I can tell, however, I've set all of the necessary linker options;
screen-shot here.
What am I missing here?
My project's compile command as stated in the output pane:
```g++ -o dist/Debug/MinGW32-Windows/opengl_tutorial build/Debug/MinGW32-Windows/nbproject/Main.o build/Debug/MinGW32-Windows/nbproject/ReadTGA.o -L\"C\:/C++\ Libraries/glew-1.13.0/lib/Release/Win32\" -L\"C\:/MinGW/lib\" -L\"C\:/C++\ Libraries/glut-3.7/lib\" -lglew32 -lglut32 -lglu32 -lopengl32
```
Things I've Tried
Removing spaces from external libraries' file path.
Placing .lib files in MinGW's lib folder (this gets rid of the original error and results in a slew of undefined reference errors).
Adding each library's bin folder to the Additional Library Directories list and adding the DLLs to the Libraries list.
Switching between putting the file paths in the Additional Library Directories list in quotes and not putting them in quotes (without quotes I get undefined reference errors).
Additional System Information
Operating System: Windows 7 Home Premium 64-bit SP1
Processor: 2GHz Intel Pentium Dual-Core
Here is another answer: Try adding environment variables in
Properties->Run->Environment
In my case
Name=LD_LIBRARY_PATH
Value=/usr/local/apps/Java/jdk-14/lib:/usr/local/apps/root6.22.02Install/lib/root:/Work/Soft/general_classes/lib (and some more paths which no need to be pasted here)
|
Title: Object has no attribute '__getitem__' (class instance?)
Tags: python;class
Question: It might be a very simple question but I am very confused with where I am going right now. Here is a very basic class:
```class Book(object):
def __init__(self, title, price):
self.book = {'title':title, 'price':price}
```
And when I run this:
```book = Book('foo', 300)
book['price']
```
It spits out:
```TypeError: 'Book' object has no attribute '__getitem__'
```
I know that it's not the conventional way of initializing an instance since I am using a dictionary. But I wonder why that code is spitting out a TypeError. How do I go about solving this issue?
Thank you in advance.
ps. The book instance's type is class?
Here is the accepted answer: It's because a class is not a dict object.
Accessing properties on an instance of a class is done via the dot operator.
```book.book['price']
>> 300
```
If you want to access keys within your dict directly on your class instance you would have to implement the ```__getitem__``` method on your class.
```def __getitem__(self, key):
return self.book[key]
book['price']
>> 300
```
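Putting the accepted answer's pieces together into one runnable sketch (works in both Python 2 and 3):

```python
class Book(object):
    def __init__(self, title, price):
        self.book = {'title': title, 'price': price}

    # Implementing __getitem__ lets instances be indexed like a dict:
    # book['price'] delegates to the internal dictionary.
    def __getitem__(self, key):
        return self.book[key]

book = Book('foo', 300)
print(book['price'])       # 300, via __getitem__
print(book.book['title'])  # foo, via normal attribute access
```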
Here is another answer: Yes. book is an object of class Book because you initialized it that way.
```book = Book('foo', 300)
book['price']
```
Try
```print book.book['price']```
So you want to access a dictionary called book of an object referenced as book and you want to extract the value of price from the dictionary.
Usually the [] operator looks for the ```__getitem__()``` method and passes the requested key as an argument. Łukasz R. has shown how to do that. Dictionaries perform a key lookup while arrays find the slice or index.
It is explained in details here : https://docs.python.org/2/reference/datamodel.html
Now, because you're creating a class here, why would you want to create a separate dictionary for native must-have attributes of the class? Create attributes as in the following example:
```class Book(object):
def __init__(self, title, price):
self.title = title
self.price = price
book = Book('foo', 300)
print book.title
```
Here is another answer: ```class Book(object):
def __init__(self, title, price):
self.book = {'title':title, 'price':price}
book = Book('foo', 300)
print book.book['price']
```
Here is another answer: ```book.book['price']``` would work.
For accessing proxy member you'll have to implement ```__getitem__``` magic method.
```class Book(object):
def __init__(self, title, price):
self.book = {'title':title, 'price':price}
def __getitem__(self, item):
return self.book[item]
```
|
Title: Variables in python regex? Print statements in airflow?
Tags: python;regex;airflow
Question: I am working on a simple program to see if a file "test-2018-06-04-1358.txt" exists in a directory using airflow. I have two issues.
A) I want to use the variable datestr in my regex. Not sure how to do that.
B) Secondly, Where does my print(filename) show up in airflow UI? I checked my view log but nothing showed up.
```def checksFile():
d = datetime.today()-timedelta(days=1)
datestr = '{:%Y-%m-%d}'.format(d)
for filename in os.listdir('/mnt/volume/home/aabraham/'):
match = re.search('(test)-(2018-06-04)-(\d+)(\.txt)', filename)
print(filename)
if not match:
raise AirflowException("File not Found")
```
Comment: If you find you are not getting the answers you want, this could be a good question to split into 2. 1 for the regex, and 1 for airflow. I could see a regex pro not wanting to answer since they don't know airflow or vice versa.
Comment: will keep in mind, thanks
Here is the accepted answer: You cannot use ```print``` in the same fashion as in the console.
To see logging entries in the ```Log``` page use ```logging.info```. Maybe you need to ```import logging```.
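As a minimal sketch of that advice (`checks_file_logged` is a hypothetical cut-down variant of the question's function, not Airflow API):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

def checks_file_logged(filenames):
    # log.info entries show up in the task's Log page in the Airflow UI,
    # where bare print() output may not be visible.
    for filename in filenames:
        log.info("inspecting %s", filename)
    return len(filenames)

checks_file_logged(["test-2018-06-04-1358.txt"])
```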
Here is another answer: To answer the regex question, just add the strings together:
```match = re.search('(test)-(' + datestr + ')-(\d+)(\.txt)', filename)
```
This will only work if ```datestr``` doesn't contain any regex special characters.
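Putting it together, a quick sketch (the fixed date is only for the example; the question builds `datestr` from `datetime.today()`):

```python
import re
from datetime import datetime, timedelta

# Build yesterday's date string, then embed it in the pattern through
# re.escape -- harmless for %Y-%m-%d, but robust if the variable ever
# contains regex metacharacters.
d = datetime(2018, 6, 5) - timedelta(days=1)   # fixed date for the example
datestr = '{:%Y-%m-%d}'.format(d)
pattern = r'(test)-({})-(\d+)(\.txt)'.format(re.escape(datestr))
match = re.search(pattern, 'test-2018-06-04-1358.txt')
print(pattern, bool(match))
```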
Comment for this answer: @sniperd Can he? He doesn't have 15 rep.
Comment for this answer: @friendly1358 you can upvote both answer though, and that would help the people who answered :)
Comment for this answer: @MegaIng ooopps! You are right. I'll do it by proxy then :)
Comment for this answer: couldn't also mark this as correct, but this works. thank you!
|
Title: Add external plugin to sonarqube
Tags: docker;sonarqube
Question:
I'm running sonarqube in a docker container using this compose docker file:
docker-compose
I want to add an external plugin (jar file). I couldn't manage to do so. Any ideas?
Comment: Since i'm very new to docker, still do not know how to do that.
Comment: The path is: SONARQUBE_HOME/extensions/plugins/
Comment: You can follow the structure of your referenced docker-compose file and bind it to your container through a volume.
Comment: Okay. So you want to add a jar to your service "sonarqube", yes? Where do you want to store it inside of this service (i.e. what should be the path to your jar in the container)?
Here is the accepted answer: Just copy your jars to your local folder "sonarqube_extensions/plugins" which should exist next to your docker-compose.yml file and they will be linked into your container according to your referenced docker-compose.yml file.
Old answer
You can modify your existing docker-compose.yml file. Assuming your jar files are located in a folder named "external_jars" next to the compose file and you want these jars to be available inside the container under, for example, ```/opt/sonarqube/external_jars``` (I am not familiar with sonarQube and I do not know how the correct structure should look like). Then you can add one line to this excerpt of your compose file:
```sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
volumes:
- external_jars:/opt/sonarqube/external_jars # <-- Added this line
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled-plugins:/opt/sonarqube/lib/bundled-plugins
```
Or you just add the jars locally into the folder "sonarqube_extensions" if this is the correct folder. I do not know what you want to achieve, therefore I can only guess at what you are trying to do.
"Volumes" are linked folders between your local machine (which is running the docker engine) and the container. The syntax "sonarqube_extensions:/opt/sonarqube/extensions" means: map the contents of "sonarqube_extensions" of the local machine to the container and make it accessible at the path "/opt/sonarqube/extensions".
Comment for this answer: Thanks @n2o. That worked. One clarification though: the local folder "sonarqube_extensions" is created in /var/lib/docker
|
Title: Does Python/Scipy have a firls( ) replacement (i.e. a weighted, least squares, FIR filter design)?
Tags: python;algorithm;math;matlab;digital-filter
Question: I am porting code from Matlab to Python and am having trouble finding a replacement for the firls( ) routine. It is used for least-squares, linear-phase Finite Impulse Response (FIR) filter design.
I [email protected] and nothing there looked like it would do the trick. Of course I was able to replace my remez and freqz algorithms, so that's good.
On one blog I found an algorithm that implemented this filter without weighting, but I need one with weights.
Thanks, David
Here is another answer: Obviously, this post is somewhat dated, but maybe it is still interesting for some:
I think there are two near-equivalents to firls in Python:
You can try the firwin function with window='boxcar'. This is similar to Matlab where fir1 with a boxcar window delivers the same (? or at least very similar results) as firls.
You could also try the firwin2 method (frequency sampling method, similar to fir2 in Matlab), again using window='boxcar'
I did try one example from the Matlab firls reference and achieved near-identical results for:
Matlab:
```F = [0 0.3 0.4 0.6 0.7 0.9];
A = [0 1 0 0 0.5 0.5];
b = firls(24,F,A,'hilbert');
```
Python:
```F = [0, 0.3, 0.4, 0.6, 0.7, 0.9, 1]
A = [0, 1, 0, 0, 0.5, 0.5, 0]
bb = sig.firwin2( 25, F,A, window='boxcar', antisymmetric=True )
```
I had to increase the order to N = 25 and I also had to add another data point (F = 1, A = 0) which Python insisted upon; the option antisymmetric = True is only necessary for this special case (Hilbert filter)
Comment for this answer: Please see additional responses especially from @Pev Hall. These are not best practices for digital filter implementations and would have comparatively poor performance (as suggested, comparisons should be done on a log scale).
Here is another answer: This blog post contains code detailing how to use ```scipy.signal``` to implement FIR filters.
Comment for this answer: Well, that was intereseting blog post, but not exactly what I was looking for. I saw the firwin( ) function, but it does not have have the ability to express the frequency response the way I need it for Magnetic resonance imaging...
I was hoping not to reinvent the wheel, but it's looking more likely that I will need to do that.
Thanks,
Here is another answer: This post is really in response to
```
You can try the firwin function with window='boxcar'...
```
Don't use boxcar: it means no window at all (it is ideal, but only works "ideally" with an infinite number of multipliers - a sinc in time). The whole purpose of using a window is to reduce the number of multipliers required to get good stop-band attenuation. See Window function
When comparing filters please use dB/log scale.
Scipy not having firls (FIR least squares filter) function is a large limitation (as it generates the optimum filter in many situations).
REMEZ has its place, but the flat roll-off is a real killer when you're trying to get the best results (and not just meeting some manager's spec). (Warning: the scipy remez implementation can give amplification in the stop band - see plot at bottom.)
If you are using python (or need to use some window) I recommend using the Kaiser window, which gets very good results and can easily be tweaked for your attenuation vs. transition vs. multipliers requirement (attenuation (in dB) = 2.285 * (multipliers - 1) * pi * width + 7.95). Its performance is not quite as good as firls, but it has the benefit of being fast and easy to calculate (great if you don't store the coefficients).
Here is another answer: I found a firls() implementation attached here in SciPy ticket 648
Minor changes to get it working:
Swap the following two lines:
```
bands, desired, weight = array(bands), array(desired), array(weight)
if weight==None : weight = ones(len(bands)/2)
```
import roots from numpy instead of scipy.signal
Here is another answer: The firls equivalent in python now appears to be implemented as part of the signal package:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.firls.html#scipy.signal.firls
Also, I agree with everything that @pev hall stated above, especially how firls is optimum in many situations (such as when overall signal-to-noise is being optimized for a given number of taps), and, as he stated, not to use the boxcar window; they are not equivalent at all! firls generally outperforms all window and frequency-sampling approaches to filter design when designing traditional FIR filters.
Here is another answer: Since version 0.18 in July, 2016 scipy includes an implementation of firls, as scipy.signal.firls.
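A quick sketch of using it (the band edges and weights below are arbitrary illustration values):

```python
from scipy import signal

# 73-tap linear-phase lowpass: passband up to 0.3, stopband from 0.4
# (normalized to Nyquist), with the stopband error weighted 10x.
numtaps = 73                   # must be odd for firls
bands   = [0, 0.3, 0.4, 1.0]   # band edge pairs
desired = [1, 1, 0, 0]         # desired gain at each band edge
weights = [1, 10]              # one weight per band
taps = signal.firls(numtaps, bands, desired, weight=weights)
print(taps.shape)  # (73,)
```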
Here is another answer: It seems unlikely that you'll find exactly what you seek already written in Python, but perhaps the Matlab function's help page gives or references a description of the algorithm?
|
Title: can't adapt type 'product.product' from v8 to v10
Tags: python;odoo-8;odoo
Question: In the Odoo store there is a module called ```product_pack```; it contains a file ```product.py```, and this file contains the function below in ```version 8```, which is made to check product availability.
So, after trying to convert it to ```version 10```, I got an error exactly in line 6 and line 11. So my problem is exactly in converting ```res = super(product_product, self)._product_available(cr, uid, list(set(ids) - set(pack_product_ids)),field_names, arg, context)```
it raises:
```
pobjs = [adapt(o) for o in self._seq]
ProgrammingError: can't adapt type 'product.product'
```
version 8
```def _product_available(self, cr, uid, ids, field_names=None, arg=False, context=None):
pack_product_ids = self.search(cr, uid, [('pack', '=', True),
('id', 'in', ids),])
res = super(product_product, self)._product_available(
cr, uid, list(set(ids) - set(pack_product_ids)),
field_names, arg, context)
for product in self.browse(cr, uid, pack_product_ids, context=context):
pack_qty_available = []
pack_virtual_available = []
for subproduct in product.pack_line_ids:
subproduct_stock = self._product_available(cr, uid, [subproduct.product_id.id], field_names, arg,
context)[subproduct.product_id.id]
sub_qty = subproduct.quantity
if sub_qty:
pack_qty_available.append(math.floor(
subproduct_stock['qty_available'] / sub_qty))
pack_virtual_available.append(math.floor(
subproduct_stock['virtual_available'] / sub_qty))
res[product.id] = {
'qty_available': (
pack_qty_available and min(pack_qty_available) or False),
'incoming_qty': 0,
'outgoing_qty': 0,
'virtual_available': (
pack_virtual_available and
max(min(pack_virtual_available), 0) or False),
}
return res
```
version 10
```def _product_available(self, field_names=None, arg=False):
pack_product_ids = self.search([('pack', '=', True)])
###res = super(product_product, self)._product_available(field_names, arg)
for product in self.browse(pack_product_ids):
pack_qty_available = []
pack_virtual_available = []
for subproduct in product.pack_line_ids:
subproduct_stock = self._product_available([subproduct.product_id.id], field_names, arg)[subproduct.product_id.id]
sub_qty = subproduct.quantity
if sub_qty:
pack_qty_available.append(math.floor(subproduct_stock['qty_available'] / sub_qty))
pack_virtual_available.append(math.floor(subproduct_stock['virtual_available'] / sub_qty))
res[product.id] = {
'qty_available': (pack_qty_available and min(pack_qty_available) or False),
'incoming_qty': 0,
'outgoing_qty': 0,
'virtual_available': (pack_virtual_available and max(min(pack_virtual_available), 0) or False),
}
return res
```
Thanks in advance
Comment: Hi, the function that checks the available qty already exists in odoo10; search in `stock/models/product.py` for the function `_search_qty_available`
Comment: Make sure the model used is `product.product`, not `product.template`
Comment: thank you, but what they did here is a new module named product_pack, and they override the function _product_available
Comment: it worked perfectly in v8; the problem here is the conversion of the method to v10, exactly in: res = super(.......) and subproduct_stock = self._product_available(...)
Here is another answer: It worked using this one:
```@api.multi
def _product_available(self, field_names=None, arg=False):
pack_products = self.filtered(lambda p: p.pack == True)
res = super(product_product, self - pack_products)._product_available(field_names, arg)
for product in pack_products:
pack_qty_available = []
pack_virtual_available = []
for pack_line in product.pack_line_ids:
subproduct_stock = pack_line.product_id._product_available(field_names, arg)[pack_line.product_id.id]
sub_qty = pack_line.quantity
if sub_qty:
pack_qty_available.append(math.floor(
subproduct_stock['qty_available'] / sub_qty))
pack_virtual_available.append(math.floor(
subproduct_stock['virtual_available'] / sub_qty))
# TODO calcular correctamente pack virtual available para negativos
res[product.id] = {
'qty_available': (
pack_qty_available and min(pack_qty_available) or False),
'incoming_qty': 0,
'outgoing_qty': 0,
'virtual_available': (
pack_virtual_available and min(pack_virtual_available) or False),
}
return res
```
|
Title: Symfony EasyAdminBundle 3 repeated field length
Tags: php;symfony;easyadmin;easyadmin3
Question: I am using EasyAdminBundle 3 and I have the following CRUD controller, where the "plainPassword" field is a repeated field:
```class UserCrudController extends AbstractCrudController
{
public static function getEntityFqcn(): string
{
return User::class;
}
public function configureFields(string $pageName): iterable
{
yield TextField::new('username');
$plainPasswordField = TextField::new('plainPassword')
->setFormType(RepeatedType::class)
->setFormTypeOptions([
'type' => PasswordType::class,
'first_options' => ['label' => 'Password'],
'second_options' => ['label' => 'Repeat Password'],
])
->hideOnIndex()
;
if ($pageName === Action::NEW) {
$plainPasswordField->setRequired(true);
}
yield $plainPasswordField;
}
}
```
The field is repeated as expected. However, the field takes the full width, while the other fields are only half that length ("col-sm-6").
I already tried ```setColumns(6)``` and ```setCssClass('col-sm-6')``` on ```$plainPasswordField```, but it did not help.
Does anybody know how to set the width of a repeated field in EasyAdminBundle 3?
Here is another answer: You need to put the class attribute into the ```row_attr``` property of the first and second options:
```TextField::new('plainPassword')
->onlyOnForms()
->setFormType(RepeatedType::class)
->setFormTypeOptions([
'type' => PasswordType::class,
'first_options' => [
'label' => 'New Password',
'row_attr' => [
'class' => 'col-md-6 col-xxl-5',
],
],
'second_options' => [
'label' => 'Repeat Password',
'row_attr' => [
'class' => 'col-md-6 col-xxl-5',
],
],
],
);
```
|
Title: Dynamic Cache name in service worker
Tags: javascript;service-worker
Question: How do I create a dynamic cache name in a service worker?
I want to check an API server for the latest version of my app; that version will be the cache name of my app.
I used fetch, but it always returns ```undefined```:
```staticCacheName = (() => {
fetch('https://unitedspace.co.id/check_version.php')
.then(response => response.json())
.then(myJson => myJson.app_name + '-' + myJson.version + '.' + myJson.build_number);
})();
```
this is my service worker code :
```self.addEventListener('activate', function(e) {
e.waitUntil(caches.keys().then(function(cacheNames) {
return Promise.all(
cacheNames.map(function(cacheName) {
if (cacheName != 'blog-{{ site.github.build_revision }}') {
return caches.delete(cacheName);
}
})
);
}));
});
self.addEventListener('fetch', function(e) {
e.respondWith(caches.match(e.request).then(function(response) {
return response || fetch(e.request);
}));
});
```
Here is another answer: You do not have any return in your first snippet. Is that intentional?
Comment for this answer: it returns a resolved promise; I think fetch only returns a value in the first `then`, but it always returns a promise. I think I need to make all the script at the bottom wait until my cache name resolves, but I don't know how to do that
Comment for this answer: Where is used staticCacheName ?
|
Title: If Condition for onclick not working
Tags: onclick;if
Question: I am looking for a way to insert a record and close the window with the VF page when everything was entered correctly and the user clicks the button. The values are entered correctly if a subject was entered. For that, I made this condition:
button type="button" onclick="if(theTask.Subject != '') {createTask();window.close();}"
Unfortunately, this condition is not recognized. Why?
The controller is:
```public with sharing class LogACallExtension {
private final SObject parent;
public Task theTask {get; set;}
public String lastError {get; set;}
public LogACallExtension(ApexPages.StandardController controller) {
parent = controller.getRecord();
theTask = new Task();
theTask.WhoId = parent.id;
theTask.Status = 'Completed';
theTask.ActivityDate = date.today();
theTask.Type = 'Call';
theTask.Priority = 'Normal';
lastError = '';
}
public PageReference createTask() {
createNewTask();
theTask = new Task();
theTask.WhoId = parent.id;
return null; }
private void createNewTask() {
try {
insert theTask;
} catch(System.Exception ex){
lastError = ex.getMessage();
}
}
}
```
Here is another answer: Edit
Trying to make as few changes as possible to your original code, this is something close to what you are looking for.
The idea behind it is that when the controller succeeds (or fails), it will refresh the view. If it succeeds, it will render the JavaScript window.close() and close the window.
Anyway, I have the feeling you forgot to put ```lastError``` somewhere in the view to notify the users ;)
View :
```<apex:page standardcontroller="Lead" extensions="LogACallExtension" showHeader="false">
<script type='text/javascript' src='/canvas/sdk/js/publisher.js'/>
<apex:form >
<apex:pageBlock title="Log a Call" mode="edit">
<apex:actionFunction action="{!createTask}" name="createTask" />
<apex:pageBlockSection title="" columns="1">
<apex:inputField value="{!theTask.Not_reached__c}"/>
<apex:inputField value="{!theTask.Subject}" id="test123" required="true" style="width: 570px; height: 20px" />
<apex:inputField value="{!theTask.Description}" style="width: 600px; height: 200px"/>
</apex:pageBlockSection>
<button type="button" onclick="window.close();" style="height:20px;width:50px;" id="cancelTask">{!$Label.Cancel_Button}</button>
<button type="button" onclick="createTask();" style="height:20px;width:50px;" id="addTaskButton">{!$Label.Save_Button}</button>
<script type="text/javascript">{!IF(success,'window.close();','')}</script>
</apex:pageBlock>
</apex:form>
</apex:page>
```
Controller
```public with sharing class LogACallExtension {
private final SObject parent;
public Task theTask {get; set;}
public String lastError {get; set;}
public Boolean success {get;set;}
public LogACallExtension(ApexPages.StandardController controller) {
parent = controller.getRecord();
theTask = new Task();
theTask.WhoId = parent.id;
theTask.Status = 'Completed';
theTask.ActivityDate = date.today();
theTask.Type = 'Call';
theTask.Priority = 'Normal';
lastError = '';
}
public PageReference createTask() {
createNewTask();
theTask = new Task();
theTask.WhoId = parent.id;
success = true;
return null; }
private void createNewTask() {
try {
insert theTask;
} catch(System.Exception ex){
success = false;
lastError = ex.getMessage();
}
}
}
```
There is no actual need to check for the empty value of ```Subject``` because it is a required field, and therefore Salesforce will check it for you.
if you still want to check it yourself, you could put to the input a ```styleClass="js-subject"``` and in the onclick do something like ```if(document.getElementsByClassName('js-subject')[0].value !== ''){...```
Comment for this answer: This is my VF Page:
Comment for this answer: yes, I have an action function called createTask
Comment for this answer: Reply updated. I was actually expecting you to update your question, not posting the reply as an answer, so we keep the answers to a minimum and cleaner
|
Title: datatables width behavior change between 1.9.4 and 1.10.10
Tags: jquery;datatables
Question: I am upgrading the datables version on my site from 1.9.4 to 1.10.10. (I'm also upgrading from yadcf 0.6.9 to 0.8.8)
See the 1.9.4 version at my production site and the 1.10.10 version at my sandbox site, with simpler version of sandbox use of datatables in viewstandings function within TestStandings.js here
Due to confusion over the yadcf interface change to exFilterColumn, in the sandbox site you have to select a gender to see the problem I'm discussing now.
As you can see the table header and the table data widths are sized differently.
I see that the ```div``` with class ```dataTables_scrollHeadInner``` has a smaller width attribute value in the sandbox site than in the production site, where it looks nice and takes the whole width.
What I don't know is what is causing that. I'm guessing there is a new configuration parameter I need to set, or the way I had it set for 1.9.4 doesn't work well for 1.10.10.
Comment: How is this question related to yadcf? I mean it is tagged with yadcf but I dont see how yadcf can help to solve it
Comment: may not be, but I wasn't sure -- my mistake was I was looking at the next div first which had a yadcf class, but after I'd realized there was an outer div without that class I forgot to remove the tag, which has been removed now
Comment: Thanks to Gyrocode.com for the edits!
Here is the accepted answer: My problem was that I was setting sScrollX: "100%". Not sure why this caused the behavior, and not sure why I'd set this when using 1.9.4.
I had a hunch after looking at the datatables code around where the width was being set (line 3753), and it panned out.
|
Title: Stack (Haskell) build cache of source files with GitHub Actions
Tags: haskell;caching;cabal;haskell-stack;github-actions
Question: When building my Haskell project locally using ```stack build```, only the changed source files are re-compiled. Unfortunately, I am not able to make Stack behave like this on GitHub Actions. Any suggestions please?
Example
I created a simple example with ```Lib.hs``` and ```Fib.hs```. I even checked that the cached .stack-work folder is updated between builds, but it always compiles both files even when just one is changed.
Here is the example:
(no cache used, builds both ```Lib.hs``` and ```Fib.hs``` + dependencies): https://github.com/MarekSuchanek/stack-test/runs/542163994
(only ```Lib.hs``` changes, builds both ```Lib.hs``` and ```Fib.hs```): https://github.com/MarekSuchanek/stack-test/runs/542174351
I can observe from the logs (verbose Stack) that something in the cache is being updated, but it is totally unclear to me what and why. It correctly finds that only ```Lib.hs``` has changed: "```stack-test-669.596.7562: unregistering (local file changes: src/Lib.hs)```", so I can't understand why everything gets compiled. I noticed that in 2. ```Fib.hi``` is not updated in ```.stack-work``` but the others (```Fib.o```, ```Fib.dyn_hi```, and ```Fib.dyn_o```) are.
Note
Caching of ~/.stack is OK, as is the no-build when no source file is changed. Of course, this is a dummy example, but we have different projects with many more source files where this would significantly speed up the build. When a non-source file is changed (e.g. the README file), nothing is built, as expected.
Comment: As I see nobody knows how Stack actually "works"
Comment: See the answer I provided ;) I guess some people have an idea on how it works. ;P
Here is the accepted answer: The culprit for this problem is that stack uses timestamps (as many other tools do) to figure out if a source file has changed or not. When you restore the cache on CI and you do it correctly, none of the dependencies will get rebuilt, but the problem with the source files is that when the CI provider clones a repo for you, the timestamps of all files in the repo are set to the date and time when it was cloned.
Hopefully the cause of the recompilation of unchanged source files makes sense now. What do we do to work around this problem? The only real way is to restore the timestamp of the last git commit that changed each particular file. I noticed this quite a while ago, and a bit of googling gave me some answers on SO; here is one of them: Restore a file's modification time in Git
I modified it a bit to suit my needs and this is what I ended up with:
``` git ls-tree -r --name-only HEAD | while read filename; do
    TS="$(git log -1 --format="%ct" -- "$filename")"
    touch "$filename" -mt "$(date --date="@$TS" "+%Y%m%d%H%M.%S")"
done
```
That worked great for a while for me on Ubuntu CI, but solving this problem in an OS-agnostic manner with bash is not something I wanted to do when I needed to set up Azure CI. For that reason I wrote a Haskell script that works for GHC-8.2 and newer without requiring any non-core dependencies. I use it for all of my projects and I'll embed the juice of it here, but also provide a link to a permanent gist:
```import Data.Time.Format (defaultTimeLocale, iso8601DateFormat, parseTimeM)
import System.Directory (setModificationTime)
import System.Environment (getArgs)
import System.Process (readProcess)

main :: IO ()
main = do
args <- getArgs
let rev = case args of
[] -> "HEAD"
(x:_) -> x
fs <- readProcess "git" ["ls-tree", "-r", "-t", "--full-name", "--name-only", rev] ""
let iso8601 = iso8601DateFormat (Just "%H:%M:%S%z")
restoreFileModtime fp = do
modTimeStr <- readProcess "git" ["log", "--pretty=format:%cI", "-1", rev, "--", fp] ""
modTime <- parseTimeM True defaultTimeLocale iso8601 modTimeStr
setModificationTime fp modTime
putStrLn $ "[" ++ modTimeStr ++ "] " ++ fp
putStrLn "Restoring modification time for all these files:"
mapM_ restoreFileModtime $ lines fs
```
How would you go about using it without much overhead? The trick is to:
use ```stack``` itself to run the script
use exactly the same resolver as the one used for the project.
The above two points will ensure that no redundant dependencies or GHC versions get installed. All in all, the only two things needed are ```stack``` and something like ```curl``` or ```wget```, and it will work cross-platform:
```# Script for restoring source files modification time from commit to avoid recompilation.
curl -sSkL https://gist.githubusercontent.com/lehins/fd36a8cc8bf853173437b17f6b6426ad/raw/4702d0252731ad8b21317375e917124c590819ce/git-modtime.hs -o git-modtime.hs
# Restore mod time and setup ghc, if it wasn't restored from cache
stack script --resolver ${RESOLVER} git-modtime.hs --package base --package time --package directory --package process
```
Here is a real project that uses this approach and you can dig through it to see how it works: ```massiv-io```
Edit @Simon Michael in the comments mentioned that he can't reproduce this issue locally. The reason is that not everything is the same on CI as it is locally. Quite often an absolute path is different, for example, and possibly other things I can't think of right now. Those things, together with the source file timestamps, cause the recompilation of the source files.
For example, follow these steps and you will find that your project gets recompiled:
```~/tmp$ git clone [email protected]:fpco/safe-decimal.git
~/tmp$ cd safe-decimal
~/tmp/safe-decimal$ stack build
safe-decimal> configure (lib)
[1 of 2] Compiling Main
...
Configuring safe-decimal-669.596.7562...
safe-decimal> build (lib)
Preprocessing library for safe-decimal-669.596.7562..
Building library for safe-decimal-669.596.7562..
[1 of 3] Compiling Numeric.Decimal.BoundedArithmetic
[2 of 3] Compiling Numeric.Decimal.Internal
[3 of 3] Compiling Numeric.Decimal
...
~/tmp/safe-decimal$ cd ../
~/tmp$ mv safe-decimal safe-decimal-moved
~/tmp$ cd safe-decimal-moved/
~/tmp/safe-decimal-moved$ stack build
safe-decimal-669.596.7562: unregistering (old configure information not found)
safe-decimal> configure (lib)
[1 of 2] Compiling Main
...
```
You'll see that the change of the project's location triggered a project rebuild. Despite the project itself being rebuilt, you will notice that none of the source files were recompiled. Now if you combine that procedure with a ```touch``` of a source file, that source file will get recompiled.
To sum it up:
Environment can cause the project to be rebuilt
Contents of a source file can cause the source file (and others that depend on it) to be recompiled
Environment together with a source file contents or timestamp change can cause the project together with that source file to be recompiled
Comment for this answer: Thanks! Timestamps really solved this but additionally, GitHub actions use by default only very limited fetch without any history, so it had to be adjusted to [fetch all history](https://github.com/actions/checkout#fetch-all-history-for-all-tags-and-branches) in order to recover timestamps correctly.
Comment for this answer: I'm confused by this, because I don't seem to see timestamp affecting my local stack builds. Eg if I `touch` a source file, it's not rebuilt.
Comment for this answer: Likewise if I touch the .{dyn_hi,dyn_o,hi,o} files.
Comment for this answer: Thank you for the detailed info, very helpful. I saw it, as you say: changed paths (eg from renaming the folder) causes a rebuild of (a) Setup.hs and (b) any other modules whose timestamp has changed. Do you know of any issue for this in https://github.com/commercialhaskell/stack/issues ?
Comment for this answer: PS I've seen some unexplained rebuilds in my github actions jobs too. I'm not seeing which path would be different - CWD seems to be /home/runner/work/PROJ/PROJ always - but perhaps there is one..
Comment for this answer: Maybe: https://github.com/commercialhaskell/stack/issues/5125
Comment for this answer: Reposting from the [related reddit thread](https://www.reddit.com/r/haskell/comments/g00ldn/haskell_stack_on_github_actions): [Here's](https://github.com/simonmichael/hledger/commit/6057070cfd6deb16f65f625c4c6a7c9ee32bf9f4) an example of the proposed fix, including both parts. It's not entirely working for me. Eg, with no modules changed, previously it would recompile 49 of 50 modules, now it recompiles just 10 of 50 (the same ones each time: 22, 40-47, 50).
Comment for this answer: @SimonMichael I added an example to the answer. In short, you need to trigger the rebuild of a project in order for the timestamp to trigger recompilation.
Comment for this answer: No, I am not aware of any issues related to this behavior. To be honest, I don't know what else can cause a stack rebuild besides the path change, but I know for sure that rebuilds happen on CI even when the path doesn't change. Doesn't really pose a problem for me since the solution I provided works for me pretty good :)
Comment for this answer: @SimonMichael It doesn't look like you are using the actual Haskell script that I included in this answer. I would not recommend using the bash script, I included only for historical reasons
Comment for this answer: Use these lines here instead. For Linus and Mac: https://github.com/lehins/massiv/blob/bad71fc1f38612710bfc1fde0ccdf26af90aa4f0/.azure/pipelines.yml#L16-L19 For Windows: https://github.com/lehins/massiv/blob/bad71fc1f38612710bfc1fde0ccdf26af90aa4f0/.azure/pipelines.yml#L48-L54
Here is another answer: I have provided a PR fix for this so modified time is no longer relied on!
Comment for this answer: This is merged now in stack 2.5.1 - thank you @Andres S. Unfortunately even with stack 2.5.1 I continued to see the error `Trouble loading CompilerPaths cache` that brought me to this thread. For me it was the caching key, which was not correctly identified: `key: ${{ runner.os }}-${{ matrix.ghc }}` did not work, `key: ${{ runner.os }}-${{ matrix.ghc }}-stack` did.
Comment for this answer: I see - so maybe my issue about dependency caching is entirely unrelated to this thread after all. I'll leave the comment here anyway, because maybe somebody comes across it just like I did. Keep up the good work!
Comment for this answer: @nevrome with stack (and cabal) now caching by content correctly, unfortunately ghc itself is not. I have spent too much time inside of the ghc build code to realize this. If I ever get some extra time I'll see about writing a proposal/PR to fix this but it will be an undertaking. If you compile a simple codebase with ghc, change the modified time of a file, and recompile the project with ghc, you'll notice that the file is recompiled.
Comment for this answer: Good news! A WIP PR was just opened against GHC https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5130
|
Title: Edit the Jet Framework
Tags: javascript;oracle;oracle-jet
Question: Is it actually possible to edit the JavaScript of Oracle's JET framework?
I want to edit the JavaScript of the JET framework so that it doesn't do a line break after a line, and the labels overlap each other. That means it looks like this:
wwwwww
The character w is 2 labels overlapping. And this is important: they have to overlap.
So all the labels should appear in the same place. I have done this in the JavaScript of the debug version.
PS: The JET framework has two versions when you download it: the minified version and the debug version.
My JavaScript works fine, but in the JET framework it should be minified. So here is my question: what minifier does the JET framework use to minify its JavaScript?
Sources:
Picture
PS: I am sorry for my bad English; English is not my native language. If you have questions, please ask in the comments.
Thanks in advance
Ivo
Comment: What I want to achieve is that the Grunt "Labels" can overlap each other in one line.
Comment: What do you want to achieve? What is the goal?
Comment: I think it's [Grunt](http://docs.oracle.com/middleware/jet220/jet/developer/GUID-661048AC-2510-4BFC-A1EA-944BEDF1C620.htm#JETDG-GUID-7158F1A6-14AE-4CAB-95F9-BC24B2C53472)
Here is another answer: There shouldn't be any need to hack into the JavaScript for this. Just use CSS as it should be used. Look at the browser dev tools to see which classes are handling the labels and adjust the CSS to get the overlap settings that you want.
Comment for this answer: just override the css classes that are used.
|
Title: Find and delete Rows where cell value is "#N/A"
Tags: vba;excel;find;delete-row
Question: I have an excel document that I use to analyze data sets; each data set I bring in has a varying amount of data. I have tried to write a macro, assigned to a button, that identifies and deletes rows based on the value of a cell. It does not work. What am I doing wrong?
```Sub Button2_Click()
[vb]
'This will find how many rows there are
With ActiveSheet
lastRow = .Cells(.Rows.Count, "A").End(xlUp).Row
MsgBox lastRow
End With
Sub sbDelete_Rows_Based_On_Criteria()
Dim lRow As Long
Dim iCntr As Long
lRow = lastRow
For iCntr = lRow To 1 Step -1
'Replaces XX with the variable you want to delete
If Cells(iCntr, 1) = "#N/A" Then
Rows(iCntr).Delete
End If
Next
End Sub
[/vb]
End Sub
```
Comment: I'm sorry, I don't understand how you wrote the question; what does your excel spreadsheet look like? Are you saying that each row is an entry, but each entry can have any number of columns? Could you add a screenshot?
When you press the button, what should happen exactly? Delete rows matching a cell?
Comment: When I hit the button I need it to evaluate Column A & Column B and delete the rows with #N/A. Only one column has to have #N/A in it for the delete requirement to be met. I use the spreadsheet as a template, so the number of rows varies every time I use it.
Here is the accepted answer: Your logic is pretty much there, but your syntax is off. Additionally, you are only checking column A for the value and not column B (per your comments above).
```Sub Button2_Click()
Dim lRow As Long
'This will find how many rows there are
With ActiveSheet
lRow = .Cells(.Rows.Count, "A").End(xlUp).Row
MsgBox lRow
End With
Dim iCntr As Long
For iCntr = lRow To 1 Step -1
'Replace "#N/A" with the value you want to delete
' Check column A and B for the value.
If Cells(iCntr, 1).Text = "#N/A" Or Cells(iCntr, 2).Text = "#N/A" Then
Rows(iCntr).Delete
End If
Next
End Sub
```
Or simplified:
```Sub Button2_Click()
Dim iCntr As Long
For iCntr = Cells(Rows.Count, "A").End(xlUp).Row To 1 Step -1
'Replace "#N/A" with the value you want to delete
' Check column A and B for the value.
If Cells(iCntr, 1).Text = "#N/A" Or Cells(iCntr, 2).Text = "#N/A" Then
Rows(iCntr).Delete
End If
Next
End Sub
```
Comment for this answer: @Dan - I realized after posting you are looking for Excel errors (which are noted as "#NA"). I updated the answer to use the `Text` property instead of `Value`. This should resolve the error.
Comment for this answer: I get an error Run-Time '13" type mismatch on this line If Cells(iCntr, 1).Value = "#N/A" Or Cells(iCntr, 2).Value = "#N/A" Then
Here is another answer: Because you have two subs, you must pass lastRow from one to the other:
```Sub Button2_Click()
'This will find how many rows there are
With ActiveSheet
lastRow = .Cells(.Rows.Count, "A").End(xlUp).Row
MsgBox lastRow
End With
Call sbDelete_Rows_Based_On_Criteria(lastRow)
End Sub
Sub sbDelete_Rows_Based_On_Criteria(ByVal lastRow As Long)
Dim lRow As Long
Dim iCntr As Long
lRow = lastRow
For iCntr = lRow To 1 Step -1
'Replaces XX with the variable you want to delete
If Cells(iCntr, 1).Text = "#N/A" Then
Rows(iCntr).Delete
End If
Next
End Sub
```
Note:
the subs are un-nested
use .Text
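Both answers above iterate from the last row up (`Step -1`); the reason is language-agnostic: deleting a row shifts everything below it up, so a forward loop would skip the row that slides into the deleted slot. A quick Python illustration of the same pitfall:

```python
rows = ["a", "#N/A", "#N/A", "b"]

# Forward deletion: after deleting index 1, the second "#N/A" slides
# into slot 1, and the loop moves past it without checking it.
fwd = rows.copy()
for i in range(len(fwd)):
    if i < len(fwd) and fwd[i] == "#N/A":
        del fwd[i]

# Reverse deletion (like `For iCntr = lRow To 1 Step -1`): deletions
# only shift elements we have already visited, so nothing is skipped.
rev = rows.copy()
for i in range(len(rev) - 1, -1, -1):
    if rev[i] == "#N/A":
        del rev[i]

assert fwd == ["a", "#N/A", "b"]  # one "#N/A" survives the forward pass
assert rev == ["a", "b"]
```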
|
Title: numpy polyfit not giving best linear fit according to chi squared
Tags: python;numpy;data-analysis
Question: I have to find a model fit for some data and then calculate its chi squared value to see how good a fit it is. So I used numpy's polyfit function to find the fit, then calculated chi squared and plotted it. It looks like a pretty good fit, however, I have managed to get a better fit by optimizing for minimum chi squared. I would have thought that numpy's polyfit would try to do the same, no?
Here's my code:
```import numpy as np
from matplotlib import pyplot as plt
from scipy import stats
data = np.loadtxt('MW_Cepheids.dat', usecols=(1,2,3,4,5,6), dtype=float)
plx = data[:,0] # Parallax data
plx_err = data[:,1] # Error in plx.
P = data[:,2] # Period data
log_P = np.log10(P) # Logarithm of P.
m = data[:,3] # Apparent magnitude data
A = data[:,4] # Extinction data
A_err = data[:,5] # Error in ext.
dst = 1000/plx # Distance calculation
dst_err = 1000*(plx_err/plx**2) # Error in dist. calc.
# Calculating Absolute Magnitude and its error
# This would correspond to y and the error in y
M = m - 5*np.log10(dst) + 5 - A
M_err = np.sqrt((5*dst_err/(np.log(10)*dst))**2 + A_err**2)
# Using numpy's polyfit function
a,b = np.polyfit(log_P, M, 1)
M_m = a*log_P + b
# Calculating chi squared from the polyfit result
chi2 = np.sum(((M-M_m)/(M_err))**2)
print(a, b, chi2)
def lin_fit (x, y, y_err):
""" Cycles through probable values of gradient and intercept, calculating chi2 for each.
Returns best values for a and b, ie when chi2 is smallest."""
chi2_min = 999999
a_range = np.arange(-2.5, -2.3, 0.001)
b_range = np.arange(-1.7, -1.4, 0.001)
for a in a_range:
for b in b_range:
y_m = a*x + b
chi2 = np.sum(((y-y_m)/(y_err))**2)
if chi2 < chi2_min:
chi2_min = chi2
a_fit = a
b_fit = b
return a_fit, b_fit, chi2_min
a, b, chi2 = lin_fit(log_P, M, M_err)
print(a, b, chi2)
```
And the data is in this file: We transfer link
I know the simple choice here is to just stick with the other method of finding the fit, but it's a lot messier, and I would like to make sure I am calculating chi2 properly and understand why polyfit isn't finding the 'best' fit.
Many thanks!
Comment: @rpoleski sorry, hopefully that's more helpful now. Thank you.
Comment: Please show [minimal working example](https://stackoverflow.com/help/minimal-reproducible-example) - in this case your data are needed.
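For what it's worth, ```np.polyfit``` minimizes the plain, unweighted sum of squared residuals by default, while the chi-squared statistic weights each residual by 1/M_err, so the two criteria generally pick different lines; passing ```w=1/M_err``` makes polyfit minimize chi-squared instead. A sketch with synthetic stand-in data (the values below are hypothetical, not the question's):

```python
import numpy as np

# Synthetic stand-in for the question's data, with point-to-point
# (heteroscedastic) error bars.
rng = np.random.default_rng(0)
log_P = np.linspace(0.4, 1.8, 40)
M_err = rng.uniform(0.05, 0.5, size=log_P.size)
M = -2.4 * log_P - 1.5 + rng.normal(0.0, M_err)

def chi2(a, b):
    return np.sum(((M - (a * log_P + b)) / M_err) ** 2)

a_u, b_u = np.polyfit(log_P, M, 1)                 # unweighted least squares
a_w, b_w = np.polyfit(log_P, M, 1, w=1.0 / M_err)  # minimizes chi-squared

# The weighted fit can only do as well or better on chi-squared.
assert chi2(a_w, b_w) <= chi2(a_u, b_u) + 1e-9
```

With constant error bars the two fits coincide; they diverge exactly when the errors vary from point to point, which is why a chi-squared grid search can beat the default polyfit result.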
|
Title: Is there any way to not stop service even if app is stopped through the multitasking screen?
Tags: android-service
Question: My service gets stopped when app is closed.
Code already provided.
My Service code is:
```public class MusicService extends Service {
MediaPlayer myPlayer;
@Nullable
@Override
public IBinder onBind(Intent intent) {
return null;
}
@Override
public void onCreate() {
Toast.makeText(this, "Service Created", Toast.LENGTH_LONG).show();
myPlayer = MediaPlayer.create(this, R.raw.nokiatune);
myPlayer.setLooping(false); // Set looping
}
@Override
public void onStart(Intent intent, int startid) {
Toast.makeText(this, "Service Started", Toast.LENGTH_LONG).show();
myPlayer.start();
}
@Override
public void onDestroy() {
Toast.makeText(this, "Service Stopped", Toast.LENGTH_LONG).show();
myPlayer.stop();
}
}
```
I have developed a Service in Android. It works fine. The only issue is that the service stops when the app is closed through the multitasking screen. Is there any way to keep the service running even if the app is closed through the multitasking screen?
Here is another answer: You could try the approach discussed here. Basically, you register a BroadcastReceiver that restarts your Service if it is destroyed. On your ```AndroidManifest.xml```:
```<receiver
android:name="yourpackagename.RestartServiceBroadcastReceiver"
android:enabled="true"
android:exported="true"
android:label="RestartServiceWhenStopped">
</receiver>
```
Your ```BroadcastReceiver```:
```public class RestartServiceBroadcastReceiver extends BroadcastReceiver {
@Override
public void onReceive(Context context, Intent intent) {
context.startService(new Intent(context, MusicService.class));
}
}
```
Then on your Service's ```onDestroy```, you send a Broadcast so that the BroadcastReceiver can restart your service.
```@Override
public void onDestroy() {
Intent broadcastIntent = new Intent(this,RestartServiceBroadcastReceiver .class);
sendBroadcast(broadcastIntent);
myPlayer.stop();
}
```
Also, you have to move your logic inside the ```onStart``` to your Service's ```onStartCommand``` and have it return ```START_STICKY```, like this:
```@Override
public int onStartCommand(Intent intent, int flags, int startId) {
myPlayer.start();
return START_STICKY;
}
```
Returning ```START_STICKY``` tells Android to recreate your service after it has been killed. However, there is no guarantee that Android will honor this, and the system can still kill your service.
Finally, in the Activity where you start the Service, make sure you first check whether the service is already running before starting it, and stop the service in ```onDestroy```, so that the BroadcastReceiver can restart it.
A warning though: this approach will not work on Android O and above; please see this for more details.
|
Title: Looping binded @input on ngFor - not updated properly
Tags: angular;typescript;ngfor
Question: I have a component with two bound inputs (one big array, and two markers (positions) into that array).
Component:
```export class listSequence {
@Input() info: Data;
@Input() position: Markers;
..
...}
```
View
I'm looping over the bound ```@Input``` data and using the ```@Input``` markers (```position.start``` and ```position.stop```) to slice out only the wanted elements (the markers can change at any moment).
```<g *ngFor="#p of info.data | slice:position.start:position.stop+1 ; let i = index ">..</g>
```
When one of the markers changes while info.data is being iterated, the results are sometimes wrong.
Occasionally one or two iterations are processed late, after the change to the markers (position.start or position.stop).
After updating position.start, the iterations should run from i:0 to i:14.
Comment: Do you modify `Data` and `Markers` on the outside (where they are passed in)
Comment: I think you need to provide a Plunker that allows to reproduce. It's hard to guess about code that's not visibile and a subtle bug without being able to observe it happening.
Comment: yes in another component also using the same @input Marker. ----> this.position.start = '10'
Comment: I could do that. The info.data loop is not loading properly (sometimes) after changes in @input (as you can see in the console.logs screen capture)
Comment: in app.ts I load a big array, (notice the array ends with "X","Y","Z" elements). dragging the navigator makes you move on the array that is displayed below. When you drag the left side of navigator to the right sometimes elements are messed. you can check the "X","Y","Z" and console to see how the iterator messes sometimes
Here is another answer: plnkr link
In app.ts I load a big array, (notice the array ends with "X","Y","Z" elements).
Dragging the navigator moves you along the array displayed below (the red section). When you drag the left side of the navigator towards the right, elements are sometimes not displayed in the correct position; you can check with the "X","Y","Z" at the end.
Also, in the console you can see how the iterator sometimes gets out of order.
|
Title: When Query to Parent, Child also being fetched, Hibernate
Tags: java;json;hibernate
Question: In my application there are entities that have a one-to-many relationship. When I query the parent, children are also getting fetched. I want to fetch only the parent. I tried by adding ```fetchType``` as ```Lazy```, but it still fetches the children. The entities are as follows:
```
Parent Entity
```
```@Entity
@Table(name = "INSTITUTE_LIST_MST")
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
@JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "listId")
public class InstituteInfoMaster
{
@Id
@Column(name = "LIST_ID")
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Integer listId;
@Column(name = "LIST_DESC")
private String description;
@Column(name = "LIST_VALUE")
private String value;
@Column(name = "URL")
private String url;
@Column(name = "LOGO", unique = false, length = 100000)
private byte[] logo;
// @JsonProperty("instituteInfoDetails")
// @JsonBackReference
// @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
@OneToMany(fetch = FetchType.LAZY, mappedBy = "instituteInfotMaster", cascade = CascadeType.ALL)
private Set<InstituteInfoDetails> instituteInfoDetails = new HashSet<InstituteInfoDetails>();
@Column(name = "CREATED_DT")
@Temporal(TemporalType.TIMESTAMP)
private Date createdDate = new Date();
@Column(name = "CREATED_BY")
private String createdBy;
@Column(name = "UPDATED_DT")
@Temporal(TemporalType.TIMESTAMP)
private Date updatedDate;
@Column(name = "UPDATED_BY")
private String updatedBy;
@Column(name = "RECORD_STATUS")
private String recordStatus = "A";
public Integer getListId()
{
return listId;
}
public void setListId(Integer listId)
{
this.listId = listId;
}
public String getDescription()
{
return description;
}
public void setDescription(String description)
{
this.description = description;
}
public String getValue()
{
return value;
}
public void setValue(String value)
{
this.value = value;
}
public Date getCreatedDate()
{
return createdDate;
}
public void setCreatedDate(Date createdDate)
{
this.createdDate = createdDate;
}
public String getCreatedBy()
{
return createdBy;
}
public void setCreatedBy(String createdBy)
{
this.createdBy = createdBy;
}
public Date getUpdatedDate()
{
return updatedDate;
}
public void setUpdatedDate()
{
this.updatedDate = new Date();
}
public String getUpdatedBy()
{
return updatedBy;
}
public void setUpdatedBy(String updatedBy)
{
this.updatedBy = updatedBy;
}
public String getRecordStatus()
{
return recordStatus;
}
public void setActiveRecordStatus()
{
this.recordStatus = "A";
}
public void deleteRecord()
{
this.recordStatus = "D";
}
public Set<InstituteInfoDetails> getInstituteInfoDetails()
{
return instituteInfoDetails;
}
public void setInstituteInfoDetails(Set<InstituteInfoDetails> instituteInfoDetails)
{
// this.instituteInfoDetails = instituteInfoDetails;
/*
* for (InstituteInfoDetails ins : instituteInfoDetails) {
* ins.setComListMaster(this); }
*/
this.instituteInfoDetails = instituteInfoDetails;
}
public byte[] getLogo()
{
return logo;
}
public void setLogo(byte[] logo)
{
this.logo = logo;
}
public String getUrl()
{
return url;
}
public void setUrl(String url)
{
this.url = url;
}
public String getByteArrayString()
{
if (this.logo != null)
{
return new String(Base64.encode(this.logo));
} else
{
return "";
}
}
}
```
```
Child Entity
```
```@Entity
@Table(name = "INSTITUTE_LIST_DETAILS")
@JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "listDtlId")
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
public class InstituteInfoDetails
{
@Id
@Column(name = "LIST_DTL_ID")
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Integer listDtlId;
@ManyToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
@JoinColumn(name = "LIST_ID", nullable = false)
// @JsonManagedReference
private InstituteInfoMaster instituteInfotMaster;
@Column(name = "LIST_DTL_VALUE")
private String value;
@Column(name = "LIST_DTL_DESC", length = 5000)
private String description;
@Column(name = "STRING1", length = 5000)
private String string1;
@Column(name = "STRING2", length = 5000)
private String string2;
@Column(name = "STRING3", length = 5000)
private String string3;
@Column(name = "SEQUENCE_NO")
private Integer sequenceNo;
@Column(name = "NUMBER1")
private Double number1;
@Column(name = "NUMBER2")
private Double number2;
@Column(name = "NUMBER3")
private Double number3;
@Column(name = "DOCUMENT", unique = false, length = 100000)
private byte[] document;
@Column(name = "DOCUMENT_TYPE", length = 1)
private Integer documentType;
@Column(name = "DOCUMENT1", unique = false, length = 100000)
private byte[] document1;
@Column(name = "DOCUMENT1_TYPE", length = 1)
private Integer document1Type;
@Column(name = "DOCUMENT2", unique = false, length = 100000)
private byte[] document2;
@Column(name = "DOCUMENT2_TYPE", length = 1)
private Integer document2Type;
@Column(name = "CREATED_DT")
@Temporal(TemporalType.TIMESTAMP)
private Date createdDate = new Date();
@Column(name = "CREATED_BY")
private String createdBy;
@Column(name = "UPDATED_DT")
@Temporal(TemporalType.TIMESTAMP)
private Date updatedDate;
@Column(name = "UPDATED_BY")
private String updatedBy;
@Column(name = "RECORD_STATUS")
private String recordStatus = "A";
public Integer getListDtlId()
{
return listDtlId;
}
public void setListDtlId(Integer listDtlId)
{
this.listDtlId = listDtlId;
}
public String getValue()
{
return value;
}
public void setValue(String value)
{
this.value = value;
}
public String getDescription()
{
return description;
}
public void setDescription(String description)
{
this.description = description;
}
public InstituteInfoMaster getComListMaster()
{
return instituteInfotMaster;
}
public void setComListMaster(InstituteInfoMaster instituteInfotMaster)
{
this.instituteInfotMaster = instituteInfotMaster;
}
public Date getCreatedDate()
{
return createdDate;
}
public void setCreatedDate()
{
this.createdDate = new Date();
}
public String getCreatedBy()
{
return createdBy;
}
public void setCreatedBy(String createdBy)
{
this.createdBy = createdBy;
}
public Date getUpdatedDate()
{
return updatedDate;
}
public void setUpdatedDate(Date updatedDate)
{
this.updatedDate = updatedDate;
}
public String getUpdatedBy()
{
return updatedBy;
}
public void setUpdatedBy(String updatedBy)
{
this.updatedBy = updatedBy;
}
public String getRecordStatus()
{
return recordStatus;
}
public void setRecordStatus(String recordStatus)
{
this.recordStatus = recordStatus;
}
public InstituteInfoMaster getInstituteInfotMaster()
{
return instituteInfotMaster;
}
public void setInstituteInfotMaster(InstituteInfoMaster instituteInfotMaster)
{
this.instituteInfotMaster = instituteInfotMaster;
}
public String getString1()
{
return string1;
}
public void setString1(String string1)
{
this.string1 = string1;
}
public String getString2()
{
return string2;
}
public void setString2(String string2)
{
this.string2 = string2;
}
public String getString3()
{
return string3;
}
public void setString3(String string3)
{
this.string3 = string3;
}
public Integer getSequenceNo()
{
return sequenceNo;
}
public void setSequenceNo(Integer sequenceNo)
{
this.sequenceNo = sequenceNo;
}
public Double getNumber1()
{
return number1;
}
public void setNumber1(Double number1)
{
this.number1 = number1;
}
public Double getNumber2()
{
return number2;
}
public void setNumber2(Double number2)
{
this.number2 = number2;
}
public Double getNumber3()
{
return number3;
}
public void setNumber3(Double number3)
{
this.number3 = number3;
}
public byte[] getDocument()
{
return document;
}
public void setDocument(byte[] document)
{
this.document = document;
}
public Integer getDocumentType()
{
return documentType;
}
public void setDocumentType(Integer documentType)
{
this.documentType = documentType;
}
public byte[] getDocument1()
{
return document1;
}
public void setDocument1(byte[] document1)
{
this.document1 = document1;
}
public Integer getDocument1Type()
{
return document1Type;
}
public void setDocument1Type(Integer document1Type)
{
this.document1Type = document1Type;
}
public byte[] getDocument2()
{
return document2;
}
public void setDocument2(byte[] document2)
{
this.document2 = document2;
}
public Integer getDocument2Type()
{
return document2Type;
}
public void setDocument2Type(Integer document2Type)
{
this.document2Type = document2Type;
}
}
```
```
DAO
```
```@Override
public List<InstituteInfoMaster> getInstituteInfoMatserList()
{
logger.info("Listing Institute Master");
Session session = sessionFactory.getCurrentSession();
Query query = session.createQuery("select info from InstituteInfoMaster info");
List<InstituteInfoMaster> instituteInfoMaster = query.list();
logger.info("List : " + instituteInfoMaster);
return instituteInfoMaster;
}
```
```
Logs
```
```Hibernate: select institutei0_.LIST_ID as LIST_ID22_10_0_, institutei0_.LIST_DTL_ID as LIST_DTL1_9_0_, institutei0_.LIST_DTL_ID as LIST_DTL1_9_1_, institutei0_.CREATED_BY as CREATED_2_9_1_, institutei0_.CREATED_DT as CREATED_3_9_1_, institutei0_.LIST_DTL_DESC as LIST_DTL4_9_1_, institutei0_.DOCUMENT as DOCUMENT5_9_1_, institutei0_.DOCUMENT1 as DOCUMENT6_9_1_, institutei0_.DOCUMENT1_TYPE as DOCUMENT7_9_1_, institutei0_.DOCUMENT2 as DOCUMENT8_9_1_, institutei0_.DOCUMENT2_TYPE as DOCUMENT9_9_1_, institutei0_.DOCUMENT_TYPE as DOCUMEN10_9_1_, institutei0_.LIST_ID as LIST_ID22_9_1_, institutei0_.NUMBER1 as NUMBER11_9_1_, institutei0_.NUMBER2 as NUMBER12_9_1_, institutei0_.NUMBER3 as NUMBER13_9_1_, institutei0_.RECORD_STATUS as RECORD_14_9_1_, institutei0_.SEQUENCE_NO as SEQUENC15_9_1_, institutei0_.STRING1 as STRING16_9_1_, institutei0_.STRING2 as STRING17_9_1_, institutei0_.STRING3 as STRING18_9_1_, institutei0_.UPDATED_BY as UPDATED19_9_1_, institutei0_.UPDATED_DT as UPDATED20_9_1_, institutei0_.LIST_DTL_VALUE as LIST_DT21_9_1_ from INSTITUTE_LIST_DETAILS institutei0_ where institutei0_.LIST_ID=?
Hibernate: select institutei0_.LIST_ID as LIST_ID22_10_0_, institutei0_.LIST_DTL_ID as LIST_DTL1_9_0_, institutei0_.LIST_DTL_ID as LIST_DTL1_9_1_, institutei0_.CREATED_BY as CREATED_2_9_1_, institutei0_.CREATED_DT as CREATED_3_9_1_, institutei0_.LIST_DTL_DESC as LIST_DTL4_9_1_, institutei0_.DOCUMENT as DOCUMENT5_9_1_, institutei0_.DOCUMENT1 as DOCUMENT6_9_1_, institutei0_.DOCUMENT1_TYPE as DOCUMENT7_9_1_, institutei0_.DOCUMENT2 as DOCUMENT8_9_1_, institutei0_.DOCUMENT2_TYPE as DOCUMENT9_9_1_, institutei0_.DOCUMENT_TYPE as DOCUMEN10_9_1_, institutei0_.LIST_ID as LIST_ID22_9_1_, institutei0_.NUMBER1 as NUMBER11_9_1_, institutei0_.NUMBER2 as NUMBER12_9_1_, institutei0_.NUMBER3 as NUMBER13_9_1_, institutei0_.RECORD_STATUS as RECORD_14_9_1_, institutei0_.SEQUENCE_NO as SEQUENC15_9_1_, institutei0_.STRING1 as STRING16_9_1_, institutei0_.STRING2 as STRING17_9_1_, institutei0_.STRING3 as STRING18_9_1_, institutei0_.UPDATED_BY as UPDATED19_9_1_, institutei0_.UPDATED_DT as UPDATED20_9_1_, institutei0_.LIST_DTL_VALUE as LIST_DT21_9_1_ from INSTITUTE_LIST_DETAILS institutei0_ where institutei0_.LIST_ID=?
... (the same SELECT against INSTITUTE_LIST_DETAILS is repeated once per remaining parent row)
```
Comment: How do you know the children are also being fetched? With lazy-load, the children will only be fetched if you try to access them.
Comment: It seems likely that your code that produces the json format is including the collection of children. When it hits the collection, lazy loading will cause the DB to be queried.
Comment: I am receiving data in JSON format in which I am getting the child values along with the parent. Also, the child-table query is being printed in the logs.
Comment: I tried without JSON; it still fetches the children.
Here is another answer:
Make sure that when logging the root entity, its toString() implementation doesn't explicitly or implicitly iterate over the children collection:
```logger.info("List : " + instituteInfoMaster);
```
If the Hibernate ```Session``` is still open when you serialize the root to JSON, the children collection will be initialized as soon as it is accessed, and each associated child will be fetched too. To prove this, try adding ```@JsonIgnore``` to the children collection and check again.
Try adding ```@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)``` on the children collection too and let the 2nd level collection cache handle this for you.
Comment for this answer: I tried adding @JsonIgnore. As a result I receive only the parent details, but the logs still show queries to the child tables.
|
Title: How to prove (forall n m : nat, (n <? m) = false -> m <= n) in Coq?
Tags: coq;theorem-proving
Question: How to prove ```forall n m : nat, (n <? m) = false -> m <= n``` in Coq?
I got as far as turning the conclusion into ```~ n < m``` by applying ```Nat.nlt_ge```.
Doing ```SearchAbout ltb``` yields ```ltb_lt: forall n m : nat, (n <? m) = true <-> n < m```, but I don't know how to apply this since it only deals with ```(n <? m) = true```, not ```(n <? m) = false```.
Comment: Ah, I think I got it: `intros. apply Nat.ntl_ge. contradict H. apply Nat.ltb_lt in H. rewrite H. discriminate. Qed.`
Comment: Sorry typo, that should have been `Nat.nlt_ge`.
Comment: Where is `Nat.ntl_ge` defined?
Here is the accepted answer: Here is a proof that uses induction on n.
```Require Import NPeano.
Theorem my_thm: forall n m, (n <? m) = false -> m <= n.
induction n; destruct m; intros ; auto using (Le.le_n_S); discriminate.
Qed.
```
|
Title: How to append to a CSV file?
Tags: python;csv
Question: Using Python to append to a CSV file, I get data only on every other row.
How do I fix this?
```import csv
LL = [(1,2),(3,4)]
Fn = ("C:\Test.csv")
w = csv.writer(open(Fn,'a'), dialect='excel')
w.writerows(LL)
```
```C:\test.csv``` when opened looks like this:
```1,2
3,4
1,2
3,4
```
Comment: @poke, @Chris: The OP is getting an unexpected empty row inserted after each expected data row.
Comment: "I get data every other row. How do I fix?" What precisely does this mean? Can you paste sample output versus desired sample output?
Comment: What is the question/problem? It's not clear to me..
Here is the accepted answer: Appending is irrelevant to the problem; notice that the first two rows (those from the original file) are also double-spaced.
The real problem is that you have opened your file in text mode.
CSV is a binary format, believe it or not. The csv module writes the misleadingly named "lineterminator" (it should be "rowseparator") as ```\r\n``` as expected, but then the Windows C runtime kicks in and replaces the ```\n``` with ```\r\n```, so that you get ```\r\r\n``` between rows. When you "open" the csv file with Excel, it becomes confused.
Always open your CSV files in binary mode ('rb', 'wb', 'ab'), whether you are operating on Windows or not. That way, you will get the expected rowseparator (CR LF) even on *x boxes, your code will be portable, and any linefeeds embedded in your data won't be changed into something else (on writing) or cause dramas (on input, provided of course they're quoted properly).
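The row separator described above can be seen directly, without Excel or even a file on disk: the csv module's ```excel``` dialect really does terminate rows with ```\r\n```. A minimal sketch using an in-memory buffer (which performs no newline translation):

```python
import csv
import io

# StringIO performs no newline translation, so we see exactly what the
# csv writer emits: each row ends with "\r\n".
buf = io.StringIO()
w = csv.writer(buf, dialect='excel')
w.writerows([(1, 2), (3, 4)])

print(repr(buf.getvalue()))  # -> '1,2\r\n3,4\r\n'
```

If a text-mode file layer then translates each ```\n``` into ```\r\n``` (as the Windows C runtime does), every row separator becomes ```\r\r\n```, which Excel displays as a blank row after each data row.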
Other problems:
(1) Don't put your data in your root directory (```C:\```). Windows inherited a hierarchical file system from MS-DOS in the 1980s. Use it.
(2) If you must embed hard-wired filenames in your code, use raw strings ```r"c:\test.csv"``` ... if you had ```"c:\test.csv"``` the '\t' would be interpreted as a TAB character; similar problems with ```\r``` and ```\n```
(3) The examples in the Python manual are aligned more towards brevity than robust code.
Don't do this:
```w = csv.writer(open('foo.csv', 'wb'))
```
Do this:
```f = open('foo.csv', 'wb')
w = csv.writer(f)
```
Then when you are finished, you have ```f``` available so that you can do ```f.close()``` to ensure that your file contents are flushed to disk. Even better: read up on the new ```with``` statement.
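Note that the binary-mode advice applies to Python 2; in Python 3 the csv module requires text mode, and the equivalent fix is to open the file with ```newline=''``` so the writer's row separator is not translated. A sketch combining that with the ```with``` statement (the filename here is just an example):

```python
import csv

rows = [(1, 2), (3, 4)]

# Python 3: newline='' disables newline translation, so the csv writer's
# "\r\n" row separator reaches the file untouched and rows are not doubled.
with open('test.csv', 'a', newline='') as f:
    w = csv.writer(f, dialect='excel')
    w.writerows(rows)
# f is flushed and closed automatically when the with block exits.
```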
Comment for this answer: In python3, opening in binary mode: `_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)`
Comment for this answer: Interestingly, while most of the examples in the `csv` docs for Python 2 docs use binary mode, the [`DictReader`](https://docs.python.org/2/library/csv.html#csv.DictReader) examples don't, and nor do [any of the examples for Python 3](https://docs.python.org/3/library/csv.html). Is there a good reason for this? If not, perhaps you'd like to try getting the docs changed?
Comment for this answer: Darn. I just figured this out and was halfway through an answer saying the same. +1.
Here is another answer: I have encountered a similar problem when appending to an already created csv file while running on Windows.
As above, writing and appending in "binary" mode avoids adding an extra line to each row written or appended by the Python script. Therefore:
```w = csv.writer(open(Fn,'ab'),dialect='excel')
```
|
Title: Views created programmatically aren't inheriting theme
Tags: android;android-theme;android-styles
Question: I am trying to create a view programmatically and then add it to my activity. This bit is working fine; however, the theme of the view group isn't inherited by my new view.
My theme:
```<style name="CustomButtonTheme" parent="@style/Widget.AppCompat.Button">
<item name="android:textColor">#FF0000</item>
<item name="android:background">#00FF00</item>
</style>
```
My layout:
```<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/buttonArea"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:orientation="vertical"
android:theme="@style/CustomButtonTheme">
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="This button inherits CustomButtonTheme" />
</LinearLayout>
```
Java code
```AppCompatButton button = new AppCompatButton(getContext());
button.setText("This button does not inherit CustomButtonTheme");
LinearLayout buttonArea = findViewById(R.id.buttonArea);
buttonArea.addView(button);
```
Comment: This is perfect! The second method has the added benefit that I don't need to hard code the theme in the context wrapper. If you add this as an answer I'll mark it on here to give you the well deserved credit. Thanks :)
Comment: That `theme` attribute in your layout will only have effect during inflation. It won't be applied to your `Activity`'s overall theme. All it does, however, is cause the `LayoutInflater` wrap its current `Context` with a `ContextThemeWrapper`. You can do the same; e.g., `ContextThemeWrapper wrapper = new ContextThemeWrapper(getContext(), R.style.CustomButtonTheme);`, `... new AppCompatButton(wrapper);`.
Comment: I just realized that there's a simpler way. Use the `LinearLayout`'s `Context` to create the `AppCompatButton`; i.e., `... new AppCompatButton(buttonArea.getContext());`. No need for your own separate `ContextThemeWrapper`.
Here is the accepted answer: An ```android:theme``` attribute in a layout will only have effect during inflation, and only on that particular subtree. It won't be applied to the ```Activity```'s overall theme.
All that attribute does, though, is cause the ```LayoutInflater``` to wrap its current ```Context``` with the specified theme in a ```ContextThemeWrapper```. We could do something similar ourselves, and just to illustrate the basic usage:
```ContextThemeWrapper wrapper = new ContextThemeWrapper(getContext(), R.style.CustomButtonTheme);
AppCompatButton button = new AppCompatButton(wrapper);
```
However, this has already been done for us, basically, when the ```LayoutInflater``` created a ```ContextThemeWrapper``` internally, for that ```android:theme``` attribute. That ```ContextThemeWrapper``` is the ```Context``` that the ```LinearLayout``` will have been created with, so we can simply use its ```Context``` to instantiate our ```AppCompatButton```:
```AppCompatButton button = new AppCompatButton(buttonArea.getContext());
```
As the OP points out, this has the added benefit of working in pretty much every similar setup without having to know the exact theme needed.
|
Title: Remove specific char from String
Tags: java;android;string;removeall
Question: How do I remove a specific character from a String? I have an ArrayList testingarray.
```String line=testingarray.get(index).toString();
```
I want to remove a specific character from line.
I have an array of Unicode code points:
```int uniCode[]={1611,1614,1615,1616,1617,1618};
```
I want to remove the characters that have these Unicode values.
Comment: Can you give an example what you mean by _remove a specific character_
Comment: Do you want to remove all `d` characters?
Comment: possible duplicate of [how to remove special characters from an string?](http://stackoverflow.com/questions/7552253/how-to-remove-special-characters-from-an-string)
Comment: For Example I want to remove 'd' from a string( this is a dog ). Result should be( this is a og ).
Comment: @Amulya Khare yes i want to remove all d characters
Here is the accepted answer: Use:
```NewString = OldString.replaceAll("char", "");
```
For the example in your comment, use:
```NewString = OldString.replaceAll("d", "");
```
For removing Arabic characters, please see the following links:
how could i remove arabic punctuation form a String in java
removing characters of a specific unicode range from a string
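As a sketch of how that can look in code (the `stripCodes` helper and the sample string are illustrative, not from the linked answers, and assume the int values are decimal Unicode code points as in the question):

```java
public class StripCodes {
    static final int[] UNI_CODE = {1611, 1614, 1615, 1616, 1617, 1618};

    // Keep every character whose code point is not listed in UNI_CODE.
    static String stripCodes(String line) {
        StringBuilder sb = new StringBuilder(line.length());
        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);
            boolean drop = false;
            for (int code : UNI_CODE) {
                if (c == code) { drop = true; break; }
            }
            if (!drop) sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // U+064E (1614) and U+064F (1615) are Arabic diacritics from the list.
        System.out.println(stripCodes("b\u064E\u064Fc")); // prints "bc"
    }
}
```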
Comment for this answer: thanks . i did it but i did't get what i want. Basically i want i have 7 unicodes of Arabic characters , int uniCode[]={1611,1614,1615,1616,1617,1618}; i want to remove those words that have these unicodes
Comment for this answer: @ Shayan pourvatan thanks a lot . That is what i was searching i got my solution.
Here is another answer: Try this,
```String result = yourString.replaceAll("your_character","");
```
Example:
```String line=testingarray.get(index).toString();
String result = line.replaceAll("[-+.^:,]","");
```
Comment for this answer: replaceAll("[-+.^:,]","") - this is working as a regEx and if anything is present in line , it is replacing with blank( trimming it).
Here is another answer: you can replace character using replace method in string.
```String line = "foo";
line = line.replace("f", "");
System.out.println(line);
```
output
```oo
```
Comment for this answer: Cleanest solution for my application.
Here is another answer: If it's a single char, there is no need to use replaceAll, which uses a regular expression. Assuming "H" is the character you want to replace:
```String line=testingarray.get(index).toString();
String cleanLine = line.replace("H", "");
```
update (after edit):
Since you already have an int array of the Unicode values you want to remove (I'm assuming the ints are decimal code points):
```String line=testingarray.get(index).toString();
int uniCodes[] = {1611,1614,1615,1616,1617,1618};
StringBuilder regexPattern = new StringBuilder("[");
for (int uniCode : uniCodes) {
regexPattern.append((char) uniCode);
}
regexPattern.append("]");
String result = line.replaceAll(regexPattern.toString(), "");
```
|
Title: Azure error AADSTS50020 while loggin in from VS2022
Tags: azure;azure-active-directory;azureportal
Question: AADSTS50020: User account 'my@email' from identity provider 'https://sts.windows.net/783c0fcf-4d70-4426-9bbc-1e83f8b865b2/' does not exist in tenant 'Default Directory' and cannot access the application '872cd9fa-d31f-45e0-9eab-6e460a02d1f1'(Visual Studio).
I am logging in with an account (mine) that is a Global Administrator and owner of that Azure organization. How can I be not authorized? This makes zero sense -__- As a test I invited my other email (on a different domain) as an external guest and the login worked for that account. So I can login as a guest but not as an owner.
Here is another answer: There may be a few possible causes for this error.
```Possible cause 1
```
Check whether you already have an active session that uses a different (personal) account than the one intended for the tenant where you are admin, or a session meant for a guest user account.
To see if this is the reason, look for the User account and Identity provider values in the error message and check whether those values match the expected combination.
Also check whether the sign-in was done with your organization account against the wrong tenant instead of your home tenant, or with a different personal account than the one intended.
Resolution
To resolve this issue please sign out from active session, then sign in again from a different browser or a private browser session.
```Cause 2
```
Also, if you have set Supported account types to multiple organizations but your authentication call targets a specific tenant, i.e., https://login.microsoftonline.com/{tenant name or id}, then users from other organizations cannot access the application and have to be added as guests in the tenant specified in the request. This may be why the guest account is able to sign in.
Resolution
For multiple organizations, the authentication request should use either the common or the organizations endpoint, e.g.:
https://login.microsoftonline.com/organizations or
https://login.microsoftonline.com/common
Also check Error AADSTS50020 - User account from identity provider does not exist in tenant - Active Directory | Microsoft Docs to troubleshoot in other cases.
|
Title: String split and store in 2 different list
Tags: java;string;split
Question: I have a list which has data like this ```[1-123,2-456,6-654]```.
I need to split by the delimiter "-" and store the first part in one list and the second part in another list.
```List clientIds = new ArrayList();
List chipIds = new ArrayList();
String delimiter = "-";
for(int i=0;i<selectedClientChips.size();i++){
// How to add them in 2 lists???
}
```
Comment: Is `[1-123,2-456,6-654]` a real element of your list or the structure of the complete list, which results in `1-123` as list objects (where the solution should be pretty easy)?
Comment: "*I need to split by delimiter "-"*" what is stopping you from doing so?
Here is another answer: To split a String like 'N-NNNN' you can use the .split method of the String class.
See the Java Doc:
http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#split(java.lang.String)
The split method returns a String array with the split values.
For the String ```"1-123"``` the ```.split("-");``` returns ```{"1", "123"};```
Improved Example:
```import java.util.*;
import java.lang.*;
import java.io.*;
public class Ideone
{
public static void main (String[] args) throws java.lang.Exception
{
String selectedClientChips[] = {"1-123", "2-456", "6-654"};
List clientIds = new ArrayList();
List chipIds = new ArrayList();
String delimiter = "-";
for(int i=0;i<selectedClientChips.length;i++){
String split[] = selectedClientChips[i].split(delimiter);
if (split.length == 2) {
clientIds.add(split[0]);
chipIds.add(split[1]);
}
}
System.out.println("CliendIDs: " + clientIds.toString());
System.out.println("ChipIDs: " + chipIds.toString());
}
}
```
Output:
```CliendIDs: [1, 2, 6]
ChipIDs: [123, 456, 654]
```
Working Example:
http://ideone.com/w2IvNm
Here is another answer: This should work...
```for(int i=0;i<selectedClientChips.size();i++){
// get element and split it
String element[] = selectedClientChips.get(i).split(delimiter);
// add each part to one list
clientIds.add(element[0]);
chipIds.add(element[1]);
}
```
Here is another answer: This should work for your needs:
``` List<String> selectedClientChips = new ArrayList();
selectedClientChips.add("1-123");
selectedClientChips.add("2-456");
List clientIds = new ArrayList();
List chipIds = new ArrayList();
for(int i=0;i<selectedClientChips.size();i++){
String[] r = selectedClientChips.get(i).split("-");
clientIds.add(r[0]);
chipIds .add(r[1]);
}
```
Here is another answer: Add these three lines in the ```for``` loop:
``` String[] splitList = selectedClientChips.get(i).toString().split(delimiter);
clientIds.add(splitList[0]);
chipIds.add(splitList[1]);
```
Comment for this answer: @madhu Wait, why do you need to call `toString()` method? If type of elements in list is not `String` (which is confusing since your question is about "String split...") then you should mention it in your question and describe what exact type it is. Maybe this type contains methods like `getFirst()` or `getLast()` which returns parts before and after `-` and we don't even need to `split`.
Comment for this answer: @madhu Ohh.. I had considered the elements as String. Edited the answer as per your requirement.
Comment for this answer: Thanks its working but with small change: clientIds.add(selectedClientChips.get(i).toString().split(delimiter)[0]);
chipIds.add(selectedClientChips.get(i).toString().split(delimiter)[1]);
Here is another answer: ```split``` and ```add``` the first element to the first list and the second element to the second list.
```String[] strArr;
for (int i = 0; i < selectedClientChips.size(); i++) {
strArr = selectedClientChips.get(i).split(delimiter);
clientIds.add(strArr[0]);
chipIds.add(strArr[1]);
}
```
Here is another answer: You can try this :
```for(int i=0;i<selectedClientChips.size();i++){
String []splitArray=selectedClientChips.get(i).split(delimiter);
clientIds.add(splitArray[0]);
chipIds.add(splitArray[1]);
}
```
|
Title: Can my UpdatePanel reference a DataTable without a DB hit everytime it posts back?
Tags: asp.net;.net;datatable;updatepanel;postback
Question: I have a paging repeater inside of an UpdatePanel so that I can show, say 10 records at a time of a DataTable. When hitting the next/back buttons it will, of course, show the next 10 or previous 10 records. Is there a way I can have it reference the same DataTable when I hit next/back without having to get the DataTable again from the DB on page load? I think I'm just having a bit of a brain fart. Thanks for the help.
Here is the accepted answer: One option is to use .Net's caching APIs. Add your data to the cache for a certain duration or with a dependency, and then retrieve it rather than calling your database query.
http://msdn.microsoft.com/en-us/library/system.web.caching.cache.add.aspx
```public void AddItemToCache(Object sender, EventArgs e) {
if (Cache["Key1"] == null)
Cache.Add("Key1", "Value 1", null, DateTime.Now.AddSeconds(60), Cache.NoSlidingExpiration, CacheItemPriority.High, onRemove);
}
```
Comment for this answer: Storing data in the Session object could result in greater memory usage as it would need to be stored for every session. Storing a large data set in the ViewState may dramatically increase the page size and speed. The Cache object is more suited to this scenario and provides greater flexibility and extensibility.
Comment for this answer: alternatively you could add it to the Session or the ViewState. Not sure which is considered best practice though.
Here is another answer: I agree with @TimS, but if you have a scenario where you want to show records based on some criteria, you can use ViewState (though it is a generally discouraged practice) to store the DataTable specific to the page's criteria and do the next/previous work from it.
|
Title: Azure Pipelines: dotnet test fails after dotnet build with -o - "It was not possible to find any compatible framework version"
Tags: .net;.net-core;msbuild;azure-pipelines
Question: Here's what I'm trying to do, running a Pipeline on a self-hosted Agent:
Install .NET SDK 6.0.202
Build my Solution to a specific Output Directory:
```- task: DotNetCoreCLI@2
inputs:
command: build
projects: MySolution.sln
arguments: "--configuration MyConfiguration -o $(Build.BinariesDirectory)"
```
Run (NUnit) Unit tests contained in some of the built DLLs
``` - task: DotNetCoreCLI@2
inputs:
command: test
projects: |
$(Build.BinariesDirectory)\**\*Tests.dll
```
However, I get the TestHost exiting with the following error:
```
It was not possible to find any compatible framework version
The framework 'Microsoft.NETCore.App', version '6.0.0' (x64) was not found.
No frameworks were found.
```
The same thing works, if I leave out the -o parameter and just let each project build in its bin/MyConfiguration folder.
Locally, everything works fine, with or without the -o, just doing "dotnet test Outfolder/someTests.dll".
Listing the installed runtimes with dotnet --list-runtimes shows various runtimes, including the wanted 6.0.0.
Using "where.exe dotnet" shows two installation locations, with the correct one (_work\_tool\dotnet\dotnet.exe) listed first. Neither indicates an x86 installation (following advice for some similar problems that pointed to the x86 runtime being wrongly used/discovered).
Both the succeeding step without -o and the failing step with -o use the same dotnet.exe
Getting the agent's OS architecture with "wmic OS get OSArchitecture" returns 64-bit
I tried explicitly installing the .NET runtime 6.0.0 to no avail.
Earlier, all projects were configured via .targets file to build to a common output directory. All test projects were configured to build to a different common directory. Build and test were both successful at that point.
All projects define x64 as Platform, all test projects additionally define PlatformTarget x64
All projects have net6.0-windows as TargetFramework. I am unsure why it is trying to explicitly use the 6.0.0 runtime instead of a later 6.0.x one.
How do I get my build agent to successfully run my tests when explicitly building to an output directory?
Update
Some new info:
Extending the UseDotNet Task with "performMultiLevelLookup: true" replaces "No frameworks were found" with a list of some runtimes found at C:\Program Files (those which are installed globally on the agent I guess).
The issue seems to be that dotnet test does not look for the runtime in the agent working directory (_work\_tool\dotnet). PATH does include that path (ahead of C:\Program Files...), but with a forward slash (C:\agent\_work\_tool/dotnet), which I am not sure is an issue.
Update 2
These issues might be related:
https://github.com/microsoft/vstest/issues/2228
https://github.com/dotnet/runtime/issues/68180
I suspect that the testing task is looking for the runtime in the wrong place, which is why it finds none (or only the ones in the default installation location when using performMultiLevelLookup). Why this happens only when I previously build to a specific output directory, and how to fix it, is puzzling to me.
I tried outputting the DOTNET_ROOT env var at various points throughout the pipeline run. It is set by the UseDotNetTask (to ```C:\agent\_work\_tool/dotnet```) and not changed before or after the testing task. So I guess, either it is set temporarily during the test task, ignored or doesn’t work due to the forward slash. But then: why does it work for dotnet build?
Here is another answer: In our pipelines running on servers with multiple runtimes, we always have to include a task like this at the top before using .NET Core; give it a try and see if it makes any difference:
```steps:
- task: UseDotNet@2
inputs:
version: '6.0.x'
```
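As the question's first update notes, the UseDotNet task can also be allowed to fall back to machine-wide runtime installs via its performMultiLevelLookup input. A hedged variant of the task above (whether this helps depends on how the agent's runtimes are laid out):

```steps:
- task: UseDotNet@2
  inputs:
    version: '6.0.x'
    performMultiLevelLookup: true  # also consider runtimes under C:\Program Files
```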
Comment for this answer: We do have that. It's what I meant with my first bullet point "Install .NET SDK". Additionally, we tried explicitly installing the 6.0.0 runtime with the UseDotNet task and packageType runtime.
|
Title: Why doesn't environment variable get updated in script
Tags: windows;cmd
Question: I am basically a Linux guy forced into a Windows world lately, so I need to write a bat script, but I ran into the following problem.
Here is my .bat script
```///////////////////////////
echo.
echo This is testbat script
echo -----------------------------------------------------------
echo.
if "%1"=="" (
echo "You did not enter an argument
) else (
set "myvar="
echo Argument is %1%
set myvar=%1%
if "%myvar%"=="%1%" (
echo myvar is %myvar%
) else (
echo myvar is not set to %1
)
)
////////////////////////////////////////////////////////
```
It seems that I need to run this script twice to get myvar to change.
For example,
FIRST RUN:
```
testbat.bat hello
```
OUTPUT:
This is testbat script
-----------------------
``` Argument is hello
myvar is not set to hello
```
SECOND RUN:
```
testbat.bat hello
```
OUTPUT:
This is testbat script
-----------------------
``` Argument is hello
myvar is hello
```
NOW CHANGE the argument to bye
THIRD RUN:
```
testbat.bat bye
```
OUTPUT:
This is testbat script
-----------------------
``` Argument is bye
myvar is not set to bye (In fact, it is still hello here)
```
FOURTH RUN (same input as THIRD):
``` > testbat.bat bye
```
OUTPUT:
This is testbat script
-----------------------
``` Argument is bye
myvar is bye (Finally gets updated)
```
////////////////////////////////////
My question is why the script doesn't update the environment variable the first time?
Why do I need to run the script a second time to get the variable to change to the new value in the script? I used the SET command and discovered that the value is changed in the environment, so why does the script output reflect the old value? Of course, the value in the environment might not change until after the script completed, I'm not sure.
I'm running the script and then using the up arrow to edit the command line if that makes any difference, it doesn't seem to though.
Comment: Seems as if it takes some finite amount of time for the SET to take effect.
Comment: No, it's a one shot that takes a directory path as input and calculates the CRC of each file in that directory and creates (or updates) an existing file in that directory with the CRC information.
Comment: Are you running your script inside a loop?
Here is the accepted answer: You cannot use ```%1%``` as an environment variable because ```%1``` is a command-line replaceable parameter.
Also, cmd parses an entire parenthesized block in one pass and expands every ```%var%``` in it before any line of the block executes, which is why ```%myvar%``` inside your ```if/else``` still shows the old value.
To ```set/change``` and ```display``` a variable within parentheses or a loop, you need
```@echo off
setlocal enabledelayedexpansion
```
and use ```echo !myvar!```
Comment for this answer: If I replace %1% with %1 I still have the same problem.
Comment for this answer: I'm not in a loop, but I think you are pointing me in the right direction. Any complicated construct, such as my if/else would result in expansion before execution.
I modified the script to use setlocal enabledelayedexpansion and use !myvar! to reference the variable while local is active. After these checks, I am out of the if/else, I do a endlocal, and my variable is set appropriately thereafter.
Thanks
Comment for this answer: See the second part of my answer
|
Title: Why does writing to a named pipe continue after no one is reading?
Tags: c;unix;named-pipes
Question: I'm doing some experiments to learn about named pipes. It's my understanding that the OS will block a program that writes to a named pipe until another program reads from the named pipe. So I've written two programs, ```startloop``` and ```readbyte```. ```startloop``` creates a fifo and continually writes to it with each read of the client (```readbyte```):
```#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
int main(int argc, char *argv[]) {
const char num = 123;
mkfifo("fifo", S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP);
int fd = open("fifo", O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
while (1) {
printf("loop_start\n");
write(fd, &num, sizeof(num));
}
close(fd);
return 0;
}
```
```readbyte``` reads one byte from the fifo when run:
```#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <fcntl.h>
int main(int argc, char *argv[]) {
char num;
int fd;
if ((fd = open(argv[1], O_RDONLY, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH)) == -1) {
perror("Cannot open input file\n"); exit(1);
}
read(fd, &num, sizeof(num));
printf("%d\n", num);
close(fd);
return 0;
}
```
```readbyte``` prints the number as expected when run on "fifo":
```hostname:dir username$ ./readbyte fifo
65
```
As I expect, ```startloop``` doesn't print anything until I read from the fifo with ```readbyte```. However, when it becomes unblocked, it writes to "fifo" several times instead of immediately being suspended. Why is this?
```hostname:dir username$ ./startloop
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
loop_start
```
Comment: @Andrew Henle-I know it at least succeeds in some fashion because `readbyte` prints what was written. I did have a version with error checking--I removed all that to make the post shorter, though. I can add it back in if you think that's the problem.
Comment: I agree that adding that information would be a good idea. In this case, it can actually hide the problem. When I add a loop counter, I get a single line of output, `loop 0, wrote 1 bytes`. Sometimes when I run I'll get two lines. The time that the `printf` takes with the extra arguments ends up hiding the number of loops performed.
Comment: How do you know the `write()` call succeeds? You're not checking its return value.
Comment: The fact that your loop continues just means `write()` didn't *block*, not that is *succeeded*. There's a huge difference. If you don't know what's happening and want help, the more data you post the better the help you'll get. In this case, emitting a loop counter along with the return value from `write()` would have been a lot better than a simple "loop_start". This was an easy problem to solve - solving much harder problems will *need* that extra information.
Here is the accepted answer: "It's my understanding that the OS will block a program that writes to a named pipe until another program reads from the named pipe."
That understanding is incorrect. ```write``` will not block unless the pipe/fifo is full. From the pipe manual:
```
A pipe has a limited capacity. If the pipe is full, then a write(2)
will block or fail, depending on whether the O_NONBLOCK flag is set
(see below).
```
As to why the first ```write``` appears to block - it actually doesn't. It is the ```open``` that blocks. From the fifo manual:
```
The FIFO must be opened on both ends (reading and writing) before data
can be passed. Normally, opening the FIFO blocks until the other end
is opened also.
```
Update: Actually, the above is true for the first ```write```. But there is probably more to the explanation. Once the ```readbyte``` program closes the fifo, subsequent ```write``` calls should start failing.
Comment for this answer: @rici Thanks and agreed, that is a valuable clarification.
Comment for this answer: Once the reader closes the fifo, the write call will fail with `EPIPE` but only if `SIGPIPE` is being ignored. Since the signal is not ignored by default and the default action is to terminate the process, you can expect several writes to succeed, until the pipe is closed, and then the process to die.
Here is another answer: Test the write result:
``` while (1) {
printf("loop_start\n");
int ret = write(fd, &num, sizeof(num));
if(ret == -1)
{
perror("error writing to fifo");
exit(1);
}
}
```
|
Title: Convert pandas Dataframe to python native int
Tags: python;sql;pandas;numpy
Question: Versions where problem occurs:
```python 3.6.13
pandas 1.1.5
numpy 1.19.2
```
This seems trivial, but I can't find a satisfying solution so far. First, I import data into a pandas DataFrame before loading it into an SQL database. The failure message that I've gotten is:
```ProgrammingError: (pyodbc.ProgrammingError) ('Invalid parameter type. param-index=0 param-type=numpy.int64', 'HY105')
```
Apparently, to get the dataframe into the database, the dtype can't be numpy.int64 and must be int. I had found a solution here:
"Invalid parameter type" (numpy.int64) when inserting rows with executemany()
Here is a screenshot of the target column dtype:
The only way I've found to get data to be dtype int is the native function int(), but that can only be used on single values.
The numpy method .astype(int) for some reason only converts to numpy.int32:
```df = pd.DataFrame(data=[[1,4,5], [2, 'nan', 4]], columns=['A', 'B', 'C'])
df[['A', 'C']] = df[['A', 'C']].astype(int)
df.info()
```
Both the .info() method and checking the type of individual values yield int32 for me.
Can someone please tell me how to turn the whole dataframe into native int so that I can import it into my database?
Comment: updated question!
Comment: I mentioned below that I need the values in the dataframe for a couple further steps. When I try to assign the dataframe columns to these generated lists, it converts back to int64
Comment: Hm, interesting suggestion. Not completely viable because I do operations with the frame once more before importing, but I'll see if I can integrate it
Comment: Thank you to those who gave input! I couldn't find a way to convert a pandas df to int, only numpy.intXX, but I found a solution where I write the values individually to the SQL database, so at that point I convert the individual values to int. Therefore I circumvented the problem.
Comment: The underlying data structure of the DataFrame is going to be one of the valid numpy types or `object` (even if using some of the pandas experimental types). There is typically some configurations available in the transfer protocol from pandas to sql. You've not provided the code for how you're trying to export from pandas to SQL nor the table schema. That would be helpful to determine what options are available.
Comment: You could try going to string instead and let the Database parse the string input into the appropriate type.
Comment: `df.to_numpy().tolist()` should produce a list of lists of ints. There may also be a `df.to_list()` method
Here is another answer: You should know which int bit length your database uses and convert with the appropriate type: ```np.int8```/```np.int16```/```np.int32```/```np.int64```
Example:
```import numpy as np
df['col'].astype(np.int8)
```
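As the comments suggest, another option is to keep the DataFrame as-is for the intermediate steps and convert to native Python ints only at insert time; a minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "C": [5, 4]})  # columns are numpy int64 internally

# ndarray.tolist() converts numpy scalars to native Python ints,
# which pyodbc accepts as parameters.
rows = df.to_numpy().tolist()

assert all(type(v) is int for row in rows for v in row)
```

The converted `rows` can then be passed to the driver (e.g. via executemany) while the DataFrame itself stays untouched.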
Comment for this answer: In the Microsoft SQL server the value type is int. It's also rejected np.int32 and np.int64. That's why I assume it can't be a numpy dtype.
Comment for this answer: both also fail with the same response. param-type=numpy.intXX respectively.
Comment for this answer: It does but I need the values in the dataframe for a couple further steps. When I try to assign the dataframe columns to these generated lists, it converts back to int64
Comment for this answer: It's worth noting that this had worked for a while; then I had to reset my environment and something got upgraded that no longer allows this.
Comment for this answer: int doesn't mean anything *per se*, there are different types of bit length, have you tried np.int16 and np.int8?
Comment for this answer: You could use `df[['A', 'C']].values.tolist()` to get lists of python int
|
Title: MyBatis create multiple tables for MySQL
Tags: mysql;sql;mybatis
Question: As you can imagine,
```CREATE TABLE table1(id int);
CREATE TABLE table2(id int);
```
is easy executable on MySQL and on nearly every other SQL-Database.
This
```<update id="test">
CREATE TABLE table1(id int);
CREATE TABLE table2(id int);
</update>
```
is executable on MS SQL Server, but not on a MySQL-Database. Error:
```Exception in thread "main" org.apache.ibatis.exceptions.PersistenceException:
### Error updating database. Cause: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CREATE TABLE table2(id int)' at line 2
### The error may involve defaultParameterMap
### The error occurred while setting parameters
### SQL: CREATE TABLE table1(id int); CREATE TABLE table2(id int);
```
Any ideas why this is the case?
EDIT:
```<update id="test">
CREATE TABLE table(id int);
</update>
```
.. is working everywhere.
EDIT for clarification:
My complete mybatis mapper.xml.
```<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper
PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
"http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="InitializationMapper">
<update id="test">
CREATE TABLE table1(id int);
CREATE TABLE table2(id int);
</update>
</mapper>
```
Comment: Edited. Now right before the question-mark. ;)
Comment: I don't use MyBatis Migrations therefore i don't have an environment.properties. Furthermore I cannot find any similar option in MyBatis anywhere..
Comment: http://stackoverflow.com/questions/23000085/mybatis-migrations-migrate-up-causes-org-apache-ibatis-jdbc-runtimesqlexception
`Setting send_full_script=false in enviornment.properties file fixes the problem.`
May find your answer there
Comment: Having never used MyBatis, but doing the little bit of research that I have, it looks to me that the problem occurs bc MySQL doesn't like the way it is passing it those DDL statements together in the script.
Also, from the examples I found, I couldn't anyone executing their DDL in this fashion. I wish I was able to help you.
Comment: Fine. But what's your question?
Here is the accepted answer: Try adding the "allowMultiQueries" option to the JDBC URL in your Mybatis config file, e.g.:
```jdbc:mysql://myserver/mydatabase?allowMultiQueries=true
```
It seemed to work for the folks over here: Multiple queries executed in java in single statement
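For instance, in a typical mybatis-config.xml dataSource the URL goes into the url property (the driver class and placeholder credentials below are illustrative; adjust to your setup):

```<dataSource type="POOLED">
  <property name="driver" value="com.mysql.jdbc.Driver"/>
  <property name="url" value="jdbc:mysql://myserver/mydatabase?allowMultiQueries=true"/>
  <property name="username" value="${username}"/>
  <property name="password" value="${password}"/>
</dataSource>
```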
Comment for this answer: It doesn't work for Oracle DB.
`url=jdbc:oracle:thin:@localhost:1521/testdb?allowMultiQueries=true` errors out in `IO Error: Invalid connection string format, a valid format is: "host:port:sid"`
|
Title: ASP.NET Core serves expired certificate
Tags: asp.net-core;ssl;certificate;kestrel
Question: My SSL certificate for one of my websites expired on 23/1/2022. I renewed the certificate with the issuer, got all the new files (PEM,CRT), converted to PFX and replaced the original file at the server.
However, if I or anyone else visit the website, there is still warning of the invalid certificate and in the details in the browser I see the validity of the expired certificate.
I host my service in Ubuntu using Kestrel.
The configuration looks like this:
``` webBuilder.ConfigureKestrel(serverOptions =>
{
serverOptions.ConfigureHttpsDefaults(listenOptions =>
{
X509Certificate2 certificate = new X509Certificate2("PKCS12_1556384.pfx", password);
listenOptions.ServerCertificate = certificate;
});
});
```
I renamed the new certificate to keep the file name so I would not need to recompile the source, replaced the original file (and all the files related to the certificate, like the parent one I got from the issuer), and rebooted the service a couple of times, but nothing helped.
Here is another answer: In IIS, we can rebind the new certificate via ```Enable Automatic Rebind if Renewed Certificate```.
So the best practice is to work out how to rebind the renewed certificate.
|
Title: Xamarin trial - apps built there lost forever if I don't upgrade?
Tags: xamarin
Question: I'm pretty tempted to try out Xamarin. Particularly within Visual Studio. But as this is for non-commercial experimentation only, at least for now, I don't really see myself ever acquiring a license. As such: Will any progress I have made be lost after the trial period ends? The Xamarin website has this to say:
```
Apps built in trial mode can only be run within a 24 hour window after
they are built, and bear a splash screen that indicates they were
built using the Trial SDK. The Xamarin Trial is licensed for
evaluation purposes only.
```
But I should be able to re-use the code etc in other products, right? Or should I be looking at free alternatives from the get-go?
Here is another answer: In 2016 Visual Studio Community edition includes full Xamarin support for free. You can publish whatever you want and there is no limitation on screens, etc...
Here is another answer: The code you write is yours and you can do whatever you want with it.
You just won't be able to use any apps/executables you've built from your source code after 24h.
Comment for this answer: Thanks. I guess I worded that a bit poorly. Will my code from Xamarin be adapted to their framework, leaving me with lots of work to translate it to other tools? Or will I be able to re-use the code as-is?
Comment for this answer: I'm not too read-up on what the best alternatives are, but from what I understand, there are other cross-platform tools similar to Xamarin. No idea of how could they might be, though. If Xamarin is the only viable C# solution, then obviously the entire point is moot.
Comment for this answer: Which other tool are you thinking of? If you're considering using native tools (Xcode on iOS, Eclipse on Android), have in mind that you'd be writing C#, so none of that code would be usable in neither Xcode nor Eclipse.
Here is another answer: If your business logic code is written in plain C# (not including any platform-specific code, e.g. using the Android API) and shared, then you should be able to use that code with any other .NET framework using C#. Also, while you can use ```Xamarin Studio```, if it expires you can continue with the ```Starter Edition```. Hope this helps.
|
Title: Scrapy, Xpath, extracting h3 content?
Tags: html;python-3.x;xpath;web-scraping;scrapy
Question: I need to extract everything after h3 class AIRFRAME /h3 but before h3 class ENGINES /h3:
What I need extracted:
"Entry Into Service: December 2010
Total Time Since New: 3,580 Hours" etc.
HTML code photo - not sure how to embed it directly instead of having a link
Below is what I've tried but it doesn't return anything. I'm new to Scrapy and programming in general so I would appreciate some help. I've tried searching through other posts and google in general without any luck.
```input = response.xpath("//div[@class='large-6 cell selectorgadget_rejected']/h3/text()").extract()
output = []
```
Comment: Thanks. This was my first post so I'll keep that in mind for the future.
Comment: **Never** include code as an image. Always copy it as text in a `code` section, because otherwise an attempt cannot be reproduced and would be worthless for SO.
Comment: HINT: Check if the value of the @class attribute contains a line-break.
Here is the accepted answer: The code that you are using is referencing another class that doesn't have the text you mentioned.
```input = response.xpath("//div[@class='large-6 cell selectorgadget_rejected']/h3/text()").extract()
```
The name of the class in the picture is ```large-6 cell selectorgadget_selected``` and not ```large-6 cell selectorgadget_rejected```
Also, if you use ```.../h3/text()``` you are going to scrape the text inside the H3 tag.
As I understand you want the text after the H3, between the ```<div>```. So try something like this:
```input = response.xpath("//div[@class='large-6 cell selectorgadget_selected']/text()").extract()
```
Comment for this answer: Thank you. Using your code above as a reference (I made a slight change) I was able to get an output of the content after the `<h3>` tag. The code is: `response.xpath("//div[@class='large-6 cell']/text()").extract()` There are 4 different `<h3>` tags from which I need to get information, AIRFRAME being one of them. The 4 tags are part of the `large-6 cell` class and thus I get the results for all 4 in the same output. How would I change my code so that the output is only for AIRFRAME?
Comment for this answer: Hi @Arty, that's hard to answer without seeing the actual html. You can try something like this in the XPath: `//div[@class='large-6 cell']/h3[contains(text(), "AIRFRAME")]/parent::div/text()`. It will only find the H3 that contains the text inside the quotes `" "` and will return to the parent `<div>` for the text. - If my answer solved your issue, please accept it by clicking the checkmark.
Here is another answer: To complete @renatodvc's answer, you could add ```normalize-space``` function to ignore whitespace nodes.
```//div[@class='large-6 cell selectorgadget_selected']/text()[normalize-space()]
```
Or use the function directly on the element :
```normalize-space(//div[@class='large-6 cell selectorgadget_selected'])
```
Output :
```AIRFRAME " Entry Into Service: December 2010" " Total Time Since New: 3,580 Hours" " Total Landings Since New: 1,173" " (as of September 2019)" " Program Coverage: Enrolled on Smart Parts Plus" " Maintenance Tracking: CAMP "
```
Then, to extract the values, you can use regex :
```import re
text = 'AIRFRAME " Entry Into Service: December 2010" " Total Time Since New: 3,580 Hours" " Total Landings Since New: 1,173" " (as of September 2019)" " Program Coverage: Enrolled on Smart Parts Plus" " Maintenance Tracking: CAMP "'
data = [el.strip() for el in re.findall(':(.+?)\"', text, re.IGNORECASE)]
print(data)
```
Output :
```['December 2010', '3,580 Hours', '1,173', 'Enrolled on Smart Parts Plus', 'CAMP']
```
Comment for this answer: Your expression contains a typo. It should be : `response.xpath("//div[@class='large-6 cell selectorgadget_selected']/text()[normalize-space()]").get()`
Comment for this answer: Thanks! The `normalize-space` function doesn't seem to work. I tried the following code `response.xpath("//div[@class='large-6 cell']/text()")[normalize-space()].extract()` and got the following error `name 'normalize' is not defined`
|
Title: How to get shopping cart quantity count to not require page reload or different page to count
Tags: javascript;php;jquery;html;ajax
Question: I have the following code that gives me a live count of how many items are in the shopping cart. The issue is, it is not so live. I have to either reload the page or go to another page for it to show.
The way I have this set up, the following code is in a page called loadProducts.php. On every page I use require to load it.
```//Shopping Cart Quantity Count
if(isset($_SESSION['shopping_cart']) && is_array($_SESSION['shopping_cart'])) {
$totalquantity = 0;
foreach($_SESSION['shopping_cart'] AS $product) {
$totalquantity = $totalquantity + $product['quantity'];
}
}
else {
$totalquantity = 0;
}
```
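The counting logic above is just a guarded sum over the session array; for clarity, here is the same logic as a Python sketch (illustrative only, not part of the PHP session handling):

```python
def cart_total(cart):
    # Mirror of the PHP above: an unset or empty cart counts as 0,
    # otherwise sum each product's 'quantity' field.
    if not cart:
        return 0
    return sum(item["quantity"] for item in cart)

print(cart_total([{"quantity": 2}, {"quantity": 3}]))  # 5
print(cart_total(None))                                # 0
```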
I do not know much about Ajax at all, but I'm thinking this may be the only option.
Does it matter that this is ran with a session? How could I get a live element of this to show up on every page so when I click add to cart, it happens right then?
Comment: As you called it, you need AJAX. And even with AJAX you'll need additional techniques to keep a truly live count at all times, stuff like meteor, long polling or even setInterval
Here is another answer: You need to add one more line to update the DOM element that displays the shopping cart number.
For example if your shopping cart element is as follows:
```<button class="shopping-cart">Shopping cart (<span class="cart-item-count">0</span>)</button>
```
Then you just need to update it with:
```$('.cart-item-count').text($totalquantity);
```
Put that line in your JavaScript code, in the success callback of the request that adds a product to the cart. You must already be using an AJAX request to add products to the cart; if you weren't, you would be submitting and reloading the page anyway, and the count would already be updating.
Here is another answer: Assuming you are familiar with jQuery you can use the following jquery ajax call:
```$("#yourFormId").submit(function(){ // or if you have link use click function
var jqxhr = $.ajax({
method: "POST", // or GET
url: "Your PHP to find count",
data: // any request param you want to send to your PHP
})
.done(function(content) {
// display the count
})
.fail( function(xhr, textStatus) {
// display the error
})
.always(function() {
});
});
```
Comment for this answer: When it comes to code that will be repeated, you either have to create a template and let every page use the template, or create an include file and include it in your pages. I'm not familiar with PHP; I advise looking into the API.
Comment for this answer: Where would I house this though? I have my current count file required on every page? Would I have to put this in every page or where?
|
Title: Security policy on Applets
Tags: java;security;applet
Question: I want to code an applet wich needs a special security permissions, ie: network access to do an http GET to a site.
I signed the applet myself and did a simple test with this result:
```
java.security.AccessControlException: access denied
("java.net.SocketPermission" "www.google.com:80" "connect,resolve")
```
I also tried to add the security policy inside the manifest file, with no luck.
I don't really understand what the correct procedure should be:
Should I use a policy file inside the jar? Where exactly should it be located?
Should I put some policy definition in the APPLET tag in the HTML?
Should I do something inside the code to ask for privileges/permissions?
Should I use another launch method like JNLP? Does this make any difference?
Thanks
Here is the accepted answer: The behavior depends on the Java version. Starting from Java 7u51, both JWS applications and applets need to be signed with a valid certificate (not self-signed). http://www.oracle.com/technetwork/java/javase/7u51-relnotes-2085002.html#newft
There are only two security levels: sandbox and all-permissions. The Permissions attribute must be specified in the manifest and in the JNLP file. To perform an HTTP request, sandbox is enough. Read this article: http://docs.oracle.com/javase/tutorial/deployment/applet/security.html
Comment for this answer: "sandboxed" permission is generally insufficient to violate Same Origin Policy. A limited form of crossdomain.xml is supported, but I don't think that can do anything useful with google.com's policy.
Comment for this answer: Thanks. What have worked for me is implementing PrivilegedAction on my classes and doing the AccessController.doPrivileged().
Here is another answer: I answered a similar question here: Warning on Permissions attribute when running an applet with JRE 7u45
You need to write a correct manifest file. Either use the command line
```jar ufm jarfile.jar confmanifest.txt```
or use Maven (Simpliest way to add an attribute to a jar Manifest in Maven).
Inside your manifest you'll edit the permissions that are needed (socket, file, etc.) and its Codebase (for cross-origin and security purposes).
Then, for running locally without a certificate signed by a true CA, you'll need to edit your JVM's java.policy file with ```policytool```.
JNLP is for signed jars/applets. But you can still use it; it's only an applet descriptor and you can execute it from anywhere, like your desktop.
With HTML5 you should use the ```<object>``` tag. I rather prefer to deploy the applet via JavaScript and invoke applet methods with JavaScript methods.
See http://docs.oracle.com/javase/tutorial/deployment/applet/invokingAppletMethodsFromJavaScript.html
Comment for this answer: Well, policytool doesn't solve it, because you can't deploy that as a solution.
|
Title: What is the behavior of an unsigned int converted to an unsigned char in the C99 standard?
Tags: c;casting;c99;endianness;type-conversion
Question: For example:
```#include <stdio.h>
int main(void){
unsigned int x = 64;
x += 1023;
unsigned char y = x;
printf("%u\n", y);
return 0;
}
```
The variable ```y``` holds the value ```63``` on my machine. Does the C99 standard guarantee that the least significant byte will be stored when an unsigned int is converted to an unsigned char, or does the endianness of the machine affect the conversion?
Comment: It should print 63, not 65.
Comment: A quibble: There is no cast in your program. A cast is an operator that performs an explicit conversion; it consists of a parenthesized type name. You have an implicit conversion (which behaves the same way an explicit cast `unsigned char y = (unsigned char)x;` would).
Comment: @wildplasser Oops I misread the output of my program.
Comment: @KeithThompson Thanks for the semantic correction.
Here is the accepted answer: The standard says this about converting to an unsigned type:
```
When a value with integer type is converted to another integer type
other than _Bool, if the value can be represented by the new type, it
is unchanged.
Otherwise, if the new type is unsigned, the value is converted by
repeatedly adding or subtracting one more than the maximum value that
can be represented in the new type until the value is in the range of
the new type.
```
Which is a fancy way of saying the value wraps around if it doesn't fit. So in your case you'll always get 63, on all machines, unless your ```unsigned char``` can actually store more than 255: it has nothing to do with endianness.
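As a quick sanity check of that rule: repeatedly adding or subtracting one more than the maximum is the same as reducing modulo 2^N, where N is the bit width of the target type. A small Python sketch (illustrative, assuming an 8-bit ```unsigned char```):

```python
def to_unsigned_char(value, bits=8):
    # C99's rule of repeatedly adding/subtracting (max + 1) is the same
    # as reducing the value modulo 2**bits.
    return value % (1 << bits)

print(to_unsigned_char(64 + 1023))  # 63, matching the question
print(to_unsigned_char(-1))         # 255: negative values wrap the same way
```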
Comment for this answer: @R.. True, I added that bit.
Comment for this answer: Mathematically speaking, would the wrapped value be equivalent to the least significant byte?
Comment for this answer: Modulo by 256 (or the respective max value of unsigned char + 1) appears to truncate the higher bytes, so I suspect only the least significant byte is evaluated in the end.
Here is another answer: Endianness does not matter for the cast. Only the C value matters.
|
Title: dlsym returning symbol not found in Android after successful return of dlopen
Tags: android;linux
Question: I am loading one shared library from another, let's say foo2.so from foo1.so.
I am using dlopen followed by dlsym
dlopen succeeds with a proper handle, but dlsym returns with a "symbol not found" error. I have used dlerror to print the error.
These are the things I tried. In foo2.so's .mk file I added
LOCAL_LDFLAGS += -Wl,--export-dynamic.
I checked the symbol in foo2.so using nm and it is there.
Both the modules are in C except one wrapper file in foo1.so which is in C++, Calling file is in C.
Can anyone suggest what I might have missed? I am running this on the Android emulator on Froyo.
Here is another answer: I would be tempted to poke around at the implementation level and verify things. Look in /proc/PID#/maps and make sure both libraries are loaded.
objdump both caller and callee and make sure that C++ bit didn't mangle the name.
Are you using a suitable RTLD_ flag, and is dlsym getting a valid handle returned by dlopen ?
Can you build (a simplified version of) the two libraries and test executable for a desktop linux or cygwin in order to make sure what you want to do is generally workable - ie, that the problem is android-specific?
Comment for this answer: Sample code is entered below:
```pHndl = dlopen(pTemp, RTLD_NOW);
if ((err = dlerror()) != NULL) {
    LOGE("Error in loading shared lib: %s", err); dlerror();
}
dlsymRet = (OMX_PTR)dlsym(pHndl, pFuncName);
if ((err = dlerror()) != NULL) {
    LOGE("Error symbol not found : %s", err);
}
```
Comment for this answer: It returns a valid handle. I tried RTLD_LAZY as well. Both the caller and callee reside in a C file, so will the name mangling in the remaining C++ file matter? Regarding the map file, I need to check in the media player process. This code was working perfectly on Linux. I am trying to load OpenMAX components dynamically onto our core, and need to check the id of the media player process.
|
Title: JPA @ManyToOne, FetchType.LAZY and FetchMode.SELECT do not load original Parent object
Tags: java;jpa;orm;many-to-one
Question: In my bidirectional JPA association between Parent and Child, I am trying to lazy-load the Parent from the Child entity through a ManyToOne association.
While I debug, I do get the Javassist proxy object (something like ```Trade_$$_javaassist_41```), which is always a blank object. When I click the proxy object, JPA does execute the select query too, but ultimately no binding happens with the original Parent object (tradeBean).
Lazy loading the child entity from the Parent (OneToMany) doesn't have such issues, and the child loads without fail.
Parent Entity(Trade)
```@OneToMany(mappedBy="tradeBean", fetch=FetchType.LAZY, cascade=CascadeType.ALL)
@Fetch(FetchMode.SELECT)
private Set<Product> products = new LinkedHashSet<Product>();
```
Child Entity(Product)
```@ManyToOne(fetch=FetchType.LAZY, targetEntity=Trade.class)
@JoinColumn(name="Trade", referencedColumnName="TradeID", insertable=false, updatable=false)
@Fetch(FetchMode.SELECT)
private Trade tradeBean;
```
EntityManager Retrieval
```EntityManager manager = getEntityManager();
productObj = manager.find(Product.class, productPrimarykey);
if (productObj != null){
productObj.getTradeBean().getTradeID();
}
manager.detach(productObj);
return productObj;
```
Question JPA/Hibernate proxy not fetching real object data, sets all properties to null has the similar problem statement but there are no final members in my entities.
Comment: With or without `targetEntity=Trade.class`; didn't make any difference.
Comment: In general. In debug mode I could only see that the lazy load is indeed happening when I click the entity association. The select query was being executed, but the binding of the original object to the proxy object still did not happen.
Comment: Is the problem occurring only in the debugger or in general?
Comment: targetEntity=Trade.class?????
|
Title: Rails is Using HTML View When Requesting Javascript
Tags: ruby-on-rails;ruby-on-rails-3.2
Question: Weird issue where I'm asking for the JS file, but Rails is serving the HTML file. And it's only on my staging server (Heroku) and not on my local machine.
I have a dynamic Javascript file which needs to be included in other pages via a script tag, like so:
```<script type="text/javascript" src="http://example.com/embed.js"></script>
```
That embed maps to a controller and action which also handles HTML. The relevant route looks like this:
```match "/embed(.:format)" => "articles#embed", as: "embed"
```
And the controller action is pretty standard.
```def embed
respond_to do |format|
format.html do
#it renders some HTML
end
format.js #no block is given
end
end
```
And, I have two views under app/views/articles
embed.html.haml
embed.js.coffee
On my local machine, requesting localhost:3000/embed.js works. It renders the Javascript without a problem. However, on my staging server, here's what I see in the logs:
```Started GET "/embed.js" for 404.204.5303 at 2012-11-04 00:23:01 +0000
Processing by ArticlesController#embed as JS
Rendered articles/embed.html.haml (1.5ms)
Completed 500 Internal Server Error in 2ms
```
The Internal Server Error is not the issue. The issue is that it recognises the request as JS, yet decides to render the HTML template and only on staging.
What's going on?
Comment: Thanks Rodrigo. While I can't figure the discrepancy between development and production, it appears the browsers request javascript as text/html and Rails seems to favour their requested type over the explicit format. In the controller, if I do `format.js { render action: "embed.js.coffee", content_type: "text/javascript" }` that seems to fix the problem on staging.
Comment: Except actually, now, the Coffeescript isn't getting compiled to Javascript so it's pretty useless. Will keep working on it…
Comment: I don't think this will solve the problem, but try using `format.json` instead of `format.js`. If nothing else works, you can make a new route (really, try everything before doing that) and force this route to use js, like `match "/embed(.:format)" => "articles#embed", as: "embed", :format => "js"`
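For what it's worth, the negotiation issue the comments are circling can be sketched abstractly: an explicit ```.js``` extension in the URL should take precedence over whatever type the browser advertises in its Accept header. This Python sketch is purely illustrative (it is not Rails' actual algorithm):

```python
def pick_format(path, accept_header):
    # An explicit extension on the request path wins; only fall back to
    # the first type in the Accept header when there is no extension.
    last_segment = path.rsplit("/", 1)[-1]
    if "." in last_segment:
        return last_segment.rsplit(".", 1)[-1]
    return accept_header.split(",")[0].split("/")[-1]

print(pick_format("/embed.js", "text/html,application/xhtml+xml"))  # js
print(pick_format("/embed", "text/html,application/xhtml+xml"))     # html
```

The staging behaviour in the question looks like the opposite order: the Accept header winning over the `.js` extension.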
|
Title: create dynamic row in jsp using for loop
Tags: java;jsp
Question: I want to create a table with a dynamic number of rows this way:
```<table width="89%" style="margin-left:30px;"> <%
for (int arrayCounter = 0; arrayCounter < documentList.size(); arrayCounter++) {
%>
<%
int test = arrayCounter;
if((arrayCounter%2)==0){
%>
<tr>
<%
} %>
<td style="width:2%">
</td>
<td style="width:20%;align:left;">
</td>
<td style="width:30%;align:left;">
</td>
<%
if((arrayCounter%2)==0){
%>
</tr>
<% } %>
<%
}
%>
</table>
```
In my JSP, this way creates 4 rows, but according to the intended logic it should create only 2 rows when ```documentList.size()==4```.
Help me!
Here is another answer: Remove the if statements from the loop and create the rows normally.
Change your loop to
```for (int arrayCounter = 0; arrayCounter < (documentList.size()/2); arrayCounter++)
```
and for the last row you can have an if statement which compares
```if ((documentList.size()/2) - 1 == arrayCounter)
```
then you will get what you are looking for.
Otherwise, keep
```for (int arrayCounter = 0; arrayCounter < documentList.size(); arrayCounter++)
```
and inside the loop do:
```if ((documentList.size()/2) - 1 == arrayCounter) {
    // create 1 row
} else {
    // create 1st row and then arrayCounter++
    // create 2nd row and then arrayCounter++
}
```
Comment for this answer: Do you think it will solve our problem? Your loop runs size/2 times, but we need to iterate over all the data.
Here is another answer: Don't use scriptlets in JSP. JSP is the view layer; use it as a view. There are servlets/JavaBeans to put all your Java code in.
There is the JSTL taglib, which has many built-in functions; use it. You can get it from here
In your case to loop over a list do like this:
Add jstl library to your classpath
First import jstl taglib in top of your jsp.
Then you have jstl tags to use in your jsp.
To import jstl in jsp do like:
```<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
```
for looping a ```List``` in jstl, there is ```c:forEach``` tag, you can use it like:
```<c:forEach items="${documentList}" var="doc">
//here you can access one element by doc like: ${doc}
</c:forEach>
```
If you want to generate table rows, for each documentList element, then do like:
```<table width="89%" style="margin-left:30px;">
<c:forEach items="${documentList}" var="doc" varStatus="loop">
<tr>
<td style="width:2%">
//here if you want loop index you can get like: ${loop.index}
</td>
<td style="width:20%;align:left;">
//if you want to display some property of doc then do like: ${doc.someProperty},
jstl will call getter method of someProperty to get the value.
</td>
<td style="width:30%;align:left;">
</td>
</tr>
</c:forEach>
</table>
```
read more here for how to avoid java code in jsp.
Here is another answer: Obviously it will create only 2 rows when the size is 4,
and when the size is 6 it will create 3 rows. Remove the if statement from the loop if you want to
create a number of rows equal to the size.
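As an aside, the grouping arithmetic the question is after (two ```<td>``` groups per ```<tr>```) is just chunking the list into pairs; this Python sketch (illustrative only) shows the intended row count:

```python
def rows_of(items, per_row=2):
    # Open a new row at every index that is a multiple of per_row;
    # equivalently, slice the list into chunks of per_row items.
    return [items[i:i + per_row] for i in range(0, len(items), per_row)]

print(rows_of(["a", "b", "c", "d"]))  # [['a', 'b'], ['c', 'd']] -> 2 rows for 4 items
```

The question's JSP closes ```</tr>``` on the same ```%2==0``` condition that opens ```<tr>```, which is why it gets one row per item instead of one row per pair.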
|
Title: Swift - required method not implemented: -[JSQMessagesViewController collectionView
Tags: ios;swift;swift3;jsqmessagesviewcontroller
Question: I am migrating an iOS app to Swift 3 and I keep having this error message on my ChatViewController.
```2017-02-21 16:40:40.599 Jaco[52613:2864859] *** Assertion failure in -[Jaco.ChatViewController collectionView:messageDataForItemAtIndexPath:], /Users/Royal/dev/jab/ios/Pods/JSQMessagesViewController/JSQMessagesViewController/Controllers/JSQMessagesViewController.m:491
2017-02-21 16:40:40.609 Jaco[52613:2864859] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'ERROR: required method not implemented: -[JSQMessagesViewController collectionView:messageDataForItemAtIndexPath:]'
```
Here is part of my code
``` // DATA SOURCE 1
func collectionView(collectionView: JSQMessagesCollectionView!,
messageDataForItemAtIndexPath indexPath: NSIndexPath!) -> JSQMessageData! {
let data = self.messages[indexPath.item]
return data
}
// DATA SOURCE 2
override func collectionView(_ collectionView: UICollectionView,
numberOfItemsInSection section: Int) -> Int {
return messages.count
}
```
I tried adding "override" but the error message is still there.
Any ideas how to fix this? Thanks for your help!
Here is the accepted answer: Change
```func collectionView(collectionView: JSQMessagesCollectionView
```
to
```func collectionView(_ collectionView: JSQMessagesCollectionView
```
|
Title: How do I choose a "Medium" difficulty?
Tags: minecraft-bedrock-edition
Question: How can I change the difficulty to an intermediate level? The toggle snaps to either the Easiest or Hardest difficulty, but nowhere between. How can I achieve a middle ground? I prefer the difficulty setting of the lite version.
Comment: There is no in between setting currently.
Here is another answer: You have to get a program called my minecraft and another called file transfer. When you open my minecraft it will make you watch a porno video but it is fun to watch and worth it, then open file transfer and select minecraft, then your world, then it will offer you different options, one of them a difficulty changer. Hope this helped
Comment for this answer: ...a porno video?
Here is another answer: The settings currently are:
Peaceful: no mobs spawn and attack you
Regular: mobs spawn and will attack you; similar to the normal difficulty on PC.
Here is another answer: The slider is basically a fancier looking version of the Peaceful Mode option in previous versions of Minecraft: Pocket Edition. Sliding it to the left enables Peaceful Mode, and sliding it to the right is regular survival.
Here is another answer: The only options are difficult and peaceful. Unfortunately there is no middle. If you are looking for less mobs, all I can suggest is keeping your world well lit by placing torches or glow stone.
Here is another answer: After upgrading to 0.12.1 you will have four options for difficulty. the max appears to be same as normal mode of old and the lowest is peaceful mode of old... the two in the middle seem to be the medium you are looking for.
there is still no hardcore option for PE
frankly the max position hasn't been much of a problem so I'm not even sure it is the worst possible but it is logical that all the way to the right will be the worst available.... Try it, you might like it!
|
Title: How to resolve 'FIRApp' has different definitions in different modules; first difference is definition in module 'FirebaseCore?
Tags: ios;react-native;react-native-ios
Question: I'm migrating from Xcode 11.7 to Xcode 12. I tried building the app with Xcode 12 and then these errors showed up. Do you have any solutions? Thanks in advance!
Here is another answer: For those looking for the solution I found it here https://stackoverflow.com/a/41102620/4444250
Solution:
Open the file ios/ProjectName/AppDelegate.m and replace the import
from ```@import Firebase;```
to ```#import <Firebase/Firebase.h>```
|
Title: How to fix Ubuntu getting stuck at creating ext4 file partition on LiveCD?
Tags: installation;boot;dual-boot;partitioning;live-cd
Question: I have an Acer Aspire One. Not sure what version. I've had it for a year or two on Windows and then suddenly, I couldn't boot Windows anymore. So I decided to use Ubuntu. I first tried Ubuntu 13.04. It said my hard disk was faulty. So, I decided to try a lower version: 12.04. Whenever I get to the part where it starts creating ext4 file system, it gets stuck. I know that it is stuck because I have waited for 1-2 hours, and it didn't move a bit.
I don't know what I am doing wrong. Maybe my hard drive is ruined or something. I don't know. Any Ubuntu installer I try never works to go through the whole installation. I have been using a liveCD for my installation.
I have tried the Disk Utility. It says that a few sectors are bad. Only two when I checked. Those sectors didn't seem very important. For the memory test, There was only one failing address. It also said my Err-Bits were 80000 and the Count Chan 1.
Comment: @SimplySimon I tried entering the commands like suggested in the answer, it just comes up with this: /dev/sda1 on /target type ext4 (rw,errors=remount-ro).
Comment: The first thing to do is to scan your hard disk for errors. The reason Windows wouldn't boot is probably the same reason Ubuntu can not create the partition. See http://askubuntu.com/questions/122307/need-to-try-several-times-to-log-into-ubuntu-normally/122310#122310
Comment: @Mitch No, "Check disc for defects" is checking the Ubuntu boot disc for defects (verifying md5sums), not the internal hard drive!
Comment: Run from a live CD/USB and check your hard drive from there: [How can I check my RAM and harddrive for errors?](http://askubuntu.com/q/14303/88802) Include the results in your question by *editing* it.
Here is the accepted answer: Check your hardware.
```
It says my hard disk has a few bad sectors. The Reallocated Sector Count and the Current Pending Sector Count. Not sure whether that affects my hard drive. I tried the installation again and while copying files, it had an error saying that the hard drive might be faulty and stuff.
```
Replace the hard drive. It has already failed or it will fail soon. This is very likely to be the cause.
```
For the memory test, There was only one failing address.
```
This also does not sound very good. Memory should not fail, or else you will be suffering from data corruption.
Comment for this answer: Okay. Thanks for that. If I need to change my hard drive then I might as well dump the netbook.
Here is another answer: Boot from the 12.04 Live CD/DVD/USB, select "Try Ubuntu without installing", click on the Ubuntu symbol at the top left of your screen, and type "disks" in the search field; only one program should appear, so start it.
On the left your devices are listed; click on your hard drive, and on the right "SMART Status: (something)" should now appear. If this says your disk is bad, then you should buy a new hard disk.
Comment for this answer: It says my hard disk has a few bad sectors. The Reallocated Sector Count and the Current Pending Sector Count. Not sure whether that affects my hard drive. I tried the installation again and while copying files, it had an error saying that the hard drive might be faulty and stuff. Thanks for the quick reply btw. :)
|
Title: How can I use VLOOKUP to search for multiple entries in one cell and to output the result similarly?
Tags: excel;vlookup
Question: I have an Excel cell with the data "1, 2, 3".
I want to use VLOOKUP to search for these numbers in another table and to return multiple values associated with them.
From what I looked up, I need to make a new function (a UDF), but none of the ones I found seemed to work for me.
The function I have used so far is
```Function LookupConcat(r As String, lookupColumn As Range, lngOffset As Long) As String
Dim t, u As Long, c As Range, s As String
t = Split(r, ",")
For u = 0 To UBound(t)
Set c = lookupColumn.Find(Trim(t(u)))
If Not c Is Nothing Then s = s & c.Offset(, lngOffset - 1) & ", "
Next
If Len(s) Then LookupConcat = Left(s, Len(s) - 2)
End Function
```
The table I want to use this on is
```B    C
ID   ID2
1    10
1    5
2    20
3    25
```
I am calling it as ```=LookupConcat(A2,B2:C4,2)``` to return the value.
A2 has the values "1,2,3", and I want the function to return "10, 5, 20, 25", but all I got was 0.
Comment: Please post what formula you have, a sample table, a sample of the expected output. Also, you may instead want to look into [Index/Match](http://thinketg.com/say-goodbye-to-vlookup-and-hello-to-index-match/), which can use multiple values to look something up. I answered a question similar to this a little bit ago: [this may help](http://stackoverflow.com/questions/33784228/vlookup-alternative-using-three-lookup-values/33784647#33784647). Otherwise, please clarify your question a little more and show us what you've tried so far.
Comment: @BruceWayne Hi, I have updated my question
Comment: @ExcelHero Hi, I updated the question. I am really not familiar with coding on excel so I just googled "vlookup comma separated values" and it was in one of the articles there. (http://www.mrexcel.com/forum/excel-questions/757890-function-lookup-comma-separated-list-within-single-cell-return-concatenated-results-single-cell.html)
Comment: @BruceWayne I'm a huge fan of INDEX/MATCH but VLOOKUP can do the same multiple-value lookup. Interesting I/M article you linked. To really understand INDEX, I'd humbly recommend this article: http://www.excelhero.com/blog/2011/03/the-imposing-index.html
Comment: Wow! The update COMPLETELY changes the question! Can you also include how you are calling the `LookupConcat` function?
Comment: What does this User Defined Function have to do with VLOOKUP?
|
Title: Is there a SQL Injection risk with this query? If so, how can I avoid it?
Tags: c#;asp.net;ado.net;sql-injection;parameterized
Question: I usually create parameterized queries in order to avoid SQL Injection attacks. However, I have this particular situation where I haven't been totally able to do it:
```public DataSet getLiveAccountingDSByParameterAndValue(string parameter, string value)
{
string sql = "select table_ref as Source, method as Method, sip_code as Code " +
" from view_accountandmissed " +
" where " + parameter + " like @value " +
" order by time DESC ";
MySqlCommand cmd = commonDA.createCommand(sql);
cmd.Parameters.Add("@value", MySqlDbType.String);
cmd.Parameters["@value"].Value = "%" + value + "%";
MySqlDataAdapter objDA = commonDA.createDataAdapter(cmd);
DataSet objDS = new DataSet();
objDA.Fill(objDS);
return objDS;
}
```
As you can see, I am creating ```@value``` as a parameter, but if I tried to do the same with ```parameter``` the query would fail.
So, is there a risk of SQL Injection with this query? Also, take into account that parameter is set by a DropDownList's SelectedValue (not a TextBox, so the input is limited). If so, how can I improve this query?
Comment: ASP.NET checks if the items have changed when `EnableEventValidation` is set to `true` since 2.0. You would get an exception: "Invalid postback or callback argument". http://odetocode.com/blogs/scott/archive/2006/03/20/asp-net-event-validation-and-invalid-callback-or-postback-argument.aspx
Comment: There's definitely a risk. Imagine some malicious user modifies your input form via some browser dev tool and enters something like `1=1; DROP TABLE important_table; ...` as `parameter` value...
Comment: +1 for checking. There's definitely a SQL Injection risk with this.
Comment: With websites (as it's tagged asp.net) do not trust anything from a client as it can be modified with JS (including dropdown lists).
Here is the accepted answer: Yes there is:
```" where " + parameter + " like @value " +
```
The value in parameter is your risk. In the postback you should check if the selected value is in the set of start values of the dropdown list.
Make the parameter an enum and pass the enum to your function. That will eliminate the risk. Something like this (not tested):
```public DataSet getLiveAccountingDSByParameterAndValue(ParameterEnum parameter, string value)
.....
" where " + parameter.ToString() + " like @value " +
```
The ParameterEnum contains a list of all possible values in your dropdown list. In your code behind, parse the selected value to the enum.
Comment for this answer: Since 2.0 ASP.NET checks if the items have changed when `EnableEventValidation` is set to `true`. You would get an exception: "Invalid postback or callback argument". http://odetocode.com/blogs/scott/archive/2006/03/20/asp-net-event-validation-and-invalid-callback-or-postback-argument.aspx
Comment for this answer: Yes, but i thought it is worth noting. Most times people complain about this feature and ask how to get rid of it, but here is a good example when it's useful.
Comment for this answer: You are correct, by default. But if you are someone else does change the default, you would have a risk.
Here is another answer: ```var columns = new[] { "column1", "column2", /* ... */ };
if (!columns.Contains(parameter))
    return; // or throw, or handle it some other way
```
EDIT
The only SQL injection risk is in passing the column name into the where clause using string concatenation. There is no other way around it: the true shield is to check that the column name is a valid one, i.e. that it exists in the table.
Even though ASP.NET has event validation (which checks that the posted value is one of the dropdown's items), you can't rely on it, since this protection can be disabled.
The parameter used with LIKE is not subject to SQL injection.
Comment for this answer: How is this answering the question?
Comment for this answer: Still not good, because you might not want all columns to be accessed. And also you're using *Contains*, which allows stuff like "column1; DROP TABLE..."
Comment for this answer: the column names are wellknown so check that the parameter is a valid column name
Here is another answer: ```
So, is there a risk of SQL Injection with this query?
```
I think yes, it's vulnerable to SQL injection. For example, parameter = "1=1 OR value"
```
Also, take into account that parameter is set by a DropDownList's
SelectedValue (not a TextBox, so the input is limited)
```
Doesn't really matter. A malicious user can inject any value on the executable itself or on the network packet (and thus send a value that doesn't exist on the DropDown).
```
If so, how can I improve this query?
```
You should check the parameter argument and compare it with the DropDown values. For more generic data, I think there are libraries that check such things (but I don't know C#...).
Here is another answer: Since 2.0, ASP.NET automatically validates postback and callback arguments to see if they have been tampered with, so this is a good example of when ```EnableEventValidation``` is useful.
http://odetocode.com/blogs/scott/archive/2006/03/20/asp-net-event-validation-and-invalid-callback-or-postback-argument.aspx
You'll get following exception then:
```
"Invalid postback or callback argument"
```
You could ensure that it's set to ```true``` by explicitly setting it in code-behind, for example in the Page's ```Init``` event:
```protected void Page_Init( object sender, EventArgs e )
{
// don't remove this
Page.EnableEventValidation = true;
}
```
Edit: Oops, actually this setting cannot be changed from codebehind, it compiles but throws following runtime error:
```
The 'EnableEventValidation' property can only be set in the page
directive or in the configuration section.
```
Comment for this answer: @aleafonso: Yes, therefore I've showed how to ensure that it's set to true for this page and will not be changed(even when someone sets `EnableEventValidation=false` in the page directive or in web.config).
Comment for this answer: +1 for making us aware of this. Ironically, I used to complain about this property too. Still, I have given the right answer to @peer because I might have disabled the page event validation in some particular cases
Comment for this answer: I wish I could mark both answers as correct since both of them are right. Thanks a lot
|
Title: Is there a testing tool to test a C# .net web service that contains complex types?
Tags: c#;.net;web-services;testing
Question: I have built a C# .net web service that takes a complex type as a parameter. Is there a testing tool that I can run against my web service and pass all of the values to the complex parameter type?
Some background info:
I ran the xsd.exe tool against my XSD and have created a .cs class. This .cs class has the dataset that is used as my complex parameter. Now I need to test out my webmethod and pass values to it as the client would. I have tried WebService Studio and Storm, but it seems that neither of the products can handle complex types. Is there any other product that can do this? Or do I have write my own test application from scratch?
Here is the accepted answer: soapUI will help you do this.
However, the way I usually do it is to write automated unit tests using NUNIT or MSTEST, add a Service Reference and then just write the tests. This has the additional benefit that you are creating the same kind of client code your users will need to do. If your service is hard to use, then you'll find out pretty quick.
Here is another answer: For classic ASMX services, I used Web Service Studio 2.0, which handled every complex type I threw at it. You can get the classic version (2.0) from http://archive.msdn.microsoft.com/webservicestudio20/.
I know there is an updated version on codeplex that you linked to and it looks like it's been updated to support complex types. (A while back there was a useless tool on codeplex that couldn't do complex types.)
Just curious what specific issue you are having with Web Service Studio?
UPDATE: After re-reading your question, it sounds like you are using a DataSet in your service. If so, then you are going to have interoperability problems consuming that service from most toolkits; they can't handle the DataSet because it is a "dynamic" type. The easiest way around that issue is to avoid DataSets.
If that is the case, then I agree with others that you will need to create your own .NET application that can consume your service.
Comment for this answer: Whenever I have the complex type (i.e. dataset) as a parameter value of my webmethod, when WebServiceStudio compiles the Proxy it throws this error, "error CS0260: Missing partial modifier on declaration of type 'ItemList'; another partial declaration of this type exists". If I remove the complex parameter, the error goes away and webservicestudio works correctly.
Here is another answer: For simple cases you can use WCF Test Client (WcfTestClient.exe) introduced in Visual Studio 2008. Find more on http://msdn.microsoft.com/en-us/library/bb552364.aspx
SoapUI is good for more complex cases.
Here is another answer: I would test it by using Visual Studio with a Windows Form referencing your web service. In this Windows Form you can use NUnit, Fit or anything you normally use to test your application. If you run both your Web Service and Windows Form in debug you can walk through the code to see the results.
This is the method I use, I've never really heard of another way within .net Web Services with custom types.
Here is another answer: Isn't it just as easy to grab a copy of Visual Studio Express (if you don't have the full version) and create a windows application, add a webreference and test it?
Should take you less time, than me reading this question ;)
(and no I'm not a slow reader)
Comment for this answer: I can say that I often need to test these things on a Customer's carefully controlled server where I'm logged into a Citrix Server over a dodgy VPN connection. Installing Visual Studio is not feasable. Copying over a generic tiny portable .Net app is. That's where Web Service Studio comes in handy.
|
Title: getting java.lang.ClassNotFoundException: javazoom.jl.decoder.JavaLayerException on linux but works on windows
Tags: java;linux;mp3;raspberry-pi;jlayer
Question: I am using JLayer to play an mp3 file.
The following code works after compiling the project into a jar and running it with the command
java -jar blahblahblah.jar
but it does not work on Linux... any ideas? I get java.lang.ClassNotFoundException: javazoom.jl.decoder.JavaLayerException
```import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import javazoom.jl.decoder.JavaLayerException;
import javazoom.jl.player.Player;
public class Mp3JLayerTest
{
/**
* @param args the command line arguments
*/
String filename;
String directory;
File mp3File;
static Player player;
public static void main(String[] args) throws FileNotFoundException, JavaLayerException
{
FileInputStream fis;
fis = new FileInputStream("kalimba.mp3");
BufferedInputStream bis = new BufferedInputStream(fis);
player = new Player(bis);
player.play();
new Thread((Runnable) new Mp3JLayerTest()).start();
}
public void run() throws JavaLayerException
{
player.play();
}
}
```
Comment: I am attempting to run it on a Raspberry Pi with build 1.7.0_40-b43 on Linux
Comment: just updated hopefully it will avoid confusion :)
Comment: windows 1.7_45 just checked if that helps, will continue to google and experiment, if I come up with an answer will post back
Comment: yes, within netbeans I have the added jar file and the class is present.
Comment: within windows I am using netbeans to add the jar to the project file compiling out the new jar project file and running via commandline in linux, that way the included jar is within the new compiled jar.... this works in windows by executing the runnable jar file by double clicking or running via command line. in linux I transfer the compiled jar over and run via command line
Comment: windows yes, also commandline windows yes.....
Comment: when i move it over to my pi no
Comment: ill give it a shot and see how it goes
Comment: ended up going a different way with this using python and pygame for mp3 output/gpio control/tcp server and java for control with a Android application.
Comment: Your title says the contrary as your message
Comment: Did you check this class is in the jar ?
Comment: Wait... I thought you were running a jar with command line. What does Netbeans have to do with that ?
Comment: Well that last sentence probably is the problem. Try the same command in the folder where the jar was created. Also, can you run your project from NetBeans ?
Comment: What if you decompress the library jar so that the binary files are on the same level as your classes ? Build your project by hand
|
Title: recycled twice! Runtime Exception in TypedArray
Tags: android;android-5.0-lollipop
Question: I'm receiving some crash reports from devices using android L preview, the issue is
```Caused by: java.lang.RuntimeException: [17, ...... ] recycled twice!
at android.content.res.TypedArray.recycle(TypedArray.java:869)
```
Can't see the code because the Android L source is still not available. How can I check if the TypedArray has already been recycled?
I actually found that the call to recycle is made twice, but why throw an exception now, potentially breaking old working code (even if it's incorrect to call recycle twice)?
Comment: Have you noticed the documentation change in _TypedArray.recycle()_ ? It has changed to ; "Recycle the TypedArray, to be re-used by a later caller. After calling this function you must not ever touch the typed array again."
Here is the accepted answer: Due to changes in TypedArray pooling in L, calling TypedArray.recycle() twice or calling a getter on TypedArray after recycle() is not safe. This has never been correct app behavior and prior to L may have introduced subtle errors.
|
Title: ClearCase Stream Configuration for CR Based Approach
Tags: stream;clearcase
Question: I am having trouble creating a ClearCase stream structure that is best suited for a project that works on a ticket (CR) basis. For example, if I have 7 CRs that need to be developed simultaneously, what would be the best approach?
Let's assume that I have three streams: DEV, TEST, and PROD. My 7 CRs move from DEV to TEST through the deliver operation. Of those 7 CRs, only 4 are ready for PROD. How can I move only 4 out of the 7 CRs (now grouped into one deliver) into PROD? What stream structure enables this?
I have read many (sometimes contradicting) suggestions and I have still not managed to find a solid approach.
Regards,
Andrew
Here is another answer: Delivering only some activities and not others is quite dangerous with UCM, mainly because you risk linking all the activities together.
```PROD
TEST
DEV
```
That will work if you always deliver from ```DEV``` to ```TEST``` and from ```TEST``` to ```PROD``` (you can then deliver individual activities).
You could be blocked, however, by a legitimate activity file-based dependency: see "About activity dependencies in the deliver operation".
If you have any issue delivering activities, then you can use ```findmerge``` to merge only the activities you want.
See more on the "all activities are linked" and ```findmerge``` in "ClearCase : Making new baseline with old baseline activities".
Comment for this answer: Delivering from `DEV` to `TEST` would work for the first set of CRs since you can deliver single activities if necessary (despite losing naming convention), but this approach would fail when delivering from `TEST` to `PROD` because the intermediate baselines created automatically through the first set of delivers would create dependency issues. Are you saying that `findmerge` would help to move individual CRs? This means that before doing so I would have to have an activity already waiting in the target stream. This would remove the deliver operation altogether, correct?
Comment for this answer: Thanks for the update. We are currently struggling with ClearCase because it is very clumsy and complex when dealing with projects of this nature (ones that don't follow the rigid waterfall approach). There is a much larger overhead since you have to plan in advance and group activities. If you create a stream for every CR you would have to manage all the streams (huge overhead) and would lose baselining for all phases but production (which would be the integration stream for the CRs). Do you know where I could find an analysis on these different approaches?
Comment for this answer: If only RTC were free :) I have worked with RTC source control and I find it to be much more liberating. Unfortunately you're at a disadvantage if you're not working with Eclipse compatible technologies. Enforcing the creation of Eclipse projects and the installation of a massive client is a bit silly, especially if you're performing CM for cobol. Thanks again for the updates, I will take a look at the findmerge.
Comment for this answer: out of curiosity, what sort of flexibility does RTC provide for CR based development approach and the selective promotion of activities?
Comment for this answer: @Andrew yes, `findmerge` is the only way to deliver activities which are linked by that "timeline". This is not a deliver operation, but a simlpe merge of files referenced by an activity.
Comment for this answer: @Andrew As usual, [Tamir](http://stackoverflow.com/users/138479/tamir-gefen) mentions that [R&D Reporter](http://www.gomidjets.com/rnd-reporter.php) could automate that `findmerge` approach. It is a non-native (and ultimately commercial) solution that you might want to check out. I don't have any link with GoMidjets.
Comment for this answer: @Andrew I agree. I since then migrated al lmy project to RTC (https://jazz.net/products/rational-team-concert/): a much more flexible tool.
Comment for this answer: @Andrew I confirm that a Stream per CR is not a viable option. When you are dealing with activities you want to partially deliver, only the `findmerge` approach is possible, in order to not be tied by that fake "timeline" dependency between activities during a normal deliver.
Comment for this answer: @Andrew we are using RTC for C, C++ project under various Visual Studio without any problem. The `.project` are simply ignored. As for the flexibility, is is HUGE: private commit, selective deliveries of change sets, vision of changes through a list of Work Item, one Stream per CR if you want/need it (you can rename/reuse/delete a Stream whenever you want).
|
Title: Reboot Raspberry Pi remotely from a web server
Tags: raspbian;ssh;remote;php
Question: I have many Raspberry Pis located in different places. All of them are connected to my website and download videos to show on their respective monitors.
Is it possible to send some command from my website (```I use PHP```) to reboot a Raspberry Pi remotely? That is to say, run a command from my server that reboots the Pi?
My Raspberry Pis are connected to an access point which is connected to the internet, but it does not have a static IP.
This guide here presumes the Raspberry Pi and the server machine are on the same network, but for my setup that is not true.
Any solution to restart the Raspberry Pi remotely (over the internet) would be helpful.
Here is the accepted answer: On your webserver you can set up a specific file to hold a command, for example
```http://myserver.com/command-for-raspberry.txt
```
That file should hold a sequence number and a command.
Periodically (research crontab), the Raspberry Pis should download that page, check the sequence number against the saved sequence number of the last command they ran, check the command against a list of valid commands, and if it passes, execute the command and save the sequence number for future reference.
Then all you have to do is create the file ```command-for-raspberry.txt``` with, for example, the content
```01 reboot now
```
put it on the server and wait for the Pis to download and execute it.
NOTE: there is no security built into this solution, and it can be easily exploited in a multitude of ways.
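A minimal sketch of such a poller in Python (the URL, the state-file path, and the command whitelist are illustrative assumptions, not part of the answer above):

```python
import os
import subprocess
import urllib.request

COMMAND_URL = "http://myserver.com/command-for-raspberry.txt"  # hypothetical URL
STATE_FILE = "/tmp/pi-poller-last-seq"    # remembers the last executed sequence number
ALLOWED = {"reboot": ["sudo", "reboot"]}  # whitelist of commands the Pi may run

def parse_command(line):
    """Split '<sequence> <command>' into its two parts."""
    seq, _, command = line.strip().partition(" ")
    return seq, command

def poll_once():
    # Download the command file from the server
    with urllib.request.urlopen(COMMAND_URL) as resp:
        seq, command = parse_command(resp.read().decode())

    # Skip if this sequence number was already executed
    last = ""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            last = f.read().strip()
    if seq == last or not command:
        return

    # Only execute whitelisted commands; save the sequence number first
    action = ALLOWED.get(command.split()[0])
    if action:
        with open(STATE_FILE, "w") as f:
            f.write(seq)
        subprocess.call(action)

# Schedule poll_once() via crontab, e.g. "* * * * * python3 poller.py"
```

Note this sketch inherits the same caveat as the answer: without HTTPS and some form of authentication it is easy to exploit.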
Comment for this answer: Actually I started a solution close to yours... I created a python script, listener.py, on my Raspberry Pi... it periodically sends a request to a simple web service on my server, which gives it the command to execute... for security the python script always sends an API key, and my web service checks that key... but I consider your answer the best and correct one... thank you
Here is another answer: You can install and run piControl, a Node.js web application to shut down or reboot your Raspberry Pi.
More info here.
To make your Raspberry Pi reachable from the Internet, you should give it a static IP address on your local network, and forward the public HTTP default port (80) to the local IP address of the Raspberry Pi.
Comment for this answer: I like piControl, but they say "piControl allows reboot and shutdown from a browser in the local network", so it is for the local network... and you said I must give the Pi a static IP... so how can TeamViewer connect to another PC on Windows without a static IP???
Comment for this answer: TeamViewer (TV) is a little special: you don't need your RPi to have a static IP address since when you connect to it, you don't really make a direct connexion from your computer to your RPi. When you run TV on your RPi, the RPi connects to the TV' servers, and waits a incoming connection. Then, when you connect to your RPi from your computer (or tablet, or whatever), your computer will, in fact, connect to the TV' servers, will find your RPi's ID number given by the TV running on your RPi, and take control of your RPi. Besides, you don't need to run TV on the RPi to run piControl.
|
Title: Database for user Accounts ionic framework
Tags: javascript;angularjs;database;ionic-framework
Question: I need to incorporate a simple user profile for each person who uses my ionic android app. That way they can login and access or edit personal information unique to their personal accounts.
I've been browsing all over the net for a day, and especially the Ionic documentation, for any info on how to build such a feature into my app. I can do it using PHP and MySQL, but I don't know Ionic's approach. Please advise.
Here is another answer: Have you looked into using the device's ```localStorage```?
You can quite easily store variables and objects in the device using the following code.
```$window.localStorage[key] = JSON.stringify(object);
var object = JSON.parse($window.localStorage[key]);
```
Personally I set these up in a factory so they can be accessed a lot easier across the App like follows
```.factory("localStorage", ["$window", function($window) {
return {
set: function(key, value) {
$window.localStorage[key] = value;
},
get: function(key, defaultValue) {
return $window.localStorage[key] || defaultValue;
},
setObject: function(key, object) {
$window.localStorage[key] = JSON.stringify(object);
},
getObject: function(key) {
return JSON.parse($window.localStorage[key]);
},
remove: function(key) {
$window.localStorage.removeItem(key);
}
}
}]);
```
then inject the factory into your controller and run ```localStorage.setObject("users", userArray);```
|
Title: Universal URL for deep linking process | Android | iOs | Web
Tags: android;ios;web;deep-linking
Question: I have a problem related to the deep linking process. I need to create a universal URL to send to end users by email, and it should fulfill the following conditions:
If the email is opened on an Android phone, the link should open my app (with custom data); otherwise it should redirect to the Play Store to install my application.
If the email is opened on an iOS phone, the link should open my app (with custom data); otherwise it should redirect to the Apple iTunes Store to install my application.
If the email is opened on the web (desktop browser), the link should open my website (with a specific URL).
Here is another answer: You can use Firebase Dynamic Links for this; see:
Firebase Dynamic Links
|
Title: How in Python find where exception was raised
Tags: python;exception
Question: How can I determine in which function an exception was raised? For example, there are two functions: 'foo' and 'bar'. In 'foo' the exception is raised randomly.
```import random
def foo():
if random.randint(1, 10) % 2:
raise Exception
bar()
def bar():
raise Exception
try:
foo()
except Exception as e:
print "Exception raised in %s" % ???
```
Comment: if this is a purely "academic" question, why not, but you shouldn't rely on stuff like inspecting stack traces in real life code... instead using custom exception classes (or one of the many built-in ones) is better.
Comment: Yes it's only was interesting
Here is another answer: ```import inspect
try:
foo()
except Exception as e:
print "Exception raised in %s" % inspect.trace()[-1][3]
```
Comment for this answer: this is probably what the asker meant, but it's probably not the best real life solution :)
Here is another answer: What is your goal? If you are worried about ```bar``` and ```foo``` throwing the same exception type and the caller not being able to differentiate between them, just derive a new exception class:
```import random
class FooException(Exception):
"""An exception thrown only by foo."""
def foo():
if random.randint(1,10) % 2:
raise FooException
bar()
def bar():
raise Exception
try:
foo()
except FooException:
print "Exception raised in foo..."
except:
print "Exception raised in bar (probably)..."
```
Comment for this answer: I agree your approach is better, but it doesn't work for callbacks that you receive as-is from users, or, for example, RPC, where a TypeError could be raised in the attempt to call the function or inside of it.
Actually I'm not building logic that relies on this information, but it was interesting.
Comment for this answer: In this case you lose information about originally exception if only did not include it in the new
Comment for this answer: You could just wrap each callback in code that raises your custom exception.
Comment for this answer: Well, if you're on Python 3, you can use [exception chaining](http://legacy.python.org/dev/peps/pep-3134/). The way to do this in Python 2 isn't as nice.
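The Python 3 exception chaining mentioned in the last comment can be sketched like this (reusing the FooException class from the answer above):

```python
class FooException(Exception):
    """An exception thrown only by foo."""

def bar():
    raise TypeError("original failure")

def foo():
    try:
        bar()
    except TypeError as e:
        # 'raise ... from e' preserves the original exception in __cause__
        raise FooException("wrapped in foo") from e

try:
    foo()
except FooException as e:
    caught = e

# The original TypeError is still reachable through the chained exception
print(type(caught.__cause__).__name__)  # prints "TypeError"
```

This way the caller can tell both that foo failed and what originally went wrong inside it, without losing the inner exception's information.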
Here is another answer: I use the traceback module, like so:
```import traceback
try:
1 / 0
except Exception:
print traceback.format_exc()
```
This gives the following output:
```Traceback (most recent call last):
File "<ipython-input-3-6b05b5b621cb>", line 2, in <module>
1 / 0
ZeroDivisionError: integer division or modulo by zero
```
If the code runs from a file, the traceback will tell you the line and character number of where the error occurred :)
EDIT:
To accommodate the comment from Habibutsu: ```Yes, it's useful for printing, but when needed to get more info (for example function name) - not suitable```
The doc-pages tell you how to extract the trace programmatically: http://docs.python.org/2/library/traceback.html
From the page linked above:
```>>> import traceback
>>> def another_function():
... lumberstack()
...
>>> def lumberstack():
... traceback.print_stack()
... print repr(traceback.extract_stack())
... print repr(traceback.format_stack())
...
>>> another_function()
File "<doctest>", line 10, in <module>
another_function()
File "<doctest>", line 3, in another_function
lumberstack()
File "<doctest>", line 6, in lumberstack
traceback.print_stack()
[('<doctest>', 10, '<module>', 'another_function()'),
('<doctest>', 3, 'another_function', 'lumberstack()'),
('<doctest>', 7, 'lumberstack', 'print repr(traceback.extract_stack())')]
[' File "<doctest>", line 10, in <module>\n another_function()\n',
' File "<doctest>", line 3, in another_function\n lumberstack()\n',
' File "<doctest>", line 8, in lumberstack\n print repr(traceback.format_stack())\n']
```
The doc-string for ```traceback.extract_stack``` is the same as for ```traceback.extract_tb```
```
traceback.extract_tb(traceback[, limit])
Return a list of up to limit “pre-processed” stack trace entries
extracted from the traceback object traceback. It is useful for
alternate formatting of stack traces. If limit is omitted or None, all
entries are extracted. A “pre-processed” stack trace entry is a
quadruple (filename, line number, function name, text) representing
the information that is usually printed for a stack trace. The text is
a string with leading and trailing whitespace stripped; if the source
is not available it is None.
```
Comment for this answer: Yes, it's useful for printing, but when needed to get more info (for example function name) - not suitable
Comment for this answer: @Habibutsu I updated my answer to show you how `traceback` is indeed suitable, and can be used to extract function names for example :)
|
Title: Assign a destination folder in Google Drive when saving a file
Tags: php;google-api;google-drive
Question: Where do I set the destination-folder parameter so that Google Drive saves the file there?
It is saving the file at the root, and I want it placed in the 'estudios' folder.
My variable $folderId holds the path.
``` function insertaArchivoDrive($service, $nombre_estudio, $folderId, $data,$ruta){
// This is uploading a file directly, with no metadata associated.
function leerPorPedazos($fp, $bytesDelPedazo){
$totalBytes = 0;
$pedazoGigante = "";
while (!feof($fp)) {
$pedazo = fread($fp, 8192);
$totalBytes += strlen($pedazo);
$pedazoGigante .= $pedazo;
if ($totalBytes >= $bytesDelPedazo) {
return $pedazoGigante;
}
}
return $pedazoGigante;
}
$archivoDrive = new Google_Service_Drive_DriveFile();
$archivoDrive->setName($nombre_estudio);
$archivoDrive->setDescription('A test zip');
$archivoDrive->setMimeType('application/zip');
$bytesDelPedazo = 1 * 1024 * 1024; //128Kbs
$paramsOpc = array(
'fields' => '*'
);
$this->client->setDefer(true);
$solicitud = $service->files->create($archivoDrive,$paramsOpc);
$multimedia = new Google_Http_MediaFileUpload(
$this->client,
$solicitud,
"application/zip",
null,
true,
$bytesDelPedazo
);
$multimedia->setFileSize(filesize($ruta));
$estado = false;
$fp = fopen($ruta, "rb");
while (!$estado && !feof($fp)) {
// read until we stop getting $bytesDelPedazo from the local file
$pedazo = leerPorPedazos($fp, $bytesDelPedazo);
$estado = $multimedia->nextChunk($pedazo);
}
echo "Id del archvio: " . $estado->id;
echo "Folder del archivo: " . $estado->parents[0];
//var_dump($estado);
return $estado;
}
```
Here is the accepted answer: To assign a folder where the file should be saved, you have to specify it in the file's metadata. There are two ways to do it: the way you are using is to set each value by calling a method on the object; the other is to set everything using an array. Here is a demonstration of both.
The way you are currently using
```$archivoDrive = new Google_Service_Drive_DriveFile();
$archivoDrive->setName($nombre_estudio);
$archivoDrive->setDescription('A test zip');
$archivoDrive->setMimeType('application/zip');
$archivoDrive->setParents(['id-del-folder']); // this is where the folder is added
```
The other way
```$archivoDrive = new Google_Service_Drive_DriveFile(array(
'name' => $nombre_estudio,
'description' => 'A test zip',
'mimeType' => 'application/zip',
    'parents' => ['id-del-folder'] // this is where the folder is added
));
```
Either of the two gives the same result. I prefer the latter because I find it cleaner to read.
Comment for this answer: It works now, thank you very much. I tried removing the array $paramsOpc = array(
 'fields' => '*'
); because I thought it wasn't needed and commented it out, but when I removed it the file wouldn't download. What does that array do?
Comment for this answer: that array specifies which properties to return once the file is uploaded. The asterisk stands for **all** of them
|
Title: How could I avoid repeating the code in each condition?
Tags: javascript;condiciones
Question: I have some JS code with a series of conditions that repeat themselves; unfortunately, due to my limited knowledge, this was the way I found to do it. It does work, but I wonder whether there is a way to write it so that I don't have to repeat the same thing in each branch. If you look at the code you will notice that everything is the same except for the line that says ```Calculo = carroB <```, where the amounts change. I appreciate the help.
```var txteXpress = "";
var txtPayable = "";
var Calculo = "";
var tipoExpress = "";
var infoExpress = "";
var MontoFinal = "";
var colones = "";
if (document.getElementById('Exp-1').checked) {
txteXpress = "Monto Express";
txtPayable = "Total IVI";
Calculo = carroB < 30000 ? 1000 : 0;
tipoExpress = "Express 1";
infoExpress = innerHTML = elExpress;
MontoFinal = ((carroB + Calculo - Cupon) * perC).toFixed(2).toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",");
var colones = "¢ ";
} else if (document.getElementById('Exp-2').checked) {
txteXpress = "Monto Express";
txtPayable = "Total IVI";
Calculo = carroB < 30000 ? 1500 : 0;
tipoExpress = "Express 2";
infoExpress = innerHTML = elExpress;
MontoFinal = ((carroB + Calculo - Cupon) * perC).toFixed(2).toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",");
var colones = "¢ ";
} else if (document.getElementById('Exp-3').checked) {
txteXpress = "Monto Express";
txtPayable = "Total IVI";
Calculo = carroB < 30000 ? 2000 : 0;
tipoExpress = "Express 3";
infoExpress = innerHTML = elExpress;
MontoFinal = ((carroB + Calculo - Cupon) * perC).toFixed(2).toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",");
var colones = "¢ ";
} else if (document.getElementById('wopic').checked) {
txteXpress = "";
txtPayable = "Total IVI";
Calculo = "";
tipoExpress = "Express 3";
infoExpress = innerHTML = elExpress;
MontoFinal = ((carroB + Calculo - Cupon) * perC).toFixed(2).toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",");
var colones = "¢ ";
}
```
Comment: Well, put it at the beginning, before the first `if`, or at the end after the last one; it depends on what you need to do
Comment: As @Benito-B comments, everything that repeats can be placed outside the conditional, leaving only what varies inside.
Comment: The correct approach is to create a function for the repeated code and pass the number that changes as a parameter to it.
Comment: Sorry, I've now added the variables; if the user doesn't check anything, they are all empty. ```var txteXpress = "";
var txtPayable = "";
var Calculo = "";
var tipoExpress = "";
var infoExpress = "";
var MontoFinal = "";
var colones = ""; ```
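Following the advice in the comments, one possible refactor (a sketch only: the checkbox lookup is omitted and the sample numbers below are made up) moves the repeated assignments into a single function that receives the varying fee as a parameter:
```javascript
// Sketch of the refactor suggested in the comments. The fee (1000/1500/2000)
// is the only thing that changes between branches, so it becomes a parameter.
// Note: the original sets Calculo = "" in the last branch, which would turn
// the sum into string concatenation; this sketch uses 0 instead.
function formatMonto(n) {
  return n.toFixed(2).replace(/\B(?=(\d{3})+(?!\d))/g, ",");
}

function calcularExpress(tarifa, tipoExpress, carroB, Cupon, perC) {
  const esExpress = tarifa !== null;
  const Calculo = esExpress && carroB < 30000 ? tarifa : 0;
  return {
    txteXpress: esExpress ? "Monto Express" : "",
    txtPayable: "Total IVI",
    Calculo: Calculo,
    tipoExpress: tipoExpress,
    MontoFinal: formatMonto((carroB + Calculo - Cupon) * perC),
    colones: "¢ ",
  };
}

// Each checked radio button then maps to one call instead of a copied block,
// e.g. for Exp-1 (sample numbers made up):
const resultado = calcularExpress(1000, "Express 1", 20000, 500, 1.13);
```
The ```if/else if``` chain then only decides which arguments to pass, e.g. ```calcularExpress(1500, "Express 2", carroB, Cupon, perC)``` for Exp-2.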
|
Title: Creating a dataframe with variables from file name
Tags: r;text-mining
Question: tldr: I want to create a data frame with the number of occurrences of certain words in documents and later visualize their change over time, but I lose the necessary variables during the process.
The text files are named rigorously in a format that encodes a lot of data about them: "000-yyyy-mmdd-AAA-00-00-AA.txt" (the 0s stand for numbers and the As stand for letters). I've used "readtext" to load these as document variables.
``` files <- readtext(
"all_files.zip",
docvarsfrom = "filenames",
dvsep = "-",
docvarnames = c("var1", "year", "daymonth", "var2", "var3", "var4", "var5"),
)
```
From this, I count the specific terms with the code below, taken from another question. However, somewhere along the way I lose the document variables: I only get the whole filenames with the numbers back, whereas I would need at least the year variable to sort them.
I've tried several ways, such as obtaining the file names, regexing the year section out and inserting it back as a column, and changing several settings and packages, but as a beginner I could not find a solution. I'd guess that I lose the data during the tokenization, but I'm not sure.
Could you give me a recommendation, what should I do?
The following packages were active in my session:
```library(readr)
library(readtext)
library(dplyr)
library(lubridate)
library(stringr)
library(quanteda)
library(quanteda.textmodels)
library(quanteda.textstats)
library(purrr)
library(ggplot2)
library(tidytext)
files_toks <- tokens(files)
mydict <- c("apple", "pear")
dfmat <- tokens(files_toks) %>%
tokens_select(mydict) %>%
dfm()
convert(dfmat, to = "data.frame")
```
The dfmat dataframe goes like this:
``` doc_id apple pear
1 001-yyyy-mmdd-AAA-00-00-AA.txt 3 1
2 002-yyyy-mmdd-AAA-00-00-AA.txt 2 5
...
```
Comment: Yes, I should do it in the quanteda package, if possible. The text files are quite long documents (originally dozens of pages) containing English-language text. I hope this helps; I'm not sure what you meant by the look of the text files.
Comment: I am not sure I understand what exactly your problem is (isn't dfmat what you're looking for?), but having too many packages may be confusing. If you just want a word count, why not try a basic stringr::str_count(text, "word_to_count") and forget all the rest?
Comment: How does the text file look? What are it's contents? Do you have to do this with `quanteda` package?
Comment: You should include the names of all packages you are using. There are many text mining packages. Some packages have vignettes which show how the functions in the package work together.
|
Title: JVM hotspot options for large graph measure calculation:garbage collection
Tags: java;garbage-collection;heap-memory
Question: As part of my code I need to calculate some centrality measures for a graph with 70k vertices and 700k edges. For this purpose I used array and hash map data structures. Unfortunately I ran out of memory in the middle of the program. What would be the best JVM HotSpot parameters to handle this situation? Here is the exception I got:
```Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.HashMap.createEntry(Unknown Source)
at java.util.HashMap.addEntry(Unknown Source)
at java.util.HashMap.put(Unknown Source)
```
So I changed the heap size with -Xmx6g, but this parameter did not solve the problem; I still run out of heap space.
In my program I want to calculate some measure for each node; unfortunately the JVM keeps information for all nodes while it calculates the measure node by node. I want to know whether there is any way to make the JVM remove unneeded information from memory. For example, my code crashes after calculating the measure for 1000 of the 70000 nodes. Is there any way to remove the information related to those 1000 nodes from memory after the calculation, so that the memory can be assigned to other nodes? Is this related to the garbage collector?
Here is my code (which uses the JUNG library):
```public class FindMostCentralNodes {
private DirectedSparseGraph<Customer, Transaction> network = new DirectedSparseGraph<Customer, Transaction>();
static String dbName="SNfinal";
private int numberofNodes=0;
public static void main(String[] args) throws NumberFormatException, SQLException {
FindMostCentralNodes f=new FindMostCentralNodes();
int counter=1;
DirectedSparseGraph<Customer, Transaction> tsn=f.getTSN();
DistanceCentralityScorer<Customer,Transaction> scorer=new DistanceCentralityScorer<Customer,Transaction>(tsn,false,true,true);// un-weighted
Collection<Customer> subscribers=tsn.getVertices();
for(Customer node:subscribers){
String sql="update Node set dist_centrality='"+scorer.getVertexScore(node)+"' where subscriber='"+node.getName()+"'";
DatabaseManager.executeUpdate(sql,dbName);
System.out.println("Update node centrality measures successfully!: "+counter++);
node=null;
}
}
public DirectedSparseGraph<Customer,Transaction> getTSN() throws NumberFormatException, SQLException{
network= new DirectedSparseGraph<Customer,Transaction>();
String count="select count(*) as counter from Node";
ResultSet rscount=DatabaseManager.executeQuery(count, dbName);
if(rscount.next()) {
numberofNodes=rscount.getInt("counter");
}
Customer [] subscribers=new Customer[numberofNodes];
String sql="select * from Node";
ResultSet rs=DatabaseManager.executeQuery(sql, dbName);
while(rs.next()){
Customer sub=new Customer();
sub.setName(rs.getString("subscriber"));
network.addVertex(sub);
subscribers[rs.getInt("nodeID")-1]=sub;
sub=null;
}
String sql2="select * from TSN";
ResultSet rs2=DatabaseManager.executeQuery(sql2, dbName);
while(rs2.next()){
Transaction transaction=new Transaction(Double.parseDouble(rs2.getString("weight")));
network.addEdge( transaction, subscribers[rs2.getInt("callerNID")-1], subscribers[rs2.getInt("calleeNID")-1] );
transaction=null;
}
//garbage
rscount=null;
rs=null;
rs2=null;
subscribers=null;
return network;
}
}
```
Comment: For these sizes, I would seriously suggest writing your own collections; HashMap especially isn't well suited for storing small objects, it has a non-trivial size overhead.
Comment: I would set it as high as you can. You can use up to 80% of main memory as the heap. For example, if you have 32 GB of main memory, I would try `-Xmx24g` If you don't have much memory I suggest you buy some. The settings for eclipse are different to the settings for your program. You might like to decrease eclipse to give you more memory for your program.
Comment: did you have a space in the arguments `"-Xmx 4g"`? because that parameter should have _no space_, i.e. `"-Xmx4g"`.
Comment: No without space ;) it was just misspelling.
Comment: I thought eclipse and JVM hotspot are the same. How can i change JVM ones?
Comment: I figure it out myself. 6GB heap size didnt help neither. So probably i need to change the code...
Here is the accepted answer: I solved the problem by creating a method that runs the whole algorithm for 500 nodes at a time. After each run of this method the GC can reclaim all of that run's variables, so my problem is solved.
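The idea generalizes beyond this particular program; here is a sketch (in JavaScript rather than Java for illustration, with a hypothetical handle() callback standing in for the per-node centrality calculation):
```javascript
// Generic sketch of the batching idea from the accepted answer. Everything
// allocated while processing one batch becomes unreachable at the end of the
// loop iteration, so the garbage collector can reclaim it before the next
// batch runs.
function processInBatches(nodes, batchSize, handle) {
  const results = [];
  for (let i = 0; i < nodes.length; i += batchSize) {
    const batch = nodes.slice(i, i + batchSize);
    for (const node of batch) {
      results.push(handle(node));
    }
    // batch-local temporaries are collectable here
  }
  return results;
}
```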
Here is another answer:
Try changing the heap size (-Xmx) parameter
If you don't use some items in your HashMap, use the HashMap.remove method. If there are no more references to these objects, they will be collected by the GC.
Use Trove collections: http://trove.starlight-systems.com/overview
Comment for this answer: If you just put objects into HashMap without removing them it doesn't matter what GC algorithm do you use. If there is a reference from HashMap to these objects, they will not be collected by GC.
Comment for this answer: I used a library for this purpose, so I don't have access to that part of the code! I tried different Java GCs like G1, but my problem still exists.
Here is another answer: The garbage collector will remove any objects which are no longer reachable from live variables in your program. It will remove any such objects before giving up and throwing an ```OutOfMemoryError```. If you think too many objects are being retained in memory, then the first course of action is to let go of any objects you don't need, so that they are no longer reachable. Since you haven't shown us any code, we can't suggest any specific changes you could make.
If you trim the unnecessary objects, but still don't have enough memory, you could investigate the use of more compact ways to store data. A key technique is the use of off-heap storage; this is more work than simply using objects, but can be more efficient in terms of both space and CPU if it is done correctly. See:
http://mechanical-sympathy.blogspot.co.uk/2012/10/compact-off-heap-structurestuples-in.html
http://kotek.net/blog/3G_map
http://code.google.com/p/vanilla-java/wiki/HugeCollections
Comment for this answer: Finding 'packratted' objects involves digging deep into all the code that your program executes, often including library code. It's quite a big job. I'm afraid i don't have time to do it for your code. I barely even have time to do it for my own!
|
Title: cannot receive email on ubuntu email server
Tags: email;postfix;mail-server
Question: I set up the email server. I am able to send out mail, but when I try to receive mail by sending an email to myself from another account, it doesn't work. The other account receives a mailer-daemon bounce, and on this server the message shows as a reject.
This is the error that i get in (```/var/log/mail.log```):
```Jun 24 19:17:31 localhost postfix/smtpd[13352]: connect from mail-lb0-f173.google.com9563591850]
Jun 24 19:17:31 localhost postfix/trivial-rewrite[13329]: warning: do not list domain socialbaked.com in BOTH mydestination and virtual_mailbox_domains
Jun 24 19:17:31 localhost postfix/smtpd[13352]: NOQUEUE: reject: RCPT from mail-lb0-f173.google.com9563591850]: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown in local recipient table; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<mail-lb0-f173.google.com>
Jun 24 19:17:31 localhost postfix/smtpd[13352]: disconnect from mail-lb0-f173.google.com9563591850]
Jun 24 19:19:38 localhost postfix/master[2102]: daemon started -- version 2.7.0, configuration /etc/postfix
Jun 24 19:19:39 localhost dovecot: Dovecot v1.2.9 starting up (core dumps disabled)
Jun 24 19:19:39 localhost dovecot: auth-worker(default): mysql: Connected to 9563591850 (mail)
Jun 24 19:19:41 localhost postfix/master[2102]: reload -- version 2.7.0, configuration /etc/postfix
```
Comment: thanks that helped. i need to verify that this worked. where do i check to see if i received the email? i dont see any errors in the log and it shows a from address being mine.
Comment: By default, the mail should be spooled to `mbox` files in `/var/spool/mail` named after the recipient.
Comment: Have you checked [this](http://serverfault.com/questions/179419/postfix-recipient-address-rejected-user-unknown-in-local-recipient-table)?
Here is another answer: The following line from the error log gives valuable info:
```localhost postfix/smtpd[13352]: NOQUEUE: reject: RCPT from mail-lb0-f173.google.com9563591850]: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown in local recipient table; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<mail-lb0-f173.google.com>
```
Postfix should relay all mail to other servers on the internet; it does not actually receive mail for any domains. As in the error log, mail for example.com should be forwarded to the mail server for example.com. The solution is to remove $mydomain from the Postfix config /etc/postfix/main.cf in the line:
```mydestination = $mydomain, localhost.$mydomain, localhost
```
Source: serverfault
Here is another answer: For me, the issue was that I also had a DNS issue. To fix this, I used
``` dpkg-reconfigure postfix
```
From within the terminal window, and changed the "local networks" field to include the public IP of the server.
Afterwards, I ran ```service postfix reload``` and ```service postfix restart``` and all was well.
|
Title: Mounted Android System.img unable to edit
Tags: mount;android;mountpoint;hacking
Question: I have mounted an android system image using the command:
```sudo mount -t ext4 -o loop system.img sys/
```
When I go into the mount point, all of the files are easy to open. I can go through just about everything. The problem comes when I try to edit the files. I cannot save anything. It says that there is not space left on the device. I have plenty of space left on the device itself, so that means that the mount point is out of space.
No big deal, right? Wrong. I tried deleting some files that were much bigger than the one I was trying to overwrite, but the same problem happened. In fact, the total disk size actually shrunk by the amount I had removed, once again leaving me at 0 free space.
How can I fix this?
Here is another answer: This was apparently a bug in Ubuntu. The way that I got around it was by using the ```resize2fs``` command to make the partition file bigger. The exact command I ran was ```sudo resize2fs system.img 520M```
NOTE: You may have to run ```sudo resize2fs system.img 1000M``` first and then reduce the size back to ```520M``` by using the command above.
Here is another answer: Is the img file in your /etc/fstab? If it is, it may be listed as being mounted in a read-only state (ro). You can either edit fstab to change ro to rw, or you can explicitly state to mount the img file as read-write from the command line by adding the ```rw``` option after ```-o``` in your mount command.
```sudo mount -t ext4 -o loop,rw system.img sys/```
If you are still short on free space you may need to resize the image file. This can be done with GUI GParted via ```sudo gparted /dev/loop0``` then use menus to make the file larger so you have some free space to work with.
|
Title: Defining relationship type using SQL Server & C#?
Tags: c#;sql;.net;sql-server
Question: How can I determine the relationship type (one-to-one, one-to-many, many-to-many) using SQL Server & C#?
I tried several ways to determine it:
Attempt #1:
```DataTable foreigKeys = con.GetSchema("ForeignKeys");
```
But this returns the referenced tables and other details, not the relationship type...
Attempt #2:
```using (var cmd = new SqlCommand("SELECT * FROM Schema.TableName", con))
{
using (var reader = cmd.ExecuteReader())
using (var schemaTable = reader.GetSchemaTable())
{
}
}
```
And the schema table doesn't have the relationship type... it only has some column parameters like row.Field("ColumnName"), etc.
Attempt #3:
This approach was to get the information with a SQL query, but I'm not a database expert, so this query doesn't return the relationship type:
```SqlCommand sqlCom = new SqlCommand("select syscolumns.name, systypes.name " +
"from syscolumns " +
"inner join systypes " +
"on syscolumns.xtype = systypes.xtype AND syscolumns.xusertype = systypes.xusertype" +
"where syscolumns.id = object_id('MyDataBaseTable')", con);
```
I searched all over Google and tried many approaches, but still can't get the relationship type. Sorry for my bad English, and thanks for any help.
Comment: you will have to calculate that yourself.
Comment: you could look to see if there's a middle table that only has foreign keys in it. There isn't a bullet-proof way to know this by just tracking the FKs.
Comment: But how to determine whether a relationship is one-to-one or many-to-many? One-to-many is easy to determine, because the 'many' table has the foreign key and the 'one' table doesn't... So maybe someone can help with how to determine it :) Or maybe C# has tools that load the relationship type, like DataTable does :) Or a SQL query to determine this
Comment: If you have just a schema, and no data at all - you can (at best) guess. If there is a sufficient amount of data in the database, you could `SELECT ... GROUP BY ...` to check, whether at the time of the check, there is data to confirm the **m** or/and **n** in **1 to n** or **m to n**.
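To make the comments' heuristic concrete, here is a sketch (in JavaScript, with a made-up metadata shape; SQL Server itself stores no "relationship type", so the flags below would have to be derived yourself from sys.foreign_keys plus unique indexes):
```javascript
// Hypothetical illustration of the heuristic in the comments above. The input
// flags are assumptions you would compute from schema metadata:
// - isJunctionTable: the FK's table consists only of foreign-key columns
// - isUnique: the FK column is covered by a unique index/constraint
function guessRelationshipType(fk) {
  if (fk.isJunctionTable) return "many-to-many"; // middle table of FKs only
  if (fk.isUnique) return "one-to-one";          // FK is also unique
  return "one-to-many";                          // plain FK column
}
```
As the last comment notes, even this is only a guess from the schema; with enough data, a GROUP BY check can confirm whether the "many" side is actually used.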
|
Title: Error in flowplayer flash : 201, unable to load stream or clip, connection failed
Tags: google-chrome;flowplayer
Question: I am getting 201, unable to load stream or clip, connection failed…
It happens only in Chrome; when I play it in a Chrome incognito window, it plays fine, and it runs in all other browsers. What could be the problem? Kindly help.
|
Title: Is it possible to databind to an object in memory and also allow for databinding to XML
Tags: c#;wpf;xml;binding
Question: I have created an object called Project that has various properties (strings and some custom objects), and I have bound text fields to these properties to get user input. I have created a method that outputs this object to an XML file. However, when I import this XML file back into memory, the text fields are not populated with the text, and list views bound to custom objects that inherit from ObservableCollection show nothing. The XML does load properly, since if I enter text into the empty fields it updates the property and I can export an XML file with the new values.
To load the xml I use the following code
```public void LoadXML()
{
OpenFileDialog fileDialog = new OpenFileDialog();
fileDialog.Title = "Load XML File";
fileDialog.Filter = "XML Files|*.xml";
DialogResult result = fileDialog.ShowDialog();
if (result.ToString().Equals("OK"))
{
string filePath = fileDialog.FileName.ToString();
XmlSerializer serializer = new XmlSerializer(typeof(Project));
TextReader textReader = new StreamReader(filePath);
newProject = (Project)serializer.Deserialize(textReader);
textReader.Close();
}
}
```
Any suggestions would be welcomed, thanks.
Comment: Does the Project class implement the INotifyPropertyChanged interface and does it call throw the PropertyChanged event for every property changed? Are you using WPF?
Here is the accepted answer: I assume you use WPF.
You need to implement the INotifyPropertyChanged interface and raise its event for every property of your class that is tied to a control.
WPF then updates your GUI accordingly when you deserialize the Project from XML. If it does not, check whether the DataContext of your control is set to the Project instance that you deserialized.
Comment for this answer: Had INotifyPropertyChanged implemented, it was the DataContext of the control that wasn't set. Didn't realise it had to be set after de-serialisation. Thanks for the help.
|
Title: What happens when a bulb fuses in a parallel circuit?
Tags: electric-circuits;electricity;electric-current;electrical-resistance
Question: Let's say there's a parallel circuit consisting of some bulbs (which are lit up). If one of them fuses, I know that the others would continue to glow. But because one of the bulbs has fused, would the new resistance depend on the other bulbs? Would the current change accordingly (if so)?
Thank you
Comment: The total resistance would be *higher* because there is one fewer path current can travel through. Think about people going into a stadium through 5 doors. You close one, less people will be able to go through, no matter the size of the door closed.
Comment: Possible useful learning tool: https://phet.colorado.edu/en/simulation/circuit-construction-kit-dc
Here is the accepted answer: It is the equivalent resistance that changes, not the individual resistances. Suppose there are three bulbs in parallel with resistances $R_1$, $R_2$ and $R_3$.
$$\frac{1}{R_{eq}}=\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_3}$$
If one of them, say the third one, gets fused, the equivalent resistance changes to
$$\frac{1}{R'_{eq}}=\frac{1}{R_1}+\frac{1}{R_2}$$
So the equivalent resistance changes.
The current will change according to this resistance; the expression can be worked out.
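A quick numeric check of these formulas (the bulb resistances are made up for illustration):
```javascript
// Numeric check of the answer's formulas with made-up values
// R1 = 6, R2 = 3, R3 = 2 (ohms): 1/R_eq = 1/R1 + 1/R2 + 1/R3.
function parallelResistance(resistances) {
  const inverseSum = resistances.reduce((sum, r) => sum + 1 / r, 0);
  return 1 / inverseSum;
}

const rBefore = parallelResistance([6, 3, 2]); // all three bulbs intact
const rAfter = parallelResistance([6, 3]);     // third bulb fused
// rAfter > rBefore: losing a parallel path raises the equivalent resistance
```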
Here is another answer: That depends a lot on what type of bulb we are talking about (LED, incandescent, halogen, fluorescent, etc). For simplicity we (erroneously) assume that the bulb behaves like a linear resistor.
```
But because one of the bulbs has fused, would the new resistance depend on the other bulbs?
```
The new load resistance of all bulbs combined would be higher.
```
Would the current change accordingly (if so)?
```
The current delivered from the source would be lower.
The current through each remaining bulb will be slightly higher. This is caused by the internal impedance of the power source. Due to the lower source current, the voltage drop over the internal source impedance will also be slightly lower, so the bulbs will see a slightly higher voltage. If the source impedance is very small compared to the bulb resistance and/or the number of bulbs is large, this effect is negligible.
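This effect can be checked numerically with made-up values (a 12 V source with 0.5 Ω internal resistance feeding identical 6 Ω bulbs):
```javascript
// Sketch of the internal-impedance effect described above; all component
// values are made up for illustration.
function bulbCurrent(emf, internalR, bulbR, bulbCount) {
  const load = bulbR / bulbCount;                   // identical bulbs in parallel
  const terminal = emf * load / (load + internalR); // voltage after internal drop
  return terminal / bulbR;                          // current through each bulb
}

const iThreeBulbs = bulbCurrent(12, 0.5, 6, 3); // all bulbs glowing
const iTwoBulbs = bulbCurrent(12, 0.5, 6, 2);   // one bulb fused
// iTwoBulbs is slightly larger: the smaller source current lowers the
// internal voltage drop, so each remaining bulb sees a higher voltage
```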
|
Title: protocol buffer message to and from an XPathDocument
Tags: c#;.net;xml;protocol-buffers;protobuf-net
Question: I am trying to serialize and deserialize a protocol buffer message to and from an XPathDocument but it fails with an exception:
ProtoBuf.ProtoException: Mismatched group tags detected in message
How do I make this work?
I am using protobuf-net and my source code for reproducing it looks like this:
TestMsg.proto
``` option optimize_for = SPEED;
//*************************
message Test {
repeated A a = 1;
}
message A {
required string str = 1;
}
```
Progam.cs
```using System;
using System.Collections.Generic;
using System.IO;
using System.Xml;
using System.Xml.Serialization;
using System.Xml.XPath;
using ProtoBuf;
using TestMsg;
namespace protocolbufferserialize
{
class Program
{
static void Main(string[] args)
{
Test t = new Test();
XPathDocument xmldoc = Serialize(t);
Test t1 = Serialize(xmldoc);
}
public static XPathDocument Serialize(Test wro)
{
XPathDocument xmlDoc = null;
Serializer.PrepareSerializer<Test>();
XmlSerializer x = new XmlSerializer(wro.GetType());
using (MemoryStream memoryStream = new MemoryStream())
{
using (TextWriter w = new StreamWriter(memoryStream))
{
x.Serialize(w, wro);
memoryStream.Position = 0;
xmlDoc = new XPathDocument(memoryStream);
}
}
return xmlDoc;
}
public static Test Serialize(XPathDocument xmlDoc)
{
Test t = null;
Serializer.PrepareSerializer<Test>();
XmlSerializer x = new XmlSerializer(xmlDoc.GetType());
using (MemoryStream memoryStream = new MemoryStream())
{
using (TextWriter w = new StreamWriter(memoryStream))
{
x.Serialize(w, xmlDoc);
memoryStream.Position = 0;
t = Serializer.Deserialize<Test>(memoryStream);
}
}
return t;
}
}
}
```
I tried to extend this to use Serializer.Merge, but the Test object is empty when it comes back from XML.
``` using System;
using System.Collections.Generic;
using System.IO;
using System.Xml;
using System.Xml.Serialization;
using System.Xml.XPath;
using ProtoBuf;
using TestMsg;
namespace TestXMLSerilizationLars
{
class Program
{
static void Main(string[] args)
{
Test t = new Test();
A a = new A();
string str = "test";
a.str = str;
t.a.Add(a);
XPathDocument xmldoc = Serialize(t);
WriteXpathDocument(xmldoc, "c:\\testmsg.xml");
Test t1 = Serialize(xmldoc);
}
public static XPathDocument Serialize(Test t)
{
XPathDocument xmlDoc = null;
Serializer.PrepareSerializer<Test>();
XmlSerializer x = new XmlSerializer(t.GetType());
using (MemoryStream memoryStream = new MemoryStream())
{
using (TextWriter w = new StreamWriter(memoryStream))
{
x.Serialize(w, t);
memoryStream.Position = 0;
xmlDoc = new XPathDocument(memoryStream);
}
}
return xmlDoc;
}
public static Test Serialize(XPathDocument xmlDoc)
{
Test t = null;
Type type = xmlDoc.GetType();
XmlSerializer serializer = new XmlSerializer(type);
using (MemoryStream memoryStream = new MemoryStream())
{
serializer.Serialize(memoryStream, xmlDoc);
// memoryStream.Close();
Test newt = Deserialize(memoryStream.ToArray());
return newt;
}
return t;
}
static public Test Deserialize(byte[] Bytes)
{
MemoryStream SerializeStream = new MemoryStream(Bytes);
Test NewObject = Serializer.Deserialize<Test>(SerializeStream);
Test ObjectExist = new Test();
if (ObjectExist == null)
{
return NewObject;
}
else
{
SerializeStream.Seek(0, SeekOrigin.Begin);
Serializer.Merge<Test>(SerializeStream, ObjectExist);
return ObjectExist;
}
}
public static void WriteXpathDocument(XPathDocument xpathDoc, string filename)
{
// Create XpathNaviagtor instances from XpathDoc instance.
XPathNavigator objXPathNav = xpathDoc.CreateNavigator();
// Create XmlWriter settings instance.
XmlWriterSettings objXmlWriterSettings = new XmlWriterSettings();
objXmlWriterSettings.Indent = true;
// Create disposable XmlWriter and write XML to file.
using (XmlWriter objXmlWriter = XmlWriter.Create(filename, objXmlWriterSettings))
{
objXPathNav.WriteSubtree(objXmlWriter);
objXmlWriter.Close();
}
}
}
}
```
The xml file I dump out looks like this
```<?xml version="1.0" encoding="utf-8"?>
<Test xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<a>
<A>
<str>test</str>
</A>
</a>
</Test>
```
Here is another answer: The only time you use protobuf here is:
```x.Serialize(w, xmlDoc);
memoryStream.Position = 0;
t = Serializer.Deserialize<Test>(memoryStream);
```
where you have written xml (```x``` is an ```XmlSerializer```), and then attempted to read it via protobuf (```Serializer.Deserialize```).
However; protobuf is not xml; it is a binary format completely unrelated to xml. If your intention is to deep-clone the data, you should also serialize with protobuf-net (```Serializer.Serialize```).
It is often possible to convert a model between the two formats, but the streams themselves are not swappable.
|
Title: How to affect the directive children using @HostBinding?
Tags: angular;bootstrap-4;angular-directive
Question: I'm implementing a Bootstrap single button dropdown (docs).
In order to make it "open", a ```show``` class must be added to the main ```<div>``` and to the ```<ul>```.
This is closed:
```<div class="btn-group">
<button
type="button"
class="btn btn-primary dropdown-toggle">
Manage (Using Directive) <span class="caret"></span>
</button>
<ul class="dropdown-menu">
<li><a style="cursor: pointer;">Edit </a></li>
<li><a style="cursor: pointer;">Delete </a></li>
</ul>
</div>
```
This is open:
```<div class="btn-group show">
<button
type="button"
class="btn btn-primary dropdown-toggle">
Manage (Using Directive) <span class="caret"></span>
</button>
<ul class="dropdown-menu show">
<li><a style="cursor: pointer;">Edit </a></li>
<li><a style="cursor: pointer;">Delete </a></li>
</ul>
</div>
```
I'm trying to make it work as a directive with:
```<div class="btn-group" appDropdown>
<button
type="button"
class="btn btn-primary dropdown-toggle">
Manage (Using Directive) <span class="caret"></span>
</button>
<ul class="dropdown-menu">
<li><a style="cursor: pointer;">Edit </a></li>
<li><a style="cursor: pointer;">Delete </a></li>
</ul>
</div>
```
And the dropdown.directive.ts:
```import { Directive, HostListener, HostBinding } from '@angular/core';
@Directive({
selector: '[appDropdown]'
})
export class DropdownDirective {
@HostBinding('class.show') isOpen = false;
@HostListener('click') toggleOpen() {
this.isOpen = !this.isOpen;
}
}
```
This way I'm only able to add the ```show``` class to the ```<div>``` without adding it to the ```<ul>```.
Is there a way to affect the directive's children?
Here is a StackBlitz
This is a related question, but it does not mention any @HostBinding
Here is the accepted answer: I came across this post when I was looking for the answer to the same question, and I solved the issue myself using ElementRef. I'm fairly new to Angular, but coming from a React background I figured that if the 'Ref' in ElementRef is similar to refs in React, it should expose the element's HTML attributes and properties, specifically querySelector.
So if anyone else is stumped on this, this can serve as a solution.
```import { Directive, HostListener, ElementRef } from '@angular/core';
@Directive({
selector: '[appDropdown]',
})
export class DropdownDirective {
constructor(private dropdownRef: ElementRef<HTMLElement>) {}
@HostListener('click') toggleOpen = () => {
this.dropdownRef.nativeElement
.querySelector('div')
.classList.toggle('hidden');
};
}
```
|
Title: Jquery / Javascript Dynamically load CSS file (after runtime), replacing other css file?
Tags: javascript;jquery;css
Question: I've been reading up on how I can dynamically load a CSS file using jQuery at the page's runtime, but haven't been able to find anything on this... what I am wondering is whether it is possible to basically re-load a CSS file to reflect changes from the server side.
The reason for this is that I am making an app that offers a number of different page layout sizes. I ran into some strange issues when modifying every CSS element on the page using jQuery, so I am making a server-side script that will create a number of different CSS files that are identical except for the sizes of the elements. I want to be able to dynamically load a new version of this at any time so that it replaces the original and the changes are reflected in the page layout. I am not sure if this is possible using the other scripts that do dynamic loading, as they didn't seem to mention this use case. Thanks for any info.
Comment: Possible duplicate of [Load external css file like scripts in jquery which is compatible in ie also](https://stackoverflow.com/questions/2685614/load-external-css-file-like-scripts-in-jquery-which-is-compatible-in-ie-also)
Here is the accepted answer: We do something similar in our web app. The users can choose between several predefined layouts.
There is a static ```CSS``` file loaded normally with common styles shared by all layouts.
Then the function below receives a ```CSS``` string delivered by the server:
```var setStyle = function (css){
//css has the format: selector{...style...}
var styleNode,
cur = document.getElementById('_theme');
cur && cur.parentNode.removeChild(cur);
styleNode = document.createElement('style');
styleNode.setAttribute('type', 'text/css');
styleNode.setAttribute('id', '_theme');
document.getElementsByTagName('head')[0].appendChild(styleNode);
if((/MSIE/).test(navigator.userAgent)){
styleNode.styleSheet.cssText = css;
}else{
styleNode.appendChild(document.createTextNode(css));
}
}
```
The function adds a ```STYLE``` tag with the id ```_theme``` and insert the ```CSS``` definition in it.
And the layout is applied to the page.
If the id ```_theme``` exists already, it is replaced.
More recently we developed a mobile version of our web app and we radically changed the technique.
The style is no longer defined by a static ```CSS``` file but by ```JSON``` that we can generate dynamically, using variables, functions, etc... directly in the browser.
We made a small JS lib of it; the code is available at: http://github.com/pure/jstyle
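As a rough illustration of that JSON idea (a hypothetical theme format, not the actual jstyle API), a theme object can be flattened into a CSS string and handed to a function like ```setStyle``` above:

```javascript
// Hypothetical sketch: flatten a {selector: {property: value}} theme object
// into the "selector{...style...}" string format that setStyle() expects.
function jsonToCss(theme) {
  return Object.keys(theme).map(function (selector) {
    var props = theme[selector];
    var body = Object.keys(props).map(function (prop) {
      return prop + ':' + props[prop] + ';';
    }).join('');
    return selector + '{' + body + '}';
  }).join('');
}

var css = jsonToCss({
  'body': { 'font-size': '14px', 'color': '#333' },
  '.btn': { 'padding': '4px 8px' }
});
console.log(css);
// body{font-size:14px;color:#333;}.btn{padding:4px 8px;}
```

Because the string is built in the browser, variables and functions can drive the values, which is the flexibility described above.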
Comment for this answer: Good info.. I'm curious why you changed to dynamic generation for the mobile app though. I would think this would make the slower device (the phone) do more processing than normal, since the JS has to do at least some work putting that JSON into the layout. I suppose it's so the content will load incrementally on a slow connection, instead of the user seeing nothing until the entire CSS is loaded?
Comment for this answer: We did the desktop version first. I will move it to JS later as well. JS is fast enough, the DOM is slow in mobiles. If you plan to give some flexibility to people to change ie: fonts, colors, background; Using JS, variables, functions make it easier.
Here is another answer: ```function createjscssfile(filename, filetype){
if (filetype=="js"){ //if filename is a external JavaScript file
var fileref=document.createElement('script')
fileref.setAttribute("type","text/javascript")
fileref.setAttribute("src", filename)
}
else if (filetype=="css"){ //if filename is an external CSS file
var fileref=document.createElement("link")
fileref.setAttribute("rel", "stylesheet")
fileref.setAttribute("type", "text/css")
fileref.setAttribute("href", filename)
}
return fileref
}
function replacejscssfile(oldfilename, newfilename, filetype){
var targetelement=(filetype=="js")? "script" : (filetype=="css")? "link" : "none" //determine element type to create nodelist using
var targetattr=(filetype=="js")? "src" : (filetype=="css")? "href" : "none" //determine corresponding attribute to test for
var allsuspects=document.getElementsByTagName(targetelement)
for (var i=allsuspects.length-1; i>=0; i--){ //search backwards within nodelist for matching elements to remove
if (allsuspects[i] && allsuspects[i].getAttribute(targetattr)!=null && allsuspects[i].getAttribute(targetattr).indexOf(oldfilename)!=-1){
var newelement=createjscssfile(newfilename, filetype)
allsuspects[i].parentNode.replaceChild(newelement, allsuspects[i])
}
}
}
replacejscssfile("oldscript.js", "newscript.js", "js") //Replace all occurrences of "oldscript.js" with "newscript.js"
replacejscssfile("oldstyle.css", "newstyle.css", "css") //Replace all occurrences of "oldstyle.css" with "newstyle.css"
```
From: http://www.javascriptkit.com/javatutors/loadjavascriptcss2.shtml
|
Title: Meteor re-activity issue inside template helper
Tags: meteor
Question: I currently have a template where I am querying the database with the following query.
```allMessages = Messages.find({$or: [{type: "user_message"}, {type: "system_message", time: {$gt: (Date.now() - 180000)} }]}, {sort: {time: 1 }}).fetch()
```
Now obviously the template helper gets re-run whenever something new goes into or is removed from this set of data, which is exactly what I want. The issue arises when a ```system_message``` gets older than 2 minutes and I no longer want that message to be part of my query. The data does not update when this happens; it only updates when a new message comes in or for some reason a message is removed.
Does anyone know why this might be the case? It seems to me that there shouldn't be an issue, as the data matched by the query is changing, so it should re-run, but it isn't.
Comment: The actual data inside the collection is not changing, thus the query is not being rerun. The way I get around this is to use a `Deps.Dependency` that uses a `setTimeout` to fire a changed event.
Here is another answer: It isn't working because ```Date.now()``` isn't a reactive variable. If you were to set the date limit in something like a session variable or a ReactiveDict, it would cause your helper to recompute. Here's an example using the ```Session```:
```Template.myTemplate.allMessages = function() {
var oldestMessageDate = Session.get('oldestMessageDate');
var selector = {
$or: [
{type: "user_message"},
{type: "system_message", time: {$gt: oldestMessageDate}}
]
};
return Messages.find(selector, {sort: {time: 1}});
};
Template.myTemplate.created = function() {
this.intervalId = Meteor.setInterval(function() {
Session.set('oldestMessageDate', new Date - 120000);
}, 1000);
};
Template.myTemplate.destroyed = function() {
Meteor.clearInterval(this.intervalId);
};
```
Every second after the template is created, it changes ```oldestMessageDate``` to a new date which is two minutes in the past. Note that the ```intervalId``` is stored in the template instance and later cleaned up in the ```destroyed``` callback so it won't keep running after the template is no longer in use. Because ```oldestMessageDate``` is a reactive variable, it should cause your ```allMessages``` helper to continually rerun.
Comment for this answer: @SeanCallahan did this solution end up working for you?
|
Title: Doubts about the use of polymorphism, and also about how is polymorphism related to casting?
Tags: java;oop;polymorphism;instanceof
Question: I give lessons on the fundamentals of the Java programming language, to students who study this subject in college.
Today one of them got me really confused with her question, so I told her to give me just a day to think about the problem, and I'll give her as accurate of an answer as I can.
She told me that the teacher got really angry when she used the keyword ```instanceof``` in her exam.
Also, she said that the teacher said there is no way to prove how polymorphism works if she uses that keyword.
I thought a lot to try to find a way to prove that on some occasions we need to use ```instanceof```, and also that even if we use it, there is still some polymorphism in that approach.
So this is the example I made:
```public interface Animal
{
public void talk();
}
class Dog implements Animal {
public void talk() {
System.out.println("Woof!");
}
}
public class Cat implements Animal
{
public void talk() {
System.out.println("Meow!");
}
public void climbToATree() {
System.out.println("Hop, the cat just climbed to the tree");
}
}
class Hippopotamus implements Animal {
public void talk() {
System.out.println("Roar!");
}
}
public class Main {
public static void main(String[] args) {
//APPROACH 1
makeItTalk(new Cat());
makeItTalk(new Dog());
makeItTalk(new Hippopotamus());
//APPROACH 2
makeItClimbToATree(new Cat());
makeItClimbToATree(new Hippopotamus());
}
public static void makeItTalk(Animal animal) {
animal.talk();
}
public static void makeItClimbToATree(Animal animal) {
if(animal instanceof Cat) {
((Cat)animal).climbToATree();
}
else {
System.err.println("That animal cannot climb to a tree");
}
}
}
```
My conclusions are the following:
The first approach (APPROACH 1) is a simple demo of how to program to an interface, not an implementation. I think the polymorphism is clearly visible in the parameter of the method ```makeItTalk(Animal animal)```, and also in the way the talk method is called on the animal object. (This part is ok)
The second part is the one that confuses me. She used ```instanceof``` at some point in her exam (I don't know what her exam looked like), and that was not accepted because the teacher said she was not proving polymorphism.
To help her understand when she can use ```instanceof```, I thought about telling her, that she can use it, when the method she needs to call is not in the interface, but it is just in one of the implementing classes.
As you can see, only cats can climb trees, and it would not be logical to make a Hippopotamus or a Dog climb a tree. I think that could be an example of when to use ```instanceof```.
But what about polymorphism in approach 2?
How many uses of polymorphism do you see there (only approach 2)?
Do you think this line has some type of polymorphism in it?
```((Cat)animal).climbToATree();```
I think it does, because in order to achieve a cast of this type, the objects need to have an IS-A relationship, and in some way that is polymorphism.
What do you think, is it correct?
If yes, how would you explain with your own words, that casting relies on polymorphism?
Here is the accepted answer: In your above example, there is no need to call
```makeItClimbToATree (new Hippopotamus ());
```
It could easily be avoided if makeItClimbToATree didn't expect an Animal but something more specific that is actually able to climb a tree; then the need to accept arbitrary animals, and therefore to use instanceof, disappears. If you manage the animals in a List of Animals, the problem becomes more obvious.
While ircmaxell's explanation starts out great, introducing the Koala and other tree climbers, it overlooks a second kind of extension hiding in a sea anemone: other capabilities of animals, like seaAnemoneHider, winterSleeping, blueEyed, bugEating, and so on. You would end up with boolean after boolean, constantly recompiling the base class, as well as breaking customers' extending classes, which would need recompilation again and still couldn't introduce their own capabilities in a similar manner.
Customer A would need Customer B to declare a NotBugEatingException to get your behaviour into the base class.
Introducing your own interfaces, combined with instanceof, is a much cleaner and more flexible approach. Customer A might define divingLikeAPenguin and customer B trumpeting, neither knowing of the other, neither affecting the Animal class nor provoking useless recompilations.
```import java.util.*;
interface Animal {
public void talk ();
}
interface TreeClimbing {
public void climbToATree ();
}
class Dog implements Animal {
public void talk () { System.out.println("Woof!"); }
}
class Cat implements Animal, TreeClimbing {
public void talk () { System.out.println("Meow!"); }
public void climbToATree () { System.out.println ("on top!"); }
}
public class TreeCriterion {
public static void main(String[] args) {
List <Animal> animals = new ArrayList <Animal> ();
animals.add (new Cat ());
animals.add (new Dog ());
discuss (animals);
upTheTree (animals);
}
public static void discuss (List <Animal> animals) {
for (Animal a : animals)
a.talk ();
}
public static void upTheTree (List <Animal> animals) {
for (Animal a : animals) {
if (a instanceof TreeClimbing)
((TreeClimbing) a).climbToATree ();
}
}
}
```
We don't need a third animal, dog and cat are enough. I made them default visible instead of public, to make the whole example fit into a single file.
Comment for this answer: I still feel this type of strategy will lead to violations of the [SRP](http://en.wikipedia.org/wiki/Single_responsibility_principle) and [LSP](http://en.wikipedia.org/wiki/Liskov_substitution_principle) leading to huge classes and an unmaintainable ball of code. Instead, I would either use a [Bridge](http://sourcemaking.com/design_patterns/bridge) or a [Visitor](http://sourcemaking.com/design_patterns/visitor) to decouple the walking/climbing implementation(s) from the animal class. Otherwise you wind up with a boat load of interfaces and demi-god objects that do way too much...
Comment for this answer: This is a really good one. So here you use instanceof with the interface that is implemented by the animals that can climb. Great idea; I think it is a good approach and also object-oriented. +1
Comment for this answer: @Michael I think I also agree with `user unknown`. Imagine you have a class called Human; humans can also climb trees, so they would need to implement that interface. I think this approach is in some way like creating marker interfaces. There is a similar thing already being done in Java: the Serializable interface must be implemented by all those classes that wish to be serialized. I think it is correct and also flexible to work this way.
Comment for this answer: @user unknown: In that case, if you start adding more and more animals with a greater variety of abilities, developing a more dynamic approach would probably be better (as opposed to creating interfaces for each "ability group"). Something like creating a normal "Animal" class with a "List abilities" field.
Comment for this answer: Thanks. The code would be a bit clearer, but much longer, with more animals with multiple abilities, and multiple intersections. And multiple parties providing software to interact with each other. A base class provider for animal, who provides a management solution for a zoo for example, and specialists, for bears, for insects, for aquariums, where some customeer might not need any of the aquarium specific classes (because he has no aquarium), and some special aquarium doesn't need WinterSleeping or TreeClimbing.
Comment for this answer: Yes. And then? If you would like to call climbTree for every animal, which has this ability in its list? Would it just be a String? That would be easy, and very extensible, but not be checked by the compiler for spelling errors, and how would you move from knowing, that the BlackBear can climb the tree, to actually climbing it?
Comment for this answer: I don't have the climbing and so on in the animal class, but in the interfaces, which is the SRP, and in the concrete class (Cat). An interface MouseEating could be implemented by Cat and Eagle, TreeClimbing by Cat and Bear, Something else by Bear and Eagle (MoneyDecoration?). I don't see a LSP-violation, can you elaborate on this one?
Here is another answer: ```
Do you think this line has some type of polymorphism in it?
```
```((Cat)animal).climbToATree();
```
No. Especially since ```Cat``` is a leaf class in the example.
```
I think it does, because in order to achieve a Casting of this type, the objects need to have an IS-A relationship, an in some way that is polymorphism.
```
Polymorphism requires the IS-A relationship, but not the other way round.
Polymorphism is when you dispatch to (potentially) different methods based on an abstract interface. If you don't have that dispatching, then it is not using polymorphism. In your example, using ```instanceof``` to cast to a class with no subclasses, you are removing the need for dispatching.
(Of course, there is more than one way to "do polymorphism" in Java. You can implement it using interfaces, using abstract classes, or using concrete classes with subclasses ... or hypothetical subclasses that may be written in the future. Interfaces (and dispatching based on an interface) are generally the best way because they give a clean separation of the API from the identity of class.)
And on a separate note, using ```instanceof``` like that is typically a sign of poor design and / or poor modelling. Specifically, it hard-wires the assumption that only cats can climb, which is trivially falsified if we include other animals into the model / program. If that happens, your code breaks.
Comment for this answer: Thank you for the explanation; now I understand better why there is no polymorphism in that concrete line of code. Regarding instanceof, I agree: maybe if I used it in a different way (for example, user unknown's approach above), it could be better. Some of the answers mentioned that instanceof is bad; I think that is true only if you use it in an inflexible way, as I did.
Here is another answer: I'm surprised no one wrote anything about Late Binding. Polymorphism in Java = Late Binding. The method being called will be be attached to the object when we finally know its type. In your example:
``` if(animal instanceof Cat) {
((Cat)animal).climbToATree();
}
```
You are calling ```climbToATree()``` on a Cat object so the compiler accepts it. At run time, there is no need to check the type of the calling object since ```climbToATree()``` belongs to ```Cat``` only. And so there is no polymorphism in these lines of code.
About casting being related to polymorphism: it isn't. Casting doesn't change the object at all; it just restricts which members the compiler lets you access through that reference, if the cast is legal. You could do this:
```class A {
int getInt() {}
}
class B extends A {
int getInt() {}
}
// in main
A a = new B();
A b = (A)a;
b.getInt(); // This would still call class B's getInt();
```
The cast itself added no value, getInt() was bound at run time to the runtime type of ```a```, which was class B.
Here is another answer: What about something like the code below? It solves the generality problem by separating tree-climbing into another interface that your animals may or may not implement. It fits the problem better: climbing trees is not an intrinsic property of all animals, only of a subset of them. At least to me it looks much clearer and more elegant than throwing ```NotImplementedException```s.
```
public interface Animal {
public void talk();
}
public interface AnimalCanClimbTrees extends Animal {
public void climbToATree();
}
public class Dog implements Animal {
public void talk() {
System.out.println("Woof!");
}
}
/* Animal is probably not needed, but being explicit is never bad */
public class Cat implements Animal, AnimalCanClimbTrees
{
public void talk() {
System.out.println("Meow!");
}
public void climbToATree() {
System.out.println("Hop, the cat just climbed to the tree");
}
}
class Hippopotamus implements Animal {
public void talk() {
System.out.println("Roar!");
}
}
public class Main {
public static void main(String[] args) {
//APPROACH 1
makeItTalk(new Cat());
makeItTalk(new Dog());
makeItTalk(new Hippopotamus());
//APPROACH 2
makeItClimbToATree(new Cat());
makeItClimbToATree(new Hippopotamus());
}
public static void makeItTalk(Animal animal) {
animal.talk();
}
public static void makeItClimbToATree(Animal animal) {
if(animal instanceof AnimalCanClimbTrees) {
((AnimalCanClimbTrees)animal).climbToATree();
}
else {
System.err.println("That animal cannot climb to a tree");
}
}
}
```
Comment for this answer: Yeah, I remembered that just now before your reply. Wrote a little comment on it.
Comment for this answer: @Chris Dennett: That was my original name, then I noticed it doesn't make explicit the fact that a climber is still an animal. `AnimalTreeClimber` sounds a little weird, so I went with 'AnimalCanClimbTrees'.
Comment for this answer: That is nice, just one little mistake: Cat only needs to implement AnimalCanClimbTrees, because that interface already extends Animal. Thanks :)
Comment for this answer: `makeItClimbToATree` shouldn't accept Animals, but only TreeClimbers.
Comment for this answer: TreeClimber would be better as an interface name.
Comment for this answer: Okay, but if there's any confusion, the package name (relating to animals) can be used to differentiate the interface :)
Here is another answer: The ```instanceof``` operator has nothing to do with polymorphism. It is simply used to see whether or not an object is an instance of a particular class. You see this operator being used a lot in the ```equals()``` method, because the method takes a generic ```Object``` as a parameter:
```public class Cat implements Animal{
@Override
public boolean equals(Object obj){
if (obj == null || !(obj instanceof Cat)){
//obj is null or not a "Cat", so can't be equal
return false;
}
if (this == obj){
//it's the same instance so it must be equal
return true;
}
Cat catObj = (Cat)obj; //cast to "Cat"
return this.getName().equals(catObj.getName()); //compare the two objects
}
}
```
If a class does not implement a method, then it should throw an exception. I believe the "official" exception you are supposed to throw is ```UnsupportedOperationException```. To be "correct", I think the ```Animal``` interface should have a ```public void climbToATree();``` method. The ```climbToATree()``` methods in the ```Dog``` and ```Hippo``` classes should throw an ```UnsupportedOperationException``` because they cannot implement this method. But if you are throwing this exception very often, then there may be something wrong with your object model, as this is not a common thing to do I don't think.
Also note that it's helpful (but not required) to use the ```@Override``` annotation with polymorphic programming in Java. This will cause a compilation error to be thrown if a method with this annotation does not override a parent method, implement an abstract method, or (in Java 6) implement an interface method. This can help catch any mistakes you make in the method signature. For example:
```public String tostring(){
return "foobar";
}
```
Without the annotation, the program would compile and run successfully. But this was not your intention! You wanted to override toString(), but you accidentally spelled the name wrong!!
Comment for this answer: I understand, thank you for your answer. Regarding polymorphism and casting, how do you think those topics are related to each other?
Here is another answer: The reason ```instanceof``` is seen as bad is simple. Cats aren't the only ```Animal``` that might be able to climb a tree.
What happens if, down the road, you need to add a Koala class? Then your simple ```if``` becomes a not-so-simple ```or```. Then, what happens when you add another class? And another one. And another one. That's the prime reason why ```instanceof``` is seen as bad: it couples the implementation to a concrete class, rather than leaving it open for the callee to determine what to do.
Simply implement the ```makeItClimbToATree()``` method to throw a ```CantClimbTreesException``` if called on an animal that can't climb. That way you have the best of both worlds. Easy to implement, and easy to extend.
IMHO, ```instanceof``` has only 1 truly valid use: In a test case to test the returned instance from a method matches the expected return type (in non-type safe languages).
Basically any other use can more than likely be refactored away or designed differently to negate the need for its use.
Another way to look at it is this: Polymorphism allows you to eliminate almost all conditional statements from your code. The only conditionals that you can't get rid of (at least all of them) are in object creational methods (such as in a factory where it must choose the class based upon a runtime argument). Just about any other conditional can be replaced by polymorphism. Therefore, anything that does conditional execution is by definition anti-polymorphic. That's not to say it's bad (there's a huge difference between Good and Good Enough), But in an academic discussion, it's not polymorphic...
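To make that last point concrete, here is a small self-contained sketch (hypothetical classes, in the spirit of the question's Animal example): the only remaining conditional lives in a creational method, and everything after it dispatches polymorphically with no ```instanceof```.

```java
// Hypothetical example: the one conditional lives in a factory method;
// all later calls rely on polymorphic dispatch instead of instanceof.
interface Animal {
    String talk();
}

class Cat implements Animal {
    public String talk() { return "Meow!"; }
}

class Dog implements Animal {
    public String talk() { return "Woof!"; }
}

public class Main {
    // The creational conditional: chooses a class from a runtime argument.
    static Animal create(String kind) {
        if (kind.equals("cat")) return new Cat();
        return new Dog();
    }

    public static void main(String[] args) {
        // From here on, no conditionals: the runtime type decides which talk() runs.
        System.out.println(create("cat").talk()); // Meow!
        System.out.println(create("dog").talk()); // Woof!
    }
}
```

Adding a Koala later means adding one class and one line in the factory; no other call site needs to change, which is exactly the maintenance win described above.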
Never forget the 60/60 rule. 60% of your total development time will be spent maintaining the code you wrote, and 60% of that time will be spent adding new features. Make maintaining easier, and your life will be easier as well. That's why ```instanceof``` is bad. It makes the initial design easier, but complicates the long term maintenance (which is more expensive anyway)...
Comment for this answer: @sfrj: There is and there is not polymorphism there. There is in the sense that any *child* of `Cat` can be treated the same. But in the sense of this discussion (Since you're typehinting against `Animal`, it should work on all children of animals, not just one subtree), no there is not.
Comment for this answer: @sfrj: Yes, but if that was the case, the `instanceof` method would be polymorphic as well. The only difference is that the cast will throw an exception on failure rather than return a boolean. You *could* do it that way, but it just feels wrong to me...
Comment for this answer: @user: I would argue that all of that functionality would not belong on the base class in the first place. You'd wind up with a huge ball of code that will be impossible to maintain. I'd go as far as arguing that if your interface has more than 4 or 5 methods, it's prob a good candidate for refactoring. So my full answer to your objection would be to break out that functionality into another class tree separate from the animal. But I didn't say that because that's outside the scope and spirit of this particular question. I hope that clarifies it a bit...
Comment for this answer: This was a really good practical explanation. Thank you.Can i ask you just one more thing i am interested: What do you think about the line where the casting is being done. Do you think, is there any type of polymorphism there?
Comment for this answer: @ircmaxell I understand partially; I am confused by one thing: I thought that all IS-A relationships (implementation and inheritance) are in some way polymorphic, so whenever we do a type cast, polymorphism is being demonstrated. Is what I just said correct?
Comment for this answer: @ircmaxell You are right, it looks wrong to me too. My goal was to try to find an argument to help her earn maybe a couple of points at the exam review. Thank you very much for your help clearing up my doubts.
Comment for this answer: The first part of your explanation is excellent, but when you suggest using exceptions, it is nearly as bad. Some animals will hide in a sea anemone, some will sleep in winter, some will do this, and some will do that. You will end up adding booleans upon booleans to the base class, and checks upon checks in the subclasses. Instead, using different interfaces for different capabilities will only affect the classes that really use those features, and allows much looser coupling.
Here is another answer: Maybe I'm missing the point and don't get the context of the exam question, but whether an ```Animal``` can climb a tree should be a part of the class that implements ```Animal```. For example, if ```Animal``` is an interface, you could have a method ```boolean isCapableOfClimbing()``` and then each implementing class would be able to indicate its capability.
A method that attempted to make the animal climb could then use that. It doesn't make sense for a method that's trying to make the animal climb a tree check whether it's an instance of a particular class, since then that method is specifying something that should be specified in the implementing class. A simple method should not provide behaviour for a class that it's using.
As for your question of when to use ```instanceof```, once place where it will almost always be used is if overriding the ```equals()``` method of a class, since it only accepts an ```Object``` and you typically have to ensure it is of the same type so it can be cast and then meaningfully compared.
Comment for this answer: @rick: Yes. The problem is the contrived example; if you're ever in a situation where you have a generic `Animal`, you shouldn't be prepared to ask it to `climb` (for a start, what class would `climb` be a method on?).
Comment for this answer: Wouldn't `isCapableOfClimbing()` break the 'Tell, Don't Ask?' mantra?
Comment for this answer: @Oli - probably an interface `Climber` and `Climbable` interface as objects to be climbed upon. I agree that `Animal` is too broad to have any `climbing` or `talking` methods, not all animals are capable of producing a sound or climbing. Maybe `Mammal` interface with `GiveBirth` or `BreastFeed` will be more appropriate as an example.
|
Title: How to add a password log in into screen scraping with Beautiful Soup / Python
Tags: python
Question: I am trying to figure out how to add a piece of code that will help the code I already have to sign into the website so that my program can screen scrape the data I need. The sign in is a simple "Log in ID" and "Password". After sign in I am redirected to a page that is not the page I want to scrape data from.
I have been unable to test my code. My code always gives me "None". I am assuming this is due to it not being able to sign into the website.
I am trying to screen scrape:
```<div class="Object7069">
<div style="font-family:verdana; font-size:9px; text-align:right; color:#999999;">
124 of 256
</div>
</div>
```
It is a simple variable "124/256" that will change over time.
My code:
```import urllib2
from bs4 import BeautifulSoup
quote_page = 'https://admin252.acellus.com/StudentFunctions/progress.html'
page = urllib2.urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')
price_box = soup.find('div', attrs={'class':'object7069'})
price = price_box
print price
```
Comment: Just realized it deleted the first half of my question...
Comment: What **exactly** is your question?
Here is another answer: Without actually logging in, I can't test the code properly but when I login to a different website and scrape data (which I do all the time) I use the requests library because it saves cookies and other login info.
Something like this:
```import requests
from bs4 import BeautifulSoup
quote_page = 'https://admin252.acellus.com/StudentFunctions/progress.html'
login_params = {'LoginUsername': 'Kyle', 'Password': 'abcd'}
# You will need to find out which params are actually sent and if it's a GET or POST request.
# I use Firefox developer (F12) to snoop on the parameters that are sent while logging in.
resp = requests.get(quote_page, params=login_params)
soup = BeautifulSoup(resp.content, 'html.parser')
price_box = soup.find('div', attrs={'class': 'Object7069'})  # class values are case-sensitive; the page uses a capital O
price = price_box.get_text()
print price
```
BTW, I find myself testing screen scraping in Python's interactive mode before running it as a script. For example, I'd print `resp` to see whether the request was successful (status code 200).
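Separately from the login issue, note that `find` compares the `class` value by exact, case-sensitive string match: the markup in the question says `Object7069` (capital O) while the code searches for `object7069`, which by itself makes `find` return None. A Python 3, stdlib-only sketch of the same exact-match behaviour (the `DivTextFinder` helper is hypothetical, written just for this demonstration):
```python
from html.parser import HTMLParser

# The markup from the question, verbatim.
HTML = '''<div class="Object7069">
<div style="font-family:verdana; font-size:9px; text-align:right; color:#999999;">
124 of 256
</div>
</div>'''

class DivTextFinder(HTMLParser):
    """Collects the text inside the first <div> whose class matches exactly."""
    def __init__(self, wanted_class):
        super().__init__()
        self.wanted = wanted_class
        self.depth = 0          # nesting depth inside a matching div
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            if tag == 'div':
                self.depth += 1
        elif tag == 'div' and dict(attrs).get('class') == self.wanted:
            self.depth = 1

    def handle_endtag(self, tag):
        if self.depth and tag == 'div':
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data.strip())

def find_div_text(html, cls):
    parser = DivTextFinder(cls)
    parser.feed(html)
    return ' '.join(c for c in parser.chunks if c)

print(find_div_text(HTML, 'object7069'))  # '' -- the lowercase class never matches
print(find_div_text(HTML, 'Object7069'))  # '124 of 256'
```
BeautifulSoup matches attribute values the same way, so fixing the capitalisation (as well as logging in, if that is also failing) is needed before `find` will return the div.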
|
Title: mod_rewrite works on linux, but not on windows
Tags: php;mod-rewrite
Question: This rule works well on Linux based machine:
```RewriteRule ^([^/]*)/([^/]*)$ /index.php?page=$1&id=$2 [L]
```
But on windows based it doesn't. Maybe there is another way to make it work on both systems?
Comment: It's activated, and I'm thinking the best idea is just to change the rule...
Comment: The mod_rewrite module in a XAMPP server is disabled by default. Search for it and activate it (remove the #) in apache/conf/httpd.conf, then restart the Apache server.
Here is another answer: It doesn't depend on the OS. I think you used Apache on Linux and are now using IIS.
htaccess for IIS is explained here : http://learn.iis.net/page.aspx/557/translate-htaccess-content-to-iis-webconfig/
Here is another answer: What do you mean "it doesn't work" on Windows? I'm pretty sure this is an issue with ```mod_rewrite``` working on your windows setup, and has nothing to do with this particular rule. I'm guessing if you look at ```apache_get_modules()```, you'll see it is not listed.
Comment for this answer: Well, it doesn't; I tried WampServer and XAMPP. Even http://localhost/images or http://localhost/javascript or other folders fail. If I remove that rule from .htaccess, everything works...
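For the XAMPP/WAMP case raised in the comments, a sketch of the `httpd.conf` lines to check (the paths are illustrative for a default Windows XAMPP install; yours may differ):
```apache
# Uncomment this line (remove the leading #) to enable mod_rewrite:
LoadModule rewrite_module modules/mod_rewrite.so

# .htaccess rules are ignored unless overrides are allowed for the docroot:
<Directory "C:/xampp/htdocs">
    AllowOverride All
</Directory>
```
After editing, restart Apache; `apache_get_modules()` in PHP should then list `mod_rewrite`.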
|
Title: CASE Statement in where clause in mysql
Tags: mysql;case;where
Question: I'm trying to fetch data from a table using a CASE condition in the WHERE clause, and currently I'm using the following query:
```$sQuery = "SELECT SQL_CALC_FOUND_ROWS ".str_replace(" , ", " ",
implode(", ", $aColumns))." FROM (select CASE when r.agent_id=24
THEN r.Unit ELSE '--' END AS MyUnit,
CASE when r.agent_id=24 THEN r.landlord_name ELSE '--' END AS landlord_name_new,
r.*,l.loc_name as location,sl.sub_sub_loc as sub_location,
c.category as category,CONCAT(u.first_name, ' ', u.last_name) As agent
from crm_sales r
LEFT JOIN crm_location l ON r.area_location_id=l.loc_id
LEFT JOIN crm_subloc sl ON sl.sub_loc_id = r.sub_area_location_id
LEFT JOIN crm_category c on c.id = r.category_id
LEFT JOIN crm_users u on u.id=r.agent_id
where r.is_active=1 AND r.is_archive=0
AND CASE agent_id WHEN r.agent_id!=24 then r.status=2 else 1=1
group by r.ref) sel
$sWhere
$sOrder
$sLimit
";
```
Now I want to add one more condition, something like this:
IF(r.agent_id != 24) THEN WHERE r.status=2
EDITED: added the CASE I want to the query above, but it gives an error.
Here is the accepted answer: Fix the CASE/WHEN expression in your WHERE clause to:
```AND CASE WHEN r.agent_id != 24
then r.status = 2
else 1 = 1 end
```
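Assuming `r.agent_id` is never NULL, the same filter can also be written without CASE at all, as plain boolean logic (a sketch, equivalent to the CASE above):
```sql
AND (r.agent_id = 24 OR r.status = 2)
```
Either form works; some find the boolean version easier to read in a WHERE clause.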
|
Title: Using a subshell for parameter substitution with diff
Tags: bash;shell
Question: I'm writing a shell script, and in an effort to make it shorter and easier to read, I'm trying to use nested subshells to pass parameters to diff.
Here's what I have:
```if
diff -iy '$(sort '$(awk 'BEGIN { FS = "|" } ; {print $1}' new-participants-by-state.csv)' '$(awk 'BEGIN { FS = "|" } ; {print $1}' current-participants-by-state.csv)')' > /dev/null;
then
echo There is no difference between the files. > ./participants-by-state-results.txt;
else
diff -iy '$(sort '$(awk 'BEGIN { FS = "|" } ; {print $1}' new-participants-by-state.csv)' '$(awk 'BEGIN { FS = "|" } ; {print $1}' current-participants-by-state.csv)')' > ./participants-by-state-results.txt;
fi
```
When I run the script, I keep getting ```diff: extra operand 'AL'```
I'd appreciate any insight into why this is failing. I think I'm pretty close. Thanks!
Comment: That's an awful lot for one command line. Especially since you're having trouble with it, I think you should break it down into smaller chunks and store them in variables.
Comment: I think it's failing because you are trying to use nested subshells to pass parameters to `diff`? And the lines are so long that the readability has been dramatically reduced.
Here is the accepted answer: Your code is unreadable because the lines are so long:
```if diff -iy '$(sort '$(awk 'BEGIN { FS = "|" } ; {print $1}' new-participants-by-state.csv)' \
'$(awk 'BEGIN { FS = "|" } ; {print $1}' current-participants-by-state.csv)')' \
> /dev/null;
then
echo There is no difference between the files. > ./participants-by-state-results.txt;
else
diff -iy '$(sort '$(awk 'BEGIN { FS = "|" } ; {print $1}' new-participants-by-state.csv)' \
'$(awk 'BEGIN { FS = "|" } ; {print $1}' current-participants-by-state.csv)')' \
> ./participants-by-state-results.txt;
fi
```
Repeating whole commands like that is also fairly nasty. You also have major problems with your use of single quotes; you only have one sort in each set of commands, apparently operating on the combined outputs of two identical ```awk``` commands (whereas you probably need two separate sorts, one for the output of each ```awk``` command); you're not using the ```-F``` option to ```awk``` when you could; you are repeating the gargantuan file names all over the place; and finally, it appears that you are probably wanting to use process substitution, but not actually doing so.
Let's take a step back and formulate the question clearly.
Given two files (```new-participants-by-state.csv``` and ```current-participants-by-state.csv```) find the first pipe-separated field on each line of each file, sort the lists of those fields, and compare the results of the two sorted lists.
If there are no differences, write a message into the output file ```participants-by-state-results.txt```; otherwise, list the differences in the output file.
So, we could use:
```oldfile='current-participants-by-state.csv'
newfile='new-participants-by-state.csv'
outfile='participants-by-state-results.txt'
tmpfile=${TMPDIR:-/tmp}/xx.$$
awk -F'|' '{print $1}' $oldfile | sort > $tmpfile.1
awk -F'|' '{print $1}' $newfile | sort > $tmpfile.2
if diff -iy $tmpfile.1 $tmpfile.2 > $outfile
then echo "There is no difference between the files" > $outfile
fi
rm -f $tmpfile.?
```
If this was going to be the final script, we'd want to put trap handling in place so that the temporary files are not left around unless the script is killed dead with SIGKILL.
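The trap handling mentioned above could look like this (a sketch; the work is shown inside a subshell purely so the cleanup is observable immediately afterwards):
```shell
#!/bin/sh
# Sketch of EXIT-trap cleanup: the temp files are removed however the
# enclosed work ends -- normal exit, HUP, INT or TERM.  (SIGKILL cannot
# be trapped, hence a script "killed dead" still leaves files behind.)
tmpfile=${TMPDIR:-/tmp}/xx.$$
(
    trap 'rm -f "$tmpfile.1" "$tmpfile.2"' EXIT HUP INT TERM
    touch "$tmpfile.1" "$tmpfile.2"
    # ... the awk | sort > $tmpfile.1 commands from above would run here ...
)
# The subshell has exited, so its trap has already fired:
[ -e "$tmpfile.1" ] || echo "temp files cleaned up"
```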
However, we can now use process substitution to avoid the temporary files:
```oldfile='current-participants-by-state.csv'
newfile='new-participants-by-state.csv'
outfile='participants-by-state-results.txt'
if diff -iy <(awk -F'|' '{print $1}' $oldfile | sort) \
<(awk -F'|' '{print $1}' $newfile | sort) > $outfile
then echo "There is no difference between the files" > $outfile
fi
```
Note how the code carefully preserves symmetries where there are symmetries. Note the use of shortish variable names to avoid the repetition of long file names. Note that the ```diff``` command is run just once, not twice - throwing away results which are needed later is not very sensible.
You could compress the output I/O redirection even more using:
```{
if diff -iy <(awk -F'|' '{print $1}' $oldfile | sort) \
<(awk -F'|' '{print $1}' $newfile | sort)
then echo "There is no difference between the files"
fi
} > $outfile
```
That sends the standard output of the enclosed commands to the file.
Of course, CSV might not be the appropriate nomenclature if the files are pipe-separated rather than comma-separated, but that's another matter altogether.
I'm also assuming that the status from ```diff -iy``` works as suggested by the original script; I've not validated that usage of the ```diff``` command.
Comment for this answer: Thanks for such a detailed answer. This is the first time I've tried anything like this, so I appreciate it.
Here is another answer: There are several problems here.
First, you're putting various arguments in single-quotes, which prevents any interpretation being done on them (for example, ```$(....)``` doesn't do anything special inside single-quotes). You're probably thinking of double-quotes, but those aren't what you want either.
Which brings us to the second problem, that diff and sort expect to be given filenames as arguments, and they operate on the data in those files; you're trying to pass the data directly as arguments, which doesn't work (and I suspect that's the origin of the error you're getting: diff expects exactly two filenames, you're passing more than two participant names, and AL happened to be third on the list and hence the one that diff panicked on). The usual way to do this is to use intermediate files (and multiple lines in the script), but bash actually has a way of doing this without either of those: process substitution. Essentially, what it does is run one command with output (or input, but we need output in this case) sent to a named pipe; then it passes the name of the pipe as an argument to another command. For example, ```diff <(command1) <(command2)``` will give you the differences between the outputs of command1 and command2. Note that since this is a bash-only feature, you must start the script with ```#!/bin/bash```, not ```#!/bin/sh```.
Third, there's a missing close-parenthesis that makes it a little hard to tell what's supposed to happen. Are both files supposed to be sorted before the comparison, or only the new-participants file?
Fourth, since the final comparison ignores case (```-i```), you'd better use a case-insensitive sort (```-f```) as well.
Finally, you're doing all of the processing twice if there are any differences. I'd recommend running the comparison once into a file, then if there were no differences just ignore/overwrite the (empty) file.
Oh, and just a stylistic thing: you don't need semicolons at the end of lines in bash. You only need semicolons if you're putting more than one command on the same line (and a few other cases like before ```then``` in an ```if``` statement).
Anyway, here's my rewrite:
```#!/bin/bash
if
  diff -iy <(awk 'BEGIN { FS = "|" } ; {print $1}' new-participants-by-state.csv | sort -f) \
           <(awk 'BEGIN { FS = "|" } ; {print $1}' current-participants-by-state.csv | sort -f) \
           > ./participants-by-state-results.txt
then
echo "There is no difference between the files." > ./participants-by-state-results.txt
fi
```
Comment for this answer: Thanks a lot for walking through all the issues and the rewrite!
|