_id | partition | text | language | title
---|---|---|---|---|
d16601 | val | Following is your JSON string:
{
"status": "FOUND",
"messages": null,
"sharedLists": [
{
"listId": "391647d",
"listName": "/???",
"numberOfItems": 0,
"colla borative": false,
"displaySettings": true
}
]
}
Clearly sharedLists is a JSON array within the outer JSON object.
So I have two classes as follows (created from http://www.jsonschema2pojo.org/ by providing your JSON as input):
ResponseObject - Represents the outer object
public class ResponseObject {
@SerializedName("status")
@Expose
private String status;
@SerializedName("messages")
@Expose
private Object messages;
@SerializedName("sharedLists")
@Expose
private List<SharedList> sharedLists = null;
public String getStatus() {
return status;
}
public void setStatus(String status) {
this.status = status;
}
public Object getMessages() {
return messages;
}
public void setMessages(Object messages) {
this.messages = messages;
}
public List<SharedList> getSharedLists() {
return sharedLists;
}
public void setSharedLists(List<SharedList> sharedLists) {
this.sharedLists = sharedLists;
}
}
and SharedList - Represents each object within the array
public class SharedList {
@SerializedName("listId")
@Expose
private String listId;
@SerializedName("listName")
@Expose
private String listName;
@SerializedName("numberOfItems")
@Expose
private Integer numberOfItems;
@SerializedName("colla borative")
@Expose
private Boolean collaBorative;
@SerializedName("displaySettings")
@Expose
private Boolean displaySettings;
public String getListId() {
return listId;
}
public void setListId(String listId) {
this.listId = listId;
}
public String getListName() {
return listName;
}
public void setListName(String listName) {
this.listName = listName;
}
public Integer getNumberOfItems() {
return numberOfItems;
}
public void setNumberOfItems(Integer numberOfItems) {
this.numberOfItems = numberOfItems;
}
public Boolean getCollaBorative() {
return collaBorative;
}
public void setCollaBorative(Boolean collaBorative) {
this.collaBorative = collaBorative;
}
public Boolean getDisplaySettings() {
return displaySettings;
}
public void setDisplaySettings(Boolean displaySettings) {
this.displaySettings = displaySettings;
}
}
Now you can parse the entire JSON string with Gson as follows:
Gson gson = new Gson();
ResponseObject target = gson.fromJson(inputString, ResponseObject.class);
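For example, a minimal sketch of reading the parsed result (the printed fields are just an illustration):
for (SharedList list : target.getSharedLists()) {
    // Each element corresponds to one object in the sharedLists array
    System.out.println(list.getListId() + ": " + list.getNumberOfItems() + " items");
}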
Hope this helps. | unknown | |
d16602 | val | When you union two queries together, the columns on both must match.
You select from posts,follow,users on the first query and posts,users on the second.
This won't work.
From the MySQL manual:
The column names from the first SELECT statement are used as the column names for the results returned. Selected columns listed in corresponding positions of each SELECT statement should have the same data type
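For example, a minimal sketch of a UNION where both SELECTs expose the same columns (the column names here are assumptions):
SELECT posts.id, posts.title, users.name
FROM posts, follow, users
WHERE posts.author_id = users.id AND follow.follower = '$id'
UNION
SELECT posts.id, posts.title, users.name
FROM posts, users
WHERE posts.author_id = users.id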
A: Perhaps a JOIN would serve you better ... something like this:
SELECT * FROM posts
JOIN users on posts.author_id=users.id
JOIN follow on users.id=follow.following
WHERE follow.follower='$id' | unknown | |
d16603 | val | I'll leave aside the fact that this doesn't sound very secure. Maybe you have a good reason for doing it this way that I'm not aware of.
In the tMySQLOutput component, go to the Advanced settings tab, and add the following in the Additional JDBC parameters field: "authenticationPlugins=mysql_clear_password" (with quotes).
(Note: I'm not sure if the parameter value has the right syntax. You might have to do some more digging to find out.)
Rationale:
1) The link you sent has this line:
The mysql, mysqladmin, and mysqlslap client programs support an --enable-cleartext-plugin option that enables the plugin on a per-invocation basis.
2) The tMySQLOutput allows custom parameters to be sent to the JDBC library. See here for details: https://help.talend.com/display/TalendComponentsReferenceGuide54EN/tMysqlOutput .
3) MySQL's JDBC library has an authentication plug-in parameter. See here for details: (scroll down to the list of parameters) https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-reference-configuration-properties.html | unknown | |
d16604 | val | You can add flags that specify that the new activity will replace the old one:
public void openMain(View view){
Intent intent = new Intent(this, MainActivity.class);
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_CLEAR_TASK);
startActivity(intent);
} | unknown | |
d16605 | val | To avoid this confusion you can use
"git push origin --delete branch_name"
This deletes the remote branch, not the local branch.
A: Make sure you use a capital D in the command; in this case you would type git branch -D <branch_name>. Note that this will only delete the branch from your local computer.
If you are trying to delete a remote branch, type git push origin :<branch_name> (remember to add the colon)
A: Find the file .git/refs/heads/branch_name in Windows Explorer and directly delete it.
A: This only happens when the default branch (the branch HEAD in the bare repository folder is pointing to - usually master) doesn't exist locally.
If it does exist - you would be able to delete the branch with branch -d even from a bare repository folder (-d assuming it was merged).
If you are working with git worktrees - issuing the command from the worktree may succeed (same condition regarding HEAD of the worktree should be met)
A: From Pyrocks's answer:
This only happens when the default branch (the branch HEAD in the bare repository folder is pointing to - usually master) doesn't exist locally
This case is addressed with Git 2.39 (Q4 2022), which fixes a bug where git branch -d(man) did not work on an orphaned HEAD.
See commit eb20e63 (02 Nov 2022) by Jeff King (peff).
(Merged by Taylor Blau -- ttaylorr -- in commit 26734da, 18 Nov 2022)
branch: gracefully handle '-d' on orphan HEAD
Reported-by: Martin von Zweigbergk
Signed-off-by: Jeff King
Signed-off-by: Taylor Blau
When deleting a branch, "git branch -d"(man) has a safety check that ensures the branch is merged to its upstream (if any), or to HEAD.
To do that, naturally we try to resolve HEAD to a commit object.
If we're on an orphan branch (i.e., HEAD points to a branch that does not yet exist), that will fail, and we'll bail with an error:
$ git branch -d to-delete
fatal: Couldn't look up commit object for HEAD
This usually isn't that big of a deal.
The deletion would fail anyway, since the branch isn't merged to HEAD, and you'd need to use "-D" (or "-f").
See "How do I delete a local git branch when it can't look up commit object in 'refs/heads'?", since Git 2.34 (Q4 2021) added the --force option to git branch -d.
And doing so skips the HEAD resolution, courtesy of 67affd5 ("git-branch -D(man): make it work even when on a yet-to-be-born branch", 2006-11-24, Git v1.5.0-rc0 -- merge).
But there are still two problems:
*The error message isn't very helpful.
We should give the usual "not fully merged" message, which points the user at "branch -D".
That was a problem even back in 67affd5.
*Even without a HEAD, these days it's still possible for the deletion to succeed.
After 67affd5, commit 99c419c ("branch -d: base the 'already-merged' safety on the branch it merges with", 2009-12-29, Git v1.7.0-rc0 -- merge) made it OK to delete a branch if it is merged to its upstream.
We can fix both by removing the die() in delete_branches() completely, leaving head_rev NULL in this case.
It's tempting to stop there, as it appears at first glance that the rest of the code does the right thing with a NULL.
But sadly, it's not quite true.
We end up feeding the NULL to repo_is_descendant_of().
In the traditional code path there, we call repo_in_merge_bases_many().
It feeds the NULL to repo_parse_commit(), which is smart enough to return an error, and we immediately return "no, it's not a descendant". | unknown | |
d16606 | val | You can do it by hand or use a library that provides big integers like https://mattmccutchen.net/bigint/
A: Take the modulo by breaking them into pieces.. say for example you want to take modulo of 37^11 mod 77 in which 37^11 gives answer 1.77917621779460E17 so to get this .. take some small number in place of 11 which gives an integer value.. break it into pieces... 37^11 mod 77 can be written as (37^4 x 37^4 x 37^3 mod 77) so solve it as.. {(37^4 mod 77)(37^4 mod 77)(37^3 mod 77)} mod 77. So, in general xy mod n = {(x mod n)(y mod n)} mod n | unknown | |
d16607 | val | From the Add-on SDK docs
Changing minVersion and maxVersion Values | unknown | |
d16608 | val | I do not believe there is any way this is possible. Even though CoreLocation's iBeacon APIs use Bluetooth LE and CoreBluetooth under the hood, Apple appears to have gone to some lengths to hide this implementation. There is no obvious way to see whether a Bluetooth LE scan is going on at a specific point in time.
Generally speaking, a Bluetooth LE scan is always going on when an app is ranging for iBeacons in the foreground. When an app is monitoring for iBeacons (either in the foreground or background) or ranging in the background, indirect evidence suggests that scans for beacons take place every few minutes, with the exact number ranging from 1-15 depending on phone model and state. I know of no way to programmatically detect the exact times when this starts and stops, although it can be inferred by iBeacon monitoring entry/exit times. If you look at the graph below, the blue dots show the inferred scan times for one particular test case. Details of how I inferred this are described in this blog post. | unknown | |
d16609 | val | If you want to have a single (later) revision where you revert the changes from all those merges, you can do it like this:
git checkout <id-of-revision> # use the ID of the revision you would like to get your project back to (in terms of content)
git reset --soft <the-branch> # the branch where we want to add a revision to revert all of that
git commit -m "Reverting"
# If you like the results
git branch -f <the-branch> # move branch pointer to this new revision
git checkout <the-branch>
A: Step 1
Create a new backup branch first and keep it aside (backup-branch).
Create a new branch from master or dev, wherever you want to revert (working-branch).
git revert commitid1
git revert commitid2
git revert commitid3....
is the best option.
Don't do git reset --hard commitid; it will mess up your index.
Reverting is the safe option.
I have done 180 revert commits.
Step 2
git log -180 --format=%H --no-merges
Use this command to print all the commit ids alone.
It will ignore merge commits.
commitid1 commitid2 commitid3 ..... it will print them like that.
Copy the list into Sublime Text, select all (Ctrl+A), split the selection into lines (Ctrl+Alt+L), and prefix each commit id so it reads:
git revert --no-commit commitid1
git revert --no-commit commitid2
git revert --no-commit commitid3
Copy all of the lines and paste them into the command prompt. All your commits will be reverted; now do a git commit.
Then do git push.
Create a merge request to master.
Step 3
How to verify?
Create a new branch (verify-branch).
You can verify it by doing a
git reset --hard <commitidX>. This is the commit id to which you need to revert.
git status will give you the number of commits behind the master.
git push -f
Now compare this branch with your working-branch by creating a pull request between them. If you see no changes, your working branch was successfully reverted back to the version you're looking for. | unknown | |
d16610 | val | The solution to this seems to be to use separate script blocks. Apparently document.write will not affect the loading of the scripts until the script block closes.
That is, try this:
<script>
if (!window.jQuery) {
document.write('<script src="/Scripts/jquery-1.5.1.min.js" type="text/javascript"><' + '/script>');
}
</script>
<script>
if (!window.jQuery.ui) {
document.write('<script src="/Scripts/jquery-ui-1.8.11.min.js" type="text/javascript"></scr' + 'ipt>');
}
</script>
Works for me. Tested in IE and Firefox.
A: I've always injected js files via js DOM manipulation
if (typeof jQuery == 'undefined') {
var DOMHead = document.getElementsByTagName("head")[0];
var DOMScript = document.createElement("script");
DOMScript.type = "text/javascript";
DOMScript.src = "http://code.jquery.com/jquery.min.js";
DOMHead.appendChild(DOMScript);
}
but it's a bit picky and may not work in all situations
A: Misread the question slightly (can and can't look very similar).
If you're willing to use another library to handle it, there are some good answers here.
loading js files and other dependent js files asynchronously
A: Just write your own modules (in Dojo format, which since version 1.6 has now switched to the standard AMD async-load format) and dojo.require (or require) them whenver a portlet is loaded.
The good thing about this is that a module will always only load once (even when a portlet type is loaded multiple times), and only at the first instance it is needed -- dojo.require (or require) always first checks if a module is already loaded and will do nothing if it is. In addition, Dojo makes sure that all dependencies are also automatically loaded and executed before the module. You can have a very complex dependency tree and let Dojo do everything for you without lifting a finger.
This is very standard Dojo infrastructure. The entire Dojo toolkit is built on top of it, and you can use it to build your own modules as well. In fact, Dojo encourages you to break your app down into manageable chunks (my opinion is the smaller the better) and dynamically load them when necessary. Also, leverage class hierarchies and mixins support. There is a lot of Dojo infrastructure provided to enable you to do just that.
You should also organize your classes/modules by namespaces for maximal manageability. In my opinion, this type of huge enterprise-level web app is where Dojo truly shines with respect to other libraries like jQuery. You don't usually need such infrastructure for a few quick-and-dirty web pages with some animations, but you really appreciate it when you're building complicated and huge apps.
For example, pre-1.6 style:
portletA.js:
dojo.provide("myNameSpace.portletA.class1");
dojo.declare("myNameSpace.portletA.class1", myNameSpace.portletBase.baseClass, {
    // class properties and methods go here
});
main.js:
dojo.require("myNameSpace.portletA.class1");
var myClass1 = new myNameSpace.portletA.class1(/* Arguments */);
Post-1.6 style:
portletA.js:
define("myNameSpace/portletA/class1", ["myNameSpace/portletBase/baseClass"], function(baseClass) {
    return dojo.declare(baseClass, {
        // class properties and methods go here
    });
});
main.js:
var class1 = require("myNameSpace/portletA/class1");
var myClass1 = new class1(/* Arguments */);
A: Pyramid is a dependency library that can handle this situation well. Basically, you can define you dependencies(in this case, javascript libraries) in a dependencyLoader.js file and then use Pyramid to load the appropriate dependencies. Note that it only loads the dependencies once (so you don't have to worry about duplicates). You can maintain your dependencies in a single file and then load them dynamically as required. Here is some example code.
File: dependencyLoader.js
//Set up file dependencies
Pyramid.newDependency({
name: 'standard',
files: [
'standardResources/jquery.1.6.1.min.js'
//other standard libraries
]
});
Pyramid.newDependency({
name:'core',
files: [
'styles.css',
'customStyles.css',
'applyStyles.js',
'core.js'
],
dependencies: ['standard']
});
Pyramid.newDependency({
name:'portal1',
files: [
'portal1.js',
'portal1.css'
],
dependencies: ['core']
});
Pyramid.newDependency({
name:'portal2',
files: [
'portal2.js',
'portal2.css'
],
dependencies: ['core']
});
Html Files
<head>
<script src="standardResources/pyramid-1.0.1.js"></script>
<script src="dependencyLoader.js"></script>
</head>
...
<script type="text/javascript">
Pyramid.load('portal1');
</script>
...
<script type="text/javascript">
Pyramid.load('portal2');
</script>
So shared files only get loaded once. And you can choose how you load your dependencies. You can also just define a further dependency group such as
Pyramid.newDependency({
name:'loadAll',
dependencies: ['portal1','portal2']
});
And in your html, just load the dependencies all at once.
<head>
<script src="standardResources/pyramid-1.0.1.js"></script>
<script src="dependencyLoader.js"></script>
<script type="text/javascript">
Pyramid.load('loadAll');
</script>
</head>
Some other features that might also help is that it can handle other file types (like css) and also can combine your separate development files into a single file when ready for a release. Check out the details here - Pyramid Docs
note: I am biased since I worked on Pyramid. | unknown | |
d16611 | val | I make use of the backgroundView in an extension as such:
extension UICollectionView {
func setEmptyMessage(_ message: String) {
let messageLabel = UILabel(frame: CGRect(x: 0, y: 0, width: self.bounds.size.width, height: self.bounds.size.height))
messageLabel.text = message
messageLabel.textColor = .black
messageLabel.numberOfLines = 0;
messageLabel.textAlignment = .center;
messageLabel.font = UIFont(name: "TrebuchetMS", size: 15)
messageLabel.sizeToFit()
self.backgroundView = messageLabel;
}
func restore() {
self.backgroundView = nil
}
}
and I use it as such:
func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
if (self.movies.count == 0) {
self.collectionView.setEmptyMessage("Nothing to show :(")
} else {
self.collectionView.restore()
}
return self.movies.count
}
A: Maybe a little bit late, but here is a slightly more constraint-based solution. It may still help someone.
First create some empty-state message label (you can also create your own, more complex view with an image or something).
lazy var emptyStateMessage: UILabel = {
let messageLabel = UILabel()
messageLabel.translatesAutoresizingMaskIntoConstraints = false
messageLabel.textColor = .darkGray
messageLabel.numberOfLines = 0;
messageLabel.textAlignment = .center;
messageLabel.font = UIFont.systemFont(ofSize: 15)
messageLabel.sizeToFit()
return messageLabel
}()
Then add two methods and call them whenever you like.
func showEmptyState() {
collectionView.addSubview(emptyStateMessage)
emptyStateMessage.centerXAnchor.constraint(equalTo: collectionView.centerXAnchor).isActive = true
emptyStateMessage.centerYAnchor.constraint(equalTo: collectionView.centerYAnchor).isActive = true
}
func hideEmptyState() {
emptyStateMessage.removeFromSuperview()
}
A: Use backgroundView of your Collection View to display the no results message.
A: I would say have two sections in your collectionView, where one is for the actual data to be displayed, then the other one when you don't have any data.
Assuming we are populating data in section 0.
in numberOfRowsInSection you'd have something like:
if section == 0 { return movies.count}
else if section == 1 {
if movies.count < 1 { return 1 }
else { return 0 }
}
return 0
in your cellForItemAt you'd do something like that:
if indexPath.section == 0 {
// let cell = ...
// show your data into your cell
return cell
}else {
// here you handle the case where there is no data, but the big thing is we are not dequeuing
let rect = CGRect(x: 0, y: 0, width: self.favCollectionView.frame.width, height: self.favCollectionView.frame.height)
let noDataLabel: UILabel = UILabel(frame: rect)
noDataLabel.text = "No favorite movies yet."
noDataLabel.textAlignment = .center
noDataLabel.textColor = UIColor.gray
noDataLabel.sizeToFit()
let cell = UICollectionViewCell()
cell.contentView.addSubview(noDataLabel)
return cell
}
A: Why don't you use:
open var backgroundView: UIView?
// will be automatically resized to track the size of the collection view and placed behind all cells and supplementary views.`
You can show it whenever collection is empty. Example:
var collectionView: UICollectionView!
var collectionData: [Any] {
didSet {
collectionView.backgroundView?.alpha = collectionData.count > 0 ? 1.0 : 0.0
}
} | unknown | |
d16612 | val | I found a similar question here
What you can do is use a comma: $arglist = "a,b"
The comma will be seen as a parameter separator:
PS D:\temp> $arglist = "a,b"
PS D:\temp> .\testpar.cmd $arglist
[a] [b]
You can also use an array to pass the arguments:
PS D:\temp> $arglist = @("a", "c")
PS D:\temp> .\testpar.cmd $arglist
[a] [c]
A: The most convenient way for me to pass $alist is:
PS> .\testpar.cmd $alist.Split()
[arg1] [arg2]
In this way I don't need to change the way I build $alist.
Besides Split() works well also for quoted long arguments:
PS> $alist= @"
>> "long arg1" "long arg2"
>> "@
This is:
PS> echo $alist
"long arg1" "long arg2"
and gives:
PS> .\testpar.cmd $alist.Split()
["long arg1"] ["long arg2"]
A: In general, I would recommend using an array to build up a list of arguments, instead of mashing them all together into a single string:
$alist = @('long arg1', 'long arg2')
.\testpar.cmd $alist
If you need to pass more complicated arguments, you may find this answer useful. | unknown | |
d16613 | val | I think you're missing the config level in your XML hierarchy, you could do:
part_number = tree.find('config').find('swpn').text
part_desc = tree.find('config').find('swname').text
Alternately you can loop through all the elements if you don't want to have to know the structure and use conditionals to find the elements you care about with tree.iter.
for e in tree.iter():
if e.tag == 'swpn':
part_number = e.text
if e.tag == 'swname':
part_desc = e.text
A: ElementTree and etree's find functionality searchers for direct children.
You can still use it by specifying the entire branch:
tree.find('config').find('swpn')
tree.find('config/swpn')
If you always want to look for swpn, but disregard the structure (e.g. you don't know if it's going to be a child of config), you might find it easier to use the xpath functionality in etree (and not in ElementTree):
tree = etree.fromstring(data)
tree.xpath('//swpn')
In this case, the // basically mean that you are looking for elements in tree, no matter where they are
If the xml files are small, and you don't care about performance, you can use minidom which IMHO is more convenient compared to lxml. In this case, your code could be something like this:
from xml.dom.minidom import parseString
xml = parseString(data)
PartNo = xml.getElementsByTagName('swpn')[0]
Desc = xml.getElementsByTagName('swname')[0]
print(PartNo.firstChild.nodeValue) | unknown | |
d16614 | val | I think you have a typo in your settings.py file: You're trying to connect to port 9300 while elasticsearch is running on port 9200:
Caused by: java.net.ConnectException: Connection refused: /10.142.0.2:9300
Can you post the relevant parts of your settings.py file if that doesn't solve the issue?
EDIT
Looking through a few relevant posts, it seems as though nodes communicate with each other via port 9300 as well as port 9200. Does your ping work on port 9300 as well? If not, that may also need to be opened up.
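If the nodes run on Google Cloud (the internal 10.142.0.2 address suggests they might), opening the transport port could look like this sketch (the rule name is arbitrary):
gcloud compute firewall-rules create allow-es-transport --allow tcp:9300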
Possibly related: https://discuss.elastic.co/t/elasticsearch-port-9200-or-9300/72080 | unknown | |
d16615 | val | The docs say that, as an API designer, you should use them sparingly, only when the benefit is truly compelling.
A vararg is represented by three dots (...), and that's just not going to look good with byte, at least IMHO. I suggest you stick with byte[]; in most cases of programming we will have byte[] rather than singular byte elements, and you won't benefit anything with varargs in this particular case.
public static String bytesToHex(byte... bytes) {
} | unknown | |
d16616 | val | There is already an inbuilt function in Julia that does exactly that:
using DelimitedFiles
reshape(readdlm("myfilename.txt"),:,2)
Let's give it a spin:
shell> more file.txt
1 2 3
4 5 6
7 8 9
10 11 12
julia> reshape(readdlm("file.txt"),:,2)
6×2 Array{Float64,2}:
1.0 8.0
4.0 11.0
7.0 3.0
10.0 6.0
2.0 9.0
5.0 12.0
or if you want a different ordering just transpose with '
julia> reshape(readdlm("file.txt")',:,2)
6×2 reshape(::LinearAlgebra.Adjoint{Float64,Array{Float64,2}}, 6, 2) with eltype Float64:
1.0 7.0
2.0 8.0
3.0 9.0
4.0 10.0
5.0 11.0
6.0 12.0
A: (untested)
function read2col(filename, len)
asfloat64(s) = try x = parse(Float64, s); return x catch; return missing; end
data = []
for word in split(read(filename, String), r"\s+")
push!(data, word)
end
data = reshape(data,(len, 2))
data = asfloat64.(data)
return data
end
or even
asfloat64(s) = try x = parse(Float64, s); return x catch; return missing; end
read2col(fname, len) = asfloat64.(reshape(split(read(fname, String), r"\s+"), (len, 2)))
A: The laziest way to do it is to use CSV.jl
using CSV
for row in CSV.File("file.txt",delim=' ',ignorerepeated=true)
println("a=$(row.a), b=$(row.b), c=$(row.c)")
end
delim=',': a Char or String that indicates how columns are delimited in a file; if no argument is provided, parsing will try to detect the most consistent delimiter on the first 10 rows of the file | unknown | |
d16617 | val | Quick glance at the help reveals the code you need. In your case:
pow(velocity, 3)
and
sin(pow(tan(myValue), -1))
Please learn to use the help first, and also describe what errors/problems you hit when you already tried something :) | unknown | |
d16618 | val | You can use the BindDefaultInterfaces() method, which will bind every class that has the word View in its name to your IView interface:
.Kernel.Bind(
x => x.FromThisAssembly()
.SelectAllClasses().InNamespaceOf<FirstView>()
.BindDefaultInterfaces());
You can also check the available "BindSomething" options in the documentation. | unknown | |
d16619 | val | I'm sure you know that you could disallow users who are not authenticated via web.config
<system.web>
<authentication mode="Windows"/>
<authorization>
<deny users="?"/>
</authorization>
</system.web>
would do it I think.
A: Taken from technet
The property for anonymous access is unfortunately not available through Web setup projects. For this reason, you must:
*Write a custom installer to enable or disable anonymous access.
*Pass the necessary parameters from the setup wizard to the installer at run time.
A: <configuration>
<system.web>
<compilation debug="true" targetFramework="4.0" />
<authentication mode="Forms">
<forms loginUrl="SignIn.aspx" defaultUrl="Welcome.aspx" protection="All">
<credentials passwordFormat="Clear">
<user name="lee" password="lee12345"/>
<user name="add" password="add12345"/>
</credentials>
</forms>
</authentication>
<authorization>
<deny users="?" /> <!-- deny access to anonymous users -->
</authorization>
</system.web>
</configuration> | unknown | |
d16620 | val | I'm having some (permission denied) problem with SSH ( which i didn't have yesterday),
Check with which account your root command is executed: root or your own?
Because the cron job will look for ~/.ssh/id_rsa(.pub) keys in the HOME folder. Make sure the private key is not passphrase protected.
so for now i kinda want to do this with HTTP,
Then make sure the credentials are cached through a credential helper. That way, you don't have to enter any credentials. (Again, beware of the account used.)
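For example, a minimal sketch using the built-in cache helper (the timeout value is just an illustration):
git config --global credential.helper 'cache --timeout=3600'
This keeps the HTTPS credentials in memory for an hour, so the job never prompts for them. | unknown | |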
d16621 | val | This is because a type has not been attached to the object. If you create a type and attach it, it should work. The type in your case could be:
type myType = {
data: object
}
And the object you are using as props needs to be declared to be this type:
const initialCanvasDataModel: myType = {
And then you can extend the type when there are more children. For this to work I do not believe you can use props, but you can get the props like this:
const RootContainer = ({myData}: {myData: myType}) => {
Now you should be able to use autocomplete as normal. | unknown | |
d16622 | val | This error usually means that the icons variable is not the type that you expect it to be.
Often it's because it's an array. In that case, a string can't be used to index the type of the array because a number is needed to do that. e.g. icons[2].
I would try console logging the icons variable and seeing what comes out; from there you can figure out what you need to use to index it :)
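For example, a minimal sketch (IconMap and the "home" key are hypothetical; adjust to whatever the log shows):
console.log(icons);

// If icons turns out to be an array, index it with a number:
const first = icons[0];

// If it is an object keyed by strings, it needs an index signature:
type IconMap = { [name: string]: string };
const byName = (icons as IconMap)["home"];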
I hope this helps! | unknown | |
d16623 | val | The IBM/360 column binary format defines how a hexadecimal value is represented on a Hollerith-card (punch card). This is described e.g. in http://www.jwdp.com/colbin1.html and in https://www.masswerk.at/keypunch/
There are several versions of punch cards, see e.g. https://en.wikipedia.org/wiki/Punched_card. The very common IBM 80-column punched card has 80 columns and 12 rows. The rows are labeled from top to bottom Y, X, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Using the IBM/360 column binary format, it follows for your code:
hex | Byte 1 (hex) | Byte 2 (hex) | Byte 1 (cbf) | Byte 2 (cbf) | cbf (= column binary format)
---|---|---|---|---|---
\1004 | 10 | 04 | X | 7 | X7
\1001 | 10 | 01 | X | 9 | X9
\2001 | 20 | 01 | Y | 9 | Y9
\1010 | 10 | 10 | X | 5 | X5
\0900 | 09 | 00 | 03 | 0 | 03
\0000 | 00 | 00 | 0 | 0 | blank
\0006 | 00 | 06 | 0 | 78 | 78
\2012 | 20 | 12 | Y | 58 | Y58
Next, you have to apply a keypunch to map the punchcard-data to letters, digits and so on. You have not specified a special keypunch. Thus, it makes sense to use the IBM model 029 keypunch which was the most common keypunch, see e.g. https://www.masswerk.at/keypunch/ and your link
http://homepage.divms.uiowa.edu/~jones/cards/codes.html.
cbf | 029 keypunch
---|---
X7 | P
X9 | R
Y9 | I
X5 | N
03 | T
blank | blank
78 | "
Y58 | (
Altogether, the result is PRINT "( | unknown | |
d16624 | val | There shouldn't be any problem with that, I have tried that on my test cluster and everything worked just fine.
I had a problem with upgrading immediately from 1.4.3 to 1.5.6, so with below steps you're first upgrading from 1.4.3 to 1.5.0, then from 1.5.0 to 1.5.6
Take a look at below steps to follow.
1.Follow istio documentation and install istioctl 1.4, 1.5 and 1.5.6 with:
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.4.0 sh -
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.0 sh -
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.6 sh -
2.Add the istioctl 1.4 to your path
cd istio-1.4.0
export PATH=$PWD/bin:$PATH
3.Install istio 1.4
istioctl manifest apply --set profile=demo
4.Check if everything works correctly.
kubectl get pod -n istio-system
kubectl get svc -n istio-system
istioctl version
5.Add the istioctl 1.5 to your path
cd istio-1.5.0
export PATH=$PWD/bin:$PATH
6.Install istio operator for future upgrade.
istioctl operator init
7.Prepare IstioOperator.yaml
nano IstioOperator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: example-istiocontrolplane
spec:
profile: demo
tag: 1.5.0
8.Before the upgrade use below commands
kubectl -n istio-system delete service/istio-galley deployment.apps/istio-galley
kubectl delete validatingwebhookconfiguration.admissionregistration.k8s.io/istio-galley
9.Upgrade from 1.4 to 1.5 with istioctl upgrade and prepared IstioOperator.yaml
istioctl upgrade -f IstioOperator.yaml
10.After the upgrade use below commands
kubectl -n istio-system delete deployment istio-citadel istio-galley istio-pilot istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete service istio-citadel istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete horizontalpodautoscaler.autoscaling/istio-pilot horizontalpodautoscaler.autoscaling/istio-telemetry
kubectl -n istio-system delete pdb istio-citadel istio-galley istio-pilot istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete deployment istiocoredns
kubectl -n istio-system delete service istiocoredns
11.Check if everything works correctly.
kubectl get pod -n istio-system
kubectl get svc -n istio-system
istioctl version
12.Change istio IstioOperator.yaml tag value
nano IstioOperator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
namespace: istio-system
name: example-istiocontrolplane
spec:
profile: demo
tag: 1.5.6 <---
13.Upgrade from 1.5 to 1.5.6 with istioctl upgrade and prepared IstioOperator.yaml
istioctl upgrade -f IstioOperator.yaml
14.Add the istioctl 1.5.6 to your path
cd istio-1.5.6
export PATH=$PWD/bin:$PATH
15.I have deployed a bookinfo app to check if everything works correctly.
kubectl label namespace default istio-injection=enabled
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
16.Results
curl -v xx.xx.xxx.xxx/productpage | grep HTTP
HTTP/1.1 200 OK
istioctl version
client version: 1.5.6
control plane version: 1.5.6
data plane version: 1.5.6 (9 proxies)
Let me know if you have any more questions. | unknown | |
d16625 | val | The image URL is supposed to be an absolute URL, not a path (relative or absolute). It can't be a path; it must be a complete URL to the image.
So you need to use a valid URL such as the one below.
http://www.example.com/images/image-name.jpg
Something like the below would not work.
../path/to/images/image-name.jpg
and the same goes for below (it won't work):
/root/public/path/to/images/image-name.jpg
So make sure you are specifying a valid URL to your image. | unknown | |
d16626 | val | According to the cp documentation, the switch "--preserve=context" allows you to copy the SELinux context as well during the copy.
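For example, a minimal sketch (the paths are placeholders):
cp --preserve=context /path/to/source_file /path/to/dest_file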
Please have a look at this excellent documentation from Red Hat; it explains the topic wonderfully in plain human language:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security-Enhanced_Linux/sect-Security-Enhanced_Linux-Working_with_SELinux-Maintaining_SELinux_Labels_.html | unknown | |
d16627 | val | The behavior is built into the DispatcherServlet. The javadoc defines the root application context.
Only the root application context as loaded by ContextLoaderListener,
if any, will be shared.
The javadoc of ContextLoaderListener also states
Bootstrap listener to start up and shut down Spring's root WebApplicationContext.
And, assuming you use the DispatcherServlet constructor that receives a WebApplicationContext,
If the given context does not already have a parent, the root
application context will be set as the parent.
you'll get this behavior automatically.
Again from the javadoc,
This constructor is useful in Servlet 3.0+ environments where
instance-based registration of servlets is possible through the
ServletContext.addServlet(java.lang.String, java.lang.String) API.
which is what the common AbstractDispatcherServletInitializer uses to set up your Spring MVC application.
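For illustration, here is a minimal sketch of such an initializer (RootConfig and WebConfig are hypothetical @Configuration classes):
import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer;

public class AppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {
    @Override
    protected Class<?>[] getRootConfigClasses() {
        // Beans defined here go into the shared root context
        return new Class<?>[] { RootConfig.class };
    }
    @Override
    protected Class<?>[] getServletConfigClasses() {
        // Beans defined here go into the DispatcherServlet's child context
        return new Class<?>[] { WebConfig.class };
    }
    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }
}
The root context is then automatically set as the parent of the servlet's context, as described above. | unknown | |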
d16628 | val | When using POCO entities with the built-in features of Entity Framework, proxy creation must be enabled in order to use lazy loading. So, with POCO entities, if ProxyCreationEnabled is false, then lazy loading won't happen even if LazyLoadingEnabled is set to true.
With certain types of legacy entities (notably those the derive from EntityObject) this was not the case and lazy loading would work even if ProxyCreationEnabled is set to false. But don't take that to mean you should use EntityObject entities--that will cause you more pain.
The ProxyCreationEnabled flag is normally set to false when you want to ensure that EF will never create a proxy, possibly because this will cause problems for the type of serialization you are doing.
The LazyLoadingEnabled flag is normally used to control whether or not lazy loading happens on a context-wide basis once you have decided that proxies are okay. So, for example, you might want to use proxies for change tracking, but switch off lazy loading.
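A minimal sketch of how these flags are typically set in EF6 (MyDbContext is a hypothetical context class):
using (var context = new MyDbContext())
{
    // Both must be true for lazy loading of POCO entities to work
    context.Configuration.ProxyCreationEnabled = true;
    context.Configuration.LazyLoadingEnabled = true;

    // With ProxyCreationEnabled = false, load related data eagerly via
    // Include() or explicitly via context.Entry(...).Reference(...).Load()
}
In other words, setting ProxyCreationEnabled to false silently disables lazy loading for POCOs, whatever the LazyLoadingEnabled value is. | unknown | |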
d16629 | val | The problem here is that withDecay returns an animation value (it auto-updates). So the idea would be to do trX.value = withDecay(), the same for y, and then use useDerivedValue to get the total Matrix4. I hope this helps.
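A rough sketch of that idea (translateMatrix and the velocity variables are hypothetical; useSharedValue, useDerivedValue and withDecay are react-native-reanimated APIs):
import { useSharedValue, useDerivedValue, withDecay } from 'react-native-reanimated';

const trX = useSharedValue(0);
const trY = useSharedValue(0);

// On gesture end: hand the release velocity to withDecay, and the
// shared values keep animating on their own
trX.value = withDecay({ velocity: velocityX });
trY.value = withDecay({ velocity: velocityY });

// Recompute the full Matrix4 whenever either shared value changes
const matrix = useDerivedValue(() => translateMatrix(trX.value, trY.value));
Each frame of the decay updates trX/trY, so the derived matrix re-evaluates automatically. | unknown | |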
d16630 | val | The issue is not whether your application and the dynamic libraries were compiled with different versions of clang and/or gcc. The issue is whether, ultimately, there's one underlying C library that manipulates one kind of FILE * object and has one, compatible implementation of fclose().
Under MacOS and Linux, at least, the answer to all these questions is likely to be "yes". In my experience it's hard to get two different, incompatible C libraries into the mix; you'd have to really work at it.
Addendum: I suppose I should admit, however, that my experience may be getting dated. In my experience, on any Unix-like system, there's exactly one C library, generally /lib/libc.{a,so}. But I gather that "modern" compilers are tending to access their own compiler- and version-specific libraries off in special places, meaning that the scenario you're worried about could be a problem. To me, it seems, this way lies madness, but then again, it seems that more and more of the world seems to be embracing dependency hell, rather than trying to eliminate it.
A: It is not generally safe to use a library designed for one compiler with code compiled by a different compiler. A compiler may generate code that implements the nominal functions in the standard library using internal routines or interfaces, and those routines or interfaces may be different or missing in the library designed for another compiler.
Nor is it safe to take any pointer to some internal data structure from one library and use it with another library.
If the sources are just compiled with different versions of one compiler (e.g., clang 73 and clang 89), not different compilers (e.g., Apple clang versus GCC), the compiler might offer some guarantee about library compatibility. You would have to check its documentation. Or, if the compiler is intended to use the library provided with the operating system, that could work. Again, you would have to check its documentation.
A: On Linux, if both your code and the other library dynamically link to the same library (such as libc.so.6), both will get the same version and implementation of that library at runtime. You can check which libraries a given dynamic library links to with ldd.
If you were linking to a library that statically linked in a supporting library, you would need to be careful to pass any structures to or from it against the same version of the library. But this is more likely to come up in practice with libc++ and libstdc++ than with libc.
So, don't statically link your library to another and then pass a data structure that requires client code to separately link to the same library. | unknown | |
d16631 | val | The result of fgets is never undefined. However, your approach is way too low-level. Use file and array_filter:
$results = array_filter(file('input.filename'), function($line) {
    return strpos($line, '4') !== false; // Add filter here
});
var_export($results); // Do something with the results here | unknown | |
d16632 | val | Your data is being stored as dates, which are numeric values (today's Excel date is 44,885) and do not contain a dash; the dash is just how the value is displayed. To prove it, change the formatting of a cell to a dollar amount. The split you're using with a period is just getting the time from the date (noon would be .5).
If you're trying to split out by month and day, consider converting to text or using the month and day function. Or you could use this function next to your original data and drag it down, and then apply your vba code to the new data (which will be text), I believe you'll get what you expect.
=text(A2,"YYYY-MM-DD HH:MM") | unknown | |
d16633 | val | As MSDN says
When you perform comparisons with nullable types, if the value of one
of the nullable types is null and the other is not, all comparisons
evaluate to false except for != (not equal). It is important not to
assume that because a particular comparison returns false, the
opposite case returns true. In the following example, 10 is not
greater than, less than, nor equal to null. Only num1 != num2
evaluates to true.
int? num1 = 10;
int? num2 = null;
if (num1 >= num2)
{
Console.WriteLine("num1 is greater than or equal to num2");
}
else
{
// This clause is selected, but num1 is not less than num2.
Console.WriteLine("num1 >= num2 returned false (but num1 < num2 also is false)");
}
if (num1 < num2)
{
Console.WriteLine("num1 is less than num2");
}
else
{
// The else clause is selected again, but num1 is not greater than
// or equal to num2.
Console.WriteLine("num1 < num2 returned false (but num1 >= num2 also is false)");
}
if (num1 != num2)
{
// This comparison is true, num1 and num2 are not equal.
Console.WriteLine("Finally, num1 != num2 returns true!");
}
// Change the value of num1, so that both num1 and num2 are null.
num1 = null;
if (num1 == num2)
{
// The equality comparison returns true when both operands are null.
Console.WriteLine("num1 == num2 returns true when the value of each is null");
}
/* Output:
* num1 >= num2 returned false (but num1 < num2 also is false)
* num1 < num2 returned false (but num1 >= num2 also is false)
* Finally, num1 != num2 returns true!
* num1 == num2 returns true when the value of each is null
*/
A: To summarise: any inequality comparison with null (>=, <, <=, >) returns false even if both operands are null. i.e.
null > anyValue //false
null <= null //false
Any equality or non-equality comparison with null (==, !=) works 'as expected'. i.e.
null == null //true
null != null //false
null == nonNull //false
null != nonNull //true
A: Comparing C# with SQL
C#: a=null and b=null => a==b => true
SQL: a=null and b=null => a==b => false
A: According to MSDN - it's down the page in the "Operators" section:
When you perform comparisons with nullable types, if the value of one of the nullable types is null and the other is not, all comparisons evaluate to false except for !=
So both a > b and a < b evaluate to false since a is null...
A: If you change your last else to "else if (a == b)", you won't get any output at all: a is not greater than 1, not less than 1, and not equal to 1; it is null | unknown | |
d16634 | val | If I understood it right, you need something like:
<ul>
<g:each in="${yourString.split( '•' )}" var="s">
<li>${s}</li>
</g:each>
</ul>
UPD:
Another way:
${yourString.replaceAll( '•', '<br>•' )} | unknown | |
d16635 | val | Edit: Claudio Cherubino says that Google Play Services is now available and will make this process a lot easier. However, there's no sample code available (yet, he says it's coming soon... they said Google Play Services was "coming soon" 4 months ago, so there's a good chance this answer will continue to be the only completely working example of accessing Google Drive from your Android application into 2013.)
Edit 2X: Looks like I was off by about a month when I said Google wouldn't have a working example until next year. The official guide from Google is over here:
https://developers.google.com/drive/quickstart-android
I haven't tested their methods yet, so it's possible that my solutions from September 2012 (below) are still the best:
Google Play Services is NOT REQUIRED for this. It's a pain in the butt, and I spent well over 50 hours (edit: 100+ hours) figuring it all out, but here's a lot of things that it'll help to know:
THE LIBRARIES
For Google's online services in general you'll need these libraries in your project: (Instructions and Download Link)
*google-api-client-1.11.0-beta.jar
*google-api-client-android-1.11.0-beta.jar
*google-http-client-1.11.0-beta.jar
*google-http-client-android-1.11.0-beta.jar
*google-http-client-jackson-1.11.0-beta.jar
*google-oauth-client-1.11.0-beta.jar
*guava-11.0.1.jar
*jackson-core-asl-1.9.9.jar
*jsr305-1.3.9.jar
For Google Drive in particular you'll also need this:
*google-api-services-drive-v2-rev9-1.8.0-beta.jar (Download Link)
SETTING UP THE CONSOLE
Next, go to Google Console. Make a new project. Under Services, you'll need to turn on two things: DRIVE API and DRIVE SDK! They are separate, one does not automatically turn the other on, and YOU MUST TURN BOTH ON! (Figuring this out wasted at least 20 hours of my time alone.)
Still on the console, go to API Access. Create a client, make it an Android app. Give it your bundle ID. I don't think the fingerprints thing is actually important, as I'm pretty sure I used the wrong one, but try to get that right anyways (Google provides instructions for it.)
It'll generate a Client ID. You're going to need that. Hold onto it.
Edit: I've been told that I'm mistaken and that you only need to turn on Drive API, Drive SDK doesn't need to be turned on at all, and that you just need to use the Simple API Key, not set up something for Android. I'm looking into that right now and will probably edit this answer in a few minutes if I figure it out...
THE ANDROID CODE - Set Up and Uploading
First, get an auth token:
AccountManager am = AccountManager.get(activity);
am.getAuthToken(am.getAccounts()[0],
"oauth2:" + DriveScopes.DRIVE,
new Bundle(),
true,
new OnTokenAcquired(),
null);
Next, OnTokenAcquired() needs to be set up something like this:
private class OnTokenAcquired implements AccountManagerCallback<Bundle> {
@Override
public void run(AccountManagerFuture<Bundle> result) {
try {
final String token = result.getResult().getString(AccountManager.KEY_AUTHTOKEN);
HttpTransport httpTransport = new NetHttpTransport();
JacksonFactory jsonFactory = new JacksonFactory();
Drive.Builder b = new Drive.Builder(httpTransport, jsonFactory, null);
b.setJsonHttpRequestInitializer(new JsonHttpRequestInitializer() {
@Override
public void initialize(JsonHttpRequest request) throws IOException {
DriveRequest driveRequest = (DriveRequest) request;
driveRequest.setPrettyPrint(true);
driveRequest.setKey(CLIENT ID YOU GOT WHEN SETTING UP THE CONSOLE BEFORE YOU STARTED CODING)
driveRequest.setOauthToken(token);
}
});
final Drive drive = b.build();
final com.google.api.services.drive.model.File body = new com.google.api.services.drive.model.File();
body.setTitle("My Test File");
body.setDescription("A Test File");
body.setMimeType("text/plain");
final FileContent mediaContent = new FileContent("text/plain", an ordinary java.io.File you'd like to upload. Make it using a FileWriter or something, that's really outside the scope of this answer.)
new Thread(new Runnable() {
public void run() {
try {
com.google.api.services.drive.model.File file = drive.files().insert(body, mediaContent).execute();
alreadyTriedAgain = false; // Global boolean to make sure you don't repeatedly try too many times when the server is down or your code is faulty... they'll block requests until the next day if you make 10 bad requests, I found.
} catch (IOException e) {
if (!alreadyTriedAgain) {
alreadyTriedAgain = true;
AccountManager am = AccountManager.get(activity);
am.invalidateAuthToken(am.getAccounts()[0].type, null); // Requires the permissions MANAGE_ACCOUNTS & USE_CREDENTIALS in the Manifest
am.getAuthToken (same as before...)
} else {
// Give up. Crash or log an error or whatever you want.
}
}
}
}).start();
Intent launch = (Intent)result.getResult().get(AccountManager.KEY_INTENT);
if (launch != null) {
startActivityForResult(launch, 3025);
return; // Not sure why... I wrote it here for some reason. Might not actually be necessary.
}
} catch (OperationCanceledException e) {
// Handle it...
} catch (AuthenticatorException e) {
// Handle it...
} catch (IOException e) {
// Handle it...
}
}
}
THE ANDROID CODE - Downloading
private java.io.File downloadGFileToJFolder(Drive drive, String token, File gFile, java.io.File jFolder) throws IOException {
if (gFile.getDownloadUrl() != null && gFile.getDownloadUrl().length() > 0 ) {
if (jFolder == null) {
jFolder = Environment.getExternalStorageDirectory();
jFolder.mkdirs();
}
try {
HttpClient client = new DefaultHttpClient();
HttpGet get = new HttpGet(gFile.getDownloadUrl());
get.setHeader("Authorization", "Bearer " + token);
HttpResponse response = client.execute(get);
InputStream inputStream = response.getEntity().getContent();
jFolder.mkdirs();
java.io.File jFile = new java.io.File(jFolder.getAbsolutePath() + "/" + getGFileName(gFile)); // getGFileName() is my own method... it just grabs originalFilename if it exists or title if it doesn't.
FileOutputStream fileStream = new FileOutputStream(jFile);
byte buffer[] = new byte[1024];
int length;
while ((length=inputStream.read(buffer))>0) {
fileStream.write(buffer, 0, length);
}
fileStream.close();
inputStream.close();
return jFile;
} catch (IOException e) {
// Handle IOExceptions here...
return null;
}
} else {
// Handle the case where the file on Google Drive has no length here.
return null;
}
}
One last thing... if that intent gets sent off, you'll need to handle when it returns with a result.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if (requestCode == 3025) {
switch (resultCode) {
case RESULT_OK:
AccountManager am = AccountManager.get(activity);
am.getAuthToken(Same as the other two times... it should work this time though, because now the user is actually logged in.)
break;
case RESULT_CANCELED:
// This probably means the user refused to log in. Explain to them why they need to log in.
break;
default:
// This isn't expected... maybe just log whatever code was returned.
break;
}
} else {
// Your application has other intents that it fires off besides the one for Drive's log in if it ever reaches this spot. Handle it here however you'd like.
}
}
THE ANDROID CODE - Updating
Two quick notes on updating the last modified date of a file on Google Drive:
*You must provide a fully initialized DateTime. If you do not, you'll get a response of "Bad Request" from Google Drive.
*You must use both setModifiedDate() on the File from Google Drive and setSetModifiedDate(true) on the update request itself. (Fun name, huh? "setSet[...]", there's no way people could mistype that one...)
Here's some brief sample code showing how to do an update, including updating the file time:
public void updateGFileFromJFile(Drive drive, File gFile, java.io.File jFile) throws IOException {
FileContent gContent = new FileContent("text/csv", jFile);
gFile.setModifiedDate(new DateTime(false, jFile.lastModified(), 0));
gFile = drive.files().update(gFile.getId(), gFile, gContent).setSetModifiedDate(true).execute();
}
THE MANIFEST
You'll need the following permissions: GET_ACCOUNTS, USE_CREDENTIALS, MANAGE_ACCOUNTS, INTERNET, and there's a good chance you'll need WRITE_EXTERNAL_STORAGE as well, depending on where you'd like to store the local copies of your files.
YOUR BUILD TARGET
Right click your project, go into it's properties, and under Android change the build target to Google APIs if you must. If they aren't there, download them from the android download manager.
If you're testing on an emulator, make sure its target is Google APIs, not generic Android.
You'll need a Google Account set up on your test device. The code as written will automatically use the first Google Account it finds (that's what the [0] is.) IDK if you need to have downloaded the Google Drive app for this to have worked. I was using API Level 15, I don't know how far back this code will work.
THE REST
The above should get you started and hopefully you can figure your way out from there... honestly, this is just about as far as I've gotten so far. I hope this helps A LOT of people and saves them A LOT of time. I'm fairly certain I've just written the most comprehensive set up guide to setting up an Android app to use Google Drive. Shame on Google for spreading the necessary material across at least 6 different pages that don't link to each other at all.
A: Check this video from Google I/O to learn how to integrate your Android app with Drive:
http://www.youtube.com/watch?v=xRGyzqD-vRg
Please be aware that what you see in the video is based on Google Play Services:
https://developers.google.com/android/google-play-services/
A: It's 2015, things have changed!
Get the 'Drive API for Android' with gradle:
compile 'com.google.android.gms:play-services-drive:7.8.0'
There's some new doco (although still lackluster IMO):
https://developers.google.com/drive/web/quickstart/android
And for those about to go caving...the biggest problem I encountered thus far is that there is absolutely no way of distinguishing folders that have been permanently deleted from folders that are normal...you can find them, you can create folders and files within them, only writing to the file DriveContents will always fail.
A: Take a look at Google's DrEdit Example, which has a folder called android/. Copy it, follow the readme, and it should work (works for me on an Android emulator with KitKat).
P.S.
Sorry for reviving this, but the new Google Drive Android API doesn't support full Drive access, only drive.file and drive.appdata authorization scopes, so if you need full access you have to go back to the good 'ol Google API's Client for Java (which the DrEdit example uses). | unknown | |
d16636 | val | You are getting Uncaught TypeError because you are trying to call the plugin before even the device is ready... Call the Plugin only when the device is ready...
UPDATE
What you have to do is to determine what all fields you need to show in native side only...
After that pass a variable(flag) from java to JavaScript which will be accessible at pagebeforeshow
So your Activity will look like something like this
/*
* Some database operations which you need to check
* so that you can determine what to show
*/
this.setIntegerProperty("loadUrlTimeoutValue", 70000);
super.loadUrl("file:///android_asset/www/index.html", 20000);
super.loadUrl("javascript: { var pageFlag = '" + flag + "';}");
And in your index.html show like this
$("#loginPage").live('pagebeforeshow',function(event, ui){
alert(pageFlag);
});
After that with the use of flag you can determine what to show and what not to...
Hope it helps... | unknown | |
d16637 | val | Check the lengths of the lists.
if len(list1) > 0 and len(list2) > 0:
# do something using both lists
elif len(list1) > 0:
# do something using just the first list
else:
# do something using just the second list
If you're looking specifically for the first element, you can shorten this to:
if list1 and list2:
# do something using both lists
elif list1:
# do something using just the first list
else:
# do something using just the second list
Evaluating a list in a boolean context checks if the list is non-empty.
A: If you want to check if list[n] exists, use if len(list) > n. List indexes are always consecutive, and never skip, so it works.
A: If you want to check if an element at a specific index exists in the list, you can check if index < len(list1) (assuming index is a non-negative integer).
if index < len(list1) and index < len(list2):
#do something using both lists
elif index < len(list1):
#do something using just the first list
elif index < len(list2):
#do something using just the second list
If you want to check whether an element with a specific value is in the list, use if value in list1.
if value in list1 and value in list2:
#do something using both lists
elif value in list1:
#do something using just the first list
elif value in list2:
#do something using just the second list | unknown | |
d16638 | val | You can use File>>listFiles()
http://download.oracle.com/javase/1.4.2/docs/api/java/io/File.html
to get the array of Files in a particular directory (the one you initialized the File-object with).
You can then use the individual File's getName() method to get the names, then use JComboBox's addItem() method to add those names:
http://download.oracle.com/javase/1.4.2/docs/api/javax/swing/JComboBox.html
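For example, a minimal sketch (the directory path is a placeholder):
import java.io.File;
import javax.swing.JComboBox;

File dir = new File("/some/directory");
JComboBox combo = new JComboBox();
// listFiles() returns null if the path is not a directory
for (File f : dir.listFiles()) {
    combo.addItem(f.getName());
}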
Finally, to do something when the user clicks one of those names you have to install an item-listener using the JComboBox's addItemListener()-method. There are tutorials on how to do this last part and in general it just calls your ItemListener, giving it an ItemEvent, which you can then use to check which name was clicked. | unknown | |
d16639 | val | I believe you aren't able to SOURCE — that is, import other arbitrary files — from within phpMyAdmin. You could use the MySQL command line client or rename load_departments.dump to load_departments.sql and import that file through the phpMyAdmin interface manually.
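For example, a minimal sketch from a shell (user, database and path are placeholders):
mysql -u your_user -p your_database
mysql> SOURCE /path/to/load_departments.dump;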
If I recall correctly, the source command is a construct of the MySQL command line client and isn't actually a valid SQL command. | unknown | |
d16640 | val | You need to use style-loader and css-loader in your webpack.config.js
First, install these two packages via npm:
npm install style-loader css-loader --save-dev
Then, create a styles.css in your src folder and append the following styles into the file (just for demo purpose, so you know it's working correctly):
body {
background-color: #ff4444;
}
Don't forget to import the css file in your src/index.js:
import React from 'react';
import ReactDOM from 'react-dom';
import App from './components/App.js';
import './styles.css'; // <- import the css file here, so webpack will handle it when bundling
ReactDOM.render(<App />, document.getElementById('app'));
And use style-loader and css-loader in your webpack.config.js:
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');
module.exports = {
entry: './src/index.js',
output: {
path: path.join(__dirname, 'dist'),
filename: 'bundle.js',
},
module: {
rules: [
{
test: /\.js$/,
exclude: /node_modules/,
use: { loader: 'babel-loader' },
},
{
test: /\.css$/,
use: ['style-loader', 'css-loader'],
},
],
},
plugins: [
new HtmlWebpackPlugin({
template: './src/index.html',
}),
],
};
If you don't see the correct output, you might need to restart the webpack dev server. I cloned your repo and made the changes mentioned above, and it works.
As for ExtractTextPlugin, you will need this when bundling for a production build, you can learn more from their Repo
Hope it helps!
A: Hi Chirag, ExtractTextPlugin works great, but when it comes to caching and bundle hashing the CSS bundle becomes 0 bytes. So they introduced MiniCssExtractPlugin, which tackles this issue. It is really important to cache static files as your app grows over time.
import plugin first:
var MiniCssExtractPlugin = require("mini-css-extract-plugin");
add these in your webpack config:
module: {
  rules: [
    {
      test: /\.js$/,
      exclude: /node_modules/,
      use: {
        loader: 'babel-loader'
      }
    },
    {
      test: /\.scss$/,
      // style-loader conflicts with MiniCssExtractPlugin.loader; use only the plugin's loader here
      use: [MiniCssExtractPlugin.loader, 'css-loader', 'sass-loader']
    }
  ]
},
plugins: [
new MiniCssExtractPlugin({
filename: 'style.css',
}),
new HtmlWebpackPlugin({
template: './src/index.html',
  }),
]
Let me know if the issue still persists.
A: First you need to install style-loader and css-loader. Then add the following code to the "rules" section of your webpack.config.js file.
{ test: /\.css$/,use: ['style-loader', 'css-loader']}
then import the "style.css" file location into the "index.js" file and for example:
import "./style.css";
When you package, "css" codes will be added in "bundle.js". | unknown | |
d16641 | val | It is a little hard to find it.
Yes, the user-pattern feature did not work well in old versions of tesseract.
Refer to this Pull Request on github.
And finally I found an example of how to use user-patterns in tesseract. In your circumstance, you could try:
*Firstly, make sure your tesseract version is >= 4.0 (I recommend installing tesseract 5.x, because I used 5.x on my PC).
*Create a file called xxx.patterns. The content (with UNIX line endings (line-feed character) and a blank line at the end):
\d{5}\.?\d{5} \.?\d{6} ?\d{5}\.?\d{6} ?\d ?\d{14}
*Then try to use:
pytesseract.image_to_string("test.png", config="--user-patterns yourpath/xxx.patterns")
Finally, it worked for me (this is an example from the documentation).
Also you could refer to this documentation.
A: This might not be the answer you are looking for, but I faced a similar problem with tesseract a few months ago. You might want to take a look at whitelisting, more specifically, whitelisting all digits. Like this,
pytesseract.image_to_string(question_img, config="-c tessedit_char_whitelist=0123456789. -psm 6")
This however did not work for me, so I ended up using opencv knn, this does mean you need to know where each char is located though... First I stored some images of the characters I wanted to recognize. And added those detections to a temporary file:
frame[y:y + h, x:x + w].copy().flatten()
After labeling all those detections I trained them using the previously mentioned knn.
network = cv2.ml.KNearest_create()
network.train(data, cv2.ml.ROW_SAMPLE, labels)
network.save('pattern')
Now all chars can be analysed using.
chars = [
frame[y1:y1 + h, x1:x1 + w].copy().flatten(), #char 1
frame[y2:y2 + h, x2:x2 + w].copy().flatten(), #char 2
frame[yn:yn + h, xn:xn + w].copy().flatten(), #char n
]
output = ''
network = cv2.ml.KNearest_create()
network.load('pattern')
for char in chars:
    ret, results, neighbours, dist = network.findNearest([char.astype(np.float32)], 3)
    output += '{0}'.format(int(results[0][0]))  # accumulate the predicted label; note it is 'results', not 'result'
After this you can just do your regex on your string. Total training and labeling only took me something like 2 hours so should be quite doable. | unknown | |
d16642 | val | Some events only fire when the control is visible. It sounds like you should decouple the text entries from the control, store them in another object that fires the "filled" events, and then data-bind the control to those entries.
This has the nice benefit of decoupling the UI from the data storage (always a nice thing) as well as freeing you from the vagaries of the .net UI system (both winforms and wpf have 'interesting' quirks like the above which assume specific behavior preferences). | unknown | |
d16643 | val | A better practice for your SQL (here converted to LINQ) is to use a join to join tables rather than the where clause:
string currentCulture = Culture.GetCulture();
var result = from g in CTGLBL
join ct in CTTGLBL on g.sysctglbl equals ct.sysctglbl into ctj
from ct in ctj.DefaultIfEmpty()
join lang in CTLANG on ct.sysctlang equals lang.sysctlang into langj
from lang in langj.DefaultIfEmpty()
where (lang == null ? 1 : (lang.activeflag ?? 1)) == 1 &&
(lang == null || lang.ISOCODE == null || lang.ISOCODE.StartsWith(currentCulture))
select new { g, ct, lang };
You can also have a "nested select" for your CTLANG like this:
string currentCulture = Culture.GetCulture();
var result = from g in CTGLBL
join ct in CTTGLBL on g.sysctglbl equals ct.sysctglbl into ctj
from ct in ctj.DefaultIfEmpty()
join lang in CTLANG.Where(lang => (lang.activeflag ?? 1) == 1 &&
(lang.ISOCODE.Contains(currentCulture) ||
lang.ISOCODE == null))
on ct.sysctlang equals lang.sysctlang into langj
from lang in langj.DefaultIfEmpty()
select new { g, ct, lang };
A: (What I see is a left join, not right)
Assuming you have the proper relations between the tables in your schema, with SQL server (Linq TO SQL) this would work, not sure if it would supported for Oracle:
string currentCulture = Culture.GetCulture();
var data = from g in db.CTGLBL
from ct in g.CTTGLBL.DefaultIfEmpty()
from lang in g.CTLANG.DefaultIfEmpty()
where !g.CTLANG.Any() ||
( lang.activeflag == 1 &&
lang.ISOCODE.StartsWith(currentCulture))
select new {g, ct, lang}; | unknown | |
d16644 | val | I wrote some code a while back that tried to extract data source information from Sitecore's XML deltas. I never tried updating it though, but this may work for you.
The class I used was Sitecore.Layouts.LayoutDefinition which is able to parse the XML and if I remember correctly it deals with the business of working out what the correct set of page controls is by combining the delta with the underlying template data. You can construct it like so:
string xml = LayoutField.GetFieldValue(item.Fields["__Renderings"]);
LayoutDefinition ld = LayoutDefinition.Parse(xml);
DeviceDefinition deviceDef = ld.GetDevice(deviceID);
foreach(RenderingDefinition renderingDef in deviceDef.GetRenderings(renderingID))
{
// do stuff with renderingDef.Datasource
}
So I think you can then use the API that LayoutDefinition, DeviceDefinition and RenderingDefinition provides to access the data. There's a bit more info on how I used this in the processImages() function in this blog post: https://jermdavis.wordpress.com/2014/05/19/custom-sitemap-filespart-three/
I think the missing step you're after is that you can modify the data this object stores (eg to set a data source for a particular rendering) and then use the ToXml() method to get back the revised data to store into your Renderings field?
You may be able to find more information by using something like Reflector or DotPeek to look inside the code for how something like the Layout Details dialog box modifies this data in the Sitecore UI.
-- Edited to add --
I did a bit more digging on this topic as I was interested in how to save the data again correctly. I wrote up what I discovered here: https://jermdavis.wordpress.com/2015/07/20/editing-layout-details/ | unknown | |
d16645 | val | Try updating the document by using $pull operator
collection.update(
{},
{ $pull: { "class_section": { class_id: '2' } } }
);
Please refer to mongo documentation of $pull operator here | unknown | |
d16646 | val | When you call System.out.println(pq), the toString method is called implicitly.
The toString method of PriorityQueue extends from AbstractCollection, which
Returns a string representation of this collection. The string
representation consists of a list of the collection's elements in the
order they are returned by its iterator, enclosed in square brackets
("[]").
While the iterator of PriorityQueue is not guaranteed to traverse in particular order:
The Iterator provided in method iterator() is not guaranteed to
traverse the elements of the priority queue in any particular order.
since the queue is based on heap.
You can poll elements one by one to get ordered elements:
while (pq.size() != 0) {
System.out.print(pq.poll() + ","); // 8,6,5,3,2,1,
}
A: You should poll() all the elements until the queue is empty and save them somewhere to get them ordered.
A: Try the following; the list will contain the items in sorted order. The priority queue by itself does not maintain the elements in sorted order; it just keeps the top element as the minimum or maximum, based on your implementation of the PQ's comparator.
public static void main (String[] args) {
int a[]={1,3,8,5,2,6};
Comparator<Integer> c = new IntCompare();
PriorityQueue<Integer> pq=new PriorityQueue<>(c);
for(int i=0;i<a.length;i++)
pq.add(a[i]);
ArrayList<Integer> list = new ArrayList<>();
while(!pq.isEmpty()){
list.add(pq.poll());
}
for(Integer i : list)
System.out.println(i);
} | unknown | |
d16647 | val | I have solved using a modified version of Exoplayer (RTSP Exoplayer GitHub pull request). The buffer size can be edited, so I think it's the best choice for this use case.
It works flawlessly! | unknown | |
d16648 | val | The keypress event handler fires too early - the user hasn't finished pressing the key down and entering in the value at that point, so the focus reverts to the initial input field. See how if you change the focus after a setTimeout it'll work:
document.getElementById("thing").addEventListener("keypress", function() {
myFunction(event);
});
function myFunction(event) {
document.getElementById("answer").innerHTML = event.keyCode;
setTimeout(() => document.getElementById("dude").focus());
}
<input type="text" id="thing">
<input type="text" id="dude">
<p id="answer"></p>
Or watch for the keyup event instead:
document.getElementById("thing").addEventListener("keyup", function() {
myFunction(event);
});
function myFunction(event) {
document.getElementById("answer").innerHTML = event.keyCode;
document.getElementById("dude").focus();
}
<input type="text" id="thing">
<input type="text" id="dude">
<p id="answer"></p>
For example, the following code works:
Not exactly, because with that code, you're focusing the dude input on pageload, rather than when the thing input has stuff typed into it.
You also should avoid using keypress in modern code, it's deprecated:
This feature is no longer recommended. Though some browsers might still support it, it may have already been removed from the relevant web standards, may be in the process of being dropped, or may only be kept for compatibility purposes. Avoid using it, and update existing code if possible; see the compatibility table at the bottom of this page to guide your decision. Be aware that this feature may cease to work at any time.
Since this event has been deprecated, you should look to use beforeinput or keydown instead.
keyCode is too, technically, but the replacement for it - .code - isn't compatible everywhere.
A: Use the keyup event instead of keypress, because the default action of the keypress event sets the focus back to that input element.
document.getElementById("thing").addEventListener("keyup", function() {
myFunction(event);
});
function myFunction(event) {
document.getElementById("answer").innerHTML = event.keyCode;
document.getElementById("dude").focus();
}
<!DOCTYPE html>
<html>
<body>
<input type="text" id="thing">
<input type="text" id="dude">
<p id="answer"></p>
</body>
</html>
A: It seems like you are trying to shift focus from input "thing" to input "dude" after you complete a "keypress" or "keyup" on input "thing".
I don't understand the use case for this. But, IMO if you are trying to change the focus state after you input a value, I would recommend placing an event listener on the "change" event. You could simply press your "TAB" key on your keyboard after you are done inputting data into input "thing" and focus will be shifted to the input "dude" and the function will execute. Both achieved!
document.getElementById('thing').addEventListener('change', function (event) {
myFunction(event.target.value)
})
function myFunction(answer) {
document.getElementById('answer').innerText = answer
}
<input type="text" id="thing" />
<input type="text" id="dude" />
<p id="answer"></p> | unknown | |
d16649 | val | Could it be that you have defined the dateformat on the priority column, not startdate column?
A: I did a blog post here: http://peterkellner.net/2011/08/24/getting-extjs-4-date-format-to-behave-properly-in-grid-panel-with-asp-net-mvc3/
sorry for digging up something from a while back but I was just searching for the same thing and ran into my blog post first (before this item) | unknown | |
d16650 | val | You can try this :
//yourModule.js
let yourModule={};
yourModule.you=async()=>{
//something await...
}
module.exports = yourModule; // note: module.exports, not modules.export
//app.js
let yourModule = require('<pathToModule>');
async function test()
{
await yourModule.you(); //your `await` here
}
A: You are misunderstanding the error. It says
SyntaxError: await is only valid in async function
not "for async function"
There is no problem with your export. It is simply not possible to use await outside of a function marked async. Therefore the bug is in app.js. It should be:
var things = require("./someThings");
async function app () {
console.log(await things.doSomething());
}
app().then(() => console.log('done calling app()'));
A: This is probably a duplicate of this question.
You can simply assign the function (or a function expression) to a property (or just the entirety) of `module.exports`.
For example:
async function myAsyncStuff () { ... }
module.exports.myAsyncThing = myAsyncStuff;
A: Try this
var things = require("./someThings");
let result = getResult(); // note: this is a Promise; await it or use .then() to get the value
async function getResult() {
return await things.getSomeThingsAsync();
} | unknown | |
d16651 | val | You need to add orders.order_amount and orders.order_count to group by:
select trans.account_id,
SUM(trans.amount),
COUNT(trans.account_id),
orders.order_amount,
orders.order_count
from trans
FULL JOIN (
select [order].account_id,
SUM([order].amount) as order_amount,
COUNT([order].account_id) as order_count
from [order]
GROUP BY [order].account_id
) as orders
ON (trans.account_id = orders.account_id)
group by trans.account_id, orders.order_amount, orders.order_count
order by trans.account_id;
A: You would be using full join if you thought that accounts could be in either table, but not necessarily in both. If so, you should fix your query.
I would recommend aggregating both tables before joining. And then fixing the order by conditions:
select coalesce(t.account_id, o.account_id), t.trans_amount, t.trans_count,
o.order_amount, o.order_count
from (select t.account_id, sum(t.amount) as trans_amount, count(*) as trans_count
from trans t
group by t.account_id
) t full join
(select o.account_id,
sum(o.amount) as order_amount,
count(o.account_id) as order_count
from [order] o
group by o.account_id
) o
on t.account_id = o.account_id
order by coalesce(t.account_id, o.account_id);
Note that full join is often not needed. However, if it is needed, you should write the query correctly for it. | unknown | |
d16652 | val | Here are some answers from my side.
1. Will the libraries & files conflict?
No. Both local Python and Anaconda have separate site-packages folders to store installed libraries. No matter how many different versions of Python you install, there will be a separate site-packages folder named for each version.
2. Do I need to re-install packages that I'm already using in the older Python before I run a program on Anaconda?
Yes. Local Python uses cmd (the Windows command prompt), while Anaconda uses the Anaconda Prompt, which is installed along with it. Both Anaconda and local Python maintain separate storage locations for libraries, settings, environments, cache, and so on.
3. If we select Anaconda as primary, would it be seen as such by all the tools I use, such as PyCharm?
No. PyCharm will keep whatever configuration you are currently using, even if you install Anaconda and make it primary. But you can still use Anaconda from PyCharm by creating a virtual environment for it. | unknown | |
d16653 | val | In your first example, you pass the actual x. This copies x and gives the copy to reflect.ValueOf. When you try to do v.SetFloat, since it got only a copy, it has no way to change the original x variable.
In your second example, you pass the address of x, so you can access the original variable by dereferencing it.
In the third example, you pass floats to reflect.ValueOf, which is a slice. "Slices hold references to the underlying array" (http://golang.org/doc/effective_go.html#slices). This means that through the slice you actually have a reference to the underlying array. You won't be able to change the slice itself (append & co), but you can change what the slice refers to.
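For example, a minimal runnable sketch of the pointer and slice cases (values are illustrative):
package main

import (
    "fmt"
    "reflect"
)

func main() {
    x := 3.4
    v := reflect.ValueOf(&x).Elem() // take the address, then dereference
    v.SetFloat(7.1)                 // changes the original variable
    fmt.Println(x)                  // 7.1

    floats := []float64{1.0, 2.0}
    reflect.ValueOf(floats).Index(0).SetFloat(9.9) // the slice shares its backing array
    fmt.Println(floats[0])                         // 9.9
}
Note that the slice element is settable even though the slice itself was passed by value. | unknown | |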
d16654 | val | You need to make a struct for AssetBlock and all of the types below it, I've done it up to group to show you what I mean:
https://play.golang.org/p/vj_CkneHuLd
type Product struct {
GlobalID string `xml:"globalId"`
Title string `xml:"title"`
ChunkID int `xml:"gpcChunkId"`
AssetBlock assetBlock `xml:"assetBlock"`
}
type assetBlock struct {
Images images `xml:"images"`
}
type images struct {
GroupList groupList `xml:"groupList"`
}
type groupList struct {
Groups []group `xml:"group"`
}
type group struct {
Usage string `xml:"usage"`
Size string `xml:"size"`
} | unknown | |
d16655 | val | One way you could try is to write the numbers into a StringBuilder and then use it's ToString() method to get the resulting text:
Imports System.IO
Imports System.Text
Public Class NumberWriter
Private ReadOnly OutputPath as String = _
Path.Combine(Application.StartupPath, "out.txt")
Public Sub WriteOut()
Dim outbuffer as New StringBuilder()
For i as integer = 1 to 100
outbuffer.AppendLine(System.Convert.ToString(i))
Next i
File.AppendAllText(OutputPath, outbuffer.ToString()) ' File.WriteAllText has no append flag
End Sub
Public Shared Sub Main()
Dim writer as New NumberWriter()
Try
writer.WriteOut()
Catch ex as Exception
Console.WriteLine(ex.Message)
End Try
End Sub
End Class
A: There's a good example over at Home and Learn
Dim FILE_NAME As String = "C:\test2.txt"
If System.IO.File.Exists(FILE_NAME) = True Then
Dim objWriter As New System.IO.StreamWriter(FILE_NAME)
objWriter.Write(TextBox1.Text)
objWriter.Close()
MsgBox("Text written to file")
Else
MsgBox("File Does Not Exist")
End If
A: You could also use the "My.Computer.FileSystem" namespace, like:
Dim str As String = ""
For num As Int16 = 1 To 100
str += num.ToString & vbCrLf
Next
My.Computer.FileSystem.WriteAllText("C:\Working\Output.txt", str, False)
A: See System.IO namespace, especially the System.IO.File class. | unknown | |
d16656 | val | This is a really good use case for javax.swing.Timer...
This will allow you to schedule a callback, at a regular interval with which you can perform an action, safely on the UI.
private class WindowHandler extends WindowAdapter {
@Override
public void windowOpened(WindowEvent e) {
System.out.println("...");
Timer timer = new Timer(2000, new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
JPanel panel = getPanelFromScreenPanel(1);
panel.setLayout(new GridBagLayout());
GridBagConstraints gbc = new GridBagConstraints();
gbc.gridwidth = GridBagConstraints.REMAINDER;
for (int index = 0; index < 100; index++) {
panel.add(new JLabel(Integer.toString(index)), gbc);
}
panel.revalidate();
}
});
timer.start();
timer.setRepeats(false);
}
}
Now, if you wanted to do a series of actions, separated by the interval, you could use a counter to determine the number of "ticks" that have occurred and take appropriate action...
private class WindowHandler extends WindowAdapter {
@Override
public void windowOpened(WindowEvent e) {
System.out.println("...");
Timer timer = new Timer(2000, new ActionListener() {
private int counter = 0;
private int maxActions = 10;
@Override
public void actionPerformed(ActionEvent e) {
switch (counter) {
case 0:
// Action for case 0...
break;
case 1:
// Action for case 1...
break;
.
.
.
}
counter++;
if (counter >= maxActions) {
((Timer)e.getSource()).stop();
}
}
});
timer.start();
}
}
Take a look at How to use Swing Timers for more details | unknown | |
d16657 | val | Multiple open modals are not supported by Bootstrap. You have to remember that .modal() is asynchronous, so the next .modal() is going to run before the previous completes. So, you probably have an overlay covering your page, even though the styles it's given prevent you from seeing it.
This might work:
this.on('success', function(event, response) {
$('#uploadFileModal')
.on('hidden.bs.modal', function(){
$('#uploadFileModal2').modal('show');
})
.modal('hide');
});
This will delay the second modal from being triggered until the first modal has completed its hide transition. | unknown | |
d16658 | val | You need to add RawPrinterHelperClass to your project, and then print like this
string ZPL_STRING = "^XA^LL440,^FO50,50^A0N,50,50^FDTesting Zebra Printer^FS^XZ";
RawPrinterHelper.SendStringToPrinter("PrinterName", ZPL_STRING)
C# Class
https://github.com/andyyou/SendToPrinter/blob/master/Printer/RawPrinterHelper.cs | unknown | |
d16659 | val | No, and you shouldn't. How am I to do std::cout << at(mkvec(), 0) << std::endl;, a perfectly reasonable thing, if you've banned me from using at() on temporaries?
Storing references to temporaries is just a problem C++ programmers have to deal with, unfortunately.
To answer your new question, yes, you can do this:
class A {
public:
    void func() &;  // lvalues go to this one
    void func() &&; // rvalues go to this one
};
A a;
a.func(); // first overload
A().func(); // second overload
A: Just an idea:
To disable copying constructor on the vector somehow.
vector ( const vector<T,Allocator>& x );
Implicit copying of arrays is not that good thing anyway. (wondering why STL authors decided to define such ctor at all)
It will fix problems like you've mentioned and as a bonus will force you to use more effective version of your function:
void mkvec(std::vector<std::string>& n)
{
n.push_back("kagami");
n.push_back("misao");
} | unknown | |
d16660 | val | For the two objects: User and Preference, you can specify the relationship as follows:
const User = sequelize.define('User', {
username: Sequelize.STRING,
});
const Preference = sequelize.define('Preference', {
id: Sequelize.INTEGER,
//Below, 'users' refer to the table name and 'username' is the primary key in the 'users' table
user: {
type: Sequelize.STRING,
references: {
model: 'users',
key: 'username',
}
}
});
User.hasMany(Preference); // Add one to many relationship
I would suggest to read the following document to understand better:
Sequelize Foreign Key | unknown | |
d16661 | val | You probably would want to create your own custom dialog. You can extend DialogFragment and change it accordingly.
See the Android doc HERE for a great example.
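For reference, a minimal sketch in the spirit of that doc (class name and message are placeholders):
import android.app.AlertDialog;
import android.app.Dialog;
import android.app.DialogFragment;
import android.os.Bundle;

public class MyDialogFragment extends DialogFragment {
    @Override
    public Dialog onCreateDialog(Bundle savedInstanceState) {
        // Build a simple dialog; swap in your own view and background styling here
        return new AlertDialog.Builder(getActivity())
                .setMessage("Hello from a custom dialog")
                .setPositiveButton("OK", null)
                .create();
    }
}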
Or, use PopupWindow if you want a popover dialog with control of the background, see this SO post. | unknown | |
d16662 | val | The simplest way I can see is to reparent to 0. Something like this:
#include <QApplication>
#include <QPushButton>
class MyButton : public QPushButton
{
public:
MyButton(QWidget* parent) : QPushButton(parent) {}
void mousePressEvent(QMouseEvent*) {
this->setParent(0);
this->showMaximized();
this->show();
}
};
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QWidget mainWidget;
MyButton button(&mainWidget);
mainWidget.show();
return a.exec();
}
A: I have modified the previous example. The previous example never goes back to normal screen.
Just copy paste the code and it will run.
#include <QApplication>
#include <QPushButton>
class MyButton : public QPushButton
{
public:
MyButton(QWidget* parent) : QPushButton(parent) {
m_pParent = parent;
maxMode = false;
}
QWidget * m_pParent;
bool maxMode;
Qt::WindowFlags m_enOrigWindowFlags;
QSize m_pSize;
void mousePressEvent(QMouseEvent*) {
if (maxMode== false)
{
m_enOrigWindowFlags = this->windowFlags();
m_pSize = this->size();
this->setParent(0);
this->setWindowFlags( Qt::FramelessWindowHint|Qt::WindowStaysOnTopHint);
this->showMaximized();
maxMode = true;
}
else
{
this->setParent(m_pParent);
this ->resize(m_pSize);
this->overrideWindowFlags(m_enOrigWindowFlags);
this->show();
maxMode = false;
}
}
};
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QWidget mainWidget;
MyButton button(&mainWidget);
mainWidget.show();
return a.exec();
} | unknown | |
d16663 | val | MySQL's default storage engine is InnoDB. As you run queries against an InnoDB table, the portion of that table or indexes that it reads are copied into the InnoDB Buffer Pool in memory. This is done automatically. So if you query the same table later, chances are it's already in memory.
If you run queries against other tables, it load those into memory too. If the buffer pool is full, it will evicting some data that belongs to your first table. This is not a problem, since it was only a copy of what's on disk.
There's no way to specifically "lock" a table on an index in memory. InnoDB will load either data or index if it needs to. InnoDB is smart enough not to evict data you used a thousand times, just for one other table requested one time.
Over time, this tends to balance out, using memory for your most-frequently queried subset of each table and index.
So if you have system memory available, allocate more of it to your InnoDB Buffer Pool. The more memory the Buffer Pool has, the more able it is to store all the frequently-queried tables and indexes.
Up to the size of your data + indexes, of course. The content copied from the data + indexes is stored only once in memory. So if you have only 8G of data + indexes, there's no need to give the buffer pool more and more memory.
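For example, in my.cnf you might size the pool to fit your working set (the 8G figure is purely illustrative):
[mysqld]
innodb_buffer_pool_size = 8G
You can check the current setting with SHOW VARIABLES LIKE 'innodb_buffer_pool_size';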
Don't allocate more system memory to the buffer pool than your server can afford. Overallocating memory leads to swapping memory for disk, and that will be bad for performance.
Don't bother with the {DATA|INDEX} DIRECTORY options. Those are for when you need to locate a table on another disk volume, because you're running out of space. It's not likely to help performance. Allocating more system memory to the buffer pool will accomplish that much more reliably.
A:
but I can store either the data or the index in a RAM drive through the DIRECTORY table options...
Short answer: let the database and OS do it.
Using a RAM disk might have made sense 10-20 years ago, but these days the software manages caching disk to RAM for you. The disk itself has its own RAM cache, especially if it's a hybrid drive. The OS will cache file system access in RAM. And then MySQL itself will do its own caching.
And if it's an SSD that's already extremely fast, so a RAM cache is unlikely to show much improvement.
So making your own RAM disk isn't likely to do anything that isn't already happening. What you will do is pull resources away from the OS and MySQL that they could have managed smarter themselves, likely slowing everything on that machine down.
What you're describing is a micro-optimization. This is attempting to make individual operations faster. Micro-optimizations tend to add complexity and degrade the system as a whole, and there are limits to how much optimizing you can do with them. For example, if you have to search 1,000,000 rows, and it takes 1ms per row, that's 1,000,000 ms. If you make it 0.9ms per row then it's 900,000 ms.
What you want to focus on is algorithmic optimization, improvements to the algorithm. These tend to make the code simpler and less complex, though often the data structures need to be more thought out, because you're doing less work. Take those same 1,000,000 rows and add an index. Instead of looking at 1,000,000 rows you'll spend, say, 100 ms to look at the index.
The numbers are made up, but I hope you get the point. If "what you want is speed", algorithmic optimizations will take you where no micro-optimization will.
There's also the performance of the code using the database to consider, it is often the real bottleneck using unoptimized queries, poor patterns for fetching related data, and not taking advantage of caching.
Micro-optimizations, with their complexities and special configurations, tend to make algorithmic optimizations more difficult. So you might be slowing yourself down in the long run by worrying about micro-optimizations now. Furthermore, you're doing this at the very start when you only have fuzzy ideas about how this thing will be used or perform or where the bottlenecks will be.
Spend your time optimizing your data structures and indexes, not minute details of your database storage. Once you've done that, if it still isn't fast enough, then look at tweaking settings.
As a side note, there is one possible benefit to playing with DIRECTORY. You can put the data and index on separate physical drives. Then both can be accessed simultaneously with the full I/O throughput of each drive.
Though you've just made it twice as likely to have a disk failure, and complicated backups. You're probably better off with an SSD and/or RAID.
And consider whether a cloud database might actually out-perform any hardware you might be able to afford. | unknown | |
d16664 | val | The working solution is:
function qa_html_convert_urls($html, $newwindow=false) {
return substr(preg_replace('/([^A-Za-z0-9])((http|https|ftp):\/\/([^\s&<>\(\)\[\]"\'\.])+\.([^\s&<>\(\)\[\]"\']|&)+)/i', '\1<a href="\2" '.($newwindow ? ' target="_blank"' : '').'>\2</a>', ' '.$html.' '), 1, -1);
}
Thanks and credits to gidgreen. | unknown | |
d16665 | val | Maybe this is what you need:
const compareByKeyLength = <
A extends Record<Key, any[]>,
B extends Record<Key, any[]>,
Key extends keyof A & keyof B
>
(
a: A,
b: B,
key: Key,
) => {
return a[key].length < b[key].length ? 1 : -1;
};
compareByKeyLength({a: [], b: 123}, {a: [], c: 123}, "a")
I introduce the generic type Key which is both a keyof A and keyof B. Then I specify that the inputs a and b are of the generic types A and B which both have the key Key with an array type any[].
Playground | unknown | |
d16666 | val | For monitoring whether or not the user has launched the program, I would use psutil: https://pypi.python.org/pypi/psutil
and for launching another program from a python script, I would use subprocess.
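For the monitoring part, a rough psutil sketch (the process name is a placeholder):
import psutil

def is_running(name):
    # Walk all processes and compare names case-insensitively
    for proc in psutil.process_iter():
        try:
            if proc.name().lower() == name.lower():
                return True
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass  # process vanished or is off-limits; skip it
    return False

if is_running("myprogram.exe"):
    print("already launched")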
To launch something with subprocess you can do something like this:
PATH_TO_MY_EXTERNAL_PROGRAM = r"C:\ProgramFiles\MyProgram\MyProgramLauncher.exe"
subprocess.call([PATH_TO_MY_EXTERNAL_PROGRAM])
If it is as simple as calling an exe though, you could just use:
PATH_TO_MY_EXTERNAL_PROGRAM = r"C:\ProgramFiles\MyProgram\MyProgramLauncher.exe"
os.system(PATH_TO_MY_EXTERNAL_PROGRAM)
Hope this helps.
-Alex | unknown | |
d16667 | val | As it is in the docs, you have to make props option true in the routes, see below code to understand it:
const User = {
props: ['id'],
template: '<div>User {{ id }}</div>'
}
const router = new VueRouter({
routes: [
{ path: '/user/:id', component: User, props: true }
]
}) | unknown | |
d16668 | val | Your "set" accessor is setup incorrectly. It's setting the value of _Agent to Agent, which calls the "get" on the property itself. The "getter" for Agent returns the _Agent field which is null.
Use value instead:
public string Agent
{
get { return _Agent; }
set { _Agent = value; }
}
Also, if I may, here's a few other suggestions on trimming down that class. But take it with a grain of salt, especially if you're just learning.
class Insurance
{
private int customers;
public Insurance(string agent, int customers)
{
Agent = agent;         // use the constructor parameters, not the properties themselves
Customers = customers;
}
public string Agent { get; set; }
public int Customers
{
get { return customers; }
set { customers = Math.Max(value, 0); }
}
} | unknown | |
d16669 | val | I'm not sure that Property is reserved, but properties is treated specially for domain classes since it's used for data binding. What happens when you change:
static hasMany = [properties: Property]
to something like
static hasMany = [myProperties: Property]
A: Grails is a web framework. In general, only languages really have reserved words. The reserved words of Groovy are all those reserved by Java, plus a few others. The complete list is shown here.
You'll notice that it does include "property", which was a big surprise to me, as I've no idea what it's used for, and I think/thought I know Groovy reasonably well. Perhaps it's reserved for future use?
A: While I cant find any file with the name Property in grails, it is wise not to use such a common word - who knows when it might become reserved in the future?
What would happen if you just prepended your classname with something, like BlahProperty? | unknown | |
d16670 | val | Looking at the specific constructor you're using it states (emphasis mine):
This constructor creates a new TcpClient and makes a synchronous connection attempt to the provided host name and port number. The underlying service provider will assign the most appropriate local IP address and port number. TcpClient will block until it either connects or fails. This constructor allows you to initialize, resolve the DNS host name, and connect in one convenient step.
Your code isn't "lagging", it's actively waiting for the connection to succeed or fail.
I would suggest instead using the default constructor and either call the BeginConnect and corresponding EndConnect methods, or if you can use the async/await pattern in Unity, perhaps try using ConnectAsync.
Although in Unity, I think a simpler method might be to just use a coroutine (I'm not a Unity programmer so this might not be 100% right):
StartCoroutine(TestConnectionMethod());
private IEnumerator TestConnectionMethod()
{
_client = new TcpClient(_tcpAddress, _tcpPort); // note: this constructor still connects synchronously
yield return _client.Connected;
} | unknown | |
d16671 | val | You have typo in this line:
user = User.objects.create_user(**validated_data),
It contains a comma , at the end of the line, so user becomes a tuple containing the user instance, not the user instance itself. It becomes (user,).
It should return the user instance.
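For example, a minimal corrected version using the same names:
user = User.objects.create_user(**validated_data)  # no trailing comma
return user
Now user is a User instance rather than the one-element tuple (user,). | unknown | |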
d16672 | val | This is a demonstration of how to sort a QMap <int, int> by value and not by key in qt C++.
The values of the QMap were extracted and stored in a QList container object, then sorted with the qSort method. The keys were stored in a QList of their own. After sorting is complete, the QMap object is cleared and the keys and values are inserted back into the QMap container in ascending order by value. See the solution below:
#include <QCoreApplication>
#include <qalgorithms.h>
#include <QMap>
#include <QDebug>
#include <QList>
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
QMap <int, int> dataMapList;
//QMap <int, int> sorted = new QMap<int, int>();
QList <int> keys; // container to store all keys from QMap container
QList<int> values; // container to store all values from QMap container
QMap<int, int>::Iterator h; // used to loop/ iterate through QMap
// used to iterate through QLists
QList<int>::Iterator i; //
QList<int>::Iterator j;
//inserts to QMap Container
dataMapList.insert(1,34);
dataMapList.insert(3,2);
dataMapList.insert(2,32);
dataMapList.insert(14,89);
dataMapList.insert(7,23);
h=dataMapList.begin();
qDebug()<< "unsorted";
//list out the unsorted values along with their respective keys
while(h!=dataMapList.end()){
qDebug() << "[" << h.key()<<"], " <<"[" <<h.value()<<"]" << endl;
h++;
}
values = dataMapList.values(); // pass all values in the QMap to a QList container to store values only
keys= dataMapList.keys(); // pass all keys in the QMap to a QList container to store already sorted by default keys
qSort(values); // sorts the values in ascending order
dataMapList.clear(); // empties the QMap
i=values.begin();
j=keys.begin();
// insert back the sorted values and map them to keys in QMap container
while(i!=values.end() && j!=keys.end()){
dataMapList.insert(*j, *i);
i++;
j++;
}
qDebug() << "sorted" << endl;
h=dataMapList.begin();
//the display of the sorted QMap
while(h!=dataMapList.end()){
qDebug() << "[" << h.key()<<"], " <<"[" <<h.value()<<"]" << endl;
h++;
}
return a.exec();
}
Note: The iterators for the QMap and QList were used to traverse through the containers to access the value and/or keys stored. These also helped with displaying the items in the list (unsorted and sorted). This solution was done in a Qt console application.
A: In QMap by default the items are always sorted by key. So, if you iterate over QMap like:
QMap<int, int>::const_iterator i = yourQMap.constBegin();
while (i != yourQMap.constEnd()) {
cout << i.key() << ": " << i.value() << endl;
++i;
}
You'll get result sorted by keys.
Try to think about transforming your task to fit standard algorithms.
Otherwise, you can use this approach to get sorted your titles:
QList<int> list = yourQMap.values();
qSort(list.begin(), list.end());
And then, if you need to, get the associated keys by calling QMap::key(const T &value).
A: Another alternative, depending on the exact case, is simply to swap the keys and values around since the keys will be automatically sorted by the QMap. In most cases, the values will not be unique, so just use a QMultiMap instead.
For example, let's say we have the following data in a QMap:
Key Value
--- -----
1 100
2 87
3 430
4 87
The following piece of code will sort the data by value.
QMap<int, int> oldMap;
QMultiMap<int, int> newMap;
QMapIterator<int, int> it(oldMap);
while (it.hasNext())
{
it.next();
newMap.insertMulti(it.value(), it.key()); //swap value and key
}
Our new map now looks like this:
Key Value
--- -----
87 4
87 2
100 1
430 3 | unknown | |
d16673 | val | I've had the same issue for quite a while, and I figured out something: Application.ScreenUpdating only stays False for however long a macro runs. When any running macro stops, it turns True. You can try this:
Sub testApplicationScreenUpdating()
Application.ScreenUpdating = False
Debug.Print "Application screen updating is:" & Application.ScreenUpdating
Application.ScreenUpdating = True
End Sub
*
*if you just run this, it will return in the Immediate window: "Application screen updating is:False"
*if you run it step by step, and hover over Applicaiton.ScreenUpdating with your mouse, it will show as "True", even if the Immediate window will show "False".
*if you comment out the [Application.ScreenUpdating = True] at the end, and then run [Debug.Print "Application screen updating is:" & Application.ScreenUpdating] separately, it will return true, even if it was not switched to true.
A: Try this code and see the values for each in the Immediate Window Ctrl+G:
Sub copyData()
Dim r As Boolean
r = Application.ScreenUpdating = False
Debug.Print "'Application.ScreenUpdating' is set to " & r
r = Application.ScreenUpdating = True
Debug.Print "'Application.ScreenUpdating' is set to " & r
End Sub
A: I use Excel as part of Microsoft 365. I too fought with the screen flickering problem. Although my macro worked, the flickering was very annoying. I tried several approaches and stumbled upon this:
Minimize the second workbook before initiating the macro from the first workbook. For my situation, the screen no longer flickered. I also tried the following code to minimize the second workbook from within VBA. If the second workbook was already minimized, there was no effect. If the second workbook was not minimized, the screen flickered only once - to enable me to minimize the second workbook. Subsequent switching back and forth between workbooks did not introduce any screen flickering.
Filename = "SecondWorkbookName.xlsx"
Windows(Filename).Activate
Application.WindowState = xlMinimized ' Minimize workbook to prevent flickering.
Application.ScreenUpdating = False | unknown | |
d16674 | val | On the composite primary key issue, see JPA composite primary key
From the exception stack trace, it seems the method signatures of your Order.getId()/Order.setId() have the wrong signatures. According to JavaBean conventions, since Order.id is an int, setId() should take an int parameter and getId() should return an int accordingly. Or, you can change the type of Order.id to long since getId()/setId() already make use of long.
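For example, a consistent pair if you keep Order.id as an int:
private int id;

public int getId() {
    return id;
}

public void setId(int id) {
    this.id = id;
}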
Also, you may wish to edit your code to fix the following typos and syntax errors:
Client.java:
- @OnetoMany should be @OneToMany
- Missing end-of-line semicolon on List<Order> orders
Order.java
- Missing end-of-line semicolon on this.id = id in Order.setId | unknown | |
d16675 | val | if you made java programming:
conf.set("hbase.zookeeper.quorum", "server1,server2,server3");
conf.set("hbase.zookeeper.property.clientPort", "2181");
If you use the command line, add -Dhbase.zookeeper.quorum:
sudo hadoop jar /opt/cloudera/parcels/CDH-4.3.0-1.cdh4.3.0.p0.22/lib/hbase/hbase.jar rowcounter -Dhbase.zookeeper.quorum=server1,server2,server3 hly_temp | unknown | |
d16676 | val | As you have already answered, I think that might be the only solution right now.
When you are building your Docker image, do something like:
COPY data/package.json /data/
RUN mkdir /dist/node_modules && ln -s /dist/node_modules /data/node_modules && cd /data && npm install
And for other stuff (like bower), do the same thing:
COPY data/.bowerrc /data/
COPY data/bower.json /data/
RUN mkdir /dist/vendor && ln -s /dist/vendor /data/vendor && cd /data && bower install --allow-root
And COPY data/ /data at the end (so you are able to use Docker's caching and avoid re-running the npm/bower installation when there is only a change to data).
You will also need to create the symlinks you need and store them in your git-repo. They will be invalid on the outside, but will happily work on the inside of your container.
Using this solution, you are able to mount your $PWD/data:/data without getting the npm/bower "junk" outside your container. And you will still be able to build your image as a standalone deployment of your service.
A: A similar and alternative way is to use NODE_ENV variable instead of creating a symlink.
RUN mkdir -p /dist/node_modules
RUN cp -r node_modules/* /dist/node_modules/
ENV NODE_PATH /dist/node_modules
Here you first create a new directory for node_modules, copy all modules there, and have Node read the modules from there.
A: I've been having this problem for some time now, and the accepted solution didn't work for me*
I found this link, which had an edit pointing here and this indeed worked for me:
volumes:
- ./:/data
- /data/node_modules
In this case the Engine creates a volume (see Compose reference on volumes) which is not mounted to your source directory. This was the easiest solution and didn't require me to do any symlinking, setting paths, etc.
For reference, my simple Dockerfile just looks like this:
# install node requirements
WORKDIR /data
COPY ./package.json ./package.json
RUN npm install -qq
# add source code
COPY ./ ./
# run watch script
CMD npm run watch
(The watch script is just webpack --watch -d)
Hope this is able to help someone and save hours of time like it did for me!
'*' = I couldn't get webpack to work from my package.json scripts, and installing anything while inside the container created the node_modules folder with whatever I just installed (I run npm i --save [packages] from inside the container to get the package and update the package.json until the next rebuild)
A: The solution I went with was placing the node_modules folder in /dist/node_modules, and making a symlink to it from /data/node_modules. I can do this both in my Dockerfile so it will use it when building, and I can submit my symlinks to my git-repo. Everything worked out nicely.
A: Maybe you can save your container, and then rebuild it regularly with a minimal dockerfile
FROM my_container
and a .dockerignore file containing
/data/node_modules
See the doc
http://docs.docker.com/reference/builder/#the-dockerignore-file | unknown | |
d16677 | val | Sure, you can do that through SFINAE;
#include <type_traits>
template <const bool EnableThird, std::enable_if_t<EnableThird, int> = 0>
void dynamic_parameter_count(int one, int two, int three) {
std::cout << "EnableThird was true\n";
}
template <const bool EnableThird, std::enable_if_t<!EnableThird, int> = 0>
void dynamic_parameter_count(int one, int two) {
std::cout << "EnableThird was false\n";
}
And you can then simply invoke using;
dynamic_parameter_count<true>(1, 2, 3);
dynamic_parameter_count<false>(1, 2);
This works by enabling or disabling one of the template instantiations based on the template parameters. You do, in fact, need two templates for this as far as I know. I'm not sure if you can do this in one template.
You can also simply specify two versions for the same function, however;
void parameter_count(int one, int two, int three) {
std::cout << "3 Parameters\n";
}
void parameter_count(int one, int two) {
std::cout << "2 Parameters\n";
}
To me, without knowing the context you are working in, this seems more logical.
Or even:
#include <optional>
void parameter_count(int one, int two, std::optional<int> three = {}) {
if (three.has_value()) {
std::cout << "3 Parameters\n";
} else {
std::cout << "2 Parameters\n";
}
}
A: Simple overload seems simpler, but to directly answer your question, you might (ab)use of variadic template and SFINAE:
template<bool Dynamic,
typename ... Ts,
std::enable_if_t<(Dynamic == false && sizeof...(Ts) == 0)
|| (Dynamic == true && sizeof...(Ts) == 1
&& std::is_convertible_v<std::tuple_element_t<0, std::tuple<Ts...>>,
v2_rotation_t>)
, bool> = false>
void setHashAt(rect2D_t area, uint32_t const hash, const Ts&... vR);
With the caveat that 3rd argument should be deducible (so no {..}).
A: This is a small variation to your solution
#include <type_traits>
struct None{};
template<bool select>
void foo(int, std::conditional_t<select, int, None> = None{}) {
}
int main() {
foo<false>(1);
foo<true>(1,2);
// foo<false>(1,2); // fails
// foo<true>(1); // fails
}
I don't think this is a clean solution, but instead overloading and refactoring the code to avoid duplication should be the right approach (as suggested in comments). | unknown | |
d16678 | val | I Tried to solve your problem.
We can use {allowHtml:true} to embed and process HTML code with Google chart.
function drawChart() {
var data = new google.visualization.DataTable();
data.addColumn('string', 'Name');
data.addColumn('string', 'Parent');
data.addRows([
[{
v: 'parent_node',
f: 'Parent Node'
},
null],
[{
v: 'child_node_1',
f: '<b>Child Node</b> <img src="http://www.mapsysinc.com/wp-content/uploads/2013/08/oracle-logo.gif" height="42" width="50"> </img>'
}, 'parent_node'],
[{
v: 'child_node_2',
f: 'Child Node'
}, 'parent_node']
]);
var chart = new google.visualization.OrgChart(document.querySelector('#chart_div'));
chart.draw(data, {
allowHtml: true
});
}
google.load('visualization', '1', {
packages: ['orgchart'],
callback: drawChart
});
Here I created Example Please check it: https://jsfiddle.net/1gvwLy8n/5/ | unknown | |
d16679 | val | With CPDBarWidth = 0.15, two bars take up only 30% of the space between successive bar locations. Increase the barWidth to reduce the space between neighboring bars. | unknown | |
d16680 | val | You can make your own accessor methods for the date and time attributes like this:
def date
datetime.to_date
end
def date=(d)
original = datetime
self.datetime = DateTime.new(d.year, d.month, d.day,
original.hour, original.min, original.sec)
end
def time
datetime.to_time
end
def time=(t)
original = datetime
self.datetime = DateTime.new(original.year, original.month, original.day,
t.hour, t.min, t.sec)
end
A: You should use before_validation callback to combine data from two virtual attributes into one real attribute. For example, something like this:
before_validation(:on => :create) do
self.begin = date + time
end
Where date + time will be your combining logic of the two values.
Then you should write some attr_accessor methods to get individual values if necessary. Which would do the split and return appropriate value.
A: I think you have a datetime field in your model, rails allows you to read in the date part and time part separately in your forms (easily) and then just as easily combine them into ONE date time field. This works specially if your attribute is a datetime.
# model post.rb
attr_accessible :published_on # just added here to show you that it's accessible
# form
<%= form_for(@post) do |f| %>
<%= f.date_select :published_on %>
<%= f.time_select :published_on, :ignore_date => true %>
<% end %>
The Date Select line will provide you the date part of published_on
The Time Select line will provide you the time part of published_on. :ignore_date => true will ensure that the time select does not output 3 hidden fields related to published_at since you are already reading them in in the previous line.
On the server side, the date and time fields will be combined!
If you however you are reading the date as a text_box, then this solution doesnt work unfortunately. Since you are not using the composite attribute that rails provides for you built in on datetime.
In short, if you use date_select/time_select, Rails makes it easy; otherwise you're free to look for other options.
A: You should use an open-source datetime picker widget that handles this for you, and not make the fields yourself as a text field. | unknown | |
d16681 | val | You may use
(?m)^\*?[A-Z][\w' -]*:\s*
See the regex demo
Details
*
*(?m) - re.M flag, it makes ^ match start of a line
*^ - start of a line
*\*? - an optional * char
*[A-Z] - an uppercase letter
*[\w' -]* - 0 or more word chars, spaces, - or apostrophes
*: - a colon
*\s* - 0+ whitespaces (see the usage sketch below).
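A quick usage sketch in Python (the sample text is made up):
import re

pattern = re.compile(r"(?m)^\*?[A-Z][\w' -]*:\s*")
text = "Name: Alice\n*Address: 12 Main St\nnotes: no match (lowercase)"
print(pattern.findall(text))  # ['Name: ', '*Address: ']
The inline (?m) makes ^ anchor at every line start, so you don't need to pass re.M separately. | unknown | |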
d16682 | val | Mono crashes on <xsd:choice>
See https://bugzilla.xamarin.com/show_bug.cgi?id=2907
A: I have posted the patch that fixes this problem. It is for trunk version (3.0.?).
If you don't want to touch user's mono, you can simply copy new System.Xml.dll to folder where your program resides. Mono will use your dll instead of user's.
The patch is attached to this bug: https://bugzilla.xamarin.com/show_bug.cgi?id=2907 | unknown | |
d16683 | val | TL;DR : You cannot count on the value of a ThreadLocal being garbage collected when the ThreadLocal object is no longer referenced. You have to call ThreadLocal.remove or cause the thread to terminate
(Thanks to @Lii)
Detailed answer:
from that it seems that objects referenced by a ThreadLocal variable are garbage collected only when thread dies.
That is an over-simplification. What it actually says is two things:
*
*The value of the variable won't be garbage collected while the thread is alive (hasn't terminated), AND the ThreadLocal object is strongly reachable.
*The value will be subject to normal garbage collection rules when the thread terminates.
There is an important third case where the thread is still live but the ThreadLocal is no longer strongly reachable. That is not covered by the javadoc. Thus, the GC behaviour in that case is unspecified, and could potentially be different across different Java implementations.
In fact, for OpenJDK Java 6 through OpenJDK Java 8 (and other implementations derived from those code-bases) the actual behaviour is rather complicated. The values of a thread's thread-locals are held in a ThreadLocalMap object. The comments say this:
ThreadLocalMap is a customized hash map suitable only for maintaining thread local values. [...] To help deal with very large and long-lived usages, the hash table entries use WeakReferences for keys. However, since reference queues are not used, stale entries are guaranteed to be removed only when the table starts running out of space.
If you look at the code, stale map entries (with broken WeakReferences) may also be removed in other circumstances. If stale entry is encountered in a get, set, insert or remove operation on the map, the corresponding value is nulled. In some cases, the code does a partial scan heuristic, but the only situation where we can guarantee that all stale map entries are removed is when the hash table is resized (grows).
So ...
Then during context undeploy application classloader becomes a subject for garbage collection, but thread is from a thread pool so it does not die. Will object b be subject for garbage collection?
The best we can say is that it may be ... depending on how the application manages other thread locals the thread in question.
So yes, stale thread-local map entries could be a storage leak if you redeploy a webapp, unless the web container destroys and recreates all of the request threads in the thread pool. (You would hope that a web container would / could do that, but AFAIK it is not specified.)
The other alternative is to have your webapp's Servlets always clean up after themselves by calling ThreadLocal.remove on each one on completion (successful or otherwise) of each request.
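As a sketch, a servlet filter can enforce that cleanup (MY_THREAD_LOCAL stands in for whatever ThreadLocal your code uses):
import java.io.IOException;
import javax.servlet.*;

public class ThreadLocalCleanupFilter implements Filter {
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        try {
            chain.doFilter(req, res);
        } finally {
            MyContext.MY_THREAD_LOCAL.remove(); // always clear, even if the request failed
        }
    }
    public void init(FilterConfig cfg) {}
    public void destroy() {}
}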
A: ThreadLocal values are held in the Thread field
ThreadLocal.ThreadLocalMap threadLocals;
which is initialized lazily on the first ThreadLocal.set/get invocation in the current thread and holds a reference to the map for as long as the Thread is alive. However, ThreadLocalMap uses WeakReferences for keys, so its entries may be removed when the ThreadLocal is referenced from nowhere else. See the ThreadLocal.ThreadLocalMap javadoc for details
A: If the ThreadLocal itself is collected because it's not accessible anymore (there's an "and" in the quote), then all its content can eventually be collected, depending on whether it's also referenced somewhere else and other ThreadLocal manipulations happen on the same thread, triggering the removal of stale entries (see for example the replaceStaleEntry or expungeStaleEntry methods in ThreadLocalMap). The ThreadLocal is not (strongly) referenced by the threads, it references the threads: think of ThreadLocal<T> as a WeakHashMap<Thread, T>.
In your example, if the classloader is collected, it will unload the Test class as well (unless you have a memory leak), and the ThreadLocal a will be collected.
A: ThreadLocal is backed by a map (ThreadLocal.ThreadLocalMap) that holds the key-value pairs, with the keys held via weak references, much like a WeakHashMap
A: It depends: it will not be garbage collected if you are referencing it as a static or via a singleton and your class is not unloaded. That is why, in an application server environment with ThreadLocal values, you have to use some listener or request filter to be sure that you are dereferencing all thread-local variables at the end of the request processing. Or use some request-scope functionality of your framework.
You can look here for some other explanations.
EDIT: In the context of a thread pool as asked, of course if the Thread is garbaged thread locals are.
A: Object b will not be subject for garbage collection if it somehow refers to your Test class. It can happen without your intention. For example if you have a code like this:
public class Test {
private static final ThreadLocal<Set<Integer>> a =
new ThreadLocal<Set<Integer>>(){
@Override public Set<Integer> initialValue(){
return new HashSet<Integer>(){{add(5);}};
}
};
}
The double brace initialization {{add(5);}} will create an anonymous class which refers to your Test class so this object will never be garbage collected even if you don't have reference to your Test class anymore. If that Test class is used in a web app then it will refer to its class loader which will prevent all other classes to be GCed.
Moreover, if your b object is a simple object it will not be immediately subject for GC. Only when ThreadLocal.ThreadLocalMap in Thread class is resized you will have your object b subject for GC.
However I created a solution for this problem so when you redeploy your web app you will never have class loader leaks. | unknown | |
d16684 | val | Input data sent via GET is attached to the URI (/?work=<data>), which is sent as a new request:
import socket
import sys
import os
Addr = ''
PORT = 2333
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((Addr, PORT))
s.listen()
while (1):
try:
print("waiting for connection")
conn, address = s.accept()
print(
"New client connected from IP address {} and port number {}".format(
*address
)
)
request = conn.recv(1024)
print("Request received")
method, uri, _ = request.decode().split(' ', 2)
print(method, uri)
# This is what I am hosting:
# a web page with a form
response = ""
conn.send(b'HTTP/1.1 200 OK\r\n')
conn.send(b'Content-Type: text/html\r\n')
conn.send(b'Host: localhost:2333\r\n')
conn.send(b'\r\n')
if uri == '/':
response = """<html>
<body><form action="http://localhost:2333/" method="GET">
<input type="text" name="work"></form></body>
</html>"""
elif uri.startswith('/?work'):
response = f"<html><body><h2>recevied: {uri[uri.find('/?work=')+7:]}</h2></body></html>"
conn.send(response.encode())
conn.send(b"\r\n")
print("Form input received")
#print("HTTP response sent")
except KeyboardInterrupt:
conn.close()
s.close()
#conn.close()
#s.close()
#break
Out:
waiting for connection
New client connected from IP address 127.0.0.1 and port number 55941
Request received
GET /?work=TestInput
<html><body><h2>received: TestInput</h2></body></html>
Form input received
waiting for connection
...
Note:
You might want to have a look at the protocol specs and/or use any existing library to get rid of this low level stuff.
A: Whenever you submit a form, the browser makes a new HTTP request instead of reusing the existing connection, so you need to handle the form data in that new request/connection.
Another thing: r = conn.recv(1024) blocks and keeps the current connection from closing; that is why pressing Enter in the text field also does not work.
d16685 | val | Rather than trying to register an initialized bean, I'm now initializing (registering) after the bean is loaded. Using a Weld callback I could achieve it. Inversion of control, after all ;). More info: http://docs.jboss.org/weld/reference/latest/en-US/html/environments.html#_cdi_se_module
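For illustration, a minimal sketch of such a callback, assuming a Weld SE environment; the register() method is a stand-in for whatever registration work the bean needs:
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import org.jboss.weld.environment.se.events.ContainerInitialized;

@ApplicationScoped
public class StartupInitializer {
    // Weld SE fires ContainerInitialized once the container is up, so this
    // runs after the bean (and its dependencies) have been loaded.
    public void onStartup(@Observes ContainerInitialized event) {
        register();
    }

    private void register() {
        // hypothetical: the actual registration logic goes here
    }
}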
d16686 | val | I can't say why exactly you'd be getting that problem with Account and not Contact, but my first inclination would be not to try to pass in a BigDecimal at all and instead convert it to a double first using BigDecimal.doubleValue(). The downside is that you may lose some precision there, but the upside is that it should work without incident :). | unknown | |
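For example, a minimal sketch of that conversion (the names and the someApiCall method are made up for illustration):
import java.math.BigDecimal;

public class BigDecimalToDouble {
    public static void main(String[] args) {
        BigDecimal amount = new BigDecimal("19.99");
        // doubleValue() may lose precision for values a double cannot
        // represent exactly - the trade-off mentioned above.
        double asDouble = amount.doubleValue();
        someApiCall(asDouble);
    }

    static void someApiCall(double value) {
        System.out.println(value);
    }
}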
d16687 | val | You are not doing anything wrong. You just need some additional code to access the objects in the response, which is a regular JavaScript object; console.log() is just not printing all of it. The translation you are looking for is contained somewhere in there. Just be aware that there can be multiple results, as the same word can have different meanings or belong to different word classes (noun, verb, adjective, etc.).
What you can do to get a better understanding of the objects is to use JSON.stringify() to print the whole structure. For example:
.then((json) => console.log(JSON.stringify(json, null, 2)))
Note: The third parameter 2 means that an indentation of 2 spaces will be used.
A: My approach to the Pons API on the server:
const utils = require('./lib/utils')
app.get("/dic/:word", (req, res, next) => {
utils.getLastLine('api_key', 1).then(key => {
key = key.split('=')[1]
var word = req.params.word
var translate = 'dees'
return fetch(
`https://api.pons.com/v1/dictionary?q=${word}&l=${translate}`,{
headers: { "X-Secret": key }}
)
.then(res => res.json()).then(json => {
var base = json[0].hits[0].roms[0]
var wclass = base.wordclass
var arab = base.arabs[0]
var trans = arab.translations
var firsttrans = trans[0].target
var nrtrans = arab.translations.length
res.json({ firsttrans, wclass, nrtrans, trans })
})
});
});
Requirements:
*
*Get a Pons API key
*Be familiar with the Fetch API - recommended server-side
Further notes/limits:
*
*It is a nested fetch
*See a live example
*The code example is for development, not for production:
*
*no server-side error catching in case of missing input or a non-existing word
*not all API results are considered, just the first hit
d16688 | val | Adding to my comment: you can do this by using the two-column response (successes and failures) described in ?glm.
data <- data.frame(AgeGroup = c('female, 18-39', 'female, 40-59', 'female, 60 and older', 'male, 18-39', 'male, 40-59', 'male, 60 and older'),
NoOutcome = c(130, 156, 165, 234, 156, 90),
Outcome = c(9, 22, 18, 44, 34, 5))
fit <- glm(cbind(Outcome, NoOutcome) ~ AgeGroup, data = data, family = binomial)
This is equivalent to expanding each group to individual rows and doing a binary regression:
data_long <- do.call(rbind, lapply(split(data, data$AgeGroup),
\(data)data.frame(AgeGroup = data[, 1], outcome = rep(c(0, 1), c(data[,2], data[,3])))))
fit_long <- glm(outcome ~ AgeGroup, family = binomial, data = data_long)
summary(fit_long)
summary(fit) # identical | unknown | |
d16689 | val | From Chris Seline's answer:
Any fields you don't want serialized in general you should use the
"transient" modifier, and this also applies to json serializers (at
least it does to a few that I have used, including gson).
If you don't want name to show up in the serialized json give it a
transient keyword, eg:
private transient String name;
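For illustration, a minimal sketch showing that Gson skips transient fields by default (the class and field names are made up):
import com.google.gson.Gson;

public class TransientDemo {
    static class User {
        String email = "a@b.com";
        transient String name = "secret"; // skipped by Gson by default
    }

    public static void main(String[] args) {
        System.out.println(new Gson().toJson(new User()));
        // prints: {"email":"a@b.com"} - no "name" field
    }
}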
A: If you don't want that column in the response JSON you can use @JsonIgnore, and if you don't want that field in the table you should use @Transient.
@Transient
private String password_key_type;
@JsonIgnore
public int getUser_id()
{
return user_id;
} | unknown | |
d16690 | val | If you look at the examples in the documentation, the hard-coded array being passed into the table doesn't have the outer data property, it's just an array by itself - see https://datatables.net/examples/data_sources/js_array.html . You can see the same thing here as well: https://datatables.net/reference/option/data
The requirements when defining an AJAX data source are different. As per the example at https://datatables.net/reference/option/ajax by default you must supply an object with a "data" property as per the structure you've shown us in your question.
So it's simply that the requirements for each type of data source are different. Always read the documentation!
Demo of how to set the data source using a variable, with your variable. Note the absence of the "data" property...instead "test" is just a plain array:
var test = [{
"HistChar_ID": "4",
"Vorname": "Garnier",
"Nachname": "de Naplouse"
}, {
"HistChar_ID": "2",
"Vorname": "Robert",
"Nachname": "de Sable"
}, {
"HistChar_ID": "7",
"Vorname": "Ibn",
"Nachname": "Dschubair"
}, {
"HistChar_ID": "6",
"Vorname": "Baha ad-Din",
"Nachname": "ibn Schaddad"
}, {
"HistChar_ID": "1",
"Vorname": "Richard",
"Nachname": "L\u00f6wenherz"
}, {
"HistChar_ID": "5",
"Vorname": "Wilhelm",
"Nachname": "von Montferrat"
}];
$('#example').DataTable({
data: test,
columns: [{
data: 'HistChar_ID'
},
{
data: 'Vorname'
},
{
data: 'Nachname'
},
]
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://cdn.datatables.net/1.12.1/js/jquery.dataTables.min.js"></script>
<table id="example" class="display" style="width:100%">
<thead class="thead1">
<tr>
<th class="th1">HistChar_ID</th>
<th class="th2">Vorname</th>
<th class="th3">Nachname</th>
</tr>
</thead>
<tfoot class="tfoot1">
<tr>
<th class="th1">HistChar_ID</th>
<th class="th2">Vorname</th>
<th class="th3">Nachname</th>
</tr>
</tfoot>
</table>
A: Here is an example using a JavaScript variable which does not require you to change the data you show in your question:
var test = { "data": [ { ... }, { ... }, ... ] };
In the above structure, each element in the array [ ... ] contains the data for one table row.
In this case, the DataTable uses the data option to specify where that array can be found:
data: test.data
Here is the runnable demo:
var test = {
"data": [{
"HistChar_ID": "4",
"Vorname": "Garnier",
"Nachname": "de Naplouse"
}, {
"HistChar_ID": "2",
"Vorname": "Robert",
"Nachname": "de Sable"
}, {
"HistChar_ID": "7",
"Vorname": "Ibn",
"Nachname": "Dschubair"
}, {
"HistChar_ID": "6",
"Vorname": "Baha ad-Din",
"Nachname": "ibn Schaddad"
}, {
"HistChar_ID": "1",
"Vorname": "Richard",
"Nachname": "L\u00f6wenherz"
}, {
"HistChar_ID": "5",
"Vorname": "Wilhelm",
"Nachname": "von Montferrat"
}]
};
$(document).ready(function() {
$('#example').DataTable({
data: test.data,
columns: [
{ data: 'HistChar_ID' },
{ data: 'Vorname' },
{ data: 'Nachname' },
]
} );
} );
<!doctype html>
<html>
<head>
<meta charset="UTF-8">
<title>Demo</title>
<script src="https://code.jquery.com/jquery-3.5.1.js"></script>
<script src="https://cdn.datatables.net/1.10.22/js/jquery.dataTables.js"></script>
<link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/1.10.22/css/jquery.dataTables.css">
<link rel="stylesheet" type="text/css" href="https://datatables.net/media/css/site-examples.css">
</head>
<body>
<div style="margin: 20px;">
<table id="example" class="display" style="width:100%">
<thead class="thead1">
<tr>
<th class="th1">HistChar_ID</th>
<th class="th2">Vorname</th>
<th class="th3">Nachname</th>
</tr>
</thead>
<tfoot class="tfoot1">
<tr>
<th class="th1">HistChar_ID</th>
<th class="th2">Vorname</th>
<th class="th3">Nachname</th>
</tr>
</tfoot>
</table>
</div>
</body>
</html>
JavaScript Data Sources
In the above example, the data is sourced from a JavaScript variable - so at the very least you always need to tell DataTables what the name of the JS variable is, using the data option.
And, you may also need to tell DataTables where the array of row data can be found in that variable. This is what we needed to do in the above example.
If the JavaScript variable had been structured like this (an array, not an object containing an array)...
var test = [ { ... }, { ... }, ... ];
...then in that case, we only need to use data: test in the DataTable.
Ajax Data Source
For Ajax-sourced data, things are slightly different. There is no JavaScript variable - there is only a JSON response.
By default, if that JSON response has the following structure (an array of objects called "data" - or an array of arrays)...
{ "data": [ { ... }, { ... }, ... ] }
...then you do not need to provide any additional instructions for DataTables to locate the array. It uses "data" as the default value.
Otherwise if you have a different JSON structure, you need to use the Ajax dataSrc option to specify where the array is in the JSON response.
For the above example, if you do not provide the dataSrc option, that is the same as providing the following:
ajax: {
url: "your URL here",
dataSrc: "data" // this is the default value - so you do not need to provide it
}
This is why your Ajax version "just works" when you only provide the URL:
ajax: 'RESOURCES/PHP/Searchfield.php'
DataTables is using the default value of data to find the array it needs.
And this is why it doesn't work when you use a JavaScript variable called test with data: test.
So, for JavaScript-sourced data, there is no default value. You always have to provide the JavaScript variable name - and maybe additional info for the location of the array in the variable.
But for Ajax-sourced data, there is a default value (data) - and I believe this is only provided for backwards compatibility with older versions of DataTables. | unknown | |
d16691 | val | The solution is to call session_cache_limiter($cache_limiter) with an appropriate value for $cache_limiter before calling session_start(). I am using "private_no_expire", which ensures that the Expires header is never sent to the browser.
d16692 | val | The answer is in the apply_filters( 'woocommerce_dropdown_variation_attribute_options_args', $args )
You basically need to use that filter to access the $args that are being passed. In your particular situation, this is how you would do it:
add_filter( 'woocommerce_dropdown_variation_attribute_options_args', static function( $args ) {
$args['class'] = 'form-control';
return $args;
}, 2 );
What this does is hook into the woocommerce_dropdown_variation_attribute_options_args filter, passing the original $args to a static function. There you set the value of the class index of the $args array and return the modified $args.
d16693 | val | You're checking 50,000 points every time. That's a bit too much.
You may want to split those points into different ParticleSystems... Like 10 objects with 5000 particles each.
Ideally each object would cover a different "quadrant", so the Raycaster can check each object's boundingSphere first and skip all of its points when the ray does not intersect it.
d16694 | val | The problem is await: await resolves the promise to its actual value or throws an exception (simplified view).
So, after this line:
const wheels = await car.findWheelsByCarId(carVIN);
wheels is not a promise but the actual wheels value.
Change it to:
const wheelsPromise = car.findWheelsByCarId(carVIN); // no await
And then this:
assert.isRejected(wheelsPromise, "Expected to have four wheels");
Should work as expected. | unknown | |
d16695 | val | It was not easy to find a way, but in the end this is what I did:
public static void main(String[] args) {
List<ErrorCodeModel> presentErrorList = new ArrayList<>();
presentErrorList.add(new ErrorCodeModel("1000", 10, 0));
presentErrorList.add(new ErrorCodeModel("1100", 2, 0));
List<ErrorCodeModel> pastErrorList = new ArrayList<>();
pastErrorList.add(new ErrorCodeModel("1003", 0, 10));
pastErrorList.add(new ErrorCodeModel("1104", 0, 12));
pastErrorList.add(new ErrorCodeModel("1000", 0, 12));
Map<String, ErrorCodeModel> map = Stream.of(presentErrorList, pastErrorList)
.flatMap(Collection::stream)
.collect(Collectors.toMap(ErrorCodeModel::getErrorCode,
Function.identity(),
(oldValue, newValue)
-> new ErrorCodeModel(oldValue.getErrorCode(),
oldValue.getPresentErrorCount()+newValue.getPresentErrorCount(),
oldValue.getPastErrorCount()+newValue.getPastErrorCount())));
List<ErrorCodeModel> errorList = new ArrayList<>(map.values());
errorList.sort((err1, err2) //line 20*
-> Integer.compare(Integer.parseInt(err1.getErrorCode()),
Integer.parseInt(err2.getErrorCode())));
System.out.println(errorList.toString());
//*line 20 : Optional, if you want to sort by errorCode
//(need to parse to int to avoid alphabetical order)
}
So after adding your elements, this is what is done:
*
*A stream of the two lists is created
*the goal is to collect them into a Map: (object's code, object)
*the (oldValue, newValue) lambda is used if the key is already in the map; it tells the collector to put a new object whose counts are the sum of the previous and the new one
*after the map, a List is generated from the values, which are the merged ErrorCodeModel objects you asked for
d16696 | val | Is this what you were looking for?
with open("MyTEXT.txt", "r") as myfile:
wordlist = [line.rstrip('\n').split() for line in myfile]
titlelist = [i[0] for i in wordlist if len(i) == 1 and i[0] != "unwanted"]
For example, if MyTEXT.txt contained:
unwanted
12345 2124
abcd
efghi jkl mn
o
pqr 123
unwanted
stu v
wq
y z
unwanted
567 890
The output would be:
>>> titlelist
['abcd', 'o', 'wq'] | unknown | |
d16697 | val | I have slept on it and discovered the problem/solution.
TL;DR
*
*Remove the global flag from the regex pattern (I have no need for the global flag here, so I have opted to just remove it); or
*set the lastIndex property back to 0 before each search, i.e.
re.lastIndex = 0;
re.exec(your_string_to_search_goes_here);
Explanation
I had the global flag set but was applying the regex pattern iteratively to each element of an array. The regex object updates the lastIndex property when a match is found, provided the global flag is set. This is useful so that whatever pattern matching function you end up using knows where to start from after each time it matches something in your string.
For my use case, the global flag was totally unnecessary because I had already split the strings to search into an array, and if there is a match I expect exactly one, so remembering where the search pattern last left off is irrelevant: if an element doesn't match, move on to the next array element.
More useful threads here:
*
*RegExp.exec() returns NULL sporadically
*Regex.prototype.exec returns null on second iteration of the search | unknown | |
d16698 | val | Remy, you are my hero. It was very easy to modify the SECURITY_DESCRIPTOR. I added just one line of code:
std::filesystem::path fileName("C:\\ProgramData\\MED\\Data.txt");
int ret(0);
FILE *fp;
ret = _tfopen_s(&fp, fileName.c_str(), _T("w"));
if (ERROR_SUCCESS == ret)
{
_ftprintf_s(fp, _T("1 = Type\n"));
_ftprintf_s(fp, _T("MED = Name\n"));
fclose(fp);
// Set an empty (NULL) DACL on the file: this grants all users full access, so standard users can modify/delete it
SetNamedSecurityInfoA("C:\\ProgramData\\MED\\Data.txt", SE_FILE_OBJECT, DACL_SECURITY_INFORMATION, NULL, NULL, NULL, NULL);
} | unknown | |
d16699 | val | You could do something with CSS, using the :before or :after pseudo-elements (jsfiddle):
<div>Hello world</div>
div {
position: relative;
}
div:hover:after {
content: 'foo bar';
position: absolute;
background: cornsilk;
padding: 10px;
border: 1px solid #222;
box-shadow: 1px 1px 1px 1px #222;
}
To take it a step further, you can do it dynamically like this (just an example):
var text = $('input').val();
$('div').attr('data-content',text);
div:hover:after {
content: attr(data-content);
/*plus all the other stuff*/
}
A: try this:
<div id="container">
<div style="width:140px;">
<h1 id="text">GOOGLE</h1>
</div>
</div>
js:
var tile = $('<div />').appendTo('#container');
tile.attr('id','tile');
tile.text('google');
tile.css({
'background-color':'cornsilk',
width:'50px',
height:'20px',
'position': 'absolute',
'display': 'none',
'box-shadow': '1px 1px 1px 1px #222',
});
$('#text').on('mouseover',function(e){
tile.show();
tile.css({
'left': e.pageX + 'px',
'top': e.pageY + 'px'
});
});
$('#text').on('mouseout',function(e){
tile.hide();
}); | unknown | |
d16700 | val | It depends. The order of constructors does, unfortunately, make a difference. This means that the order of the patterns for that type does not. Whether you write
foo (Bin x y) = ...
foo Tip = ...
or
foo Tip = ...
foo (Bin x y) = ...
makes no difference, because they will be reordered by constructor order immediately in the "desugaring" process. The order of matching for multiple patterns is always semantically left-to-right, so the argument order can matter if you use multiple patterns together (you can always get around this with case). But GHC feels very free to reorganize code, sometimes for good and sometimes for ill. For instance, if you write
foo :: Int -> Bool
foo x = x == 5 || x == 0 || x == 7 || x == 4
GHC will smash it into (essentially)
foo = \x -> case x of
0 -> True
4 -> True
5 -> True
7 -> True
_ -> False
and then do a sort of binary search over these possibilities. This is probably far from optimal in many cases, and is especially annoying if you happen to know that x == 5 is particularly likely. But that's how it is for now, and changing it without making things perform badly in certain situations would take someone a lot of work.