d7101 | train | After a lot of searching, I think what I want is impossible in the WPF framework. I switched to OpenTK for that purpose and implemented the raycasting myself.
Now I have a WPF mimic in OpenTK with much better performance. The code is available here if anyone is interested. | unknown | |
d7102 | train | The problem was simply that I needed to move the references to the JS files to before I tried to use the Kendo grid. | unknown | |
d7103 | train | For performance reasons, and also to avoid very different behavior between log-ON and log-OFF, I suggest running one buffered log file per thread.
*
*One per thread to avoid locking: no contention
*Buffered to avoid disk latency
The drawbacks are:
*
*a merging tool based on time (milliseconds) is needed to see application activity as a whole (vs. per-thread activity)
*buffering may hide the last log records in case of abrupt termination
To get one step closer to real-time, you have to log in memory and develop a dedicated interface to extract the log on request, but this kind of logging is generally reserved for hard real-time embedded applications.
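The buffered, one-log-file-per-thread idea above can be sketched in a few lines. A minimal illustration (in Python rather than the low-level C the answer has in mind; the class name, flush threshold, and file layout are invented for the example):

```python
class BufferedLogger:
    """One instance per thread: no locking, so no contention."""

    def __init__(self, path, flush_every=100):
        self.path = path              # one file per thread
        self.flush_every = flush_every
        self.buffer = []              # records wait here, avoiding disk latency

    def log(self, msg):
        self.buffer.append(msg)
        if len(self.buffer) >= self.flush_every:
            self.flush()              # pay the disk cost once per batch

    def flush(self):
        # Trade-off noted in the answer: records still in the buffer are
        # lost if the process dies before flush() runs.
        if self.buffer:
            with open(self.path, "a") as f:
                f.write("\n".join(self.buffer) + "\n")
            self.buffer.clear()
```

A merging tool would then sort the per-thread files by a timestamp embedded in each record to reconstruct whole-application activity.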
Another solution for safe logging with low CPU consumption (low-level C programming):
*
*place a log record buffer into shared memory
*the observed process acts as the log record producer
*create a log manager process, with higher priority, which acts as the log record consumer
*manage the communication between consumer and producer with a flip/flop mechanism: a pointer assignment under a critical section.
If the observed process crashes, no log record is lost, since the shared memory segment stays attached to the log manager process.
A: Opening, writing to, and then closing the file on each log request is both redundant and inefficient.
On your log class, use a buffer, and write that buffer's contents to the file either every X requests, at shutdown, or every Y minutes. | unknown | |
d7104 | train | This isn't two widgets per field, this is two fields per form and one form per instance. For that we have formsets. | unknown | |
d7105 | train | Consider this method:
public Boolean checkData_pseudo_pass(String pseudo, String pass) {
    SQLiteDatabase db = this.getReadableDatabase();
    Cursor res_unique = db.rawQuery(
            "select * from tp4_table where PSEUDO=? and PASS=?",
            new String[]{pseudo, pass});
    if (res_unique.getCount() > 0) {
        return false;
    } else {
        return true;
    }
}
This returns false if any of the rows in tp4_table matches the given pseudo and pass and true otherwise.
In other words, it fails if the username and password are correct.
The logic of the test is backwards. It should be:
if (res_unique.getCount() > 0) {
    return true;
} else {
    return false;
}
or better still, just this:
return res_unique.getCount() > 0;
If you still have a problem after this change, then it is somewhere else in the code. For example, you may not have populated the database correctly.
And you should return boolean not Boolean.
And you should fix the numerous style errors in your code, starting with the many identifiers that do not follow the style rules:
*
*variable names and method names start with a lowercase letter
*no underscores (_) in variable names, method names, class names or package names
*use camel case instead of underscores between words (except for constant names). | unknown | |
d7106 | train | Your code should have worked, but it can be simplified.
If you provide a property name for the predicate argument to _.find, it will search that property for the thisArg value.
function findOb(id) {
    return _.find(myList, 'id', id);
}
The only problem I can see with your code is that you use === in your comparisons. If you pass 1 as the ID argument instead of "1", it won't match because === performs strict type checking. | unknown | |
d7107 | train | Use NSWorkspace's fullPathForApplication: to get an application's bundle path. If that method returns nil, the app is not installed. For example:
NSString *path = [[NSWorkspace sharedWorkspace] fullPathForApplication:@"Twitter"];
BOOL isTwitterInstalled = (nil != path);
NSWorkspace's URLForApplicationWithBundleIdentifier: is another method you may use.
A: I have never tried the code in the above answer, but the following works for me:
if ([[UIApplication sharedApplication] canOpenURL:[NSURL URLWithString:@"app-scheme://"]]) {
    NSLog(@"This app is installed.");
} else {
    NSLog(@"This app is not installed.");
}
This method requires the app to have a scheme though. I don't know about the one above. | unknown | |
d7108 | train | Does the page being rendered know its own address/URL (it should)? If so, can't it just check that its address doesn't match the RSS one? | unknown | |
d7109 | train | You can modify your json to match the following parsing process:
*
*find the intent the pattern matches
*get a response
*pass it as an argument to the function of that intent
This means that you will add a "function" field to the json and call it when you parse. All intents will simply have it as "print" (or whatever other default operation you're doing) and the "Status_Check" intent will have its own special function. Then, just map the names to actual function objects that you can call.
So the json can look like this:
[{"tag": "greetings",
"pattern": ["Hi", "How are you", "Hey", "Hello", "Good Day"],
"function": "print",
"responses": ["Hello!", "Hey there", "Hi, how can I help you today?"]
},
{"tag": "status_check",
"pattern": ["Where's my shipment", "Track my shipment"],
"function": "GetDBInfo",
"responses": [""]
}]
And to parse it:
import json
import random

def db_info(useless_arg):
    print("from func")

func_mapping = {"print": print,
                "GetDBInfo": db_info}

with open("test.json") as file:
    intents = json.load(file)

text = input("input: ")
for intent in intents:
    if text in intent["pattern"]:
        function = intent["function"]
        arg = random.choice(intent["responses"])
        func_mapping[function](arg)
An example run:
input: Where's my shipment
from func
input: Hi
Hello! | unknown | |
d7110 | train | Try adding a B-tree index on ntv_staff_office.pid. | unknown | |
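The suggestion boils down to one CREATE INDEX statement. A self-contained demonstration (using Python's sqlite3 as scaffolding; the table and data are made up, since the question's schema isn't shown, and only the pid column name is taken from the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ntv_staff_office (id INTEGER, pid INTEGER)")
conn.executemany("INSERT INTO ntv_staff_office VALUES (?, ?)",
                 [(i, i % 50) for i in range(1000)])

# The suggested index on pid:
conn.execute("CREATE INDEX idx_staff_office_pid ON ntv_staff_office (pid)")

# The planner now resolves pid lookups through the index instead of a full scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM ntv_staff_office WHERE pid = 7"
).fetchall()
print(plan)
```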
d7111 | train | Use ^[\w\s ,.]+$ for your validation.
You can check it online at https://regex101.com/r/q6LoSE/4. | unknown | |
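The character class can also be exercised quickly outside the browser; a rough check in Python (regex flavors differ in details, so this only approximates the linked demo):

```python
import re

# Same class as the answer: word chars, whitespace, comma, period.
pattern = re.compile(r"^[\w\s ,.]+$")

valid = ["Hello world", "One, two.", "snake_case 123"]
invalid = ["email@example.com", "price: $5", ""]  # '@', ':', '$' and empty input all fail

for s in valid + invalid:
    print(repr(s), "->", bool(pattern.match(s)))
```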
d7112 | train | Add a Dynamic Action on titlelevel (Key Release if it is a Text Field, Change if it is a dropdown, etc.)
Add a PL/SQL Action to the Dynamic Action, using Items to Submit to pass fields in and Items to Return for the fields you modify.
(The Dynamic Action and the Action settings were shown here as screenshots.) | unknown | |
d7113 | train | I usually solve this kind of problem with Promises; see Bluebird.
You could then do a batch upload to S3 using Promise.all(); once you get that callback, you can batch insert into Mongo and, when done, run the final callback. Or you could do a batch that does both things: upload, then insert into Mongo, and when all of those are done, return the final callback. It will depend on your server and on how many files you want to upload at once. You could also use Promise.map() with the concurrency option set to however many concurrent tasks you want to run.
Pseudo-code example:
Let's assume that getFiles, uploadFile and uploadToMongo each return a Promise object.
var maxConcurrency = 10;

getFiles()
    .map(function(file){
        return uploadFile(file)
            .then(uploadToMongo);
    }, {concurrency: maxConcurrency})
    .then(function(){
        return finalCallback();
    }).catch(handleError);
Example of how to manually "promisify" S3:
function uploadMyFile(filename, filepath, bucketname) {
    return new Promise(function(resolve, reject){
        s3.upload({
            Key: filename,
            Bucket: bucketname,
            ACL: "public-read",
            Body: fs.createReadStream(filepath)
        }, function(err, data){
            // This err will get to the "catch" statement.
            if (err) return reject(err);
            // Handle success and eventually call:
            return resolve(data);
        });
    });
}
You can use it like this:
uploadMyFile(filename, filepath, bucketname)
    .then(handleSuccess)
    .catch(handleFailure);
All nice and pretty!
A: If you can't get promises to work, you can store the status of your calls in a local variable. You would just break up your calls into two functions: a single upload and a bulk upload.
This is dirty code, but you should be able to get the idea:
router.post("/upload", function(req, res){
    var form = new multiparty.Form();
    form.parse(req, function(err, fields, files){
        if (err){
            res.status(500).json({error: err});
        } else {
            bulkUpload(files, fields, function(err, result){
                if (err){
                    res.status(500).json({error: err});
                } else {
                    res.json({result: result});
                }
            });
        }
    });
});
function singleUpload(file, field, cb){
    s3.upload({
        Key: filename,
        Bucket: bucketname,
        ACL: "public-read",
        Body: fs.createReadStream(filepath)
    }, function(err, data){
        if (err) {
            cb(err);
        } else {
            Model.collection.insert({"name": "name", "url": data.Location}, function(err, result){
                if (err) {
                    cb(err);
                } else {
                    cb(null, result);
                }
            });
        }
    });
}
function bulkUpload(files, fields, cb) {
    var count = files.length;
    var successes = 0;
    var errors = 0;
    for (var i = 0; i < files.length; i++) {
        singleUpload(files[i], fields[i], function (err, res) {
            if (err) {
                errors++;
                //do something with the error?
            } else {
                successes++;
                //do something with the result?
            }
            //when you have worked through all of the files, call the final callback
            if ((successes + errors) >= count) {
                cb(
                    null,
                    {
                        successes: successes,
                        errors: errors
                    }
                );
            }
        });
    }
}
This would not be my recommended method, but another user has already suggested promises. I figure an alternative method would be more helpful.
Good Luck! | unknown | |
d7114 | train | Have you looked at cctalk-net? It's a rewrite of libcctalk and was worked on as recently as August 2011. It does not support everything in ccTalk, but might just support enough for your needs.
A: I managed to come up with a solution based on the aforementioned cctalk-net project.
I hosted my project on GitHub: https://github.com/Hitman666/nbCcTalkCoinAcceptor so I hope it will help someone. I wrote full-fledged documentation for an easy start (which can't be said for the cctalk-net project). | unknown | |
d7115 | train | As haylem suggests, though, you'll need to do it in two steps: one for the compile and one for the jars.
For the compiler
<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.5</version>
    <executions>
        <execution>
            <configuration>
                <source>1.3</source>
                <target>1.5</target>
                <fork>true</fork>
                <outputDirectory>${project.build.outputDirectory}_jdk5</outputDirectory>
            </configuration>
        </execution>
        <execution>
            <configuration>
                <source>1.3</source>
                <target>1.6</target>
                <fork>true</fork>
                <outputDirectory>${project.build.outputDirectory}_jdk6</outputDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>
And then for the jar plugin
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <version>2.3.1</version>
    <executions>
        <execution>
            <goals>
                <goal>jar</goal>
            </goals>
            <configuration>
                <classesDirectory>${project.build.outputDirectory}_jdk5</classesDirectory>
                <classifier>jdk5</classifier>
            </configuration>
        </execution>
        <execution>
            <goals>
                <goal>jar</goal>
            </goals>
            <configuration>
                <classesDirectory>${project.build.outputDirectory}_jdk6</classesDirectory>
                <classifier>jdk6</classifier>
            </configuration>
        </execution>
    </executions>
</plugin>
You can then refer to the required jar by adding a <classifier> element to your dependency, e.g.
<dependency>
    <groupId>br.com.comp.proj</groupId>
    <artifactId>proj-cryptolib</artifactId>
    <version>0.0.4-SNAPSHOT</version>
    <classifier>jdk5</classifier>
</dependency>
A: You can configure this via the Maven compiler plugin.
Take a look at the Maven compiler plugin documentation.
You could enable this via different profiles for instance.
If you only want to have different target versions you could simply use a variable target. Something like this:
<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.3.2</version>
    <configuration>
        <source>1.3</source>
        <target>${TARGET_VERSION}</target>
        <fork>true</fork>
    </configuration>
</plugin>
A: To complement my comment to wjans' answer, as you requested more details.
The following would have the compiler plugin executed twice to produce two different sets of classfiles, identified by what is called a classifier (basically, a marker for Maven to know what you refer to when a single project can produce multiple artifacts).
Roughly, something like:
<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.5</version>
    <executions>
        <execution>
            <configuration>
                <source>1.3</source>
                <target>1.5</target>
                <fork>true</fork>
                <classifier>jdk5</classifier>
            </configuration>
        </execution>
        <execution>
            <configuration>
                <source>1.3</source>
                <target>1.6</target>
                <fork>true</fork>
                <classifier>jdk6</classifier>
            </configuration>
        </execution>
    </executions>
</plugin>
Note that people sometimes frown on using classifiers, as they do on using profiles, because they can mean that your project should be split into multiple projects or that you are harming your build's portability. | unknown | |
d7116 | train | Yes. This method works well:
+ (void)clearTmpDirectory
{
    NSArray *tmpDirectory = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:NSTemporaryDirectory() error:NULL];
    for (NSString *file in tmpDirectory) {
        [[NSFileManager defaultManager] removeItemAtPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), file] error:NULL];
    }
}
A: Try this code to remove NSTemporaryDirectory files
-(void)deleteTempData
{
    NSString *tmpDirectory = NSTemporaryDirectory();
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSError *error;
    NSArray *cacheFiles = [fileManager contentsOfDirectoryAtPath:tmpDirectory error:&error];
    for (NSString *file in cacheFiles)
    {
        error = nil;
        [fileManager removeItemAtPath:[tmpDirectory stringByAppendingPathComponent:file] error:&error];
    }
}
and to check whether the data was removed or not, put this code in didFinishLaunchingWithOptions:
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Override point for customization after application launch.
    self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    [self.window makeKeyAndVisible];
    NSString *tmpDirectory = NSTemporaryDirectory();
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSError *error;
    NSArray *cacheFiles = [fileManager contentsOfDirectoryAtPath:tmpDirectory error:&error];
    NSLog(@"TempFile Count ::%lu", (unsigned long)cacheFiles.count);
    return YES;
}
A: Thanks to Max Maier and Roman Barzyczak. Updated to Swift 3, using URLs instead of strings.
Swift 3
func clearTmpDir() {
    var removed: Int = 0
    do {
        let tmpDirURL = URL(fileURLWithPath: NSTemporaryDirectory())
        let tmpFiles = try FileManager.default.contentsOfDirectory(at: tmpDirURL, includingPropertiesForKeys: nil, options: .skipsHiddenFiles)
        print("\(tmpFiles.count) temporary files found")
        for url in tmpFiles {
            removed += 1
            try FileManager.default.removeItem(at: url)
        }
        print("\(removed) temporary files removed")
    } catch {
        print(error)
        print("\(removed) temporary files removed")
    }
}
A: Swift 3 version as extension:
extension FileManager {
    func clearTmpDirectory() {
        do {
            let tmpDirectory = try contentsOfDirectory(atPath: NSTemporaryDirectory())
            try tmpDirectory.forEach { [unowned self] file in
                let path = String(format: "%@%@", NSTemporaryDirectory(), file)
                try self.removeItem(atPath: path)
            }
        } catch {
            print(error)
        }
    }
}
Example of usage:
FileManager.default.clearTmpDirectory()
Thanks to Max Maier, Swift 2 version:
func clearTmpDirectory() {
    do {
        let tmpDirectory = try NSFileManager.defaultManager().contentsOfDirectoryAtPath(NSTemporaryDirectory())
        try tmpDirectory.forEach { file in
            let path = String(format: "%@%@", NSTemporaryDirectory(), file)
            try NSFileManager.defaultManager().removeItemAtPath(path)
        }
    } catch {
        print(error)
    }
}
A: Swift 4
One of the possible implementations
extension FileManager {
    func clearTmpDirectory() {
        do {
            let tmpDirURL = FileManager.default.temporaryDirectory
            let tmpDirectory = try contentsOfDirectory(atPath: tmpDirURL.path)
            try tmpDirectory.forEach { file in
                let fileUrl = tmpDirURL.appendingPathComponent(file)
                try removeItem(atPath: fileUrl.path)
            }
        } catch {
            //catch the error somehow
        }
    }
}
A: This works on a jailbroken iPad, but I think this should work on a non-jailbroken device also.
-(void)clearCache
{
    for (int i = 0; i < 100; i++)
    {
        NSLog(@"warning CLEAR CACHE--------");
    }
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSError *error;
    NSArray *cacheFiles = [fileManager contentsOfDirectoryAtPath:NSTemporaryDirectory() error:&error];
    for (NSString *file in cacheFiles)
    {
        error = nil;
        NSString *filePath = [NSTemporaryDirectory() stringByAppendingPathComponent:file];
        NSLog(@"filePath to remove = %@", filePath);
        BOOL removed = [fileManager removeItemAtPath:filePath error:&error];
        if (removed == NO)
        {
            NSLog(@"removed ==NO");
        }
        if (error)
        {
            NSLog(@"%@", [error description]);
        }
    }
}
A: I know I'm late to the party, but I'd like to drop my implementation, which works straight on URLs too:
let fileManager = FileManager.default
let temporaryDirectory = fileManager.temporaryDirectory
try? fileManager
    .contentsOfDirectory(at: temporaryDirectory, includingPropertiesForKeys: nil, options: .skipsSubdirectoryDescendants)
    .forEach { file in
        try? fileManager.removeItem(atPath: file.path)
    }
A: //
//  FileManager+removeContentsOfTemporaryDirectory.swift
//
//  Created by _ _ on _._.202_.
//  Copyright © 202_ _ _. All rights reserved.
//

import Foundation

public extension FileManager {
    /// Perform this method on a background thread.
    /// Returns `true` if:
    /// * all temporary folder files have been deleted.
    /// * the temporary folder is empty.
    /// Returns `false` if:
    /// * some temporary folder files have not been deleted.
    /// Error handling:
    /// * Throws `contentsOfDirectory` directory access error.
    /// * Ignores single file `removeItem` errors.
    ///
    @discardableResult
    func removeContentsOfTemporaryDirectory() throws -> Bool {
        if Thread.isMainThread {
            let mainThreadWarningMessage = "\(#file) - \(#function) executed on main thread. Do not block the main thread."
            assertionFailure(mainThreadWarningMessage)
        }
        do {
            let tmpDirURL = FileManager.default.temporaryDirectory
            let tmpDirectoryContent = try contentsOfDirectory(atPath: tmpDirURL.path)
            guard tmpDirectoryContent.count != 0 else { return true }
            for tmpFilePath in tmpDirectoryContent {
                let trashFileURL = tmpDirURL.appendingPathComponent(tmpFilePath)
                try? removeItem(atPath: trashFileURL.path)
            }
            let tmpDirectoryContentAfterDeletion = try contentsOfDirectory(atPath: tmpDirURL.path)
            return tmpDirectoryContentAfterDeletion.count == 0
        } catch let directoryAccessError {
            throw directoryAccessError
        }
    }
} | unknown | |
d7117 | train | I have kept your HTML/CSS, and just added var current to track current slide.
slideW = $('#slides').width();
current = 0;

$(document).on('click', '#prev', function(e) {
    if (current > 0 && current <= $('#slides').children().length - 1) {
        current--;
    }
    console.log(current);
    e.preventDefault();
    $('#slides').animate({
        scrollLeft: slideW * current - 100
    }, 600);
});

$(document).on('click', '#next', function(e) {
    if (current < $('#slides').children().length - 1)
        current++;
    console.log(current);
    e.preventDefault();
    $('#slides').animate({
        scrollLeft: slideW * current + 100
    }, 600);
});
Demo: http://jsfiddle.net/86he7L41/1/
Of course, there are conditions to prevent undesired scrolling past either end: on the left the limit is the first slide, on the right it is the number of slides. | unknown | |
d7118 | train | I later discovered that the problem was the result of an 'if' condition in the method that returns the file content. When the condition is not met for any reason, it returns 'false' as the response instead of the video file, hence the boolean response I receive.
That is how the code is written to behave when the required token is missing, as shown below:
if (!empty($token))
{
    $token->delete();
    $mime_type = Mime::from_extension($filename);
    return response()->file(storage_path('app/lesson-files/'.$filename), [
        'Content-Type' => $mime_type,
        'Content-Disposition' => 'inline; filename="Lesson-file"'
    ]);
}
return false;
}
I later discovered that this happens when the user tries to access the application with an outdated browser which does not fulfill one of the conditions expected to return the video content.
In other words, the system is actually working as intended.
Thank you to all who tried to assist in one way or the other. I appreciate you all. | unknown | |
d7119 | train | You should always use atan2(y,x) instead of atan(y/x). It is a common mistake. – Somos
He wrote this on a math forum where I asked this too, and that was my stupid mistake -_-
My new version is:
float gx = 2 * (x*z - w*y);
float gy = 2 * (w*x + y*z);
float gz = w*w - x*x - y*y + z*z;
float yaw = atan2(2*x*y - 2*w*z, 2*w*w + 2*x*x - 1); // about Z axis
float pitch = atan2(gx, sqrt(gy*gy + gz*gz)); // about Y axis
float roll = atan2(gy, gz); // about X axis
/*Serial.print(" yaw ");
Serial.print(yaw * 180/M_PI,0);*/
Serial.print(" pitch ");
Serial.print(pitch * 180/M_PI,2);
Serial.print(" sideways ");
// Please don't pay attention to the extra function I made for the project but it doesn't have to do with the problem
if(pitch > 0) Serial.println((roll * 180/M_PI) * (1/(1+pow(1.293,((pitch * 180/M_PI)-51.57)))), 2);
else if(pitch == 0) Serial.println(roll * 180/M_PI, 2);
else if(pitch < 0) Serial.println((roll * 180/M_PI) * (1/(1+pow(1.293,(((pitch) * (-180)/M_PI)-51.57)))), 2); | unknown | |
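The atan-vs-atan2 distinction is easy to verify numerically. A quick check (plain Python instead of Arduino C, but the math-library semantics are the same):

```python
import math

y, x = 1.0, -1.0          # a point in the second quadrant

print(math.atan(y / x))   # -0.7853981633974483: atan only sees the ratio -1
print(math.atan2(y, x))   #  2.356194490192345: atan2 also sees the signs of y and x

# atan collapses quadrants I/III and II/IV onto each other, which is exactly
# what corrupts roll and pitch whenever the denominator changes sign.
```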
d7120 | train | You should try to Clean and Rebuild the project.
Then go to File / Invalidate Caches and Restart and select Restart. It should solve your problem.
I've had the same error, and I think it happened because of cached values stored by Android Studio.
A: Try this button (shown here as a screenshot):
Or simply do Clean, then Rebuild. | unknown | |
d7121 | train | There are no classes for handling the selection of the articles.
So it comes down to using a query and looping through the result set:
$catId = 59; // the category ID
$query = "SELECT * FROM #__content WHERE catid = '" . $catId . "'"; // prepare query
$db = &JFactory::getDBO(); // get database object
$db->setQuery($query); // apply query
$articles = $db->loadObjectList(); // execute query, return result list
foreach ($articles as $article) { // loop through articles
    echo 'ID: ' . $article->id . ' Title: ' . $article->title;
} | unknown | |
d7122 | train | In short, no
Why? To create an enumerable collection class to get something like
Class CTest
    ....
End Class

Dim oTest, mElement
Set oTest = New CTest
....
For Each mElement In oTest
    ....
Next
the class MUST follow some rules. We will need the class to expose
*
*A public readonly property called Count
*A public default method called Item
*A public readonly property called _NewEnum, that should return an IUnknown interface to an object which implements the IEnumVARIANT interface, and that must have the hidden attribute and a dispatch ID of -4
And from this list of requirements, VBScript does not include any way to indicate the dispatch ID or hidden attribute of a property.
So, this cannot be done.
The only way to enumerate over the elements stored in a container class is to have a property (or method) that returns
*
*an object that supports all the indicated requirements, usually the same object used to hold the elements (fast, but it will expose too much information)
*an array (in VBScript arrays can be enumerated) holding references to each of the elements in the container (slow if the array needs to be generated on call, but does not return any non-required information) | unknown | |
d7123 | train | Looking at the example in the documentation and your code, probably the simplest "fix" is to instantiate the marker clusterer inside your display markers routine, then add each marker to the clusterer as it is created:
Comments:
*
*you have a callback specified in your script include (&callback=myMap), but no function of that name (just remove that from your script include). This causes the following error in the console:
"myMap is not a function"
*There is a JavaScript error Uncaught ReferenceError: marker is not defined on this line: var markerCluster = new MarkerClusterer(map, marker, because there is no variable marker available in that scope (and, as @MrUpsidown observed in his comment, that argument should be the array of markers). To address that, I suggest using the MarkerClusterer.addMarker method to add markers to it in displayMarkers, and changing your createMarker function to return the marker it creates.
function displayMarkers() {
    // marker clusterer to manage the markers.
    var markerCluster = new MarkerClusterer(map, [], {
        imagePath: 'https://developers.google.com/maps/documentation/javascript/examples/markerclusterer/m'
    });
    var bounds = new google.maps.LatLngBounds();
    for (var i = 0; i < markersData.length; i++) {
        var latlng = new google.maps.LatLng(markersData[i].lat, markersData[i].lng);
        var name = markersData[i].name;
        var address1 = markersData[i].address1;
        var address2 = markersData[i].address2;
        var address3 = markersData[i].address3;
        var address4 = markersData[i].address4;
        var image = markersData[i].ikona;
        var wwwsite = markersData[i].wwwsite;
        markerCluster.addMarker(createMarker(latlng, name, address1, address2, address3, address4, image, wwwsite));
        bounds.extend(latlng);
    }
    map.fitBounds(bounds);
}

function createMarker(latlng, name, address1, address2, address3, address4, image, wwwsite) {
    var marker = new google.maps.Marker({
        map: map,
        position: latlng,
        title: name,
        icon: image
    });
    google.maps.event.addListener(marker, 'click', function() {
        var iwContent = '<div id="iw_container">' +
            '<div class="iw_title">' + name + '</div>' +
            '<div class="iw_content">' + address1 + '<br />' +
            address2 + '<br />' + address3 + '<br />' + address4 + '<br />' +
            wwwsite + '</div></div>';
        infoWindow.setContent(iwContent);
        infoWindow.open(map, marker);
    });
    return marker;
}
proof of concept fiddle
code snippet:
html,
body {
height: 100%;
margin: 0;
padding: 0;
}
<script src="https://developers.google.com/maps/documentation/javascript/examples/markerclusterer/markerclusterer.js"></script>
<script src="https://maps.googleapis.com/maps/api/js?key=AIzaSyCkUOdZ5y7hMm0yrcCQoCvLwzdM6M8s5qk"></script>
<script type="text/javascript">
var map;
var infoWindow;
var markersData = [
{
lat: 50.25202,
lng: 19.015023,
name: "Test1",
address1: "Test1",
address2: "Test1",
address3: "2019-03-13",
address4: "2019-03-13",
ikona: "http://historia-lokalna.pl/images/places.png",
wwwsite: "<a href=https://www.historia-lokalna.pl target=_blank >Strona www</a>"
},
{
lat: 49.824791,
lng: 19.040867,
name: "Test2",
address1: "Test2",
address2: "Test2",
address3: "2019-03-22",
address4: "2019-03-22",
ikona: "http://historia-lokalna.pl/images/places.png",
wwwsite: "<a href=https://www.historia-lokalna.pl target=_blank >Strona www</a>"
},
{
lat: 50.334918,
lng: 18.14136,
name: "Test3",
address1: "Test3",
address2: "Test3",
address3: "2019-03-08",
address4: "2019-03-08",
ikona: "http://historia-lokalna.pl/images/places.png",
wwwsite: "<a href=https://www.historia-lokalna.pl target=_blank >Strona www</a>"
},
{
lat: 49.825794,
lng: 19.040889,
name: "Test4",
address1: "Test4",
address2: "Test4",
address3: "2019-03-13",
address4: "2019-03-13",
ikona: "http://historia-lokalna.pl/images/places.png",
wwwsite: "<a href=https://www.historia-lokalna.pl target=_blank >Strona www</a>"
},
]
function initialize() {
var mapOptions = {
center: new google.maps.LatLng(50.57628900072813, 21.356987357139587),
zoom: 9,
mapTypeId: 'roadmap',
};
map = new google.maps.Map(document.getElementById('map-canvas'), mapOptions);
infoWindow = new google.maps.InfoWindow();
google.maps.event.addListener(map, 'click', function() {
infoWindow.close();
});
displayMarkers();
// End
}
google.maps.event.addDomListener(window, 'load', initialize);
function displayMarkers() {
// marker clusterer to manage the markers.
var markerCluster = new MarkerClusterer(map, [], {
imagePath: 'https://developers.google.com/maps/documentation/javascript/examples/markerclusterer/m'
});
var bounds = new google.maps.LatLngBounds();
for (var i = 0; i < markersData.length; i++) {
var latlng = new google.maps.LatLng(markersData[i].lat, markersData[i].lng);
var name = markersData[i].name;
var address1 = markersData[i].address1;
var address2 = markersData[i].address2;
var address3 = markersData[i].address3;
var address4 = markersData[i].address4;
var image = markersData[i].ikona;
var wwwsite = markersData[i].wwwsite;
markerCluster.addMarker(createMarker(latlng, name, address1, address2, address3, address4, image, wwwsite));
bounds.extend(latlng);
}
map.fitBounds(bounds);
}
function createMarker(latlng, name, address1, address2, address3, address4, image, wwwsite) {
var marker = new google.maps.Marker({
map: map,
position: latlng,
title: name,
// icon: image - so shows default icon in code snippet
});
google.maps.event.addListener(marker, 'click', function() {
var iwContent = '<div id="iw_container">' +
'<div class="iw_title">' + name + '</div>' +
'<div class="iw_content">' + address1 + '<br />' +
address2 + '<br />' + address3 + '<br />' + address4 + '<br />' +
wwwsite + '</div></div>';
infoWindow.setContent(iwContent);
infoWindow.open(map, marker);
});
return marker;
}
</script>
<!-- markerclusterer script -->
<script src="https://developers.google.com/maps/documentation/javascript/examples/markerclusterer/markerclusterer.js"></script>
<!-- End -->
<h2 class="przeg">Map:</h2>
<div id="map-canvas" style="width:100%;height:80%;"> </div> | unknown | |
d7124 | train | I had the same problem and found the solution after just an hour or so.
The issue is that jpgraph loads a default set of font files each time a Graph is created. I couldn't find a way to unload a font, so I made a slight change so that it only loads the fonts one time.
To make the fix for your installation, edit "gd_image.inc.php" as follows:
Add the following somewhere near the beginning of the file (just before the CLASS Image):
// load fonts only once, and define a constant for them
define("GD_FF_FONT0", imageloadfont(dirname(__FILE__) . "/fonts/FF_FONT0.gdf"));
define("GD_FF_FONT1", imageloadfont(dirname(__FILE__) . "/fonts/FF_FONT1.gdf"));
define("GD_FF_FONT2", imageloadfont(dirname(__FILE__) . "/fonts/FF_FONT2.gdf"));
define("GD_FF_FONT1_BOLD", imageloadfont(dirname(__FILE__) . "/fonts/FF_FONT1-Bold.gdf"));
define("GD_FF_FONT2_BOLD", imageloadfont(dirname(__FILE__) . "/fonts/FF_FONT2-Bold.gdf"));
then at the end of the Image class constructor (lines 91-95), replace this:
$this->ff_font0 = imageloadfont(dirname(__FILE__) . "/fonts/FF_FONT0.gdf");
$this->ff_font1 = imageloadfont(dirname(__FILE__) . "/fonts/FF_FONT1.gdf");
$this->ff_font2 = imageloadfont(dirname(__FILE__) . "/fonts/FF_FONT2.gdf");
$this->ff_font1_bold = imageloadfont(dirname(__FILE__) . "/fonts/FF_FONT1-Bold.gdf");
$this->ff_font2_bold = imageloadfont(dirname(__FILE__) . "/fonts/FF_FONT2-Bold.gdf");
with this:
$this->ff_font0 = GD_FF_FONT0;
$this->ff_font1 = GD_FF_FONT1;
$this->ff_font2 = GD_FF_FONT2;
$this->ff_font1_bold = GD_FF_FONT1_BOLD;
$this->ff_font2_bold = GD_FF_FONT2_BOLD;
I didn't test this with multiple versions of php or jpgraph, but it should work fine. ymmv.
A: You could try using PHP >= 5.3 Garbage collection
gc_enable() + gc_collect_cycles()
http://php.net/manual/en/features.gc.php
A: @bobD's answer is right on the money and helped solved my same question.
However there is also one other potential memory leak source for those still looking for an answer to this very old problem.
If you are creating multiple charts with the same background image, each load of the background image causes an increase in memory with each chart creation.
Similar to bobD's answer to the font loading issue, this can be solved by making the background image(s) global variables instead of loading them each time.
EDIT: It looks like there is a very small memory leak when using MGraph() as well.
Specifically the function Add(). Perhaps it also loads a font library or something similar with each recursive call. | unknown | |
d7125 | train | Currently, I don't believe there is a simple way to specify a hash check within setup.py. My solution around it is to simply use virtualenv with hashed dependencies in requirements.txt. Once installed in the virtual environment you can run pip setup.py install and it will check the local environment (which is your virtual environment) and the packages installed is hashed.
Inside requirements.txt your hashed packages will look something like this:
requests==2.19.1 \
--hash=sha256:63b52e3c866428a224f97cab011de738c36aec0185aa91cfacd418b5d58911d1 \
--hash=sha256:ec22d826a36ed72a7358ff3fe56cbd4ba69dd7a6718ffd450ff0e9df7a47ce6a
Activate your virtualenv and install requirements.txt file:
pip install -r requirements.txt --require-hashes | unknown | |
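pip verifies each download by comparing its SHA-256 digest against those entries. If you ever need to produce such an entry for a file you already have, `pip hash <file>` prints it; computing the same digest by hand is also straightforward — a minimal sketch (the helper name is mine):

```python
import hashlib

def pip_style_hash(path):
    """Return the digest pip matches against a --hash=sha256:... entry:
    the hex SHA-256 of the raw file bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return "sha256:" + h.hexdigest()
```

Append the result to the matching requirements.txt line as `--hash=<digest>`.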
d7126 | train | Check that the port on the server isn't being blocked by the firewall. An easy way to check is to simply type the URL of the web service into your local machine's browser address bar - http://ServerName:8001/ServiceClass/ServiceMethod
If you get a 404 error or something like that, check the Firewall settings (inbound) to see that the port is open. However, if you get a good response in your browser, then you know it's not the Firewall.
OK, so here's a very simple HTML/JavaScript program I wrote as a test harness to ping a WCF web service.
<html>
<head>
<title>SOAP JavaScript Client Test</title>
<script type="text/javascript">
function Ping() {
        //set up variable
var sContent;
sContent= "<SoapXML><Your Content></XML>";
var xmlhttp = new XMLHttpRequest();
xmlhttp.open('POST', Demo.URL.value, true);
xmlhttp.onreadystatechange = function() {
if (xmlhttp.readyState == 4||xmlhttp.readyState == 0) {
//alert("Ready state: " + xmlhttp.readyState.toString());
if (xmlhttp.status == 200) {
//alert("good");
Demo.pingresponse.value = "Response: " +xmlhttp.responseText;
}
if (xmlhttp.status !=200){
//alert("bad");
Demo.pingresponse.value = "Error: " +xmlhttp.status.toString() +" response text: " +xmlhttp.responseText;
}
} else {
//alert("readystate bad");
}
}
//send request
        //send request (headers such as Host, Content-Length, Connection,
        //Expect and Accept-Encoding are set automatically by the browser
        //and cannot be set from script)
        xmlhttp.setRequestHeader("SOAPAction","\"http://SoapRequestHeader\"");
        xmlhttp.setRequestHeader("Content-type", "text/xml; charset=utf-8");
xmlhttp.send(sContent);
}
</script>
</head>
<body>
<form name="Demo" action="" method="post">
<div>
        Web Service URL (i.e. http://ServerName:8085/ServiceName) <br />
        <input id="URL" type="text" size="140" value="http://localhost:8085/ServiceName" />
<br />
<input type="button" value="Ping" onclick="Ping();" /><br />
        <textarea id="pingresponse" cols="100" rows="10">
</textarea> <br />
</div>
</form>
</body>
</html>
Obviously, this won't work for your site, but with some tweaks for the URL, Port and expected content, this might be a good starting point. | unknown | |
d7127 | train | You can always access variables of another ViewController by creating an instance of that class in your current VC. In this case, you could create an instance of the VC in which the SQLite DB code exists in the MapViewController, and then assign the coordinates to a variable in the first VC. If you need to perform a task like writing to the Database with the coordinate being passed, then you could use the NSNotificationCenter class, which allows you to communicate between your View Controllers. The links below should give you a better idea as to what I'm talking about.
Ref:
StackOverflow post on passing values
Medium Tutorial on NSNotificationCenter | unknown | |
d7128 | train | Works fine for me, I created the files as https://gist.github.com/boyvinall/f23420215707fa3e73e21c3f9a5ff22b
$ make
cc -c -o main.o main.c
cc -c -o hello.o hello.c
cc -o hello main.o hello.o
Might be the version of make like @Beta said, but even an old version of GNU make should work just fine for this.
Otherwise, ensure you're using tabs to indent in the makefile, not spaces. | unknown | |
d7129 | train | I had the same problem recently try the following :
private today = new Date();
public min: Date = new Date(this.today.getFullYear(), this.today.getMonth(), this.today.getDate());
this worked for me, also change the "max" to some date after 11/7/2017 :) | unknown | |
d7130 | train | First thing first, you don't use this.state inside this.setState, instead use a function to update state. Check this for reference: https://reactjs.org/docs/state-and-lifecycle.html#state-updates-may-be-asynchronous
Your code should be as follows:
this.setState((state) => ({
li: state.li.concat([newListItem])
}));
Second, why are you assigning an array to newlistitem by doing: const newListItem = [this.state.ActiveText]; ?
It should be const newListItem = this.state.ActiveText;
A: Problem is this line:
this.setState({
li: this.state.li.push(newListItem)
});
example:
var arr = [];
console.log(arr.push({}))// it will print 1.
in your case:
this.setState({
  li: this.state.li.push(newListItem)// push returns the new length (1), so `li` becomes 1
});
Fix the above.
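The difference is easy to verify in plain JavaScript (runnable as-is in Node):

```javascript
const li = ["a"];

// push mutates the array and returns its new length
const result = li.push("b");
console.log(result);           // 2 — a number, not an array
console.log(li);               // [ 'a', 'b' ]

// concat (like the spread syntax) returns a brand-new array instead,
// which is what setState needs
const next = li.concat(["c"]);
console.log(next);             // [ 'a', 'b', 'c' ]
```

This is why assigning the result of push stores a number in state, while concat or `[...li, item]` stores a new array.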
A: 1. Don't mutate this.state
In handleOnClick(), do not write this.state.li.push(newListItem).
Instead, make a clone of this.state.li, add newListItem into that clone, and set the state of li to the new clone:
handleOnClick(e) {
e.preventDefault();
this.setState({
li: [
...this.state.li, // spread out the existing li
this.state.ActiveText // append this newListItem at the end of this array
]
});
}
2. Consider destructuring
In your render(), you could destructure this.state.li:
render() {
const { li } = this.state
return (
<React.Fragment>
...other stuffs
<ul>
{li.map(e => (
<li>{e}</li>
))}
</ul>
</React.Fragment>
);
} | unknown | |
d7131 | train | add position: relative to parent element
.parent{
position: relative;
}
.child{
position: sticky;
top: 0
} | unknown | |
d7132 | train | Try enabling CORS like this -
but first install latest flask-cors by running -
pip install -U flask-cors
from flask import Flask
from flask_cors import CORS, cross_origin
app = Flask(__name__)
cors = CORS(app) # This will enable CORS for all routes
@app.route("/")
@cross_origin()
def helloWorld():
return "Helloworld!" | unknown | |
d7133 | train | The error stems from this bit of code in CPython's gen_send_ex2, i.e. it occurs if gi_frame_state is FRAME_CREATED.
The only place that matters for this discussion that sets gi_frame_state is here in gen_send_ex2, after a (possibly None) value has been sent and a frame is about to be evaluated.
Based on that, I'd say no, there's no way to send a non-None value to a just-started generator.
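A quick demonstration of both the error and the usual fix of priming the generator with next() (which is equivalent to g.send(None)) before sending real values:

```python
def doubler():
    received = yield        # execution pauses here once primed
    yield received * 2

g = doubler()
try:
    g.send(42)              # generator hasn't started yet
except TypeError as exc:
    print(exc)              # can't send non-None value to a just-started generator

g = doubler()
next(g)                     # prime: run up to the first bare yield
print(g.send(21))           # 42
```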
A: Not sure if this is helpful in your specific case, but you could use a decorator to initialize coroutines.
def initialized(coro_func):
def coro_init(*args, **kwargs):
g = coro_func(*args, **kwargs)
next(g)
return g
return coro_init
@initialized
def gen(n):
m = (yield) or "did not send m to gen"
print(n, m)
g = gen(10)
g.send("sent m to g") # prints "10 sent m to g" | unknown | |
d7134 | train | Whoops, it looks like I overlooked the resolve function in a subscription.
From the graphql-subscriptions github page
Payload Manipulation
You can also manipulate the published payload, by adding resolve methods to your subscription:
const SOMETHING_UPDATED = 'something_updated';
export const resolvers = {
Subscription: {
somethingChanged: {
resolve: (payload, args, context, info) => {
// Manipulate and return the new value
return payload.somethingChanged;
},
subscribe: () => pubsub.asyncIterator(SOMETHING_UPDATED),
},
},
} | unknown | |
d7135 | train | The trick here is that the loadgrid data has to be executed in the OnPreRender. | unknown | |
d7136 | train | I would suggest you have a column status on your Order table and update the status to complete when all order items get delivered.
It will make simple your query to get status as well improve performance.
A: Put it into a subquery to try to make the case statement less confusing:
SELECT Order_ID,
CASE WHEN incomplete_count > 0 THEN 'INCOMPLETE' ELSE 'COMPLETE' END
AS Order_status
FROM ( SELECT o.Order_ID
       ,SUM( CASE WHEN i.Delivery_ID IS NULL OR i.Delivery_ID='' THEN 1 ELSE 0 END )
          AS incomplete_count
FROM Order o
INNER JOIN OrderItem i ON (i.Order_ID = o.Order_ID)
GROUP by o.Order_ID
) x
ORDER BY Order_ID DESC
The idea is to keep a counter every time you encounter a null item. If the sum is 0, there were no empty order items.
A: Try this one -
SELECT
o.Order_ID
, Order_status =
CASE WHEN ot.Order_ID IS NULL
THEN 'COMPLETE'
ELSE 'INCOMPLETE'
END
FROM dbo.[Order] o
LEFT JOIN (
SELECT DISTINCT ot.Order_ID
FROM dbo.OrderItem ot
WHERE ISNULL(ot.Delivery_ID, '') = ''
) ot ON ot.Order_ID = o.Order_ID | unknown | |
d7137 | train | You need a NameVirtualHost directive matching your virtualhosts somewhere in your config.
In your case, you'd need that, before the VirtualHosts declarations:
NameVirtualHost *:7070
As a matter of fact, you must have NameVirtualHost *:80 somewhere already, just change the port there too. | unknown | |
d7138 | train | I believe your issue is here:
in = new BufferedReader(new InputStreamReader(new FileInputStream(filePath), "UTF-8"));
Instead it should be
in = new BufferedReader(new FileReader(new File(filePath));
This should read it correctly. If not, you can just use RandomAccessFile:
public static void readBooksFromTxtFile(Context context, String filePath, ArrayList<SingleBook> books) {
RandomAccessFile in;
try {
in = new RandomAccessFile(new File(filePath), "r");
String line = null;
            // RandomAccessFile has readLine() (and readUTF() for files
            // written with writeUTF), but no readUTF8()
            while ((line = in.readLine()) != null) {
                String title = line;
                String author = in.readLine();
                String pages = in.readLine();
                String date = in.readLine();
// just for debugging
System.out.println(title);
books.add(new SingleBook(title, author, pages, date));
}
} catch (Exception e) {
Toast.makeText(context, "Error during reading file.", Toast.LENGTH_LONG).show();
return;
}
} | unknown | |
d7139 | train | *
*You evaluate the float score() function for current std::vector<T> solution, store them in a std::pair<vector<T>, float>.
*You use a std::priority_queue< pair<vector<T>, float> > to store the 10 best solutions based on their score, and the score itself. std::priority_queue is a heap; set its compare function to score_a > score_b so it acts as a min-heap, meaning its top() is always the *worst* of the solutions kept so far.
*Store the first 10 pairs, then for each new one compare it with the top of the heap: if score(new) > score(top), call p.pop() to discard the old 10th-best element and then p.push(new).
*You keep doing this inside a loop until you run out of vector<T> solutions.
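One way to sketch that loop in code — note the assumptions here: the score is stored first in the pair so the default pair comparison orders by score, std::greater turns the priority_queue into a min-heap whose top() is the worst kept entry, and a std::string payload stands in for the vector<T> solution:

```cpp
#include <cassert>
#include <queue>
#include <string>
#include <utility>
#include <vector>

using Scored = std::pair<float, std::string>;  // (usefulness score, solution)

// Return the k highest-scoring entries, ascending by score.
std::vector<Scored> keepBest(const std::vector<Scored>& all, std::size_t k) {
    // min-heap on score: top() is the worst of the k kept so far
    std::priority_queue<Scored, std::vector<Scored>, std::greater<Scored>> heap;
    for (const Scored& s : all) {
        if (heap.size() < k) {
            heap.push(s);
        } else if (s.first > heap.top().first) {
            heap.pop();          // discard the old k-th best
            heap.push(s);
        }
    }
    std::vector<Scored> best;
    while (!heap.empty()) {
        best.push_back(heap.top());
        heap.pop();
    }
    return best;
}
```

keepBest returns the kept solutions in ascending score order; reverse the vector if you want best-first.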
A: Have a vector of pair, where pair has 1 element as solution and other element as usefulness. Then write custom comparator to compare elements in the vector.
Add element at last, then sort this vector and remove last element.
As @user4581301 mentioned in comments, for 10 elements, you dont need to sort. Just traverse vector everytime, or you can also perform ordered insert in vector.
Here are some links to help you:
https://www.geeksforgeeks.org/sorting-vector-of-pairs-in-c-set-1-sort-by-first-and-second/
Comparator for vector<pair<int,int>> | unknown | |
d7140 | train | Yes, You can try like this,
- (NSString *)tableView:(UITableView *)tableView titleForDeleteConfirmationButtonForRowAtIndexPath:(NSIndexPath *)indexPath{
return @"Name";
} | unknown | |
d7141 | train | Here's what I think happens. The crash happens already in the linker, because it expects NSWindowDidExitFullScreenNotification to exist, but it doesn't in older versions of os x.
I haven't got any experience in this. The solutions seem to be kind of hacky.
Have a look at this question, where someone has an almost exact same question:
How to build a backwards compatible OS X app, when a new API is present? | unknown | |
d7142 | train | if you always want to go back to the top-left item (scroll back all the way to the left), just select item[0] programmatically on SelectedIndexChanged... this will still fire off the "check" and actually DO the "check on check off", but will return to the first item in the list...
like this:
private void lst_Servers_SelectedIndexChanged(object sender, EventArgs e)
{
this.lst_Servers.SelectedIndex = 0;
}
A: The problem was: when you size your table you have to do it very carefully, ensure that the rightmost column is entirely within the visible area, otherwise if you edit a cell in that column the table would scroll left... | unknown | |
d7143 | train | checkout this: http://developers.facebook.com/docs/guides/mobile/#android
This will surely help you. | unknown | |
d7144 | train | Hidden fields are a good way of persisting the id during posts.
A: You could use a hidden field or you could just parse the value into your route. I'm not sure how you're parsing the group id to the view but it would look something like:
<% using (Html.BeginForm("AddUser", "Group", new { groupId = Model.GroupID })) { %>
Then your controller will look something like this using the PRG pattern
[HttpGet]
public ViewResult Edit(int groupId) {
//your logic here
var model = new MyModel() {
GroupID = groupId
};
return View("Edit", model);
}
[HttpPost]
public ActionResult AddUser(int groupId, string username) {
//your logic here
return RedirectToAction("Edit", new { GroupID = groupId });
}
[HttpPost]
public ActionResult RemoveUser(int groupId, string username) {
//your logic here
return RedirectToAction("Edit", new { GroupID = groupId });
}
The advantage to this method is that it's more RESTful | unknown | |
d7145 | train | All the recommendations in the comments are correct, it's better to keep services in different containers.
Nevertheless, just so you know, the problem in the Dockerfile is that starting services in RUN statements is useless. For every step in the Dockerfile, Docker creates a new intermediate image. For example, RUN service postgresql start may start postgresql during docker build, but it doesn't persist in the final image. Only the filesystem persists from one step to another, not the processes.
Every process needs to be started in the entrypoint; this is the only command that's called when you exec docker run:
FROM debian
RUN apt update
RUN apt install postgresql-9.6 tomcat8 tomcat8-admin -y
COPY target/App-1.0.war /var/lib/tomcat8/webapps/
ENTRYPOINT ["/bin/bash", "-c", "service postgresql start && service postgresql status && createdb db_example && psql -c \"CREATE USER springuser WITH PASSWORD 'test123';\" && service tomcat8 start && sleep infinity"]
(It may have problems with quotes on psql command)
A: I had the problem that the localhost address for the database was hard-coded in the war file.
Thanks to Light.G, who suggested using --net=host for the containers; now there is one container with the database and one with the tomcat server.
These are the steps I followed.
Build the docker image
docker build -t $USER/App .
Start a postgres database
Since we are using the host network namespace, it is not possible to run another program on port 5432.
Start the postgres container like this:
docker run -it --rm --net=host -e POSTGRES_USER='springuser' -e POSTGRES_DB='db_example' -e POSTGRES_PASSWORD='test123' postgres
Start the tomcat
Start the App container, with this command:
docker run -it --net=host --rm $USER/App | unknown | |
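The same two-container setup can also be written as a docker-compose file — a sketch with a placeholder image tag; network_mode: host mirrors the --net=host flag so the hard-coded localhost database address keeps working:

```yaml
version: "3"
services:
  db:
    image: postgres
    network_mode: host        # same trick as --net=host above
    environment:
      POSTGRES_USER: springuser
      POSTGRES_PASSWORD: test123
      POSTGRES_DB: db_example
  app:
    image: myuser/app         # placeholder for the image built above
    network_mode: host
    depends_on:
      - db
```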
d7146 | train | I found out that an old condition still existed in a system template which caused the var/log/typo3_x.log entry. So the condition examples above are good. | unknown | |
d7147 | train | You still need to create schema manually - but literally only the create schema my-schema-name statement, let hibernate create the tables
<jdbc:embedded-database id="dataSource" type="HSQL">
<jdbc:script location="classpath:create_schema.sql"/>
</jdbc:embedded-database>
If populating the database with values is a problem, yeah it is. No shortcuts really.
And the import script will have to run after the tables are created, in a postconstruct method probably.
A: I solved the issue with the help of geoand's comment. (Thanks, you should have posted it as an answer). But let me share more information to make future readers' lives easier.
First. It seems that declaring a database-initialization script makes Hibernate skip the schema creation process, so I changed the embedded-database element to:
<jdbc:embedded-database id="dataSource" type="HSQL"/>
Hibernate will automatically execute import.sql from the classpath (at least in Create or Create-Drop mode), so there was no need to indicate further scripts. As the Spring test framework rolls back the transaction, you do not need to restore the database after each test.
Second, I had some problems with my context configuration that needed to be solved if I wanted Spring to create my schema. The best way to do it was to set the ignore-failures attribute to "ALL" and look at the logs.
Third. Using HSQL for some things and MySQL for others has its own issues. The syntax is sometimes different, and I had to fix my import.sql to become compatible with both databases.
Appart from that, it works:
@ContextConfiguration(locations={"file:src/test/resources/beans-datasource.xml",
"file:src/main/webapp/WEB-INF/applicationContext.xml",
"file:src/main/webapp/WEB-INF/my-servlet.xml"})
@Configuration
@RunWith(SpringJUnit4ClassRunner.class)
public static class fooTest{
@Inject BeanInterface myBean;
@Test
@Transactional
public void fooGetListTest(){
assertTrue("Expected a non-empty list", myBean.getList().size() > 0 );
}
}
} | unknown | |
d7148 | train | But im just curious if there is another alternative
Typically DELETE requests do not have a request body though that doesn't mean you cannot use one.
From the client side, something like this...
axios.delete("/url/for/delete", {
data: { playerId }
});
will send an application/json request with body {"playerId":"some-id-value"}.
On the server side, with Express you would use this
router.delete("/url/for/delete", async (req, res, next) => {
const { playerId } = req.body;
try {
// do something with the ID...
res.sendStatus(204);
} catch (err) {
next(err.toJSON());
}
});
To handle JSON payloads you should have registered the appropriate middleware
app.use(express.json()); | unknown | |
d7149 | train | Your (1) has nothing to do with (2) and (3).
And there are other places where you can bind controllers (e.g. a directive's controller property).
Each way is serves a different purpose, so go with the one that suits your situation.
*
*If you have a directive and want to give it a specific controller, use the Directive Definition Object's controller property.
*If you use ngView and want to give each view a specific controller (as is usually the case) use the $routeProviders controller.
*If you want to assign a controller to some part of your view (in the main file or in a view or partial) use ngController.
All the above are methods for "binding" a controller to some part of the view (be it a single element or the whole HTML page or anything in between).
A: I'm quite new too, but I'll try to explain in a more layman way.
1 For each .js file you have (which may contain one or more controllers), you need a corresponding script entry as in #1. It's not the controller itself; it just lets the page know that this .js file is part of the set of files to run.
2 is more like specifying a state or route, which may or may not use a controller. It's much like saying how one event should lead to another. The controller may be involved in the transitions of the states/routes (i.e. responsible from one state to another) or within a view itself.
3 is for using a controller's functions within a view itself.
A: I've added comments to one of the answers, but aside from syntax this may is more of a design question. Here is my opinion
Firstly, (1) is irrelevant to the conversation.
(2) is the preferred approach when specifying the controller for the view as it decouples the controller from the view itself. This can be useful when you want to re-use the same view by providing a different controller.
If you find yourself using (3), consider making that area into a directive, since by specifying a controller you are indicating that it requires its own logic.
d7150 | train | You need to add below permission in your manifest.xml file.
If an app has a targetSdkVersion of 26 or above and prompts the user to install other apps, the manifest file needs to include the REQUEST_INSTALL_PACKAGES permission:
<uses-permission android:name="android.permission.REQUEST_INSTALL_PACKAGES" />
You can see below link why it's needed
*
*Link1
*Link2 | unknown | |
d7151 | train | I assume that you have a lot of httpd processes because you have a lot of users accessing your site. If not, please edit your question with details about the load on the server.
I recently had the same problem and I was using the same amount of memory as you. First I adjusted the swap space, because the default swap space is too small. The details of how to do this depend on your particular Linux distribution but if you google it you can easily find instructions.
Eventually though, I rented another server with twice as much memory. This is the best long-term solution. | unknown | |
d7152 | train | First, we create a dictionary to lookup values using values_list.txt then we iterate over all the lines in the sqlfile and replace the dictionary keys with their values. The code is as follows:
valsfile = open('values_list.txt')
valsline = valsfile.read().splitlines()
d = {}
for i in valsline:
i = i.split(',')
d[i[0]] = i[1]
sqlfile = open('insert_values.sql')
sqlcontents = sqlfile.read()
sqlline = sqlcontents.splitlines()
text = []
for i in sqlline:
for word, initial in d.items():
i = i.replace(word, initial)
text.append(i+'\n')
f = open("final_sqlfile.sql", "a")
f.writelines(text)
f.close()
The contents of final_sqlfile.sql are given below:
insert into users(fname,lname,age) values( "Sean", "Sean"1, "Sean"11)
insert into users(fname,lname,age) values( "Bob", "Bob"2, "Bob"22)
insert into users(fname,lname,age) values( "Michael", "Michael"3, "Michael"33)
insert into users(fname,lname,age) values( "Aaron", "Aaron"4, "Aaron"44)
insert into users(fname,lname,age) values( "John", "John"5, "John"55)
A: This presupposes that the values are grouped in threes. Try this:-
with open('values_list.txt') as vlist:
with open('insert_values.sql', 'w') as sql:
vals = []
for line in vlist.readlines():
v = line.strip().split(',')
if len(v) == 2:
vals.append(v[1])
if len(vals) == 3:
sql.write(f'insert into users(fname,lname,age) values({vals[0]},{vals[1]},{vals[2]})\n')
vals=[] | unknown | |
d7153 | train | If you want the total sales for the type, then you need to nest the sum()s:
select id, product_name, product_type,
sum(sales) as total_sales,
       sum(sum(sales)) over (partition by product_type) as sales_by_type
from some_table
group by 1,2,3;
If you also want the total of all sales, then:
select id, product_name, product_type,
sum(sales) as total_sales,
       sum(sum(sales)) over (partition by product_type) as sales_by_type,
sum(sum(sales)) over () as total_total_sales
from some_table
group by 1,2,3;
A: What you need is something like below
select
id
, product_name
, product_type
, sum(sales) over () as total_sales
, sum(sales) over (partition by product_type) as sales_by_type
from some_table
or
select
id
, product_name
, product_type
, sum(sales) over (partition by (select 1)) as total_sales
, sum(sales) over (partition by product_type) as sales_by_type
from some_table
Both of these work in SQL Server. Not sure what/if it will work for presto though.
I have seen below variation as well.
over (partition by null) | unknown | |
d7154 | train | The issue was with the mongo version 2.6.10. I installed the latest 3.4.4 in my Ubuntu 64 machine following the instructions https://docs.mongodb.com/master/tutorial/install-mongodb-on-ubuntu/ Now I am able to dump the data without any problem. | unknown | |
d7155 | train | I think you are going about this all wrong. Inside your switch instead of including other controllers which will not work use the redirect() to take them where they should go. | unknown | |
d7156 | train | Just got answer.
Set
use_embedded_content = True | unknown | |
d7157 | train | Your original question shows an error message referring to "?", but the code yout posted a as comment would raise a similar error for `"IN"' instead:
2/24 PLS-00103: Encountered the symbol "IN" when expecting one of the following:
That is because you've used IN for a local variable; but IN, OUT and IN OUT are only applicable to stored procedure parameters. You could have declared the function with an explicit IN for example, though it is the default anyway:
create or replace function contact_restriction(obj_schema IN varchar2, ...
So that needs to be removed from the v_contact_info_visible declaration. You've linked to an example you're working from, but you've removed a lot of important quotes from that, which will still cause it to fail when executed as a part of a VPD; because v_contact_info_visible will be out of scope to the caller. And you have a typo, with a hyphen instead of an underscore.
You need something like:
create or replace function contact_restriction(obj_schema varchar2,
obj_name varchar2)
return varchar2 is
v_contact_info_visible user_access.contact_info_visible%type;
begin
select nvl(max(contact_info_visible),'N')
into v_contact_info_visible
from user_access
where username = user;
return ''''||v_contact_info_visible ||''' =''Y''';
end;
/
When called, that will return a string which is either 'N'='Y' or 'Y'='Y'. VPD will include that as a filter in the original query, which will either prevent any rows being returned (in the first case) or have no effect and allow all rows that match any other existing conditions to be returned (in the second case).
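For context, a VPD policy function like this only takes effect once it is registered against the table with DBMS_RLS.ADD_POLICY — a sketch, with placeholder schema and table names:

```sql
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'APP_OWNER',      -- placeholder schema
    object_name     => 'CONTACT_INFO',   -- placeholder table
    policy_name     => 'CONTACT_RESTRICTION_POLICY',
    function_schema => 'APP_OWNER',
    policy_function => 'CONTACT_RESTRICTION',
    statement_types => 'SELECT');
END;
/
```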
A: The syntax of the function header is incorrect. It should be:
create or replace function contact_restriction(obj_schema IN varchar2, obj_name IN varchar2)
return varchar2
is | unknown | |
d7158 | train | Try invalidating the layout before you call reloadData on collection View.
[self.collectionView.collectionViewLayout invalidateLayout];
[self.collectionView reloadData];
A: You should try this:
In -(NSArray*)layoutAttributesForElementsInRect:(CGRect)rect you put this:
for(NSInteger i=0 ; i < self.collectionView.numberOfSections; i++) {
for (NSInteger j=0 ; j < [self.collectionView numberOfItemsInSection:i]; j++) {
NSIndexPath* indexPath = [NSIndexPath indexPathForItem:j inSection:i];
[attributes addObject:[self layoutAttributesForItemAtIndexPath:indexPath]];
}
}
Hope this could help.
A: Or you can do this
override func viewWillLayoutSubviews() {
super.viewWillLayoutSubviews()
self.collectionView.collectionViewLayout.invalidateLayout()
} | unknown | |
d7159 | train | You just need to put download attribute in the anchor tag . and the anchor tag will allow the user to get the file from the href location. A small example is give below
<a download href="/media/{{friend.picture}}"><img height="100%" width="100%" class="img-fluid d-block mx-auto" src="/media/{{friend.picture}}"></a> | unknown | |
d7160 | train | If you are using SSH key for Jenkins to authenticate try using SSH version e.g. [email protected]:FOO/BAR.git instead of HTTPS one. | unknown | |
d7161 | train | you can add a list of model attributes to exclude from the output when serializing it. check it out here
return bookshelf.model('User', {
tableName: 'users',
hidden: ['password']
}) | unknown | |
d7162 | train | This code runs perfectly fine in Visual Studio 2013. Only change is that the last semicolon needs to come after the return statement not after }. Here is the output:
-----------------------------------------------------------------|empty|empty|em
pty|empty|empty|empty|empty|empty|empty|empty|----------------------------------
-------------------------------|empty|empty|empty|empty|empty|empty|empty|empty|
empty|empty|-----------------------------------------------------------------|em
pty|empty|empty|empty|empty|empty|empty|empty|empty|empty|----------------------
-------------------------------------------|empty|empty|empty|empty|empty|empty|
empty|empty|empty|empty|--------------------------------------------------------
---------|empty|empty|empty|empty|empty|empty|empty|empty|empty|empty|----------
-------------------------------------------------------|empty|empty|empty|empty|
empty|empty|empty|empty|empty|empty|--------------------------------------------
---------------------|empty|empty|empty|empty|empty|empty|empty|empty|empty|empt
y|-----------------------------------------------------------------|empty|empty|
empty|empty|empty|empty|empty|empty|empty|empty|--------------------------------
---------------------------------|empty|empty|empty|empty|empty|empty|empty|empt
y|empty||empty|empty|empty|empty|empty|empty|empty|empty|empty||----------------
-------------------------------------------------|||empty|empty|empty|empty|empt
y|empty|empty|empty|empty| | unknown | |
d7163 | train | The problem is parent name 'home.app' instead of 'home.apps'
// wrong
.state('home.app.detail', { ...
// should be
.state('home.apps.detail', { ...
because parent is
.state('home.apps', { ...
EXTEND in case, that this should not be child of 'home.apps' we have to options
1) do not inherit at all
.state('detail', { ...
2) introduce the parent(s) which is(are) used in the dot-state-name-notation
// exists already
.state('home', { ...
// this parent must be declared to be used later
.state('home.app', {
// now we can use parent 'home.app' because it exists
.state('home.app.detail', { | unknown | |
d7164 | train | Collision detction and score increase ;-)
public class MainActivity extends AppCompatActivity
{
//Layout
private RelativeLayout myLayout = null;
//Screen Size
private int screenWidth;
private int screenHeight;
//Position
private float ballDownY;
private float ballDownX;
//Initialize Class
private Handler handler = new Handler();
private Timer timer = new Timer();
//Images
private ImageView net = null;
private ImageView ball = null;
//score
private TextView score = null;
//for net movement along x-axis
public float x = 0;
public float y = 0;
//points
private int points = 0;
@Override
protected void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
this.setContentView(R.layout.activity_main);
this.myLayout = (RelativeLayout) findViewById(R.id.myLayout);
this.score = (TextView) findViewById(R.id.score);
this.net = (ImageView) findViewById(R.id.net);
this.ball = (ImageView) findViewById(R.id.ball);
//retrieving screen size
WindowManager wm = getWindowManager();
Display disp = wm.getDefaultDisplay();
Point size = new Point();
disp.getSize(size);
screenWidth = size.x;
screenHeight = size.y;
//move to out of screen
this.ball.setX(-80.0f);
this.ball.setY(screenHeight + 80.0f);
//Error here
/*//Run constantly
new Handler().postDelayed(new Runnable()
{
@Override
public void run()
{
Render();
}
}, 100); //100 is miliseconds interval than sleep this process, 1000 miliseconds is 1 second*/
        Thread t = new Thread() {
            @Override
            public void run() {
                try {
                    while (!isInterrupted()) {
                        Thread.sleep(100);
                        runOnUiThread(new Runnable() {
                            @Override
                            public void run() {
                                Render();
                            }
                        });
                    }
                } catch (InterruptedException e) {
                }
            }
        };
t.start();
}
public void Render()
{
changePos();
if(Collision(net, ball))
{
            points++; // the score TextView was already looked up in onCreate, no need to find it again
this.score.setText("Score:" + points);
}
}
public void changePos()
{
//down
ballDownY += 10;
if (ball.getY() > screenHeight) {
ballDownX = (float) Math.floor((Math.random() * (screenWidth - ball.getWidth())));
ballDownY = -100.0f;
}
ball.setY(ballDownY);
ball.setX(ballDownX);
        //make net follow finger (registering this listener once in onCreate
        //would be cleaner than re-registering it on every frame)
        myLayout.setOnTouchListener(new View.OnTouchListener() {
@Override
public boolean onTouch(View view, MotionEvent event) {
x = event.getX();
y = event.getY();
if (event.getAction() == MotionEvent.ACTION_MOVE) {
net.setX(x);
net.setY(y);
}
return true;
}
});
}
public boolean Collision(ImageView net, ImageView ball)
{
Rect BallRect = new Rect();
ball.getHitRect(BallRect);
Rect NetRect = new Rect();
net.getHitRect(NetRect);
return BallRect.intersect(NetRect);
}
}
A: Let me give you an example of how I implemented working collision detection in only about 10 lines of code. It is not exactly the same problem, but it can give you an idea of how to manipulate objects based on coordinates.
// update the canvas in order to display the game action
@Override
public void onDraw(Canvas canvas) {
super.onDraw(canvas);
int xx = 200;
int yy = 0;
if (persons != null) {
synchronized (persons) {
Iterator<Person> iterate = persons.iterator();
while (iterate.hasNext()) {
Person p = iterate.next();
if (p.getImage() != 0) {
bitmap = BitmapFactory.decodeResource(getResources(), p.getImage()); //load a character image
// Draw the visible person's appearance
if(xx > canvas.getWidth())
xx = 0;
canvas.drawBitmap(bitmap, xx , canvas.getHeight()- bitmap.getHeight() , null);
// Draw the name
Paint paint = new Paint();
paint.setStyle(Paint.Style.FILL);
canvas.save();
paint.setStrokeWidth(1);
paint.setColor(Color.WHITE);
paint.setTextSize(50);
canvas.drawText(p.name, (float)(xx+0.25*bitmap.getWidth()), (float) (canvas.getHeight() ), paint);
xx += bitmap.getWidth()*0.75;
}
}
}
}
canvas.save(); //Save the position of the canvas.
canvas.restore();
//Call the next frame.
invalidate();
}
}
In the above code, I simply check whether xx collides with an array of other images and update xx accordingly. You are welcome to check out my open-source repository with this code.
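The overlap test behind both answers (Android's Rect intersection, and checking whether xx collides with another sprite) is plain axis-aligned rectangle overlap. A generic sketch, in Python purely for illustration (not Android code):

```python
def rects_intersect(ax, ay, aw, ah, bx, by, bw, bh):
    """Axis-aligned rectangle overlap: true unless one box lies
    entirely to the left of, right of, above, or below the other."""
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

# A 50x50 ball at (40, 40) overlaps a 50x50 net at (0, 0)...
print(rects_intersect(40, 40, 50, 50, 0, 0, 50, 50))    # True
# ...but not one at (200, 0).
print(rects_intersect(40, 40, 50, 50, 200, 0, 50, 50))  # False
```

The sizes and positions above are made-up values; in the game they would come from the views' hit rects.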
d7165 | train | So I found the solution to the query I wanted to construct in the end.
The scope looks like the following:
has_many :full_tags, lambda { |low|
where_clause = 'top_tags.top_id = ? or low_tags.low_id = ?'
where_args = [low.top_id, low.id]
if low.middles.any?
where_clause += ' or middle_tags.zone_id IN (?)'
where_args << low.middle_ids
end
unscope(where: :low_id)
.left_joins(:middle_tags, :top_tags, :low_tags)
.where(where_clause, *where_args).distinct
}, class_name: 'Tag'
Calling xxx.full_tags on any instance of low returns the whole collection of tags from every middle it belongs to, plus those from the top it belongs to, plus its own tags, and the distinct makes it a unique collection.
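Stripped of the SQL, what the scope computes is just a deduplicated union of three tag sources. The merging logic, as a generic Python sketch (purely illustrative, not Rails):

```python
def full_tags(own_tags, middle_tags, top_tags):
    """Union of the record's own tags, its middles' tags, and its
    top's tags, with duplicates removed (first occurrence wins)."""
    seen, result = set(), []
    for tag in [*own_tags, *middle_tags, *top_tags]:
        if tag not in seen:
            seen.add(tag)
            result.append(tag)
    return result

print(full_tags(["red"], ["red", "blue"], ["green"]))  # ['red', 'blue', 'green']
```

In the real scope, the database performs this union via the joins and the distinct.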
That being said, it didn't fully fix my problem, because the whole purpose was to pass this scoped has_many relation as an attribute for the Ransack gem to filter my Low models by their full inherited collection of tags.
It was a big disappointment when I discovered that Ransack performs eager loading when it comes to searching on associations:
Rails does not support Eager Loading on scoped associations
So I ended up implementing a whole other solution for my tagging system. But hey, I learned a lot.
d7166 | train | There are many ways to approach such a problem; a simple one is to use a table and PHP's rand() function to set the background of each cell:
<?php
$size=78;
$cellsize=4;
$table="<table cellpadding='$cellsize' cellspacing='1'>";
for($y=0;$y<$size;$y++) {
$table.="<tr>";
for($x=0;$x<$size;$x++) {
// Random color
$r=rand(0,255);
$g=rand(0,255);
$b=rand(0,255);
$table.="<td style='background-color:rgb($r,$g,$b)'></td>";
}
$table.="</tr>\n";
}
$table.="</table>";
print $table;
?>
This code produces an HTML table. You can use other systems (for example GD). | unknown | |
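As a taste of the image-based route (what a library like GD would do) instead of an HTML table, here is a dependency-free sketch that writes the same random-pixel idea out as a plain PPM image. Python is used only for illustration, and the output file name is made up:

```python
import os
import random
import tempfile

def random_ppm(path, width, height):
    """Write a width x height image of random RGB pixels in plain PPM (P3) format."""
    with open(path, "w") as f:
        f.write(f"P3\n{width} {height}\n255\n")
        for _ in range(height):
            row = [str(random.randint(0, 255)) for _ in range(3 * width)]
            f.write(" ".join(row) + "\n")

out = os.path.join(tempfile.gettempdir(), "noise.ppm")
random_ppm(out, 78, 78)  # same 78x78 grid as the PHP example
```

Any image viewer that understands PPM can open the result; a real application would more likely use an image library to emit PNG.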
d7167 | train | Have a look at this; it's in PHP but understandable: https://www.kksou.com/php-gtk2/sample-codes/read-a-text-file-into-GtkTextView.php
d7168 | train | You may use the put method of a Laravel collection.
$collection = collect(['product_id' => 1, 'name' => 'Desk']);
$collection->put('price', 100);
$collection->all();
// ['product_id' => 1, 'name' => 'Desk', 'price' => 100]
A: You can iterate over collections and add fields manually, as with a simple array:
for ($i = 0; $i < count($secondCollection); $i++) {
// Add field to an item.
$secondCollection[$i]['custom_description'] = $firstCollection[$i]['custom_description'];
}
This code should give you an idea; the actual code really depends on the collection's structure.
A: In the case that you were doing this per individual record using the examples given, I think it would be:
$api_product->put($custom_product->toArray()[0]);
If you wanted to combine multiple records you could use the map() method
$api_products_collection->map(function ($item, $key) {
    // combine the record; note Laravel's map callback receives ($item, $key)
    return $item;
});
d7169 | train | Assuming both processes are on the same machine (or at least on machines of the same architecture), the results of std::time() (from <ctime>) will be seconds since the Epoch, and will not need any conversion:
std::time_t seconds_since_epoch = std::time(NULL);
Disclaimer: This is not the best method of IPC, and you will need to lock the file for reading while it is being written, etc. Just answering the question.
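The key property holds in any language: an epoch-seconds integer can be written out as text and read back with no format conversion. A generic round-trip sketch (Python here, purely illustrative; the file name is made up):

```python
import os
import tempfile
import time

def write_timestamp(path):
    """Write the current time as integer seconds since the Epoch."""
    with open(path, "w") as f:
        f.write(str(int(time.time())))

def read_timestamp(path):
    """Read back the epoch-seconds integer; no conversion needed."""
    with open(path) as f:
        return int(f.read())

path = os.path.join(tempfile.gettempdir(), "ts.txt")
write_timestamp(path)
print(read_timestamp(path))  # prints the current epoch seconds, e.g. 1700000000
```

The reading process can be in a different language entirely, as long as both agree the file holds seconds since the Epoch.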
Update, following comment.
If you need to write a timeval, perhaps the easiest way is to define << and >> operators for timeval and read/write it as text in the file, as-is (no byte-ordering worries, no conversion needed):
std::ostream& operator <<(std::ostream& out, timeval const& tv)
{
return out << tv.tv_sec << " " << tv.tv_usec;
}
std::istream& operator >>(std::istream& is, timeval& tv)
{
return is >> tv.tv_sec >> tv.tv_usec;
}
This will allow you to do the following (ignoring concurrency):
// Writer
{
timeval tv;
gettimeofday(&tv, NULL);
std::ofstream timefile(filename, std::ofstream::trunc);
timefile << tv << std::endl;
}
// Reader
{
timeval tv;
std::ifstream timefile(filename);
timefile >> tv;
}
If both processes are running concurrently, you'll need to lock the file. Here's an example using Boost:
// Writer
{
timeval tv;
gettimeofday(&tv, NULL);
file_lock lock(filename);
scoped_lock<file_lock> lock_the_file(lock);
std::ofstream timefile(filename, std::ofstream::trunc);
timefile << tv << std::endl;
timefile.flush();
}
// Reader
{
timeval tv;
file_lock lock(filename);
sharable_lock<file_lock> lock_the_file(lock);
std::ifstream timefile(filename);
timefile >> tv;
std::cout << tv << std::endl;
}
...I've omitted the exception handling (when the file does not exist) for clarity; you'd need to add this to any production-worthy code. | unknown | |
d7170 | train | Finally found the solution here :
Applying .gitignore to committed files
It's apparently because some of the files inside had already been committed once, so they are in the repo.
If I understand it right, that means it's very important to modify the .gitignore before committing any file inside; otherwise it can become a mess!
d7171 | train | You need to read more literature. In particular on:
*
*Color Moments
*k-means clustering
Without reading this literature, you will not be able to understand the article you linked. | unknown | |
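To give a concrete feel for the second topic, here is a minimal 1-D sketch of the k-means (Lloyd's algorithm) iteration in Python. It is illustrative only and unrelated to the linked article's actual implementation:

```python
def kmeans_1d(points, centers, iterations=10):
    """Lloyd's algorithm on scalars: assign each point to its nearest
    center, then move each center to the mean of its assigned points."""
    for _ in range(iterations):
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        # empty clusters keep their old center
        centers = [sum(ps) / len(ps) if ps else c for c, ps in clusters.items()]
    return sorted(centers)

print(kmeans_1d([1, 2, 3, 10, 11, 12], [0, 5]))  # [2.0, 11.0]
```

Real image-segmentation use runs the same loop on 3-D color vectors (or on color-moment features) instead of scalars.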
d7172 | train | use android.intent.action.VIEW instead of Intent.ACTION_MAIN as:
Intent startupIntent = new Intent();
ComponentName distantActivity= new ComponentName("YOUR_CAMRA_APP_PACKAGE","YOUR_CAMRA_APP_PACKAGE.ACTIVTY_NAME");
// LIKE IN LG DEVICE WE HAVE AS
//ComponentName distantActivity= new //ComponentName("com.lge.camera","com.lge.camera.CameraApp");
startupIntent .setComponent(distantActivity);
startupIntent .setAction("android.intent.action.VIEW");
startActivity(startupIntent); | unknown | |
d7173 | train | You can use @include "file" to import files.
e.g. Create a file named func_lib:
function abs(x){
return ((x < 0.0) ? -x : x)
}
Then include it with awk:
awk '@include "func_lib"; { ...calls to "abs" .... }' file
A: Also try
$ cat function_lib.awk
function abs(x){
return ((x < 0.0) ? -x : x)
}
call function like this
$ awk -f function_lib.awk --source 'BEGIN{ print abs(-1)}' | unknown | |
d7174 | train | You're sort of doing it wrong. When checking if a script source can be loaded, there are built in onload and onerror events, so you don't need try / catch blocks for that, as those are for errors with script execution, not "404 file not found" errors, and the catch part will not be executed by a 404 :
var jq = document.createElement('script');
jq.onload = function() {
// script loaded successfully
}
jq.onerror = function() {
// script error
}
jq.type = 'text/javascript';
jq.src = 'http://127.0.0.1:9666/jdcheck.js';
document.getElementsByTagName('head')[0].appendChild(jq); | unknown | |
d7175 | train | Here is an alternative solution, there are many packages for merging pdf files.
Here is how you can use one of the many pdf merging packages.
const PDFMerge = require('pdf-merge');
const files = [
`${__dirname}/1.pdf`,
`${__dirname}/2.pdf`
];
const finalFile = `${__dirname}/final.pdf`;
Here is how you can print multiple pages and then merge them.
// go to the first page and save a pdf file
await page.goto('http://example1.com', {waitUntil: 'networkidle'});
await page.pdf({path: files[0], format: 'A4', printBackground: true})
// go to the second page and save a pdf file
await page.goto('http://example2.com', {waitUntil: 'networkidle'});
await page.pdf({path: files[1], format: 'A4', printBackground: true})
// merge the two of them and save to another file
await PDFMerge(files, {output: finalFile});
It's all about how you take advantage of your resources.
A: var fs = require('fs');
var pdf = require('html-pdf');
var html = fs.readFileSync('./index.html', 'utf8'); // read a local HTML file; readFileSync cannot fetch URLs
var options = {
format: 'A4',
"border": {
"top": "0.2in", // default is 0, units: mm, cm, in, px
"bottom": "1in",
"left": "0.1cm",
"right": "0.1cm"
},
};
pdf.create(html, options).toFile('./google.pdf', function(err, res) {
if (err) return console.log(err);
console.log(res); // { filename: '/app/businesscard.pdf' }
});
You have to install html-pdf first, then use the above code. For more information about the conversion, check this link: https://www.npmjs.com/package/html-pdf
d7176 | train | Assuming that you're using Spring Boot you can try:
spring.transaction.defaultTimeout=1
This property sets defaultTimeout for transactions to 1 second.
(Looking at the source code of TransactionDefinition it seems that it is not possible to use anything more precise than seconds.)
See also: TransactionProperties
javax.persistence.query.timeout
This is a hint for Query. It is supposed to work if you use it like this:
entityManager.createQuery("select e from SampleEntity e")
.setHint(QueryHints.SPEC_HINT_TIMEOUT, 1)
.getResultList();
See also QueryHints
spring.jdbc.template.query-timeout
Remember that according to the JdbcTemplate#setQueryTimeout javadoc:
Any timeout specified here will be overridden by the remaining transaction timeout when executing within a transaction that has a timeout specified at the transaction level.
hibernate.c3p0.timeout
I suspect that this property specifies timeout for getting from the connection pool, not for a query execution | unknown | |
d7177 | train | You have not initialized the variable terms, so it remains null. Therefore the condition cmd == terms is always false and you never enter the if statement.
Split the line termsItem.setDefaultCommand(new Command("terms", Command.ITEM, 1)); into two:
terms = new Command("terms", Command.ITEM, 1);
termsItem.setDefaultCommand(terms);
Now you have a chance.
BTW, why not debug your program? Run it in the emulator, put a breakpoint in commandAction and see what happens.
d7178 | train | From the administrator, go to User Manager
At the top right, you'll see Options
That's where you set the user/registration options | unknown | |
d7179 | train | The browser DOES NOT convert pre-processed (LESS, SCSS, Compass) CSS rules.
You need to use a build script/compiler BEFORE linking a normal CSS file to your HTML. This process converts SCSS/LESS -> CSS for your browser to render.
You can use Webpack, Grunt, Gulp, or even desktop/GUI tools to do this.
You can also use a javascript parser to inject the final CSS into the page onLoad but this has performance implications and IS NOT recommended. | unknown | |
d7180 | train | Could you not convert it to a JSON Array and then use it directly in Javascript, rather than picking out individual elements of the array?
<script>
var myArray = <?php echo json_encode($resultsArr); ?>;
</script>
Then use jQuery each to read the array.
This would give you greater flexibility in the long term over what is available to JavaScript for reading and manipulation.
EDIT
You can read a specific element like so; this will alert "vv":
<script>
var myVar = myArray[111].A;
alert(myVar);
</script>
A: In PHP use:
$ResultsArr = json_encode($resultsArr);
$this->jsonResultsArr = $ResultsArr; // it seems you are using Smarty
In JavaScript:
jsonResultsArr = "~$jsonResultsArr`";
requireValue = jsonResultsArr[111].A; | unknown | |
d7181 | train | There is no need to use if and rewrite here; a plain return is enough:
return 301 $scheme://domain2.com$request_uri; | unknown | |
d7182 | train | Long-ish story short:
This is a macro that expands to a set of gcc attributes. They are a way of providing the compiler with special information about various stuff in your code, like, in this case, a function.
Different compilers have different syntaxis for this purpose, it isn't standartized. For example, gcc uses attributes, but other compilers use different constructs.
Long-ish story long-ish:
So, I'm no Linux kernel expert, but judging by the source code, this macro is used for Hotplug. I believe it signifies that the function should do something with a specific device exiting.
For example, the function you provided seems to be from the set of Hotplug functions for working with a Realtek PCI-Express card reader driver.
What does that macro actually do? Well, let's take a closer look at the macro's definition:
#define __devexit __section(.devexit.text) __exitused __cold
The first part is __section(.devexit.text):
# define __section(S) __attribute__ ((__section__(#S)))
As you can see, this creates an __attribute__(__section__()) with the section name being ".devexit.text". This means that gcc will compile the assembly code of a function with this attribute into a named section in the compiled binary with the name .devexit.text (instead of the default section).
The second part is __exitused (defined to something only if the MODULE macro is defined):
#define __exitused __used
And __used is, depending on the gcc version, defined either like this:
# define __used __attribute__((__used__))
or like this:
# define __used __attribute__((__unused__))
The former makes sure the function that has this attribute is compiled even if it is not referenced anywhere. The latter suppresses compiler warnings in the same case, although it doesn't affect the compilation in any way.
And, finally, __cold:
#define __cold __attribute__((__cold__))
This is an attribute that informs the compiler that the function with this attribute is not going to be called often, so that it can optimize accordingly.
Sooo, what do we have in the end? Looks like functions marked with __devexit are just functions that aren't called often (if called at all), and stuffed into a named section.
All the source code was taken from here. It looks like the macro has now actually been removed from the Linux Kernel.
A: "...It is most likely just an annotation..." --barak manos
Eureka! It turns out that the mystery element is maybe called an annotation, which adds extra information about a function. This extra information can be checked by the compiler to catch bugs that might otherwise go unnoticed.
Edit: @MattMcNabb says it's not an annotation. Added uncertainty.
A: These attributes were used in the Linux Kernel on certain driver functions and data declarations, putting them in a separate section that could be discarded under certain circumstances.
However, they are no longer used (or defined) from 3.10.x onward. See: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=54b956b903607 | unknown | |
d7183 | train | The output can be saved as a txt file this way. You can also subset the object created with the alpha function using the $ operator to get only the information you are interested in.
setwd("~/Desktop")
out <- psych::alpha(d)
capture.output(out,file = "alpha.txt")
A: As is true of everything R, there are many ways of doing what you want to do. The first thing is to look at the help menu for the function (in this case ?alpha). There you will see that a number of objects are returned from the alpha function. (This is what is listed in Values part of the help file.)
When you print the output of alpha you are shown just a subset of these objects. However, to see the entire list of objects that are returned, use the "str" command
my.results <- alpha(my.data)
str(my.results) # or just list the names of the objects
names(my.results)
[1] "total" "alpha.drop" "item.stats" "response.freq" "keys" "scores" "nvar" "boot.ci"
[9] "boot" "Unidim" "Fit" "call" "title"
You can then choose to capture any of those objects for your own use.
Thus
my.alpha <- alpha(ability) #use the ability data set in the psych package
my.alpha # will give the normal (and nicely formatted) output
totals <- my.alpha$total #just get one object from my.alpha
totals #show that object
will produce a single line (without the fancy output):
raw_alpha std.alpha G6(smc) average_r S/N ase mean sd
0.8292414 0.8307712 0.8355999 0.2347851 4.909159 0.006384736 0.5125148 0.2497765
You can do this for any of the objects returned. Most of us who write packages print what we think are the essential elements of the output of the function, but include other useful information. We also allow for other functions (such as summary) to print out other information.
So, using the example from above,
summary(my.alpha) #prints the rounded to 2 decimals my.alpha$total object
Reliability analysis
raw_alpha std.alpha G6(smc) average_r S/N ase mean sd
0.83 0.83 0.84 0.23 4.9 0.0064 0.51 0.25
A final word of caution. Many of us do not find alpha a particularly useful statistic to describe the structure of a scale. You might want to read the tutorial on how to find coefficient omega using the psych package at
http://personality-project.org/r/psych/HowTo/R_for_omega.pdf | unknown | |
d7184 | train | It actually works exactly as you explained. You just call predict with model and iterator:
preds = predict(model, test.iter)
The only trick here is that the predictions are displayed column-wise. By that I mean, if you take the whole sample you are referring to, execute it and add the following lines:
test.iter <- CustomCSVIter$new(iter = NULL, data.csv = "mnist_train.csv", data.shape = 28, batch.size = batch.size)
preds = predict(model, test.iter)
preds[,1] # index of the sample to see in the column position
You receive:
[1] 5.882561e-11 2.826923e-11 7.873914e-11 2.760162e-04 1.221306e-12 9.997239e-01 4.567645e-11 3.177564e-08 1.763889e-07 3.578671e-09
This shows the softmax output for the 1st element of the training set. If you try to print everything by just writing preds, you will see only empty values because of the RStudio print limit of 1000; the real data will have no chance to appear.
Notice that I reuse the training data for prediction. I do so since I don't want to adjust the iterator's code, which needs to be able to consume data with and without a label in front (training and test sets). In a real-world scenario you would need to adjust the iterator so it works with and without a label.
d7185 | train | The issue is that the resize() event fires once for every pixel the window is resized, so you're attaching multiple click handlers while the resize occurs. You just need to move the click outside the resize handler and use a delegated event handler. Try this:
$(window).resize(function() {
if ($(window).width() <= 768 && $('body').hasClass('page-search-ads') && !$('#-clasifika-results-simple-search-form img').hasClass('funnel')) {
$('#-clasifika-results-simple-search-form').append("<img class='funnel' src='" + Drupal.settings.basePath + "sites/all/themes/clasifika/images/filter.png'/>");
}
});
$('#-clasifika-results-simple-search-form').on('click', '.funnel', function(){
$('.vehicle-cat, .vehicle-brand, .city-name-filter, .vehicle-mileage, .overall-cat, .city-name, .boat-bed, .boat-type, .boat-brand, .nautical-length, .overall-year, .airplane-type, .fashion-cat, .airplane-brand, .airframe-time, .propeller-hours, .monthly-salary, .amount-slider, .area-slider').slideToggle();
console.log("funnel click");
});
I'd also suggest you look at using a common class or single containing element to group all the elements in the click handler, as that's about the biggest selector I've ever seen. Also, CSS media queries may be a better solution for the resize() logic too, assuming you're able to just show/hide the relevant element. | unknown | |
d7186 | train | Inside the AppServiceProvider I put the custom validation:
public function boot()
{
Validator::extend('image64', function ($attribute, $value, $parameters, $validator) {
$type = explode('/', explode(':', substr($value, 0, strpos($value, ';')))[1])[1];
if (in_array($type, $parameters)) {
return true;
}
return false;
});
Validator::replacer('image64', function($message, $attribute, $rule, $parameters) {
return str_replace(':values',join(",",$parameters),$message);
});
}
and in validation.php I put:
'image64' => 'The :attribute must be a file of type: :values.',
Now I can use this when validating the request:
'image' => 'required|image64:jpeg,jpg,png'
credits to https://medium.com/@jagadeshanh/image-upload-and-validation-using-laravel-and-vuejs-e71e0f094fbb
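The heart of the validator is pulling the subtype out of the data-URI header (`data:image/png;base64,...`). That string surgery, as a generic Python sketch (illustrative only, not Laravel/PHP):

```python
def data_uri_subtype(value):
    """Extract 'png' from 'data:image/png;base64,...' - the same
    parsing the PHP validator above performs with explode/substr."""
    header = value.split(";", 1)[0]  # 'data:image/png'
    mime = header.split(":", 1)[1]   # 'image/png'
    return mime.split("/", 1)[1]     # 'png'

print(data_uri_subtype("data:image/png;base64,iVBORw0KGgo="))  # png
# The validator then simply checks membership in the allowed list:
print(data_uri_subtype("data:image/jpeg;base64,/9j/4AAQ") in ("jpeg", "jpg", "png"))  # True
```

Note this trusts the declared MIME type; it does not inspect the decoded bytes.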
A: If you only want to validate that the uploaded file is an image type:
$this->validate($request, [
'name' => 'required|max:255',
'price' => 'required|numeric',
'cover_image' => 'required|image'
]);
The file under validation must be an image (jpeg, png, bmp, gif, or
svg)
Laravel 5.6 image validation rule | unknown | |
d7187 | train | I assume you want the pagination to work for the user, so it should be done server-side. Paginating content that has already been downloaded doesn't make much sense (unless you only care about the feel of it).
*
*Before showing the list - get the optimum length for a single page
*Put it (with a bit of js) in the URL as a parameter
*Paginate with this setting like in the old times
To determine the number:
Make the button that lets the user go to the list a 1-element listview.
Get the window height, subtract the height of the header and footer, divide by the height of that 1 element, and put the result as a parameter in the link.
done | unknown | |
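The page-length computation described above is simple arithmetic; a sketch with made-up pixel numbers (Python, purely for illustration):

```python
def items_per_page(window_h, header_h, footer_h, item_h):
    """Usable height divided by one item's height, floored, at least 1."""
    usable = window_h - header_h - footer_h
    return max(1, usable // item_h)

# e.g. an 800px-tall window with a 60px header, 40px footer and 44px rows:
print(items_per_page(800, 60, 40, 44))  # 15
```

That computed number is what would be put into the URL as the per-page parameter for the server-side pagination.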
d7188 | train | You can use the Text property of the ComboBox control to show default text.
Try:
ComboBox1.Text="Select Email Use";
It will be shown by default.
A: I think you have to draw the string yourself. Here is working code for you; there is a small flicker issue (the string flickers slightly when the mouse hovers over the combobox, and even enabling DoubleBuffered doesn't help), but I think it's acceptable:
public partial class Form1 : Form {
public Form1(){
InitializeComponent();
comboBox1.HandleCreated += (s,e) => {
new NativeComboBox{StaticText = "Select Email Use"}
.AssignHandle(comboBox1.Handle);
};
}
public class NativeComboBox : NativeWindow {
public string StaticText { get; set; }
protected override void WndProc(ref Message m)
{
base.WndProc(ref m);
if (m.Msg == 0xf)//WM_PAINT = 0xf
{
var combo = Control.FromHandle(Handle) as ComboBox;
if (combo != null && combo.SelectedIndex == -1)
{
using (Graphics g = combo.CreateGraphics())
using (StringFormat sf = new StringFormat { LineAlignment = StringAlignment.Center })
using (Brush brush = new SolidBrush(combo.ForeColor))
{
g.DrawString(StaticText, combo.Font, brush, combo.ClientRectangle, sf);
}
}
}
}
}
} | unknown | |
d7189 | train | Here's how I'd go about it with base plotting functions. It wasn't entirely clear to me whether you need the "background" polygon to be differences against the state polygon, or whether it's fine for it to be a simple rectangle that will have the state poly overlain. Either is possible, but I'll do the latter here for brevity/simplicity.
library(rgdal)
library(raster) # for extent() and crs() convenience
# download, unzip, and read in shapefile
download.file(file.path('ftp://ftp2.census.gov/geo/pvs/tiger2010st/09_Connecticut/09',
'tl_2010_09_state10.zip'), f <- tempfile(), mode='wb')
unzip(f, exdir=tempdir())
ct <- readOGR(tempdir(), 'tl_2010_09_state10')
# define albers and project ct
# I've set the standard parallels inwards from the latitudinal limits by one sixth of
# the latitudinal range, and the central meridian to the mid-longitude. Lat of origin
# is arbitrary since we transform it back to longlat anyway.
alb <- CRS('+proj=aea +lat_1=41.13422 +lat_2=41.86731 +lat_0=0 +lon_0=-72.75751
+x_0=0 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs')
ct.albers <- spTransform(ct, alb)
# expand bbox by 10% and make a polygon of this extent
buf <- as(1.2 * extent(ct.albers), 'SpatialPolygons')
proj4string(buf) <- alb
# plot without axes
par(mar=c(6, 5, 1, 1)) # space for axis labels
plot(buf, col='white', border=NA)
do.call(rect, as.list(c(par('usr')[c(1, 3, 2, 4)], col='gray90')))
# the above line is just in case you needed the grey bg
plot(buf, add=TRUE, col='white', border=NA) # add the buffer
plot(ct.albers, add=TRUE, col='gray90', border=NA)
title(xlab='Longitude')
title(ylab='Latitude', line=4)
Now, if I understand correctly, despite being in a projected coordinate system, you want to plot axes that are in the units of another (the original) coordinate system. Here's a function that can do that for you.
[EDIT: I've made some changes to the following code. It now (optionally) plots the grid lines, which are particularly important when plotting axis in units that are in a different projection to the plot.]
axis.crs <- function(plotCRS, axisCRS, grid=TRUE, lty=1, col='gray', ...) {
require(sp)
require(raster)
e <- as(extent(par('usr')), 'SpatialPolygons')
proj4string(e) <- plotCRS
e.ax <- spTransform(e, axisCRS)
if(isTRUE(grid)) lines(spTransform(gridlines(e.ax), plotCRS), lty=lty, col=col)
axis(1, coordinates(spTransform(gridat(e.ax), plotCRS))[gridat(e.ax)$pos==1, 1],
parse(text=gridat(e.ax)$labels[gridat(e.ax)$pos==1]), ...)
axis(2, coordinates(spTransform(gridat(e.ax), plotCRS))[gridat(e.ax)$pos==2, 2],
parse(text=gridat(e.ax)$labels[gridat(e.ax)$pos==2]), las=1, ...)
box(lend=2) # to deal with cases where axes have been plotted over the original box
}
axis.crs(alb, crs(ct), cex.axis=0.8, lty=3)
A: This is because coord_map, or more generally non-linear coordinates, internally interpolates vertices so that a line is drawn as a curve corresponding to the coordinate system.
In your case, interpolation will be performed between a point of the outer rectangle and a point of the inner edge, which you see as the break.
You can change this by:
co2 <- co
class(co2) <- c("hoge", class(co2))
is.linear.hoge <- function(coord) TRUE
plot + layer1 + layer3 + co2
You can also find the difference in behavior here:
ggplot(data.frame(x = c(0, 90), y = 45), aes(x, y)) + geom_line() + co + ylim(0, 90)
ggplot(data.frame(x = c(0, 90), y = 45), aes(x, y)) + geom_line() + co2 + ylim(0, 90) | unknown | |
d7190 | train | You have to set the DB credentials in the .env file. It's in the root of your project. If it does not exist, you can rename .env.example and make changes to it.
Based on your code (interacting with the database in a view is not standard in an MVC framework, at least), I think it's better to get familiar with Laravel first. There are a lot of resources for learning it.
d7191 | train | The Kinect for Windows SDK v1.7 introduced Grip recognition for up to four hands simultaneously, which includes new controls for WPF.
I suggest you download that version of the SDK in case you are not using it yet, and check the documentation for details of its usage and capabilities.
Source: kinectingforwindows.com
Source: blog.msdn.com | unknown | |
d7192 | train | The answer will probably not be relevant to many people, but as Anton pointed out, it is an issue to do with the promise loading asynchronously.
I had an event that was calling the same promise at the same time. As soon as I removed the trigger for that event, I stopped getting errors.
d7193 | train | Looks like the first one is submitting the whole script as a single batch via JDBC, whereas the second appears to send each SQL statement individually via sqlcmd. Hence the print statements succeed and produce synchronized output (which is not always guaranteed with print; raiserror(str, 10, 1) with nowait; is the only guarantee of timely messaging), and both stored-procedure calls are attempted, each producing its own (SQL) error.
d7194 | train | MassTransit now has an experimental feature to process individual messages in a batch.
Configure your bus:
_massTransitBus = Bus.Factory.CreateUsingRabbitMq(
cfg =>
{
var host = cfg.Host(new Uri("amqp://@localhost"),
h =>
{
    h.Username("");
    h.Password("");
});
cfg.ReceiveEndpoint(
host,
"queuename",
e =>
{
e.PrefetchCount = 30;
e.Batch<MySingularEvent>(
ss =>
{
ss.MessageLimit = 30;
ss.TimeLimit = TimeSpan.FromMilliseconds(1000);
ss.Consumer(() => new BatchSingularEventConsumer());
});
});
});
And Create your Consumer:
public class BatchSingularEventConsumer: IConsumer<Batch<MySingularEvent>>
{
public Task Consume(ConsumeContext<Batch<MySingularEvent>> context)
{
Console.WriteLine($"Number of messages consumed {context.Message.Length}");
return Task.CompletedTask;
}
}
You can configure your Batch with a Message Limit and a Time Limit.
I suggest reading Chris Patterson's issue on the matter Batch Message Consumption especially the part regarding prefetch
The batch size must be less than or equal to any prefetch counts or concurrent message delivery limits in order reach the size limit. If other limits prevent the batch size from being reached, the consumer will never be called.
Batch consumption is also documented on the MassTransit website.
A: As it turns out, today you can do this:
public class MyConsumer : IConsumer<Batch<MyMessage>>
{
public async Task Consume(ConsumeContext<Batch<MyMessage>> context)
{
...
}
} | unknown | |
d7195 | train | Every time your Android app sends a request to AWS Lambda (via AWS API Gateway I assume) the Lambda function will have to download the entire index file from S3 to the Lambda /tmp directory (where Lambda has a 512MB limit) and then perform a search against that index file. This seems extremely inefficient, and depending on how large your index file is, it might perform terribly or it might not even fit into the space you have available on Lambda.
I would suggest looking into the AWS Elasticsearch Service. This is a fully managed search engine service, based on Lucene, that you should be able to query directly from your Android application.
A: As you already have your index files in S3, you can direct your Lucene IndexReader to point to a location on S3.
String index = "/<BUCKET_NAME>/<INDEX_LOCATION>/";
String endpoint = "s3://s3.amazonaws.com/";
Path path = new com.upplication.s3fs.S3FileSystemProvider().newFileSystem(URI.create(endpoint), env).getPath(index);
IndexReader reader = DirectoryReader.open(FSDirectory.open(path))
You can either pass in client credentials via env or assign a role to your Lambda function.
Ref:
https://github.com/prathameshjagtap/aws-lambda-s3-index-search/blob/master/lucene-s3-searcher/src/com/printlele/SearchFiles.java
A: For Lucene indices less than 512MB you can experiment with lucene-s3directory.
As Mark said, on AWS Lambda you are limited to 512MB on /tmp. I think having a completely serverless search service is very desirable but until that limit is gone, we're stuck with EC2 for production deployments. Once you go with running Lucene on EC2, storing the index on S3 becomes pointless as you have access to EBS or ephemeral storage.
In case you want to try out S3Directory, here's how to get started:
S3Directory dir = new S3Directory("my-lucene-index");
dir.create();
// use it in your code in place of FSDirectory, for example
dir.close();
dir.delete(); | unknown | |
d7196 | train | If I understand your question properly, you're trying to use the "Exclude Pattern" to exclude certain values from appearing in the chart.
The "Exclude Pattern" and "Include Pattern" fields are for Regular Expressions and are documented here: http://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html.
If you want to exclude multiple fields, you could do something like this:
term1|term2|term3
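The same alternation works in any Java-style regex engine. A quick Python sketch of what that pattern matches — the term names are just the placeholders from the answer, and the filtering here is only a stand-in for what Kibana does with the Exclude Pattern:

```python
import re

# Alternation pattern, as it would go into Kibana's Exclude Pattern field
pattern = re.compile(r"term1|term2|term3")

buckets = ["term1", "term4", "term2", "other"]
# Terms fully matching the pattern are dropped, the rest are kept
kept = [b for b in buckets if not pattern.fullmatch(b)]
print(kept)  # ['term4', 'other']
```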
A: The query field in Kibana uses Lucene syntax which has some info at http://www.lucenetutorial.com/lucene-query-syntax.html.
To exclude a term containing specific text, use
-field: "text"
to exclude different texts, I use
-field: ("text1" or "text2")
If it's two separate fields, try
-field1: "text1" -field2: "text2"
A: in newer version of kibana if you want to exclude some term use this:
not field : "text"
if you want to exclude a phrase use this:
not field : "some text phrase"
You can use other logical operations with not:
field: "should have phrase" and not field: "excluded phrase"
A: https://www.elastic.co/guide/en/kibana/master/kuery-query.html
To match documents where response is 200 but extension is not php or css.
response:200 and not (extension:php or extension:css)
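The boolean logic in that query is ordinary set filtering. A small Python sketch over made-up documents mirrors `response:200 and not (extension:php or extension:css)`:

```python
# Hypothetical documents standing in for Elasticsearch hits
docs = [
    {"response": 200, "extension": "php"},
    {"response": 200, "extension": "html"},
    {"response": 404, "extension": "css"},
]

# Keep docs where response is 200 AND extension is NOT php or css
hits = [d for d in docs
        if d["response"] == 200 and d["extension"] not in ("php", "css")]
print(hits)  # [{'response': 200, 'extension': 'html'}]
```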
A: So in the query bar above the visualization, you can use Lucene syntax to exclude the hits; once saved, this performs the same as using regex or Lucene syntax in the Exclude field of the bucket's advanced options.
d7197 | train | Try starting with a defining diagram that helps you identify the problem you're trying to solve, then continue through the remaining steps of the problem-solving process. This will yield a much better result than jumping immediately to coding and posting algorithm questions on Internet forums (which, by the way, violates the Honor Code).
Good luck! | unknown | |
d7198 | train | As Hadley notes in Advanced R:
Attributes should generally be thought of as ephemeral. For example, most attributes are lost by most operations.
But one option to keep your labels would be to make use of a helper function which first saves the label attribute and resets it afterwards:
library(dplyr)
to_na <- function(x, rn) {
label <- attr(x, "label")
levels <- levels(x)
x <- as.character(x)
x[rn == 3] <- NA_character_
x <- factor(x, levels = levels)
attr(x, "label") <- label
x
}
test_df <- test_df %>%
dplyr::mutate(dplyr::across(
.cols = dplyr::matches("[G]\\d{1,2}"),
.fns = ~ to_na(.x, random_number)))
test_df
#> # A tibble: 6 × 5
#> random_number G1 G2 G3 G4
#> <int> <fct> <fct> <fct> <fct>
#> 1 1 Often Sometimes Never Never
#> 2 1 Often Sometimes Sometimes Never
#> 3 2 Often Often Often Never
#> 4 2 Sometimes Never Never Never
#> 5 3 <NA> <NA> <NA> <NA>
#> 6 3 <NA> <NA> <NA> <NA>
lapply(test_df, attr, "label")
#> $random_number
#> NULL
#>
#> $G1
#> [1] "Question 1: Do you use R?"
#>
#> $G2
#> [1] "Question 2: Do you use Python?"
#>
#> $G3
#> [1] "Question 3: Do you use SQL?"
#>
#> $G4
#> [1] "Question 4: Do you use PowerBI?" | unknown | |
d7199 | train | (def mymap
(zipmap
(map #(str "NAT-" %) (map first raw-vector-list))
(map #(map (fn [v] (Double/parseDouble v)) %)
(map rest raw-vector-list))))
(pprint (take 1 mymap))
-> (["NAT-1991-09-30" (41.75 42.25 41.25 42.25 3.62112E7 1.03)])
Another version
(def mymap
(map (fn [[date & values]]
[(str "NAT-" date)
(map #(Double/parseDouble %) values)])
;; Drop first non-parsable element in raw-vector-list
;; ["Date" "Open" "High" "Low" "Close" "Volume" "Adj Close"]
(drop 1 raw-vector-list)))
A: So for the tail/rest portion of this data, you are mapping an anonymous map function over a list of strings, and then mapping the type conversion over the elements in each sublist.
(def mymap
(zipmap
(map #(str "NAT-" %) (map first raw-vector-list))
(map #(map (fn [v] (Double/parseDouble v)) %)
(map rest raw-vector-list))))
How can I pull out the type conversion into a function like the one below, and then utilize my custom method?
(defn str-to-dbl [n] (Double/parseDouble n))
This code complains about nested #'s.
(def mymap
(zipmap
(map #(str "NAT-" %) (map first raw-vector-list))
(map #(map #(str-to-dbl %)
      (map rest raw-vector-list))))
d7200 | train | What you want is
.a .b, .c { position: relative; }
.a .b .c expects this
<div class="a">
<div class="b">
<div class="c"></div>
</div>
</div>
Having a comma means the rule applies to .a .b OR .c