_id | partition | text | language | title
---|---|---|---|---|
d2301 | train | The way I would do it is with 2 different controllers for the simple reason of speed and responsiveness. Loading all contacts and filtering isn't as quick as loading the one part only.
However, you can always set in your controller the same return with different data. Such as EmployeeController@index returns view('contacts.index', compact('employeeContacts')), and OfficesController@index returns view('contacts.index', compact('officesContacts'))
EDIT:
Sorry, I have misread it...I thought you wanted to do the filtering in the view. Anyway, my practice is to do it separately, simply because the code is cleaner. If you want to make the whole REST more readable, you can put both resources in a group like so: Route::group(['prefix' => 'contact'], function(){ //routes here// });
So now you will have routes like:
example.com/contact/employees/
example.com/contact/offices/
A: I am not familiar at all with Laravel but since this question is about REST concepts (I have a small background on these) I should give it a try.
Since you are building a RESTful application, you should not think of the consumers of your API as human beings but as machines. IMO the URLs should determine the action that will be performed. Thus, using different URLs for different actions (performing CRUD on a contact - either an Employee or a Directorate or SomethingElseInTheFuture) sounds good to me and fits REST nicely.
Hope this clarifies things for you!
[EDIT]
I believe jannis is right. It should be the verbs (GET, POST, PUT, PATCH etc.) that define the action instead of the URLs. The URLs just represent the resources. My mistake. So both of your points of view are correct. It's just a question of how convenient each approach is for your project (for now and for its near future). IMO, I find #1 (two different RESTful controllers) more approachable.
Cheers and sorry for any misconception! | unknown | |
d2302 | train | *
*For bare web framework and middleware abstraction, please see express
*For Extensive API development and integration, please see loopback
*For enterprise features to manage your deployments, please see strongloop | unknown | |
d2303 | train | I made some changes please see whether it's as per your expected output or not.
Ts file
myFunction(value) {
console.log(value);
if (value == 1) {
this.availableBtn = !this.availableBtn;
}
if (value == 2) {
this.vaccanttoggle = !this.vaccanttoggle;
}
}
HTML File
<div id="myDropdown" *ngIf="availableBtn">
<a href="#">Link 1</a>
<a href="#">Link 2</a>
<a href="#">Link 3</a>
It seems like there is a problem with class="dropdown-content". Remove it and try. | unknown | |
d2304 | train | It is because the queuing nature of animations, every mouser enter and mouse leave operation queues a toggle operation. So if there are quick movement of mouse triggering the enter and leave events then even after the events are over the animations will keep happening.
The solution is to stop and clear previous animations before the toggle is called using .stop()
$(document).ready(function() {
$(".result").hover(function() {
$(this).find(".result-user-facts").stop(true, true).toggle("slow");
});
});
Demo: Fiddle
A: If you want to make things better, just put the .result-user-facts div into a variable if there is only one. Like this:
$(function (){
var container = $(".result");
var item = container.find(".result-user-facts").eq(0);
$(".result").hover(function (){
item.stop().toggle("slow");
});
});
A: Use .stop(true,true)
Stop the currently-running animation on the matched elements.
Fiddle DEMO
$(document).ready(function() {
$(".result").hover(function() {
$(this).find(".result-user-facts").stop(true,true).toggle("slow");
});
}); | unknown | |
d2305 | train | Support for Groovy 4 is coming in Spring Framework 6 and Spring Boot 3. It’s currently available in Spring Boot 3.0.0-M2 which is published to https://repo.spring.io/milestone.
A: You first have to change settings.gradle to add the following:
pluginManagement {
repositories {
maven { url 'https://repo.spring.io/milestone' }
gradlePluginPortal()
}
}
Then I had to modify my build.gradle as follows:
plugins {
// id 'org.springframework.boot' version '2.6.7'
id 'io.spring.dependency-management' version '1.0.11.RELEASE'
id 'groovy'
}
plugins {
id 'org.springframework.boot' version '3.0.0-M2'
}
group = 'com.example'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = javaSrcVersion
targetCompatibility = javaClassVersion
repositories {
mavenCentral()
maven { url("https://repo.spring.io/milestone/")}
}
dependencies {
runtimeOnly('com.h2database:h2')
implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
implementation 'org.springframework.boot:spring-boot-starter-data-rest'
// implementation 'org.codehaus.groovy:groovy'
implementation("org.apache.groovy:groovy:${groovyVersion}")
testImplementation 'org.springframework.boot:spring-boot-starter-test'
implementation('com.google.code.findbugs:jsr305:latest.integration')
implementation group: 'javax.annotation', name: 'javax.annotation-api', version: '1.3.2'
implementation group: 'jakarta.persistence', name: 'jakarta.persistence-api', version: '3.1.0'
implementation group: 'commons-io', name: 'commons-io', version: '2.11.0'
testImplementation("org.testng:testng:${testNgVersion}")
}
tasks.named('test') {
useJUnitPlatform()
} | unknown | |
d2306 | train | I believe you simply need:
int newNum = ( num - 1 ) / 1000 + 1;
This gives you:
0 -> 1
1 -> 1
300 -> 1
979 -> 1
1000 -> 1
1015 -> 2
1999 -> 2
2000 -> 2
2001 -> 3
A: Apparently what I wanted was
int i = 1000;
if (i % 1000 >0){
i = i/1000 + 1;
}
else{
i = i/1000;
}
System.out.print(" " + i);
thanks anyways for helping, I just didn't realize it was a single stupid thing | unknown | |
d2307 | train | Hi one of the options is to use Azure DevOps REST API of Policy Configuration, however you need to construct JObject arrays and use PUT Request.
what you need is on the resetOnSourcePush and maybecreatorVoteCounts | unknown | |
d2308 | train | There is no port defined for a servlet, so there's no place to query. Tomcat can have 26 HTTP connectors listening on 26 different TCP ports. You are trying to be smarter than the system by picking the port number from some HTTP request because HTTP requests of course have a destination port - however that's just that: the destination port used for that particular HTTP request, and it must be known before writing the HTTP request to the socket. Chicken and egg.
By the way, why do you need a port number? I mean, in a reverse-proxy deployment, for example, the port number is only used by the reverse proxy and should not be used to make hyperlinks, for example.
So, here is some advice: the Internet address of your application (protocol, hostname, port) is deployment configuration that cannot be guessed inside the application itself. Similarly, low-level connection details like port numbers are server configuration that still can't be guessed inside the application and must be passed in instead. These pieces of configuration are usually passed via:
*
*a table in the database
*a configuration file on the filesystem
*environment variables
The most recent trend is employing environment variables, that are used to pass configuration bits between programs written in many different languages and deployed in a variety of environments (virtual machines, containers) | unknown | |
d2309 | train | Your question explictly says not to use after, but that's exactly how you do it with tkinter (assuming your function doesn't take more than a couple hundred ms to complete). For example:
def printit():
    if not stopFlag:
        root.after(100,printit)
    ...
def stop():
    global stopFlag
    stopFlag = True
...
printit()
The above will cause printit to be called every 100ms until some other piece of code sets stopFlag to True.
Note: this will not work very well if printit takes more than 100ms. If the function takes two much time, your only choices are to move the function into a thread, or move it to another process. If printit takes 100ms or less, the above code is sufficient to keep your UI responsive.
A: From Python Documentation
from threading import Timer
def hello():
    print "hello, world"
t = Timer(30.0, hello)
t.start() # after 30 seconds, "hello, world" will be printed | unknown | |
d2310 | train | Your entities should look like this :
User.php
class User {
/**
* @ORM\OneToMany(targetEntity="Post", mappedBy="user", cascade={"persist"})
*/
private $posts;
public function addPost(Post $post) {
$post->setUser($this); // Call Post's setter here
$this->posts[] = $post; // Add post to the collection
}
}
Post.php
class Post {
/**
* @ORM\ManyToOne(targetEntity="User", inversedBy="posts")
*/
private $user;
public function setUser(User $user) {
$this->user = $user; // Set post's author
}
}
In that case, you could use cascade if you are creating the user and its posts at the same time : you want to persist both the user and the posts, and attach the posts to the user.
If the user already exists at time you're persisting the post, you just have to set the post's author and persist the latter :
Controller.php
public function editPostAction() {
// ...
$post->setUser($this->getUser());
$em->persist($post);
$em->flush();
// ...
}
By the way, in a One-To-Many relation, the owning side is the Many side, Post in this case.
A: /**
* @ORM\OneToMany(targetEntity="Post", mappedBy="user", cascade={"persist"})
*/
private $posts;
So all $em->persist($user); does is tell the entity manager that it should be managing $user. An entity being managed just means that when you call $em->flush() it will save that entity in its current state to the database, either by creating a new row in all the tables required, or updating the existing ones.
So to actually answer your question.
By adding the cascade={"persist"} to this annotation, the entity manager knows that if this User object is being managed, when a flush call is made, it will also need to perform whatever cascade operations you have defined for all Post objects associated with this User, and save their changes (or create new rows as required) to the database (or delete if you have cascade delete and remove a post from this user's post collection). | unknown | |
d2311 | train | No, there is no ready utils for this in standard java libraries.
BTW, your loop is incorrect and will work infinitely until memory end. You should increment i variable one more time:
for (int i = 1; i < exampleInts.size(); i++) {
int delimiter = 0;
exampleInts.add(i, delimiter);
i++;
}
or change loop conditions to for (int i = 1; i < exampleInts.size(); i+=2) {
A: Try this solution; it works correctly.
List<Integer> exampleInts = new ArrayList<>(Arrays.asList(1, 2, 3, 5,
8, 13, 21));
int size = (exampleInts.size()-1)*2;
for (int i = 0; i < size; i+=2) {
int delimiter = 0;
exampleInts.add(i+1, delimiter);
}
System.out.println(exampleInts); | unknown | |
d2312 | train | String.indexOf() is your friend. Keep in mind that in Oracle counting start at 1, in Java at 0 (zero), so result in Java for your Oracle example will be 14. For docu see at Oracle DB server and Java.
In your specific case you can test it with System.out.println("is on index: " + "vivek Srinivasamoorthy".indexOf("a", 13));
A:
int getInstring(String input, String substr, int start, int occurence) {
char[] tempstring = input.substring(start).replaceAll(substr, "-").toCharArray();
int occurindex = 0;
int counter = 0;
for (int i = 0; i < tempstring.length; i++) {
if (":-".equals(":" + tempstring[i])) {
occurindex++;
counter = i;
}
if (occurindex == occurence) {
break;
}
}
return counter == 0 ? -1 : counter + start + 1;
} | unknown | |
d2313 | train | You could rescale server side with something like imagemagick http://www.imagemagick.org/script/index.php
This has bindings for many different programming languages
A: CSS scaling usually does not reduce the memory footprint. I think it might actually increase it, because the browser has to buffer/cache both the scaled version and the original version of the image.
I think you could use the Canvas API to effectively draw a smaller version of the image and use that instead.
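For example, a rough sketch of the Canvas approach could look like this (the element id and target width are made up for illustration, and error handling is omitted):
var img = document.getElementById('bigImage');  // the original large image (assumed id)
var canvas = document.createElement('canvas');
canvas.width = 200;  // pick whatever display size you actually need
canvas.height = Math.round(200 * img.naturalHeight / img.naturalWidth);  // keep the aspect ratio
var ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0, canvas.width, canvas.height);  // draw the scaled-down copy
img.parentNode.replaceChild(canvas, img);  // show the small canvas instead of the big image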
Also take a look at this question.
Plus, if you know the effective, final size of the image, you could of course do that on the web server and cache the smaller version. This should offer some degree of downwards compatibility. | unknown | |
d2314 | train | jq '.incidents[]| select(.links.policy_id==199383)' file.json
should do it.
Output
{
"links": {
"policy_id": 199383,
"violations": [
69892478
]
},
"incident_preference": "PER_CONDITION_AND_TARGET",
"closed_at": 1519408001909,
"opened_at": 1519407125437,
"id": 17821334
}
{
"links": {
"policy_id": 199383,
"violations": [
69889831
]
},
"incident_preference": "PER_CONDITION_AND_TARGET",
"closed_at": 1519408011851,
"opened_at": 1519406230858,
"id": 17820349
}
{
"links": {
"policy_id": 199383,
"violations": [
69892488
]
},
"incident_preference": "PER_CONDITION_AND_TARGET",
"closed_at": 1519402345676,
"opened_at": 1519401235467,
"id": 17821334
}
From json.org
A string is a sequence of zero or more Unicode characters, wrapped in
double quotes, using backslash escapes. A character is represented as
a single character string. A string is very much like a C or Java
string.)
and
A number is very much like a C or Java number, except that the octal
and hexadecimal formats are not used.
So 199383 is clearly different from "199383". They are number and string respectively.
Note: Emphasis in quoted text is mine. | unknown | |
d2315 | train | You want to use the collect() aggregate function.
Here's a link to its Oracle documentation.
For your case, this would be:
create or replace type names_t as table of varchar2(50);
/
create or replace function join_names(names names_t)
return varchar2
as
ret varchar2(4000);
begin
for i in 1 .. names.count loop
if i > 1 then
ret := ret || ',';
end if;
ret := ret || names(i);
end loop;
return ret;
end join_names;
/
create table tq84_table (
id number,
seq number,
first_name varchar2(20),
last_name varchar2(20)
);
insert into tq84_table values (1, 1, 'John' , 'Walter');
insert into tq84_table values (1, 2, 'Michael', 'Jordan');
insert into tq84_table values (1, 3, 'Sally' , 'May' );
select
t1.id,
t1.seq,
join_names(
cast(collect(t2.first_name || ' ' || t2.last_name order by t2.seq)
as names_t)
)
from
tq84_table t1,
tq84_table t2
where
t1.id = t2.id and
t1.seq != t2.seq
group by t1.id, t1.seq
If you're using Oracle 11R2 or higher, you can also use
LISTAGG, which is a lot simpler (without the necessity of
creating a type or function):
The query then becomes
select listagg(t2.first_name || ' ' || t2.last_name, ',')
within group (order by t2.seq)
over (partition by id) as names
from .... same as above ...
A: This will work not only for 3 columns; it is a general approach.
DECLARE @Names VARCHAR(8000)
SELECT @Names = COALESCE(@Names + ', ', '') + First_Name +' '+Last_Name FROM A
WHERE Seq !=2 and Id IS NOT NULL
select Id,Seq,@Names from A where Seq = 2
print @Names
You need to pass the Seq value so that you can get the records.
Thanks,
Prema | unknown | |
d2316 | train | I am not sure what caused the leak, but if you want to only avoid it you can change your method to:
- (NSString *)capitalizeFirstLetter {
if (self.length == 0) {
return self;
}
return [NSString stringWithFormat:@"%@%@", [self substringToIndex:1].capitalizedString, [self substringFromIndex:1]];
}
Also, you could review the answers here: Need help fixing memory leak - NSMutableString | unknown | |
d2317 | train | From the specification
If the computed value of overflow on a block box is neither visible nor clip nor a combination thereof, it establishes an independent formatting context for its contents.
The creation of formatting context is the main difference
Here is a demo
.box {
border:2px solid;
margin:10px;
}
.box div {
float:left;
width:50px;
height:50px;
background:blue;
}
<div class="box" style="overflow:auto">
<div></div> text
</div>
<div class="box" style="overflow:clip">
<div></div> text
</div>
Notice how in the second case the div remains collapsed, because no block formatting context is created to contain the floated element | unknown | |
d2318 | train | Very much depends on how much control you (want to) have on the html...
For complete layout control (magazine like) there's baker framework.
Or if you need a quick and dirty script auto generate html file with pagination (instapaper like), I'd use css3 multi-column layout, with some js to calculate the column needed. And use something like SwipeView to manage the scrolling.
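A minimal sketch of that column-based idea (the #book container id is assumed, and resizing, margins and images are ignored):
var book = document.getElementById('book');  // assumed container holding the chapter HTML
var pageWidth = book.clientWidth;            // one page = one screen width
book.style.height = window.innerHeight + 'px';     // fix the height so text flows into side-by-side columns
book.style.webkitColumnWidth = pageWidth + 'px';    // old-WebKit prefix, which UIWebView needs
book.style.webkitColumnGap = '0px';
var pageCount = Math.ceil(book.scrollWidth / pageWidth);  // how many "pages" the chapter produced
function goToPage(n) {  // n is zero-based; wire this up to SwipeView or your own swipe handling
    book.style.webkitTransform = 'translateX(' + (-n * pageWidth) + 'px)';
}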
A: This is not trivial, and there are a couple of HTML projects having to do with pagination. The ubiquitous jQuery also includes support for paginating HTML content.
Have look at this S.O. post for more details.
A: You can use UISwipeGestureRecognizer on UIWebView and move to the Page programmatically
Good Luck
A: To do this, you could start with a UIPageViewController and populate each page with a UIWebView, each scrolled down to a certain offset and disable scrolling of the underlying scroll view. | unknown | |
d2319 | train | Collation (sorting order according to natural language) might be what you're looking for
The ICU Library provides such:
http://userguide.icu-project.org/collation/api | unknown | |
d2320 | train | You need to tell EF Core to load related entities. One way is through eager loading:
// notice the Include statement
_Context.RootDataLists.Include(x => x.InsideDatas).First().InsideDatas | unknown | |
d2321 | train | From your question what I understood is that you want to set the dimension of the Client Area. And in SWT lingo it is defined as a rectangle which describes the area of the receiver which is capable of displaying data (that is, not covered by the "trimmings").
You cannot directly set the dimension of Client Area because there is no API for it. Although you can achieve this by a little hack. In the below sample code I want my client area to be 300 by 250. To achieve this I have used the shell.addShellListener() event listener. When the shell is completely active (see the public void shellActivated(ShellEvent e)) then I calculate the different margins and again set the size of my shell. The calculation and resetting of the shell size gives me the desired shell size.
>>Code:
import org.eclipse.swt.SWT;
import org.eclipse.swt.events.ShellEvent;
import org.eclipse.swt.events.ShellListener;
import org.eclipse.swt.layout.GridData;
import org.eclipse.swt.layout.GridLayout;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Menu;
import org.eclipse.swt.widgets.Shell;
public class MenuTest {
public static void main (String [] args)
{
Display display = new Display ();
final Shell shell = new Shell (display);
GridLayout layout = new GridLayout();
layout.marginHeight = 0;
layout.marginWidth = 0;
layout.horizontalSpacing = 0;
layout.verticalSpacing = 0;
layout.numColumns = 1;
shell.setLayout(layout);
shell.setLayoutData(new GridData(SWT.FILL, SWT.FILL, true,true));
final Menu bar = new Menu (shell, SWT.BAR);
shell.setMenuBar (bar);
shell.addShellListener(new ShellListener() {
public void shellIconified(ShellEvent e) {
}
public void shellDeiconified(ShellEvent e) {
}
public void shellDeactivated(ShellEvent e) {
}
public void shellClosed(ShellEvent e) {
System.out.println("Client Area: " + shell.getClientArea());
}
public void shellActivated(ShellEvent e) {
int frameX = shell.getSize().x - shell.getClientArea().width;
int frameY = shell.getSize().y - shell.getClientArea().height;
shell.setSize(300 + frameX, 250 + frameY);
}
});
shell.open ();
while (!shell.isDisposed()) {
if (!display.readAndDispatch ()) display.sleep ();
}
display.dispose ();
}
}
A: If I get you right you should set the size of the inner component to the needed size and use the method pack() (of the frame).
A: import org.eclipse.swt.SWT;
import org.eclipse.swt.graphics.*;
import org.eclipse.swt.widgets.*;
public class SWTClientAreaTest
{
Display display;
Shell shell;
final int DESIRED_CLIENT_AREA_WIDTH = 300;
final int DESIRED_CLIENT_AREA_HEIGHT = 200;
void render()
{
display = Display.getDefault();
shell = new Shell(display, SWT.SHELL_TRIM | SWT.CENTER);
Point shell_size = shell.getSize();
Rectangle client_area = shell.getClientArea();
shell.setSize
(
DESIRED_CLIENT_AREA_WIDTH + shell_size.x - client_area.width,
DESIRED_CLIENT_AREA_HEIGHT + shell_size.y - client_area.height
);
shell.open();
while (!shell.isDisposed())
{
if (!display.readAndDispatch())
{
display.sleep();
}
}
display.dispose();
}
public static void main(String[] args)
{
SWTClientAreaTest appl = new SWTClientAreaTest();
appl.render();
}
}
A: Use computeTrim to calculate the bounds that are necessary to display a given client area. The method returns a rectangle that describes the bounds that are needed to provide room for the client area specified in the arguments.
In this example the size of the shell is set so that it is capable to display a client area of 100 x 200 (width x height):
Rectangle bounds = shell.computeTrim(0, 0, 100, 200);
shell.setSize(bounds.width, bounds.height);
This article describes the terms used by SWT for widget dimensions:
https://www.eclipse.org/articles/Article-Understanding-Layouts/Understanding-Layouts.htm | unknown | |
d2322 | train | The problem with __IPHONE_3_0 and the like is that they are defined even if targeting other iOS versions; they are version identification constants, not constants that identify the target iOS version. Use __IPHONE_OS_VERSION_MIN_REQUIRED
#if __IPHONE_OS_VERSION_MIN_REQUIRED >= __IPHONE_4_0
#elif __IPHONE_OS_VERSION_MIN_REQUIRED >= __IPHONE_3_0
#else
#endif
or even:
#if __IPHONE_OS_VERSION_MIN_REQUIRED >= 40000
#elif __IPHONE_OS_VERSION_MIN_REQUIRED >= 30000
#else
#endif
to get around the bug mentioned in the comments for "How to target a specific iPhone version?" __IPHONE_OS_VERSION_MAX_ALLOWED might also be of use, in limited circumstances.
And, yes, it doesn't matter what device the app will run on. These constants are defined by the compiler and don't exist on the devices. Once the pre-processor runs, no macros are left. Though there are differences in the devices themselves, the iPhone and iPad both run iOS, and that's what you're really targetting.
A: The code you posted is a compiler directive. This means that it will not run on iPad or iPhone. It is handled when you build your app binary. Incidentally, if you're building for iPad, then you are building for 3.2, not 3.0 or 4.0.
If you use 3_2 or 4_2 instead of 3_0 or 4_0 it should work.
Good luck. | unknown | |
d2323 | train | From my top comments ...
*
*From the problem description, the input [.csv] file is a series of lines of the form: key,value.
*Your code should be doing (e.g.): fscanf(file,"%d,%d",&curkey,&currentValue), so you are reading the file incorrectly.
*What you call value, I would call desired_key as it needs to match the key [and not the value] and if a match is found on the key, you want to print the value.
*You have to do this line-by-line if given -s as a command line arg.
*If given -t, you have to reading the file and store the key/value pairs in an array. You need a struct such as: struct row { int key; int value; };
*The problem statement doesn't mention a hash.
*I don't see a need for visited at all. To me, that's some extraneous code from a shortest path algorithm (e.g. Dijkstra's or A*).
*Your code doesn't look at argc/argv at all. So, you can't select the method from -s or -t and get the desired key value to search for.
Here is the corrected code. It is annotated:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int opt_s;
int opt_t;
// key/value item
struct keyval {
int key;
int val;
};
struct keyval *csvdata; // array of all pairs in file
int csvcount; // number of pairs in array
// method_s -- match on a line-by-line basis
int
method_s(FILE *file,int desired_key,int *retval)
{
int curkey;
int curval;
int found = 0;
// read all lines of the form:
// key,val
while (fscanf(file,"%d,%d",&curkey,&curval) == 2) {
found = (curkey == desired_key);
if (found)
break;
}
// return the value if a match
if (found)
*retval = curval;
return found;
}
// load_t -- load up CSV array
void
load_t(FILE *file)
{
int cap = 0;
int curkey;
int curval;
struct keyval *pair;
csvdata = NULL;
csvcount = 0;
while (fscanf(file,"%d,%d",&curkey,&curval) == 2) {
if (csvcount >= cap) {
cap += 10;
csvdata = realloc(csvdata,sizeof(*csvdata) * cap);
if (csvdata == NULL) {
perror("realloc");
exit(1);
}
}
pair = &csvdata[csvcount++];
pair->key = curkey;
pair->val = curval;
}
// trim array to actual size used
csvdata = realloc(csvdata,sizeof(*csvdata) * csvcount);
if (csvdata == NULL) {
perror("realloc");
exit(1);
}
}
// method_t -- match on a stored array basis
int
method_t(FILE *file,int desired_key,int *retval)
{
struct keyval *pair;
int found = 0;
// load up the array
load_t(file);
// loop through the stored array, looking for a match
for (int idx = 0; idx < csvcount; ++idx) {
pair = &csvdata[idx];
found = (pair->key == desired_key);
if (found)
break;
}
// return the value if a match
if (found)
*retval = pair->val;
return found;
}
int
timems(void)
{
struct timespec ts;
long long nsec;
static long long timebase = 0;
clock_gettime(CLOCK_MONOTONIC,&ts);
nsec = ts.tv_sec;
nsec *= 1000000000;
nsec += ts.tv_nsec;
if (timebase == 0)
timebase = nsec;
nsec -= timebase;
nsec /= 1000000;
return nsec;
}
int
main(int argc,char **argv)
{
int err = 0;
--argc;
++argv;
for (; argc > 0; --argc, ++argv) {
char *cp = *argv;
if (cp[0] != '-')
break;
if ((cp[1] == '-') && (cp[2] == 0))
break;
cp += 2;
switch (cp[-1]) {
case 's':
opt_s = 1;
break;
case 't':
opt_t = 1;
break;
default:
err = 1;
break;
}
}
if (opt_s && opt_t)
err = 1;
if (! (opt_s || opt_t))
err = 1;
int desired_key = -1;
if (argc != 1)
err = 1;
else
desired_key = atoi(*argv);
if (err) {
printf("No method defined Proper Syntax is find [ -s | -t ] number\n");
exit(1);
}
FILE *file = fopen("DATA_FILE.csv", "r");
if (file == NULL) {
perror("Error while opening the file");
return 1;
}
int found = 0;
int retval = 0;
int msbeg = timems();
do {
if (opt_s) {
found = method_s(file,desired_key,&retval);
break;
}
if (opt_t) {
found = method_t(file,desired_key,&retval);
break;
}
} while (0);
int msend = timems();
fclose(file);
if (found)
printf("Found %d with data on second column is %d Time elapsed %d ms\n",
desired_key,retval,msend - msbeg);
return 0;
}
Here is the sample input (DATA_FILE.csv) file I used:
46079,96649
71463,62685
81995,47037
50876,79762
95492,88344
45272,72553
41950,80235
81543,99066
67329,78252
79563,36472
63549,83972
94173,58525
81532,25036
39514,94603
90454,25278
33306,95455
85052,5202
49451,58592
69333,80996
94039,34855
Here is the output for ./find -s 67329:
Found 67329 with data on second column is 78252 Time elapsed 0 ms
Here is the output for ./find -t 67329:
Found 67329 with data on second column is 78252 Time elapsed 0 ms
A: /* find_value.c */
#include <stdio.h>
int main()
{
FILE *file;
file = fopen("DATA_FILE.csv", "r");
if (!file)
{
perror("DATA_FILE.csv");
return 1;
}
int n;
int find_value = 32;
// note: the trailing ',' in the format string consumes the column separator so both columns are scanned
while(fscanf(file, "%d,", &n) == 1) {
if(n == find_value) {
printf("%d\n", n);
}
}
fclose(file);
return 0;
} | unknown | |
d2324 | train | In Dialogflow CX, we can select environment before enabling Dialogflow Messenger.
Like the picture below, select the environment in the dropdown when connect Dialogflow Messenger integration so that the it will work with selected environment.
If you already enabled it, then disable it to select environment. | unknown | |
d2325 | train | If your reference is not a Project Reference, but a file reference, you may need to build the first project first. Form the Build menu, select Rebuild All.
If that doesn't help, you may have referenced to the wrong file. Remove the reference to the first project, and add a Project reference to it.
A: Found this one link.
The problem was in the .csproject file. | unknown | |
d2326 | train | This works for Kitkat
public class BrowsePictureActivity extends Activity{
private static final int SELECT_PICTURE = 1;
private String selectedImagePath;
private ImageView imageView;
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.browsepicture);
imageView = (ImageView)findViewById(R.id.imageView1);
((Button) findViewById(R.id.button1))
.setOnClickListener(new OnClickListener() {
public void onClick(View arg0) {
Intent intent = new Intent();
intent.setType("image/*");
intent.setAction(Intent.ACTION_GET_CONTENT);
startActivityForResult(Intent.createChooser(intent,
"Select Picture"), SELECT_PICTURE);
}
});
}
public void onActivityResult(int requestCode, int resultCode, Intent data) {
if (resultCode == RESULT_OK) {
if (requestCode == SELECT_PICTURE) {
Uri selectedImageUri = data.getData();
if (Build.VERSION.SDK_INT < 19) {
selectedImagePath = getPath(selectedImageUri);
Bitmap bitmap = BitmapFactory.decodeFile(selectedImagePath);
imageView.setImageBitmap(bitmap);
}
else {
ParcelFileDescriptor parcelFileDescriptor;
try {
parcelFileDescriptor = getContentResolver().openFileDescriptor(selectedImageUri, "r");
FileDescriptor fileDescriptor = parcelFileDescriptor.getFileDescriptor();
Bitmap image = BitmapFactory.decodeFileDescriptor(fileDescriptor);
parcelFileDescriptor.close();
imageView.setImageBitmap(image);
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
}
}
/**
* helper to retrieve the path of an image URI
*/
public String getPath(Uri uri) {
if( uri == null ) {
return null;
}
String[] projection = { MediaStore.Images.Media.DATA };
Cursor cursor = getContentResolver().query(uri, projection, null, null, null);
if( cursor != null ){
int column_index = cursor
.getColumnIndexOrThrow(MediaStore.Images.Media.DATA);
cursor.moveToFirst();
return cursor.getString(column_index);
}
return uri.getPath();
}
}
You need to add permission
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" /> | unknown | |
d2327 | train | There are many good libraries for working with images in C and C++, none of which is clearly superior to all others. OpenCVwiki, project page has great support for some of these tasks, while ImageMagickwiki, project page is good at others. The JPEG group has its own implementation of JPEG processing functions as well. These are probably good resources to start from; the API documentation can guide you more specifically on how to use each of these.
As for whether C or C++ libraries are bound to be faster, there's no clear winner between the two. After all, you can always compile a C library in C++. That said, C++ libraries tend to be a bit trickier to pick up because of the language complexity, but much easier to use once you've gotten a good feel for the language. (I am a bit biased toward C++, so be sure to consider the source). I'd recommend going with whatever language you find easier for the task; neither is a bad choice here, especially if performance is important.
Best of luck with your project!
A: well for basic image manipulations you could also try Qt's QImage class (and other). This gives you basic functionality for opening, scaling, resizing, cropping, pixel manipulations and other tasks.
Otherwise you could as already said use ImageMagick or OpenCV. OpenCV provides a lot of examples with it for many image manipulation/image recognition tasks...
Hope it helps...
A: here is an example using magick library.
program which reads an image, crops it, and writes it to a new file (the exception handling is optional but strongly recommended):
#include <Magick++.h>
#include <iostream>
using namespace std;
using namespace Magick;
int main(int argc,char **argv)
{
// Construct the image object. Seperating image construction from the
// the read operation ensures that a failure to read the image file
// doesn't render the image object useless.
Image image;
try {
// Read a file into image object
image.read( "girl.jpeg" );
// Crop the image to specified size (width, height, xOffset, yOffset)
image.crop( Geometry(100,100, 100, 100) );
// Write the image to a file
image.write( "x.jpeg" );
}
catch( Exception &error_ )
{
cout << "Caught exception: " << error_.what() << endl;
return 1;
}
return 0;
}
check many more examples here
A: libgd is about the easiest, lightest-weight solution.
gdImageCreateFromJpeg
gdImageCopyMergeGray
gdImageCopyResized
Oh, and it's all C.
A: If running time is a really important thing, then you should consider an image processing library which offloads the processing to the GPU, such as:
*
*Core Image (Osx)
*OpenVIDIA (Windows)
*GpuCV (Windows, Linux) | unknown | |
d2328 | train | I use this to launch the default device player -
Intent intent = new Intent(Intent.ACTION_VIEW,Uri.parse(url));
// use this if you want to launch from a non activity class
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
context.startActivity(intent);
Hope this helps. | unknown | |
d2329 | train | $parts = explode('-', $string);
$parts = array_map('ucfirst', $parts);
$string = lcfirst(implode('', $parts));
You might want to replace the first line with $parts = explode('-', strtolower($string)); in case someone uses uppercase characters in the hyphen-delimited string though.
A: $subject = 'abc-def-xyz';
$results = preg_replace_callback ('/-(.)/', create_function('$matches','return strtoupper($matches[1]);'), $subject);
echo $results;
A: If that works, why not use it? Unless you're parsing a ginormous amount of text you probably won't notice the difference.
The only thing I see is that with your code the first letter is going to get capitalized too, so maybe you could add this:
foreach($parts as $k=>$part)
$new_string .= ($k == 0) ? strtolower($part) : ucfirst($part);
A: str_replace('-', '', lcfirst(ucwords('foo-bar-baz', '-'))); // fooBarBaz
ucwords accepts a word separator as a second parameter, so we only need to pass an hyphen and then lowercase the first letter with lcfirst and finally remove all hyphens with str_replace. | unknown | |
d2330 | train | If z is a zoo series (as stated in the question) then subscripting and window should both work. In the second and third examples we have assumed that the index is of POSIXct class:
z[4, ] # fourth row
window(z, as.POSIXct("2008-04-06 00:03:00"))
window(z, as.POSIXct("2008-04-06")) # assumes time is 00:00:00
Added One can also subscript with a time:
z[as.POSIXct("2008-04-06 00:00:00"), ]
z[as.POSIXct("2008-04-06 00:00:00")] # same
See ?window.zoo for more info. | unknown | |
d2331 | train | Try this and check out this works!!
Replace
file=request.FILES.get('file')
with
files = request.FILES.getlist('file')
you have to loop through each element in your view
if form.is_valid():
    name = form.cleaned_data['name']
    for f in files:
        File.objects.create(name=name, file=f)
    return HttpResponse('OK')
Here name is your model field; this saves all the uploaded files, each with that name | unknown | |
d2332 | train | You cannot specify a custom way of conflict resolution during replication (a.k.a. sync). CouchDB automatically chooses the winning revision, and you cannot influence that:
By default, CouchDB picks one arbitrary revision as the "winner",
using a deterministic algorithm so that the same choice will be made
on all peers.
You can wait for the replication to finish and handle conflicts afterwards, by performing application-specific merging of document revisions.
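In code, that post-replication cleanup can look roughly like the sketch below (assumptions: CouchDB reachable at http://localhost:5984, a database named mydb, and an application-specific merge() function that you supply):
async function resolveConflicts(docId) {
    const base = 'http://localhost:5984/mydb';
    const winner = await (await fetch(base + '/' + docId + '?conflicts=true')).json();
    if (!winner._conflicts) return;  // nothing to resolve
    const losers = await Promise.all(
        winner._conflicts.map(rev => fetch(base + '/' + docId + '?rev=' + rev).then(r => r.json()))
    );
    const merged = merge(winner, losers);  // application-specific merging (you provide this)
    const docs = [
        Object.assign({}, merged, { _id: docId, _rev: winner._rev }),            // update the winning revision
        ...losers.map(l => ({ _id: docId, _rev: l._rev, _deleted: true }))       // delete the losing revisions
    ];
    await fetch(base + '/_bulk_docs', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ docs })
    });
}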
Looking into the documentation for Working with conflicting documents, I found the following pseudocode example:
*
*GET docid?conflicts=true
*For each member in the _conflicts array:
GET docid?rev=xxx
If any errors occur at this stage, restart from step 1.
(There could be a race where someone else has already resolved this
conflict and deleted that rev)
*Perform application-specific merging
*Write _bulk_docs with an update to the first rev and deletes of
the other revs. | unknown | |
d2333 | train | Probably foo/ is missing, so try this:
dpkg-scanpackages --arch arm64 pool/ > dists/stable/main/binary-amd64/Packages
A: I had the same problem. The solution from Chris G. works for me: make sure that the name of your .deb file contains the architecture, like this:
XnViewMP-linux-64_amd64.deb
After this, dpkg-scanpackages work me as expected:
gabor@focal-autoinstall:/var/www/html/repo$ ll pool/main/
total 53664
drwxrwxr-x 2 gabor gabor 4096 May 21 14:54 ./
drwxrwxr-x 3 gabor gabor 4096 May 21 14:46 ../
-rw-rw-r-- 1 gabor gabor 54943400 May 3 13:25 XnViewMP-linux-64_amd64.deb
gabor@focal-autoinstall:/var/www/html/repo$ dpkg-scanpackages --arch amd64 pool/
Package: xnview
Version: 1.00.0
Architecture: amd64
Maintainer: None <[email protected]>
Installed-Size: 16
Depends: libasound2 (>= 1.0.16), libatk1.0-0 (>= 1.12.4), libbz2-1.0, libc6 (>= 2.17), libcairo-gobject2 (>= 1.10.0), libcairo2 (>= 1.2.4), libcups2 (>= 1.4.0), libdbus-1-3 (>= 1.9.14), libdrm2 (>= 2.4.30), libegl1-mesa | libegl1, libfontconfig1 (>= 2.11), libfreetype6 (>= 2.3.5), libgcc1 (>= 1:3.4), libgdk-pixbuf2.0-0 (>= 2.22.0), libgl1-mesa-glx | libgl1, libglib2.0-0 (>= 2.33.14), libgstreamer-plugins-base1.0-0 (>= 1.0.0), libgstreamer1.0-0 (>= 1.4.0), libgtk-3-0 (>= 3.5.18), libopenal1 (>= 1.14), libpango-1.0-0 (>= 1.14.0), libpangocairo-1.0-0 (>= 1.14.0), libpulse-mainloop-glib0 (>= 0.99.1), libpulse0 (>= 0.99.4), libsqlite3-0 (>= 3.5.9), libstdc++6 (>= 5), libx11-6 (>= 2:1.4.99.1), libx11-xcb1, libxcb-shm0, libxcb1 (>= 1.8), libxcb-xinerama0, libxext6, libxfixes3, libxi6 (>= 2:1.5.99.2), libxv1, zlib1g (>= 1:1.2.3.4), libopenal1
Filename: pool/main/XnViewMP-linux-64_amd64.deb
Size: 54943400
MD5sum: cf5aea700b14b50fe657c406f6f84894
SHA1: a27d7a0d17dc11825666c9175b974f51f5e7d69f
SHA256: 6f409eb6d890a827bd382b38a8a9e89eacbad6eb2b5edba01265bd20f2ed3655
Section: graphics
Priority: optional
Homepage: http://www.xnview.com
Description: Graphic viewer, browser, converter.
dpkg-scanpackages: info: Wrote 1 entries to output Packages file.
gabor@focal-autoinstall:/var/www/html/repo$ | unknown | |
d2334 | train | Did you try updating your workloads? After installing VS, if you didn't choose the right workloads to load certain types of projects such as Win32 console projects, go into your programs folder from the control panel and right click on Visual Studio. Choose 'change', not 'uninstall'. The page you are given is your workloads. Read them and decide which one supports that type of program. Check that workload, then in the right sidebar you will see custom options. Not all the ones you need may be automatically checked. Make sure any that refer to the build process are checked. Then click to proceed and VS will install the updates.
For Win32 programs, you need to add the workload called "Desktop development with C++" and then on the Installation Details pane on the right check all boxes that mention a "build". | unknown | |
d2335 | train | __new__() always calls __init__() if the returned instance is of the correct class. This means, every Database() call is still going through and setting a new random value to id.
This id is an instance variable that you have defined, it will not really affect the result of id(), which is why if you do
print(Database() is Database())
it will return True even though the id check returned false. The instance is the same, you just changed id value in between, so they evaluate as False.
In the second case, because the instance is the same, both of them get the value generated by __init__() when you assign d2. You can confirm this by adding a print(d1.id) before and after d2 is assigned.
You need to make use of the initialized boolean to correctly skip the code in __init__()
import random
class Database:
    initialized = False
    def __init__(self):
        if not self.initialized:
            self.id = random.randint(1, 101)
            self.initialized = True
    _instance = None
    def __new__(cls, *args, **kwargs):
        if not cls._instance:
            cls._instance = super(Database, cls).__new__(cls, *args, **kwargs)
        return cls._instance
Take a look at issue with singleton python call two times __init__ if you want to see other implementations of a singleton class or alternative approaches that avoid singleton entirely. | unknown | |
d2336 | train | First what we should do in these situation we should go to the Source where the fliesKmPerHour(); method is coming from.
As we see here this method is coming from Flies interface
What we know here is that this method belongs to the Airoplane class so in the arrayList we should check for instance of Flies Interface and typecast elements to Flies
so then we can call the method
So with the if(warMachineArrayList.get(i) instanceof Flies){ we check for Flies instances in the arrayList
then we try to get the Airoplane that has the hightest speed with
if(((Flies)warMachineArrayList.get(i)).fliesKmPerHour() > speedKmh){
airoplane = (Airoplane) warMachineArrayList.get(i);
speedKmh = ((Flies)warMachineArrayList.get(i)).fliesKmPerHour();
And in the end we return an airoplane object
public Airoplane getMaxAiroplane(){
Airoplane airoplane = null;
int speedKmh = 0;
for(int i=0;i<warMachineArrayList.size();i++){
if(warMachineArrayList.get(i) instanceof Flies){
if(((Flies)warMachineArrayList.get(i)).fliesKmPerHour() > speedKmh){
airoplane = (Airoplane) warMachineArrayList.get(i);
speedKmh = ((Flies)warMachineArrayList.get(i)).fliesKmPerHour();
}
}
}
return airoplane;
}
A: You can find the instance with the desired value by iterating over warMachineArrayList. For each war machine, check its type, cast it, and do what you need. Something like this:
for (WarMachine warMashine : warMachineArrayList ) {
if (warMashine instanceof Airoplane ) {
Airoplane airplane = (Airoplane) warMashine;
// do with airplane anything you need
}
} | unknown | |
d2337 | train | I discovered the convention (couldn't find it the way I was Googling it because of my assumptions about the solution). You put both fields in a FormGroup and add a validator to the group. | unknown | |
d2338 | train | Yes, it is possible. If you write something like this:
XLApp.Selection.Copy
PPSlide.Shapes.PasteSpecial ppPasteOLEObject | unknown | |
d2339 | train | Here's one way:
Create a dict of the expiration to IV_model by finding the min distance between undPrice and strike.
desiredOutcomeMap = df.groupby('expiration').apply(lambda x: df.loc[abs(x['undPrice']-x['strike']).idxmin(), 'IV_model']).to_dict()
Then map it to the original df.
df['desired_outcome'] = df['expiration'].map(desiredOutcomeMap) | unknown | |
d2340 | train | Tensorflow documentation offers amazing tutorial to start with.
You can explore the tutorial here to understand the timeseries using Tensorflow.
Also, refer to some of the blogs mentioned below and use the method which fits your requirement.
*
*Multi-step LSTM time series.
*Multivariate Time series.
*Many to Many LSTM with TimeDistributed. | unknown | |
d2341 | train | You can try override get_or_create method too. Like this:
class VoteQuerySet(models.query.QuerySet):
    def get_or_create(...):
        """Your realization"""
class VoteManager(models.Manager):
    def get_queryset(self):
        return VoteQuerySet(model=self.model, using=self._db, hints=self._hints).filter(is_deleted=False)
A: get_or_create() is a method of QuerySet (check this). One way to solve this problem is to create your own queryset class and override this method so that it does not hide the soft-deleted instances.
Another way is to create a manager and override the all() method, adding a flag to return all items from the database (even soft-deleted instances), like this:
class VoteManager(models.Manager):
    def get_queryset(self):
        return super().get_queryset().filter(is_deleted=False)
    def all(self, force_all=False):
        if force_all:
            return super().get_queryset()  # queryset with all items
        return self.get_queryset()  # queryset without soft-deleted items
Now you can call get_or_create() and soft-deleted instances will not be hidden.
Vote.objects.all(force_all=True).get_or_create(**data)
Vote.objects.all(force_all=True) returns a queryset with all items from database. You can call methods over this queryset like filter(), get_or_create(), update_or_create() and soft-deleted instances will not be hidden. | unknown | |
d2342 | train | Update: It turns out he uses Gmail business to control his domain emails and there was a filter in there that bounced the messages because the sender was the same as the recipient.
Bypassing the Gmail spam filter has fixed the problem.
A: Use this code and replace your smtp server information
<%
Set myMail=CreateObject("CDO.Message")
myMail.BodyPart.Charset = "UTF-8"
myMail.Subject= Your Message Subject
myMail.From= "[email protected]"
myMail.To=Receiver Email Address
myMail.CreateMHTMLBody "Test Email Subject"
myMail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/sendusing")=2
myMail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/smtpserver")= SMTP_SERVER
myMail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/smtpserverport")=25
myMail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/smtpauthenticate") = 1
myMail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/sendusername")=SMTP_Email_Username
myMail.Configuration.Fields.Item ("http://schemas.microsoft.com/cdo/configuration/sendpassword")=Smtp_Email_Password
myMail.Configuration.Fields.Update
myMail.Send
set myMail=nothing
%> | unknown | |
d2343 | train | you could use the InternalsVisibleTo-Attribute to expose your internals to some other assembly.
http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.internalsvisibletoattribute(VS.100).aspx
EDIT
If your assembly is signed, you will also need to sign the Friend assembly, and provide the public key in the InternalsVisibleTo attribute:
[<assembly: InternalsVisibleTo("ProcessorTests, PublicKey=0024000004800...)")>] | unknown | |
d2344 | train | I believe there was an issue with how I was attaching the controllers to the base angular module, but I still can't say for sure. | unknown | |
d2345 | train | Since you're trying to describe a relation between a list, a potential element of that list and its position, why not give the predicate a more descriptive name, e.g. list_element_position/3. Then consider what the relation should describe. Basically there are two cases:
1) The head of the list matches the element. In that case you already know a position for that element. But there might be other occurrences as well, so:
2) Regardless whether the head matches the element or not, we should take a look at the tail of the list as well, since there might be further occurrences.
An accumulator can be used to count the positions. I take it from your first example that you'd like to start counting at 1. Putting all that together you could write something like:
list_element_position(L,E,P) :-
list_element_position_(L,E,P,1). % start counting at 1
list_element_position_([X|Xs],X,P,P). % case 1)
list_element_position_([X|Xs],Y,R,P0) :- % case 2)
P1 is P0+1,
list_element_position_(Xs,Y,R,P1).
Your example queries:
?- list_element_position([a,b,c,d,e],E,3).
E = c ? ;
no
?- list_element_position([a,b,c,d,e],E,X).
E = a,
X = 1 ? ;
E = b,
X = 2 ? ;
E = c,
X = 3 ? ;
E = d,
X = 4 ? ;
E = e,
X = 5 ? ;
no
Multiple occurrences of an element:
?- list_element_position([a,b,c,d,e,a],a,X).
X = 1 ? ;
X = 6 ? ;
no
If you only intend to use the predicate with the first argument being ground, this already works fine. However, if you want to ask questions like: What lists are there with a certain element, say a, at a certain position, say 1?:
?- list_element_position(L,a,1).
L = [a|_A] ? ;
You get one answer and then the predicate loops. You can avoid that by adding a goal with length/2 to list_element_position/3:
list_element_position(L,E,P) :-
length(L,_), % <- here
list_element_position_(L,E,P,1).
The above query now produces additional solutions:
?- list_element_position(L,a,1).
L = [a] ? ;
L = [a,_A] ? ;
L = [a,_A,_B] ? ;
L = [a,_A,_B,_C] ? ;
...
Or even better, you can avoid the loop by using clpfd and adding a goal in list_element_position_/4 to ensure that the accumulator does not become larger than the actual position:
:- use_module(library(clpfd)).
list_element_position(L,E,P) :-
list_element_position_(L,E,P,1).
list_element_position_([X|Xs],X,P,P).
list_element_position_([X|Xs],Y,R,P0) :-
R #> P0, % <- here
P1 #= P0+1,
list_element_position_(Xs,Y,R,P1).
This way the query yields a single solution and terminates subsequently:
?- list_element_position(L,a,1).
L = [a|_A] ? ;
no | unknown | |
d2346 | train | I've never seen anyone use this the way you are in Angular. I always use $scope but you must inject it into your controller. Any properties you add to $scope are available in the view.
app.controller('storeCtrl', ['$scope', function($scope)
{
$scope.showPromo = false;
$scope.showAcc = false;
...
$scope.appendAcc = function () {
$scope.showAcc = 'on';
}
}
You will also need to remove the store prefix from your view and reference properties off of $scope directly like this:
<div ng-show="!showPromo" class="editing">
<div ng-repeat="bike in products">
<input ng-model="bike.name">
<input ng-model="bike.price">
<button ng-click="appendAcc()">add accessories</button>
<div ng-show="showAcc" ng-repeat="accessory in bike.accessories" class="add-parts">
<input class="accessory-input" ng-model="accessory.name">
<input class="accessory-input" ng-model="accessory.price">
<button ng-click="hideAcc()">submit accessories</button>
</div>
</div>
<button ng-click="newBike()" type="submit">Create New</button>
<button ng-click="switchToPromo()">See Promo Screen</button>
</div>
Also, without seeing the rest of your code, I'm not sure why you have your code wrapped in a closure. I don't believe that is necessary, and could cause issues.
A: I think it's probably that accessories is undefined when you create a new bike, so there is nothing to "repeat" in the ng-repeat. I suggest initializing a new bike with
self.newBike = function(named, priced, partName, partPrice) {
self.products.push({
name: named || 'edit bike',
price: priced || 'edit price',
accessories: [{
name: partName || 'add part',
price: partPrice || 'add price'
}]
});
} | unknown | |
d2347 | train | /* I had same issue before few days and resolved with below function */
/* please try this line of code */
$args = array(
'post_type' => 'project',
'posts_per_page' => -1,
'author' => get_current_user_id(),
'name' => get_the_title()
);
$query = new WP_Query($args);
if ($query->have_posts()) {
global $post;
while ($query->have_posts()) {
$query->the_post();
$submitdate = get_field('submitdate', $post->ID ); // if changed field name then update key in this query
echo $post->ID;
echo $submitdate;
}
} | unknown | |
d2348 | train | Python does not support method overloading. The method defined in the end will overwrite all the methods with same name defined earlier.
However, You can make use of Multi Method pattern to achieve this. Please refer Guido's post
A: You are trying to overload a method. whatIsYourName(self) is being over-ridden by whatIsYourName(sel,name). If you are a C++/Java programmer, this might sound normal to you, but unfortunately it's not the same with Python. If you want to display a name, try defining it in a constructor and have it printed. | unknown | |
d2349 | train | Hans has nailed it. Technically, your code is breaking because there's no SynchronizationContext captured by the await. But even if you write one, it won't be enough.
The one big problem with this approach is that your STA thread isn't pumping. STA threads must pump a Win32 message queue, or else they're not STA threads. SetApartmentState(ApartmentState.STA) is just telling the runtime that this is an STA thread; it doesn't make it an STA thread. You have to pump messages for it to be an STA thread.
You can write that message pump yourself, though I don't know of anyone brave enough to have done this. Most people install a message pump from WinForms (a la Hans' answer) or WPF. It may also be possible to do this with a UWP message pump.
One nice side effect of using the provided message pumps is that they also provide a SynchronizationContext (e.g., WinFormsSynchronizationContext / DispatcherSynchronizationContext), so await works naturally. Also, since every .NET UI framework defines a "run this delegate" Win32 message, the underlying Win32 message queue can also contain all the work you want to queue to your thread, so the explicit queue and its "runner" code is no longer necessary.
A: Because after await Task.Delay() statement , your code runs inside one of the ThreadPool thread, and since the ThreadPool threads are MTA by design.
var th = new Thread(async () =>
{
var beforAwait = Thread.CurrentThread.GetApartmentState(); // ==> STA
await Task.Delay(1000);
var afterAwait = Thread.CurrentThread.GetApartmentState(); // ==> MTA
});
th.SetApartmentState(ApartmentState.STA);
th.Start(); | unknown | |
d2350 | train | Use MultiIndex.get_level_values for create conditions, chain together and set new values by f-strings:
m1 = df.index.get_level_values(0) == 'func1'
m2 = df.index.get_level_values(1) == 'In'
df[m1 & m2] = df[m1 & m2].astype(int).applymap(lambda x: f'{x:b}')
print (df)
Val1 Val2 Val3 Val4 Val5
Function Type Name
env In Volt Max Typ Min Max Max
Temp High Mid Low High Low
BD# 1 2 3 4 5
func1 In Name1 11 100 11 11 11
Name2 101 111 110 1001 100
out Name3 6 6 3 4 5
A: By creating mask of the dataframe:
mask = ((df.index.get_level_values('Function') == 'func1')&
(df.index.get_level_values('Type') == 'In')&
(df.index.get_level_values('Name').isin(['Name1', 'Name2'])))
df[mask] = df[mask].astype(int).applymap(lambda x: format(x, 'b'))
print(df[mask])
Val1 Val2 Val3 Val4 Val5
Function Type Name
env In Volt Max Typ Min Max Max
Temp High Mid Low High Low
BD# 1 2 3 4 5
func1 In Name1 11 100 11 11 11
Name2 101 111 110 1001 100
out Name3 6 6 3 4 5 | unknown | |
d2351 | train | This isn't an error.
Symfony treats all classes as entities, and if you're "mapping" them with Doctrine you'll create the corresponding tables in the database.
Now inheritance has to be taken into account: every "field" (property) in the parent class will be inherited by the child.
So it is perfectly clear that the corresponding parent fields will be created in the database.
To me the best way to solve this is to create a parent class and migrate all the common parts (fields, methods and so on...) into it.
Then you'll extend that new parent class in User with its specific fields (in that case username), as well as in Student with student-specific fields.
A: Yeah, that's probably not an error, although I agree it can be annoying.
To see exactly how they are generated you can look into the command class. | unknown | |
d2352 | train | You can try to set min and max values for the axis:
xAxis: {
plotLines: [{
color: '#000000',
width: 2,
value: 1
}],
max: 2,
min: 0
},
yAxis: {
plotLines: [{
color: '#000000',
width: 2,
value: 1
}],
max: 2,
min: 0
},
Example SQL FIDDLE HERE
A: Instaed of plotLines, you can move your axis by offset parameter or use plugin | unknown | |
d2353 | train | The second file is overwriting the first. You must change the name of one of the functions to avoid the collision.
You'll find something like this in the files:
jQuery.fn.datepick = function() {
// etc.
};
This is where the jQuery plugin method is created. Just change datepick in one or both files to something different and unique.
A: As noted in FishBasketGordo's answer, the second function to load will overwrite the first, and changing the name of one will fix this, however there is a deeper problem that needs to be addressed.
The reason this happens is that the functions are being added to the global namespace. In javascript the file that the code comes from is irrelevant. One way to avoid avoid polluting the global namespace is to wrap functions inside an object, so in your case you might have:
/* 1-jquery.datepick.js */
datepick = new Object;
datepick.datepick = function() {
/* function definition here */
alert('first file function');
};
/* 2-jquery.hijridatepick.js */
hijridatepick = new Object;
hijridatepick.datepick = function() {
/* function definition here */
alert('second file function');
};
These functions can then be accessed using:
datepick.datepick();
/* alerts 'first file function' */
hijridatepick.datepick();
/* alerts 'second file function' */ | unknown | |
d2354 | train | Before you updated your question, it was correct - you should copy the asset file to "assets" not "src":
<source-file src="myfile.ext" target-dir="assets"/>
Then you can reference it via the AssetManager:
AssetManager assetManager = this.cordova.getActivity().getAssets();
InputStream inputStream = assetManager.open("myfile.ext");
In terms of "path" to the file, assets are stored in the APK differently from how your Android project is constructed, so the "path" to your file would be file:///android_asset/myfile.ext, but you most likely wouldn't actually be referencing it like this from MyPlugin.java. | unknown | |
d2355 | train | I assume you are asking about the MongoDB aggregation pipeline. This is a server feature and if you haven't already, you should at least read this page to get a basic understanding of what it is from the server's standpoint.
Next, if you haven't done this already, you should install mongo shell and get it working on your machine, and try executing some simple operations (such as an insert and a find), after which you should run the various aggregation examples in the mongo shell to familiarize yourself with the feature.
After this, review your MongoDB driver documentation for how to work with the aggregation pipeline. For Ruby driver for example, this is documented here. You'll note that the syntax is different from the mongo shell examples. If you are using node.js, the syntax will also be different from mongo shell examples even though both your driver and mongo shell use javascript.
If you aren't using a driver directly but are using a higher level library such as an ODM (e.g. Mongoose), there may be yet another way of executing aggregation with its own syntax provided by that library. You may not need to know how driver aggregation works if you are using an ODM, but you may find that the ODM doesn't implement some feature that the driver implements (and MongoDB documentation references).
The reason why I suggest doing all of this is that when you are constructing aggregation pipelines, you almost always need to do so as the aggregation pipeline stages, operators and expressions are documented for the server, even if you are using a much higher level library to execute the aggregation or generally interact with your database.
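For a concrete picture of what a pipeline looks like, here is a minimal sketch using the Python driver (pymongo) purely as an illustration; the collection and field names are made up, and the same $match/$group/$sort stages translate directly to the mongo shell, Ruby, or Node drivers:
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.mydb

# count shipped orders per customer, largest first
pipeline = [
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$customer_id", "total": {"$sum": 1}}},
    {"$sort": {"total": -1}},
]
for doc in db.orders.aggregate(pipeline):
    print(doc)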
Once you understand how the aggregation pipelines are executed and how to construct them, use your application schema to construct the appropriate queries. There generally isn't comprehensive documentation for aggregation pipeline beyond the official MongoDB docs, due to the size of the aggregation pipeline as a feature. You are generally on your own as far as actually constructing the pipelines for your application in the language/framework that you are using. | unknown | |
d2356 | train | If I understood correctly, you just have to give the "playArea" div the right height.
Edit: I mean, the combined height of everything inside it.
A:
But i would like to have a paragraph of text under the 'playArea' div, but because all the divs inside playArea is absolute, the text doesnt appear at the bottom of the last absolute positioned div.
As you seem to know all the dimensions and positions already, just add another absolutely positioned div to it and put the relative content in it.
I have looked into this and found an alternative by using float:left and clear:left however after using this method on the first div, you cannot position the div correctly as the starting point of the second div is under the first div and not at (0,0). Any ideas of how i can get by this.
You need to remove position: absolute to get the floats right. Just width and height are enough.
A: Float the three inner divs left, put overflow: hidden; on the playArea div and put your <p> under the three inner divs with clear: both;
A: After reading the comment thread between you and "BalusC", it appears that you have modified your CSS and are now trying to float your items, and use margin-top and margin-left for positioning. You are totally able to do it that way, but you are forgetting that you can also use negative margins to position your elements as well. For example if you use margin-top:-10px; then it will pull the element up (instead of pushing it down, like a normal positive valued margin). The same goes for all of your other margins.
That seems to be the missing ingredient for you now. | unknown | |
d2357 | train | Maybe use the clip: true property, which is present on every Item in QtQuick? | unknown | 
d2358 | train | *
*read_until might read beyond the delimiter (therefore request_buf.size() can be more than siz). This is a conceptual problem when you implement save because you read data_size bytes from the socket, which ignores any data already in request_buf
*These things are code smells:
if (output_file.tellp() == (std::fstream::pos_type)(std::streamsize)filesize) {
(never use C-style casts). And
return __LINE__; // huh? just `true` then
And
buf.empty();
(That has no effect whatsoever).
I present here three versions:
*
*First Cleanup
*Simplify (using tcp::iostream)
*Simplify! (assuming more things about the request format)
First Cleanup
Here's a reasonable cleanup:
Live On Coliru
#include <boost/asio.hpp>
#include <boost/array.hpp>
#include <iostream>
#include <fstream>
namespace ba = boost::asio;
using ba::ip::tcp;
struct Conf {
int def_port = 6767;
} s_config;
struct Request {
std::string command;
std::string parameter;
std::size_t data_size = 0;
std::string get_filename() const {
// cut filename from path - TODO use boost::filesystem::path instead
return parameter.substr(parameter.find_last_of('\\') + 1);
}
friend std::istream& operator>>(std::istream& is, Request& req) {
return is >> req.command >> req.parameter >> req.data_size;
}
};
struct Sync {
bool start_server();
bool save(Request const& req, boost::asio::streambuf& request_buf);
ba::io_service& io_service;
tcp::socket socket{ io_service };
Conf const *conf = &s_config;
};
bool Sync::start_server() {
boost::asio::streambuf request_buf;
boost::system::error_code error;
try {
tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), conf->def_port));
acceptor.accept(socket); // socket is a member of class Sync
while (true) {
error.clear();
std::string req_txt;
{
char const* delim = "\n\n";
size_t siz = boost::asio::read_until(socket, request_buf, delim, error);
// correct for actual request siz
auto b = buffers_begin(request_buf.data()),
e = buffers_end(request_buf.data());
auto where = std::search(b, e, delim, delim+strlen(delim));
siz = where==e
? std::distance(b,e)
: std::distance(b,where)+strlen(delim);
std::copy_n(b, siz, back_inserter(req_txt));
request_buf.consume(siz); // consume only the request text bits from the buffer
}
std::cout << "request size:" << req_txt.size() << "\n";
std::cout << "Request text: '" << req_txt << "'\n";
Request req;
{
std::istringstream request_stream(req_txt);
request_stream.exceptions(std::ios::failbit);
request_stream >> req;
}
save(req, request_buf); // parameter is filename
}
} catch (std::exception &e) {
std::cerr << "Error parsing request: " << e.what() << std::endl;
}
return false;
}
bool Sync::save(Request const& req, boost::asio::streambuf& request_buf) {
auto filesize = req.data_size;
std::cout << "filesize is: " << filesize << "\n";
{
std::ofstream output_file(req.get_filename(), std::ios::binary);
if (!output_file) {
std::cout << "failed to open " << req.get_filename() << std::endl;
return true;
}
// deplete request_buf
if (request_buf.size()) {
if (request_buf.size() < filesize)
{
filesize -= request_buf.size();
output_file << &request_buf;
}
else {
// copy only filesize already available bytes
std::copy_n(std::istreambuf_iterator<char>(&request_buf), filesize,
std::ostreambuf_iterator<char>(output_file));
filesize = 0;
}
}
while (filesize) {
boost::array<char, 1024> buf;
boost::system::error_code error;
std::streamsize len = socket.read_some(boost::asio::buffer(buf), error);
if (len > 0)
{
output_file.write(buf.c_array(), len);
filesize -= len;
}
if (error) {
socket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, error); // ignore error
socket.close(error);
break; // an error occured
}
}
} // closes output_file
return false;
}
int main() {
ba::io_service svc;
Sync s{svc};
s.start_server();
svc.run();
}
Prints with a client like echo -ne "save test.txt 12\n\nHello world\n" | netcat 127.0.0.1 6767:
request size:18
Request text: 'save test.txt 12
'
filesize is: 12
request size:1
Request text: '
'
Error parsing request: basic_ios::clear: iostream error
SIMPLIFY
However, since everything is synchronous, why not just use tcp::iostream socket;. That would make start_server look like this:
tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), conf->def_port));
acceptor.accept(*socket.rdbuf());
while (socket) {
std::string req_txt, line;
while (getline(socket, line) && !line.empty()) {
req_txt += line + "\n";
}
std::cout << "request size:" << req_txt.size() << "\n";
std::cout << "Request text: '" << req_txt << "'\n";
Request req;
if (std::istringstream(req_txt) >> req)
save(req);
}
And save even simpler:
void Sync::save(Request const& req) {
char buf[1024];
size_t remain = req.data_size, n = 0;
for (std::ofstream of(req.get_filename(), std::ios::binary);
socket.read(buf, std::min(sizeof(buf), remain)), (n = socket.gcount());
remain -= n)
{
if (!of.write(buf, n))
break;
}
}
See it Live On Coliru
When tested with
for f in test{a..z}.txt; do (echo -ne "save $f 12\n\nHello world\n"); done | netcat 127.0.0.1 6767
that prints:
request size:18
Request text: 'save testa.txt 12
'
request size:18
Request text: 'save testb.txt 12
'
[... snip ...]
request size:18
Request text: 'save testz.txt 12
'
request size:0
Request text: ''
Even Simpler
If you know that the request is a single line, or whitespace is not significant:
struct Sync {
void run_server();
void save(Request const& req);
private:
Conf const *conf = &s_config;
tcp::iostream socket;
};
void Sync::run_server() {
ba::io_service io_service;
tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), conf->def_port));
acceptor.accept(*socket.rdbuf());
for (Request req; socket >> std::noskipws >> req; std::cout << req << " handled\n")
save(req);
}
void Sync::save(Request const& req) {
char buf[1024];
size_t remain = req.data_size, n = 0;
for (std::ofstream of(req.get_filename(), std::ios::binary);
socket.read(buf, std::min(sizeof(buf), remain)), (n = socket.gcount());
remain -= n)
{
if (!of.write(buf, n)) break;
}
}
int main() {
Sync().run_server();
}
That's the entire program in ~33 lines of code. See it Live On Coliru, printing:
Request {"save" "testa.txt"} handled
Request {"save" "testb.txt"} handled
Request {"save" "testc.txt"} handled
[... snip ...]
Request {"save" "testy.txt"} handled
Request {"save" "testz.txt"} handled | unknown | |
d2359 | train | Your object model design basically allows mapping (converting) only via construction, hence it can't benefit from most of AutoMapper's automatic and explicit mapping capabilities.
ConstructUsing is used to select a non default constructor for creating destination instances, but still requires member mapping.
What you need is the ConvertUsing method:
Skip member mapping and use a custom expression to convert to the destination type
Mapper.Initialize(config =>
{
config.CreateMap<Location, LocationEntity>()
.ConvertUsing(source => new LocationEntity(source.Name, source.GeoLocation?.Latitude ?? 0.0, source.GeoLocation?.Longitude ?? 0));
config.CreateMap<LocationEntity, Location>()
.ConvertUsing(source => new Location(source.Name, new GeoLocation(source.Latitude, source.Longitude)));
});
A: ConvertUsing is helpful if you really want to take over the mapping. But more idiomatic in this case would be to map through constructors. By adding another constructor to Location (private if needed) you could even remove ForCtorParam.
CreateMap<Location, LocationEntity>().ReverseMap().ForCtorParam("geoLocation", o=>o.MapFrom(s=>s));
class LocationEntity
{
public LocationEntity(string name, double geoLocationLatitude, double geoLocationLongitude)
{
this.Name = name;
this.Latitude = geoLocationLatitude;
this.Longitude = geoLocationLongitude;
}
public string Name { get; }
public double Latitude { get; }
public double Longitude { get; }
} | unknown | |
d2360 | train | The problem would not occur if newlines are at the end of the text lines.
Now I have an explanation: The <a href="mailto is matched by the regular expression <a\s.*?href=([^ >]+). The following .*? will match any character sequence (without line breaks) until it finds <img.... And it does exactly this (in absence of line breaks).
Example (one with and one without newlines):
private static final Pattern P = Pattern.compile("<a\\s.*?href=([^ >]+).*?<img\\s.*?src=([^ ]+)(.*?>.*?<\\/a>)");
private static final String TEXT = "<font size=\"4\">Mail : </font><a href=\"mailto:[email protected]\"><u><font size=\"4\" color=\"#0000ff\">[email protected]</font></u></a><br />"
+ "<br />"
+ "<font size=\"4\">Internet : </font><a href=\"http://www.pgt-gmbh.com/\"><u><font size=\"4\" color=\"#0000ff\">http://www.pgt-gmbh.com</font></u></a><font size=\"4\"> </font><br />"
+ "<br />"
+ "<br />"
+ "<font size=\"4\"> </font><a class=\"domino-attachment-link\" style=\"display: inline-block; text-align: center\" href=\"/_dv/_dv/documents_DE.nsf/0/7fadd8be280a2e34c1257dfd00307098/$FILE/Anfrage.pdf\" title=\"Anfrage.pdf\"><img src=\"/_dv/_dv/documents_DE.nsf/0/7fadd8be280a2e34c1257dfd00307098/f_Text/0.5F66?OpenElement&FieldElemFormat=gif\" width=\"32\" height=\"32\" alt=\"Anfrage.pdf\" border=\"0\" /> - Anfrage.pdf</a>";
private static final String NEWLINE_TEXT = "<font size=\"4\">Mail : </font><a href=\"mailto:[email protected]\"><u><font size=\"4\" color=\"#0000ff\">[email protected]</font></u></a><br />\n"
+ "<br />\n"
+ "<font size=\"4\">Internet : </font><a href=\"http://www.pgt-gmbh.com/\"><u><font size=\"4\" color=\"#0000ff\">http://www.pgt-gmbh.com</font></u></a><font size=\"4\"> </font><br />\n"
+ "<br />\n"
+ "<br />\n"
+ "<font size=\"4\"> </font><a class=\"domino-attachment-link\" style=\"display: inline-block; text-align: center\" href=\"/_dv/_dv/documents_DE.nsf/0/7fadd8be280a2e34c1257dfd00307098/$FILE/Anfrage.pdf\" title=\"Anfrage.pdf\"><img src=\"/_dv/_dv/documents_DE.nsf/0/7fadd8be280a2e34c1257dfd00307098/f_Text/0.5F66?OpenElement&FieldElemFormat=gif\" width=\"32\" height=\"32\" alt=\"Anfrage.pdf\" border=\"0\" /> - Anfrage.pdf</a>";
public static void main(String[] args) {
Matcher m = P.matcher(TEXT);
if (m.find()) {
System.out.println(m.group());
}
m = P.matcher(NEWLINE_TEXT);
if (m.find()) {
System.out.println(m.group());
}
}
Output:
<a href="mailto:[email protected]">... without newlines
<a class="domino-attachment-link"... with newlines
A better pattern
<a\s[^>]*?href=([^>]+)><img\s.*?src=([^ ]+)(.*?>.*?<\/a>)
The problem with HTML and regex is that the upper pattern matches only a specific situation; if some markup is between <a...> and <img...> then it would fail. Surely this could be fixed, but the expression gets more and more incomprehensible.
So: if you want to do this kind of extraction for more than one link, you should switch to an HTML parser (although finding the best one is a science of its own). | unknown | 
d2361 | train | Because you're not using ListFragment or ListActivity, you cannot use a built-in ListView because there isn't one. In order to have access to a ListView, you must have one in your xml layout as well as instantiate it in your onCreateView() method.
The following is a quick fix to give you an idea of how you should implement:
public class Dialogo extends DialogFragment {
private File currentDir;
private FileArrayAdapter adapter;
private ListView list;
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
//you don't want to use the same layout as your list view!
View view = inflater.inflate(R.layout.new_layout, container);
//New stuff
list = (ListView)view.findViewById(R.id.your_list);
Context c = getActivity();
currentDir = c.getExternalFilesDir(Environment.DIRECTORY_PICTURES);
Toast.makeText(c, "Current Dir: "+currentDir.getName(), Toast.LENGTH_SHORT).show();
fill(currentDir);
return view;
} //oncreateview
private void fill(File f)
{
File[]dirs = f.listFiles();
getDialog().setTitle("Directorio actual: "+f.getName());
List<Option>dir = new ArrayList<Option>();
List<Option>fls = new ArrayList<Option>();
try{
for(File ff: dirs)
{
if(ff.isDirectory())
dir.add(new Option(ff.getName(),"Folder",ff.getAbsolutePath()));
else
{
fls.add(new Option(ff.getName(),"File Size: "+ff.length(),ff.getAbsolutePath()));
}
}
}catch(Exception e)
{
}
Collections.sort(dir);
Collections.sort(fls);
dir.addAll(fls);
if(!f.getName().equalsIgnoreCase("sdcard"))
dir.add(0,new Option("..","Parent Directory",f.getParent()));
adapter = new FileArrayAdapter(getActivity(),R.layout.activity_browser,dir);
list.setAdapter(adapter); <--- No More Error
}
}
Here's the code for your new layout to the DialogFragment
new_layout.xml
<LinearLayout
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_height="wrap_content"
android:orientation="vertical"
android:layout_width="fill_parent">
<ListView
android:id="@+id/your_list"
android:layout_height="wrap_content"
android:layout_width="match_parent"/>
</LinearLayout> | unknown | |
d2362 | train | Maybe you forgot
renderTo : Ext.getBody()
inside combobox... | unknown | |
d2363 | train | Two things I was doing wrong:
*
*I had to give each of the <include>s an id with the EXACT name of the layout it was pointing to, and
*I had to go through each include because I had two:
LinearLayout layout = (LinearLayout) findViewById(R.id.app_bar_main).findViewById(R.id.content_main); | unknown | |
d2364 | train | *
*Is it possible to use 512 X 288 images to train ResNet without cropping the images? I do not want to crop the image because the tools
are positioned rather randomly inside the image, and I think cropping
the image will cut off part of the tools as well.
*
*Yes, you can train ResNet without cropping your images. You can resize them, or if that's not possible for some reason, you can alter the network, e.g. add a global pooling at the very end and account for the different input sizes (you might need to change kernel sizes or the downsampling rate).
If your biggest issue here is that ResNet requires 224x224 while your images are of size 512x288, the simplest solution would be to first resize them to 224x224. Only if that's not a possibility for you for some technical reason, then create a fully convolutional network by adding a global pooling at the end. (I guess ResNet does have a global pooling at the end; in case it does not, you can add it.) A sketch of both options follows below.
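As a rough illustration of both options, this sketch assumes PyTorch/torchvision only because some framework has to be picked; the question did not name one, and num_tool_classes is a placeholder for your own label count:
import torch.nn as nn
from torchvision import models, transforms

num_tool_classes = 7  # placeholder: replace with the number of tool classes in your dataset

# Option 1: resize the 512x288 frames to 224x224 before feeding ResNet
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Option 2: keep the original size; adaptive global pooling makes the head size-independent
model = models.resnet18(pretrained=True)
model.avgpool = nn.AdaptiveAvgPool2d(1)  # 1x1 output regardless of input resolution
model.fc = nn.Linear(model.fc.in_features, num_tool_classes)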
*For the training and test set images, do I need to draw a rectangle around the object I want to classify?
*For classification, no, you do not. Having a bounding box for an object is only needed if you want to do detection (that's when you want your model to also draw a rectangle around the objects of interest).
*Is it okay if multiple different objects are in one image? The data set I am using often has multiple tools appearing in one image, and I
wonder if I must only use images that only have one tool appearing at
a time.
3. It's ok to have multiple different objects in one image, as long as they do not belong to different classes that you are training against. That is, if you are trying to classify apples vs oranges, it's obvious that an image cannot contain both of them at the same time. But if, for example, it contains anything else (a screwdriver, key, person, cucumber, etc.), it's fine.
*If I were to crop the images to fit one tool, will it be okay even if the sizes of the images vary?
It depends on your model. Cropping and image size are two different things: you can crop an image of any size and then resize it to your desired dimensions. You usually want to have all images of the same size, as it makes your life easier, but it's not a hard requirement, and based on your needs you can have varying sizes as well. | unknown | 
d2365 | train | Use display: inline-block; text-decoration: none; on the inner link. The trick is display: inline-block;.
The CSS spec states:
For block containers that establish an inline formatting context, the
decorations are propagated to an anonymous inline element that wraps
all the in-flow inline-level children of the block container. For all
other elements it is propagated to any in-flow children. Note that
text decorations are not propagated to floating and absolutely
positioned descendants, nor to the contents of atomic inline-level
descendants such as inline blocks and inline tables.
Example: The link COVID-19 in your codes will remove the underline.
<router-link :to="{name: 'Plan'}">
<div>Plan Your Trip</div>
<div class='expander'>
<router-link :to="{name: 'Plan'}" style="display: inline-block;text-decoration:none;">COVID-19</router-link>
<router-link :to="{name: 'Plan'}">Visa</router-link>
<router-link :to="{name: 'Plan'}">Essentials</router-link>
</div>
</router-link>
Below is one demo:
let Layout = {
template: `<div>
<h4>Layout Page </h4>
<router-link to="/contact">
<div>
<p>Links<p>
<router-link to="/contact/add" style="display: inline-block;text-decoration:none;">Add1</router-link>
<router-link to="/addcontact">Add2</router-link>
</div>
</router-link>
<router-view></router-view>
</div>`
};
let Home = {
template: '<div>this is the home page. Go to <router-link to="/contact">contact</router-link> </div>'
};
let ContactList = {
// add <router-view> in order to load children route of path='/contact'
template: '<div>this is contact list, click <router-link to="/contact/add">Add Contact In sub Router-View</router-link> here to add contact<p><router-view></router-view></p> Or Click <router-link to="/addcontact">Add Contact In Current Router-View</router-link></div>'
};
let ContactAdd = {
template: '<div>Contact Add</div>'
}
let router = new VueRouter({
routes: [{
path: '/',
redirect: 'home',
component: Layout,
children: [{
path: 'home',
component: Home
},
{
path: 'contact',
component: ContactList,
children: [{
path: 'add',
component: ContactAdd
}]
},
{
path: 'addcontact', // or move ContactAdd as direct child route of path=`/`
component: ContactAdd,
}
]
}]
});
new Vue({
el: '#app',
components: {
'App': {
template: '<div><router-view></router-view></div>'
},
},
router
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.17/vue.js"></script>
<script src="https://unpkg.com/[email protected]/dist/vue-router.js"></script>
<section id="app">
<app></app>
</section>
A: When you inspect the DOM for a router-link, you see that it's an a tag. Bear in mind, that even when the initial underline is removed, there is an underline that happens when you hover over the router link text.
Using this snippet
<router-link :to="{name: 'Plan'}">
<div>Plan Your Trip</div>
<div class='expander'>
<router-link :to="{name: 'Plan'}">COVID-19</router-link>
<router-link :to="{name: 'Plan'}">Visa</router-link>
<router-link :to="{name: 'Plan'}">Essentials</router-link>
</div>
</router-link>
.expander a {
text-decoration: none;
}
.expander a:hover {
text-decoration: none;
}
A: The outer router-link is applying text-decoration: underline to its inner-text and the inner router-links are also applying text-decoration: underline to their inner-text.
You essentially have double underlines applied to your inner router-links at the moment.
You need to remove it from both. If you need another element to have text-decoration: underline then set it for that element separately. | unknown | |
d2366 | train | Perhaps there are other ways to achieve your goals besides going NoSQL.
In short, if you just need dynamic fields, you have other options. I have an extensive writeup about them in another answer:
*
*Entity–attribute–value model (Django-eav)
*PostgreSQL hstore (Django-hstore)
*Dynamic models based on migrations (Django-mutant)
Yes, that's not exactly what you've asked for, but that's all that we've currently got.
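To make the hstore option a bit more concrete, here is a minimal sketch using Django's built-in Postgres HStoreField rather than the django-hstore package linked above; the model and field names are made up for illustration:
from django.contrib.postgres.fields import HStoreField
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=100)
    attributes = HStoreField(default=dict)  # arbitrary string key/value pairs
    # note: requires the Postgres hstore extension to be enabled

# Product.objects.create(name="shirt", attributes={"color": "red", "size": "M"})
# Product.objects.filter(attributes__color="red")
# Product.objects.filter(attributes__has_key="size")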
A: As you said, forked code is never the best alternative: changes take longer to get into the fork, it might break things... And even with django-nonrel, it is not really Django, as you lose things like model inheritance, M2M... basically anything that needs to do a JOIN query behind the scenes.
Is Django going to support NoSQL? As far as I know, there are no plans on the roadmap for doing so in the short run. According to Russell Keith-Magee in his talk at PyCon Russia 2013, "NoSQL" is on the roadmap but in the long term, as is SQLAlchemy. So if you want to wait, it is going to take a long time, I'm afraid.
Anyway, even if it's not ideal, you can still use Django but use something else as an ORM. Nothing stops you from using vanilla Django and something like MongoDB instead of the Django ORM. | unknown | 
d2367 | train | Here is a possible solution (which assumes that your XQuery processor allows you to pass on maps as user-defined input):
declare variable $INPUT external := map {
'Anna': 25,
'Marco': 25
};
for $description in collection('db')/Description
where every $test in map:for-each($INPUT, function($name, $age) {
exists($description/Persons/Person[Name = $name and Age = $age])
}) satisfies $test
return $description/FileName
The second alternative is closer to your original solution. Names and ages are bound to separate variables:
declare variable $NAMES external := ('Anna', 'Marco');
declare variable $AGES external := (25, 25);
for $description in collection('db')/Description
where every $test in for-each-pair($NAMES, $AGES, function($name, $age) {
exists($description/Persons/Person[Name = $name and Age = $age])
}) satisfies $test
return $description/FileName | unknown | |
d2368 | train | I recommend you to use NSZombieEnabled to find out what is causing a bad access to memory.
*
*Do you use DEBUG / RELEASE defines to branch your code?
*Do you use SDK version checkers to branch your code?
Otherwise I can't see how your app can behave differently on different devices/configurations.
A: I had the exact same problem recently, however I am not entirely sure the cause is the same. What I can tell you though is what resolved the issue for me (although I'm still not entirely satisfied with the solution).
In the end, it seems like a compiler issue, and this might confirm what others have said about compiler optimization.
I am using Xcode 4.0 (build 4A304a). The issue was with LLVM compiler 2.0 Code Generation. One key in particular: "Optimization Level"
Debug was set to "None".
Release was set to "Fastest, Smallest"
Changing Release to "None" fixed the crash (and similarly changing Debug to "Fastest, Smallest" caused the app the crash on launch).
A: I can propose to change optimization level of release settings to "None".
I met the same problem few times (with different apps) and solved it in this way.
A: I never "solved" this but I did track down the offending code. I suspect that something in this segment of Quartz code was causing a buffer overrun somewhere deep inside the core - and it only caused a problem on 3G. Some of the setup for this segment is not included but this is definitely where it is happening:
gradient = CGGradientCreateWithColors(space, (CFArrayRef)colors, locations);
CGContextAddPath(context, path);
CGContextSaveGState(context);
CGContextEOClip(context);
transform = CGAffineTransformMakeRotation(1.571f);
tempPath = CGPathCreateMutable();
CGPathAddPath(tempPath, &transform, path);
pathBounds = CGPathGetPathBoundingBox(tempPath);
point = pathBounds.origin;
point2 = CGPointMake(CGRectGetMaxX(pathBounds), CGRectGetMinY(pathBounds));
transform = CGAffineTransformInvert(transform);
point = CGPointApplyAffineTransform(point, transform);
point2 = CGPointApplyAffineTransform(point2, transform);
CGPathRelease(tempPath);
CGContextDrawLinearGradient(context, gradient, point, point2, (kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation));
CGContextRestoreGState(context);
CGGradientRelease(gradient);
A: You say "My object is not released by any of my code". I've found that it's not uncommon in Objective-C to run into situations where your code has not explicitly released an object yet the object has been released all the same. For example, off the top of my head, let's say that you have an object #1 with retain count of 1 and you release it but then autorelease it accidentally. Then, before the autorelease pool is actually drained, you allocate a new object #2 -- it's not inconceivable that this new object #2 could be allocated at the same address as object #1. So when the autorelease pool is subsequently drained, it will release object #2 accidentally. | unknown | |
d2369 | train | Yes, you are correct: you need sockets. There are a bunch of articles on the internet, but I would like to give a summary and try to explain why sockets are the best fit for your requirements.
Sockets are a way of achieving two-way communication between client and server without the need for polling.
There is a package called Flask-SocketIO
Flask-SocketIO gives Flask applications access to low latency
bi-directional communications between the clients and the server.
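A minimal sketch of what that looks like; the event names and payload here are made up, and the broadcast=True flag relates to the broadcasting scenario described below:
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on("db_updated")
def handle_db_update(data):
    # one client changed something; push the update to every connected client
    emit("refresh", data, broadcast=True)

if __name__ == "__main__":
    socketio.run(app)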
Then, for the scenario where you would like to send changes to all the connected clients when one client does some work on your database or something similar, you will need to use broadcasting. When a message is sent with the broadcast option enabled, all clients connected to the namespace receive it, including the sender. Here you can find details of broadcasting with Flask-SocketIO. | unknown | 
d2370 | train | You need to tell SCons to use the D compiler, as I don't believe it does so by default. This does more than just load the compiler; it also sets the corresponding Construction Variables, which among other things set the object file extension that you are asking about.
If you create your environment as follows, then the D compiler and related construction variables will be loaded.
env=Environment(tools=['default', 'dmd']) | unknown | |
d2371 | train | I'd break up the image into smaller images, put the smaller image cells into their own ImageIcons and then display whichever Icons I desired in JLabels, perhaps several of them. BufferedImage#getSubimage(...) can help you break the big image into smaller ones.
(decided to make it an answer)
A: If you don't need a physical copy of the sub image and only need to display it then you could add the image to a JLabel which you add to a JScrollPane without any scrollbars. Set the preferredSize() of the scrollpane equal to the dimension of your sub images (25x25). Then you can use
scrollPane.getViewport().setViewPosition(...);
to position the viewport to disply any sub image. | unknown | |
d2372 | train | I recommend you to subclass UITableViewCell and create your own custom cell :D
but, anyway... try by adding this
UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellIdentifier];
UIImageView *imgView = nil;
if (cell == nil)
{
cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:cellIdentifier];
imgView=[[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 54, 54)];
imgView.backgroundColor=[UIColor clearColor];
imgView.contentMode = UIViewContentModeScaleAspectFill;
imgView.tag = 123;
[cell.contentView insertSubview:imgView atIndex:0];
[cell setIndentationWidth:54];
[cell setIndentationLevel:1];
}
else
{
imgView = (UIImageView*)[cell.contentView viewWithTag:123];
}
imgView.image = [UIImage imageNamed:@"myImage"]; | unknown | |
d2373 | train | I believe you can access the container instance from your legacy application like this
$kernel = new AppKernel('prod', true);
$kernel->loadClassCache();
$kernel->boot();
$request = Request::createFromGlobals();
$container = $kernel->getContainer();
$sc = $container->get('security.context');
A: Using Symfony's DIC as a standalone component is possible but you'd have to do many things "manually" (as you're not planning on using full Symfony Framework from the very beginning). You'll probably won't get much of using DIC with all that legacy stuff.
If you want to go this path I'd consider choosing another component first (like HttpFoundation and HttpKernel).
As @Cerad suggested you might wrap your legacy code in Symfony. Have a look at IngewikkeldWrapperBundle bundle. You can't use it as is but it might give you some ideas.
There's a third way.
You can decide to implement every new feature in a Symfony app. Than, you can make that both legacy and Symfony apps coexist. On a server level (i.e. Nginx), you might proxy legacy URLs to the legacy app and all the migrated URLs to a Symfony2 app. In my case this scenario was the best option and proved to be working. However, we were committed to abandon legacy app development (so every new feature or change had to be developed in a Symfony2 app).
Edit: here's how you could boot the Symfony kernel in a legacy app and dispatch an event (which is needed for the firewall):
$kernel = new \AppKernel('dev', true);
$kernel->boot();
$request = Request::createFromGlobals();
$request->attributes->set('is_legacy', true);
$request->server->set('SCRIPT_FILENAME', 'app.php');
$container = $kernel->getContainer();
$container->enterScope('request');
$container->get('request_stack')->push($request);
$container->set('request', $request);
$event = new GetResponseEvent($kernel, $request, HttpKernelInterface::MASTER_REQUEST);
$eventDispatcher = $container->get('event_dispatcher');
$eventDispatcher->dispatch('kernel.request', $event); | unknown | |
d2374 | train | Answering my own question. Should have done a bit more digging. All three example apps in the ngrx platform repo's projects folder have the strict flag enabled:
https://github.com/ngrx/platform/tree/master/projects | unknown | |
d2375 | train | Intel's VTune or AMD's CodeAnalyst are both very good tools. On Linux, Perf or OProfile will do the same thing.
A: While you are hunting around for a profiler, run the program in the debugger IDE and try this method.
Some programmers rely on it. There's an example here of how it is used.
In that example here's what happens. A series of problems are found and removed.
*
*The first iteration saved 33% of the time. (Speedup factor 1.5)
*Of the time remaining, the second iteration saved 17%. (Speedup factor 1.2)
*Of the time remaining, the third iteration saved 13%. (Speedup factor 1.15)
*Of the time remaining, the fourth iteration saved 66%. (Speedup factor 2.95)
*Of the time remaining, the fifth iteration saved 61%. (Speedup factor 2.59)
*Of the time remaining, the sixth iteration saved 98%. (Speedup factor 45.9)
All those big-percent changes were not big percents of the original time, but they became so after other problems were removed.
The total amount of time saved from the original program was over 99.8%.
The speedup was 730 times.
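Those two numbers follow directly from multiplying the individual factors together; a quick, purely illustrative check:
factors = [1.5, 1.2, 1.15, 2.95, 2.59, 45.9]
speedup = 1.0
for f in factors:
    speedup *= f
print(round(speedup))   # ~726, i.e. roughly 730x
print(1 - 1 / speedup)  # ~0.9986, i.e. over 99.8% of the original time removed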
Most programs that have not gone through a process like this have lots of room for speedup, but you're not likely to realize it using only a profiler because all they do is make measurements. They don't always point out to you what you need to fix, and each problem you miss keeps you from getting the really significant speedup.
To put it another way, the final speedup factor is the product of all those individual factors, and if any one of them is missed, it is not only absent from the product, but it reduces the following factors.
That's why, in performance diagnosis, "good enough" is not good enough.
You have to find every problem. | unknown | |
d2376 | train | Yes. The HTML5 filesystems (both PERSISTENT and TEMPORARY) are shared between JavaScript and NaCl. You can, for example, write files in JavaScript and then read them in native code.
See: http://www.w3.org/TR/file-system-api/
And: https://developers.google.com/native-client/dev/devguide/coding/file-io
On the NaCl side you can also access the HTML5 filesystems with POSIX I/O operations by using the nacl_io library. | unknown | |
d2377 | train | Do you need the width to be dynamic or can it be fixed in size? I would remove the spans, float div.inner and hardcode its width. Something like this:
.container {
overflow: hidden;
}
.inner {
float: left;
padding: 7px;
width: 106px; /* you could use percentages to fix the widths if you'd like to keep things dynamic. */
}
You could just adjust the padding and avoid setting the border altogether. Setting overflow to hidden on the container will force the container element to fit all of the floated elements inside of it. This allows you to avoid inserting a div to clear the floated elements.
You could also express this as a nested list as it's best to avoid unnecessary divs:
<ol id="examples_list">
<li>
<ul class="container">
<li class="box">...</li>
<li class="box">...</li>
<li class="inner">...</li>
</ul>
</li>
</ol>
with...
#examples_list, #examples_list ul {
list-style: none;
margin: 0;
padding: 0;
}
To style it in a similar fashion.
A: Ok, based on @b_benjamin's response to a comment above, I think I might have one possible solution. I also think it will rely on some CSS that might not play well in older browsers, but it's a simple concept that can probably be adjusted with other tricks.
This seems to work in the latest FF, Chrome and IE9.
First, the HTML:
<div style="width:340px;">
<!-- a list of text, with some time's marked up -->
<ul class="sched">
<li><b>17:55</b><b>18:10</b> <a href="#">Lorem ipsum dolor</a> sit posuere.</li>
<li><b>18:20</b><b>18:30</b> <a href="#">Lorem ipsum dolor</a> sit amet orci aliquam.</li>
<li><b>18:20</b><b>18:30</b> <a href="#">Class aptent</a> taciti sociosqu ad sed ad.</li>
<li><b>19:05</b><b>19:17</b> <a href="#">Mauris et urna et</a> ante suscipit ultrices sed.</li>
<li><b>19:05</b><b>19:17</b> <a href="#">Proin vulputate pharetra tempus.</a> Quisque euismod tortor eget sapien blandit ac vehicula metus metus.</li>
</ul>
</div>
Now some CSS: (I used a simple color theme based on b_benjamin's sample photo)
/* reset default list styles */
.sched, .sched li{
list-style:none;
font-size:14px;
padding:0;
margin:0;
}
.sched li{
position:relative;
padding:0 10px;
margin:10px 0;
background:#631015;
color:#FFF;
}
.sched b{
position:relative;
left:-10px;
display:inline-block;
padding:2px 10px;
font-weight:none;
background:#FFF;
color:#666;
}
/* some light styling for effect */
body{
background:#cc222c;
}
.sched li a{
color:#FF9;
}
Explanation:
The box model requires a certain thought process on how to achieve padding on inline elements (text). One thing you can do is simply put padding around the entire containing box.
In my concept, I used a UL list and each LI element is the container. I used a 10px padding on the container.
.sched li{
padding:0 10px;
}
This will give us our padding, but it will cause our "time" elements to also have this padding. My "trick" is to "fix" this by using a negative relative position equal to the padding:
.sched b{
display:inline-block; /* make these items act like block level elements */
position:relative; /* give the b elements a relative position*/
left:-10px; /* offset them equal to the padding */
}
There's one last thing to do and that's to make sure the parent element is also position:relative so the child element will use its containing dimensions:
.sched li{
position:relative; /* needed for B elements to be offset properly */
padding:0 10px;
}
Here's a snip of what it looks like on Chrome.
You can, of course, play around with padding. There's probably also some solutions to make the "B" elements float, but this seemed to work well.
I hope that helps!
A: Ben,
I don't understand why you would use two spans wrapped around the same element. Also, I rarely use spans because of their fickleness. From what I understand you want 3 blocks sitting side by side with the last element to be padded a little bit.
I would suggest simply adding an extra class to the padded div (or an id).
Try this...
[HTML]
<h2>double span with floated elements next to it</h2>
<div class="box">box #1</div>
<div class="box">box #2</div>
<div class="box boxPadded">
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus laoreet.
</div>
<div class="cleaner"></div>
[CSS]
.cleaner{clear: left; line-height: 0; height: 0;}
.box{margin-right: 5px; width: 100px; min-height: 25px; float: left;}
.boxPadded{padding: 3px 3px 9px 3px; word-wrap: break-word;}
So, now the third element has the attributes of the "box" class as well as it's own separate attribute "boxPadded".
The "min-height" is mainly for the third element. You can actually put it in the boxPadded class, but I'm a bit lazy. It will allow the element to stretch long if the text is larger than the element.
The "cleaner" is something I use after I float elements. Its' usually not needed unless you have elements in the floated element.
The "word-wrap" will allow a non space string that is longer than the element to wrap around the box. | unknown | |
d2378 | train | I installed the following NuGet Package Manager for Visual Studio 2013, which solved my problem. Hopefully someone will find it helpful.
Note: for Visual Studio 2010 and 2012 there are links available on the related page
https://visualstudiogallery.msdn.microsoft.com/4ec1526c-4a8c-4a84-b702-b21a8f5293ca | unknown | |
d2379 | train | Probably i've found a solution. Don't know if it is the best way to do it but it works.
I've added to the main class a field getter
public function getAuthorName() {
return $this->author->name;
}
in my context this getter will only be called by a serializer in certain conditions.
In my repository code i have a special method (that i refactored so that any method could implicitly call it) to impose the population of the Article->author field when queried. The method simply use the querybuilder to add LEFT JOIN to the Author class and temporarily set FetchMode to EAGER on Article->author.
At the end a simple repository method could be this
public findAllWithCustomHydration() {
$qb = $this->createQueryBuilder('obj');
$qb->leftJoin("obj.author", "a")
-> addSelect("a"); //add a left join and a select so the query automatically retrive all needed values to populate Article and Author entities
//you can chain left join for nested entities like the following
//->leftJoin("a.address", "a_address")
//-> addSelect("a_address");
$q = $qb->getQuery()
->setFetchMode(Article::class, "author", ClassMetadata::FETCH_EAGER);
//setFetchMode + EAGER tells Doctrine to prepopulate the entites NOW and not when the getter of $article->author is called.
//I don't think that line is strictly required because Doctrine could populate it at later time from the same query result or maybe doctrine automatically set the field as EAGER fetch when a LEFT JOIN is included in the query, but i've not yet tested my code without this line.
return $q->getResult();
}
The con is that you have to customize each query, or better, use a DQL / SQL / QueryBuilder for each method of the repo; but with a good refactoring, for simple inclusion cases, you can write a generic method that injects that join based on an array of field names.
Hope this helps; add your answer if you find a better way.
PS. I've written the above code on the fly because I'm not at my notebook right now; I hope it works on the first execution. | unknown | 
d2380 | train | You may create a change log by including a Git Changelog step in the Jenkins pipeline script.
This plugin provides a context object that contains all the information needed to create a changelog. It can also provide a string that is a rendered changelog, ready to be published.
Here is a screenshot of a sample Git changelog produced by this plugin:
More information about this plugin may be found in its wiki.
Hope, it helps. | unknown | |
d2381 | train | Taskkill (normally) sends WM_CLOSE. If your application is console only and has no window, while you can get CTRL_CLOSE_EVENT via a handler set by SetConsoleCtrlHandler (which happens if your controlling terminal window is closed) you can't receive a bare WM_CLOSE message.
If you have to stick with taskkill (rather than using a different program to send a Ctrl-C), one solution is to set the aforementioned handler and ensure your application has its own terminal window (e.g. by using start.exe "" <yourprog> to invoke it). See https://stackoverflow.com/a/23197789/4513656 for details and alternatives. | unknown | 
d2382 | train | You can store time in milliseconds and retrieve it to create a date instance from shared preferences and compare the dates.
private void saveClickTime() {
sp.edit().putLong("mTime", System.currentTimeMillis()).apply();
}
private boolean isTimeToClick() {
Date oldDate = new Date(sp.getLong("mTime", System.currentTimeMillis()));
GregorianCalendar oldCalendar = new GregorianCalendar();
oldCalendar.setTime(oldDate);
Calendar newCalendar = new GregorianCalendar();
return newCalendar.get(Calendar.DATE) != oldCalendar.get(Calendar.DATE) ||
newCalendar.get(Calendar.MONTH) != oldCalendar.get(Calendar.MONTH) ||
newCalendar.get(Calendar.YEAR) != oldCalendar.get(Calendar.YEAR);
} | unknown | |
d2383 | train | Assuming you will be able to get all the columns of the dataset then it would be a mix of features with Levels being the class labels. Formulating on the same lines:
cols = ["abc", "Level1", "Level2", "Level3"]
From this now let's take only levels because that is what we are interested in.
level_cols = [val for val in cols if "Lev" in val]
The above just checks each column name for the presence of "Lev", i.e. whether it contains these three characters.
Now, with level cols in place. I think you could do the following as a starting point:
1. Iterate only the level cols.
2. Take only the numbers 1,2,3,4....n
3. If step-2 is divisible by 2 then I do the prediction using the saved level model. Ideally, all the even ones.
4. Else train on other levels.
for level in level_cols:
if int(level[-1]) % 2 == 0:
# open the saved model at int(level[-1]) - 1
# Perform my prediction
else:
level_idx = int(level[-1])
model = naive_bayes_classifier.fit(x_train, y_train[level])
mf = open("model-x-"+level_idx, "wb")
pickle.dump(model, mf) | unknown | |
d2384 | train | After approx 4hrs research I found wordpress: how to add hierarchy to posts
which seems to mean that WordPress needs to implement such a feature. | unknown | 
d2385 | train | After one day of no replies you're already disappointed in the 'ruby fanboys'... Not surprised it stays silent after such a comment.
Anyway, both Jekyll and Octopress are specifically aimed at generating static pages. You put the generated HTML files on a server and that's it. So there is no dynamic element at all. So if you want to add dynamic layers like a login system, you're looking at a totally different beast. You could create it, but you'd have to write the whole system yourself.
If you want to create a CMS in Ruby, you might want to have a look at RefineryCMS | unknown | 
d2386 | train | Try something like this
@DatabaseSetup("/data/.../studentTestSample.xml")
Being careful with your file path will help you resolve the problem.
Reference document https://springtestdbunit.github.io/spring-test-dbunit/apidocs/com/github/springtestdbunit/annotation/DatabaseSetup.html section parameter value.
A: I finally found the issue. It was because the format of xml was not right!
Here is my xml name:
studentTestSample
It should be written like this:
studentTestSample.xml
I do remember I wrote "studentTestSample.xml" when I created this file. It looks like I have to emphasize the type of my xml file.
Finally, make sure you can jump to the location of file when you are typing command+B. | unknown | |
d2387 | train | str.replace() takes a 3rd argument, called count:
a.replace("8", "", 1)
By passing in 1 as the count, only the first occurrence of '8' is replaced:
>>> a = "843845ab38"
>>> a.replace("8", "", 1)
'43845ab38'
A: You don't have to use the replace function.
Just
a[1:] will be enough
however if you want to replace all "8"s
then you may want to use replace | unknown | |
d2388 | train | This will not work that way. You will need to utilize the touch methods on the parent view that contains both of your subviews. Could look abstractly like this:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
if ([self touchOnCell:touches]) { //check to see if your touch is in the table
isDragging = YES; //view cell
}
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
//do your dragging code here
}
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
if (isDragging && [self touchOnDropView:touches]) {
//do your drop code here
}
}
Now, if this doesn't work like that alone, you might want to implement hitTest on the tableView, and if the touch is in the dragging object's region, then return the parent view.
hope this helps | unknown | |
d2389 | train | I'm not sure why you use the StringUtils you can just directly replace words that match the bad words. This code works for me:
public static void main(String[] args) {
ArrayList<String> badWords = new ArrayList<String>();
badWords.add("test");
badWords.add("BadTest");
badWords.add("\\$\\$");
String test = "This is a TeSt and a $$ with Badtest.";
for(int i = 0; i < badWords.size(); i++) {
test = test.replaceAll("(?i)" + badWords.get(i), "****");
}
test = test.replaceAll("\\w*\\*{4}", "****");
System.out.println(test);
}
Output:
This is a **** and a **** with ****.
A: The problem is that these special characters e.g. $ are regex control characters and not literal characters. You'll need to escape any occurrence of the following characters in the bad word using two backslashes:
{}()\[].+*?^$|
A: My guess is that your list of bad words contains special characters that have particular meanings when interpreted in a regular expression (which is what the replaceAll method does). $, for example, typically matches the end of the string/line. So I'd recommend a combination of things:
*
*Don't use containsIgnoreCase to identify whether a replacement needs to be done. Just let the replaceAll run each time - if there is no match against the bad word list, nothing will be done to the string.
*The characters like $ that have special meanings in regular expressions should be escaped when they are added into the bad word list. For example, badwords.add("@\\$\\$");
A: Try something like this:
String stringToCheck = "This is b!d string with @$$";
List<String> badWords = asList("b!d","@$$");
for(int i = 0; i < badWords.size(); i++) {
if (StringUtils.containsIgnoreCase(stringToCheck,badWords.get(i))) {
stringToCheck = stringToCheck.replaceAll("["+badWords.get(i)+"]+","****");
}
}
System.out.println(stringToCheck);
A: Another solution: bad words matched with word boundaries (and case insensitive).
Pattern badWords = Pattern.compile("\\b(a|b|ĉĉĉ|dddd)\\b",
Pattern.UNICODE_CASE | Pattern.CASE_INSENSITIVE);
String text = "adfsa a dfs bb addfdsaf ĉĉĉ adsfs dddd asdfaf a";
Matcher m = badWords.matcher(text);
StringBuffer sb = new StringBuffer(text.length());
while (m.find()) {
m.appendReplacement(sb, stars(m.group(1)));
}
m.appendTail(sb);
String cleanText = sb.toString();
System.out.println(text);
System.out.println(cleanText);
}
private static String stars(String s) {
return s.replaceAll("(?su).", "*");
/*
int cpLength = s.codePointCount(0, s.length());
final String stars = "******************************";
return cpLength >= stars.length() ? stars : stars.substring(0, cpLength);
*/
}
And then (in the comment) the stars with the correct count: one star per Unicode code point, even when the code point is encoded as a surrogate pair (two UTF-16 chars). | unknown | 
d2390 | train | I would have expected the error to be StringIndexOutOfBoundsException as you are printing the first letter from the first line and the second letter from the second line, etc. As you don't check whether such a letter exists, there comes a point where the line is not that long.
If that is not the cause I would
*
*read the exception and post it in the question.
*step through your code with your debugger to find the bug in your code.
A: You have confused the number of lines with the number of characters in each line
File file = new File("tictactoe.dat");
Scanner scan = new Scanner(file);
String str = "";
int x;
int y;
for ( x = 0; x < numGames; x++) {
str = scan.nextLine();
for (y = 0; y<str.length(); y++)
{
out.println(str.charAt(y));
}
}
A: There's something strange in your logic. Assuming the DAT is:
12345
67890
abcde
fghij
klmno
Your code will print:
1
7
c
i
o
After all, you invoke "scanLine" numGames times and grab the X position on each new line.
A: Your loop isn't doing what you want it to do. You say:
... and then print each individual character from each of the strings.
But what it's actually doing is getting line 1, and printing only the first character from that line, then getting the second line and printing the second character, third line and third character, and so on. If you're printing each character on a separate line, you need to have a second loop inside your first one that iterates through the characters in the string in order to print all their individual characters.
char[] characters = str.toCharArray();
for (int i = 0; i < characters.length; i++) {
System.out.println(characters[i]);
}
Or even better, if you use a for-each loop:
for (char c : str.toCharArray()) {
System.out.println(c);
} | unknown | |
d2391 | train | Have a look at this. I am sure you could loosely modify this to your needs. I don't typically like inserting right off the output but it really depends on your data. Hope this example helps.
IF OBJECT_ID('tempdb..#ImportConfig') IS NOT NULL DROP TABLE #ImportConfig
IF OBJECT_ID('tempdb..#Config') IS NOT NULL DROP TABLE #Config
IF OBJECT_ID('tempdb..#Connection') IS NOT NULL DROP TABLE #Connection
GO
CREATE TABLE #ImportConfig (ImportConfigID INT PRIMARY KEY IDENTITY(1000,1), ImportConfigMeta VARCHAR(25))
CREATE TABLE #Config (ConfigID INT PRIMARY KEY IDENTITY(2000,1), ImportConfigID INT, ConfigMeta VARCHAR(25))
CREATE TABLE #Connection (ConnectionID INT PRIMARY KEY IDENTITY(3000,1), ConfigID INT, ConnectionString VARCHAR(50))
INSERT INTO #ImportConfig (ImportConfigMeta) VALUES
('IMPORT_ConfigMeta1'),('IMPORT_ConfigMeta2')
;MERGE
INTO #Config AS T
USING #ImportConfig AS S
ON T.ConfigID = S.ImportConfigID
WHEN NOT MATCHED THEN
INSERT (ImportConfigID, ConfigMeta) VALUES (
S.ImportConfigID,
REPLACE(S.ImportConfigMeta,'IMPORT_','')
)
OUTPUT INSERTED.ConfigID, 'CONNECTION_STRING: ' + INSERTED.ConfigMeta INTO #Connection;
SELECT 'IMPORT CONFIG' AS TableName, * FROM #ImportConfig
SELECT 'CONFIG' AS TableName, * FROM #Config
SELECT 'CONNECTION' AS TableName, * FROM #Connection | unknown | |
d2392 | train | Solution.
I added the option -o StrictHostKeyChecking=no to scp.
sshpass -p 'PASSWORD' scp -o StrictHostKeyChecking=no ../xlsx/"${file_pdf%.*}-$i.xlsx" USER@HOST:/var/www/html/FOLDER 2>&1 | unknown | |
d2393 | train | A bit of history
Fetch is a standard created in 2015 by the Web Hypertext Application Technology Working Group (WHATWG). It was meant to replace the old and cumbersome XMLHttpRequest as means for issuing web requests. As it was meant to replace XMLHttpRequest the standard was clearly targeted at browsers rather than Node runtime, however due to it's wide adoption and for cross compatibility reasons, it was decided that it should also be implemented in Node.
Nonetheless, it took the Node team roughly 3 years to implement experimental fetch in Node v16. Although still experimental, it is now enabled by default in Node v18.
Because it took the Node dev team so long to implement the Fetch standard, the community took the matter into their own hands and created the node-fetch package, which implements the Fetch standard.
The fetch package that you have mentioned is just coincidentally named the same as the standard but it has nothing to do with it other than that they both aim to "fetch"/"request" resources from the web.
What should you use?
In the past browsers used XMLHttpRequest API and Node used its own http.request. We now have the opportunity to bring those two ecosystems closer still by having them both use the Fetch API. This increases code interoperability and even allows code sharing between the browser and Node in certain cases.
Now, there are other popular packages out there such as axios or requests that still don't use Fetch under the hood but rather continue using Node's http library. Not using Fetch reduces inter-compatibility and therefore I don't think you should keep using either of them unless they convert, which is unlikely in the near future.
Instead, you should consider using Node's native fetch or the node-fetch package. Which one though? Well, my opinion is that Node's fetch is still in its early phases, but given it has the support of the core Node team, I would bet on that. I suppose node-fetch has a wider adoption of the Fetch standard, but I think over time it will become redundant as Node's native fetch becomes fully implemented.
A: Both do the same thing; the only difference I see is that node-fetch is a compatible API for the Node.js runtime,
while fetch is more specific to the browser. | unknown | 
d2394 | train | Try this and see if it works: first enclose the <div class="off-canvas-wrap"> in another div
<div class="page">
<div class="off-canvas-wrap">
<div class="inner-wrap">
[..]
</div>
</div>
</div>
And then set the following css,
body,html{
height:100%;
width:100%;
}
.off-canvas-wrap,.inner-wrap{
height:100%;
}
If you want to block scrolling, say for a chat client, set .page height to 100%. And that would be
body,html{
height:100%;
width:100%;
}
.off-canvas-wrap,.inner-wrap{
height:100%;
}
.page{
height:100%;
}
A: This is the best way I've found and it's pretty simple and non-hackish.
NOTE: this only works in some CSS3 browsers. Compatible Browsers
Sass Version:
.off-canvas-wrap {
.inner-wrap{
min-height: 100vh;
}
}
CSS Version:
.off-canvas-wrap, .off-canvas-wrap > .inner-wrap {
min-height: 100vh;
}
Edit:
Foundation 6 sites version
.off-canvas-wrapper-inner, .off-canvas{
min-height: 100vh;
}
A: I had the same problems and this is what I've done:
I put .off-canvas-wrapper, .inner-wrapper and the aside out of my main content and just use .right(left)-off-canvas-toggle inside my body, and my problem was solved.
This way I don't need the content wrappers anymore.
BTW, I put .exit-off-canvas at the end of my main content, before the closing inner-wrapper tag.
A: I had to hack the JS a bit. I found that there were issues depending on whether the content is taller than the browser/device height or does not push to 100% height. Here’s my suggested fix: https://github.com/zurb/foundation/issues/3800 | unknown |
d2395 | train | For anyone else who is trying to get defmacro! to work on SBCL, a temporary solution to this problem is to reach inside the unquote structure during the flatten procedure and recursively flatten its contents:
(defun flatten (x)
(labels ((flatten-recursively (x flattening-list)
(cond ((null x) flattening-list)
((eq (type-of x) 'SB-IMPL::COMMA) (flatten-recursively (sb-impl::comma-expr x) flattening-list))
((atom x) (cons x flattening-list))
(t (flatten-recursively (car x) (flatten-recursively (cdr x) flattening-list))))))
(flatten-recursively x nil)))
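A quick sanity check on SBCL (the exact symbols are implementation details, so expect output roughly like this):
(flatten '`(foo ,g!bar))
;; => (SB-INT:QUASIQUOTE FOO G!BAR)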
But this is horribly platform-dependent. If I find a better way, I'll post it.
A: In case anyone's still interested in this one, here are my three cents. My objection to the above modification of flatten is that it might be more naturally useful as it was originally, while the problem with representations of unquote is rather endemic to defmacro/g!. I came up with a not-too-pretty modification of defmacro/g! using features to decide what to do. Namely, when dealing with non-SBCL implementations (#-sbcl) we proceed as before, while in the case of SBCL (#+sbcl) we dig into the sb-impl::comma structure, use its expr attribute when necessary, and use equalp in remove-duplicates, as we are now dealing with structures, not symbols. Here's the code:
(defmacro defmacro/g! (name args &rest body)
(let ((syms (remove-duplicates
(remove-if-not #-sbcl #'g!-symbol-p
#+sbcl #'(lambda (s)
(and (sb-impl::comma-p s)
(g!-symbol-p (sb-impl::comma-expr s))))
(flatten body))
:test #-sbcl #'eql #+sbcl #'equalp)))
`(defmacro ,name ,args
(let ,(mapcar
(lambda (s)
`(#-sbcl ,s #+sbcl ,(sb-impl::comma-expr s)
(gensym ,(subseq
#-sbcl
(symbol-name s)
#+sbcl
(symbol-name (sb-impl::comma-expr s))
2))))
syms)
,@body))))
It works with SBCL. I have yet to test it thoroughly on other implementations.
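With either fix in place (together with the g!-symbol-p helper from Let Over Lambda), a classic usage example should now expand with fresh gensyms on SBCL as well; this is just a sketch of the intent:
(defmacro/g! nif (expr pos zero neg)
  `(let ((,g!result ,expr))
     (cond ((plusp ,g!result) ,pos)
           ((zerop ,g!result) ,zero)
           (t ,neg))))
(nif -4 'positive 'zero 'negative) ; => NEGATIVE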
A: This is kind of tricky:
Problem: you assume that backquote/comma expressions are plain lists.
You need to ask yourself this question:
What is the representation of a backquote/comma expression?
Is it a list?
Actually the full representation is unspecified. See here: CLHS: Section 2.4.6.1 Notes about Backquote
We are using SBCL. See this:
* (setf *print-pretty* nil)
NIL
* '`(a ,b)
(SB-INT:QUASIQUOTE (A #S(SB-IMPL::COMMA :EXPR B :KIND 0)))
So a comma expression is represented by a structure of type SB-IMPL::COMMA. The SBCL developers thought that this representation helps when such backquote lists need to be printed by the pretty printer.
Since your flatten treats structures as atoms, it won't look inside...
But this is the specific representation of SBCL. Clozure CL does something else and LispWorks again does something else.
Clozure CL:
? '`(a ,b)
(LIST* 'A (LIST B))
LispWorks:
CL-USER 87 > '`(a ,b)
(SYSTEM::BQ-LIST (QUOTE A) B)
Debugging
Since you found out that somehow flatten was involved, the next debugging steps are:
First: trace the function flatten and see with which data it is called and what it returns.
Since we are not sure what the data actually is, one can INSPECT it.
A debugging example using SBCL:
* (defun flatten (x)
(inspect x)
(labels ((rec (x acc)
(cond ((null x) acc)
((atom x) (cons x acc))
(t (rec (car x) (rec (cdr x) acc))))))
(rec x nil)))
STYLE-WARNING: redefining COMMON-LISP-USER::FLATTEN in DEFUN
FLATTEN
Above calls INSPECT on the argument data. In Common Lisp, the Inspector usually is something where one can interactively inspect data structures.
As an example we are calling flatten with a backquote expression:
* (flatten '`(a ,b))
The object is a proper list of length 2.
0. 0: SB-INT:QUASIQUOTE
1. 1: (A ,B)
We are in the interactive Inspector. The commands now available:
> help
help for INSPECT:
Q, E - Quit the inspector.
<integer> - Inspect the numbered slot.
R - Redisplay current inspected object.
U - Move upward/backward to previous inspected object.
?, H, Help - Show this help.
<other> - Evaluate the input as an expression.
Within the inspector, the special variable SB-EXT:*INSPECTED* is bound
to the current inspected object, so that it can be referred to in
evaluated expressions.
So the command 1 walks into the data structure, here a list.
> 1
The object is a proper list of length 2.
0. 0: A
1. 1: ,B
Walk in further:
> 1
The object is a STRUCTURE-OBJECT of type SB-IMPL::COMMA.
0. EXPR: B
1. KIND: 0
Here the Inspector tells us that the object is a structure of a certain type. That's what we wanted to know.
We now leave the Inspector using the command q and the flatten function continues and returns a value:
> q
(SB-INT:QUASIQUOTE A ,B) | unknown | |
d2396 | train | Whenever you want to avoid showing a row based on its existence in another table, you can do that in one of two ways
*
*Use a NOT EXISTS or NOT IN condition (sketched just below), or
*use a LEFT JOIN and add an IS NULL check on the joining condition.
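For the first option, the anti-join part would look roughly like this (an untested sketch reusing the question's table and column names; the package/member joins from the full query below are still needed to restrict which ads the member should see):
SELECT advertisements.Ads_ID, advertisements.AdsName
FROM advertisements
WHERE NOT EXISTS (
    SELECT 1
    FROM views
    WHERE views.Ads_ID = advertisements.Ads_ID
      AND views.Mem_ID = "M100"
      AND date(views.clickeddate) = current_date
)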
The below query will provide you the desired result. It removes from the select query the ads that have already been visited by a particular member on a particular date.
SELECT
advertisements.Ads_ID,
advertisements.AdsName,
advertisements.code,
advertisements.Ad_Value,
advertisements.images,
advertisements.date
FROM advertisements
JOIN package_ads ON package_ads.Ads_ID=advertisements.Ads_ID
JOIN packages ON packages.Package_ID=package_ads.Package_ID
JOIN member_package ON member_package.Package_ID=packages.Package_ID
JOIN members2 ON members2.Mem_ID=member_package.Mem_ID
LEFT JOIN views ON (views.Mem_ID=members2.Mem_ID and date(views.clickeddate) = current_date and views.Ads_ID=advertisements.Ads_ID)
WHERE
member_package.Mem_ID="M100"
AND views.Ads_ID IS NULL | unknown | |
d2397 | train | The onClick of your button (or any widget) should build the value like this:
JsonObject value = Json.createObjectBuilder()
.add("firstName", "John")
.add("lastName", "Smith")
.add("age", 25)
.add("address", Json.createObjectBuilder()
.add("streetAddress", "21 2nd Street")
.add("city", "New York")
.add("state", "NY")
.add("postalCode", "10021"))
.add("phoneNumber", Json.createArrayBuilder()
.add(Json.createObjectBuilder()
.add("type", "home")
.add("number", "212 555-1234"))
.add(Json.createObjectBuilder()
.add("type", "fax")
.add("number", "646 555-4567")))
.build();
A: Once you get the data in the request object in the controller from the form, take the values and append them to the JSON file through Java I/O.
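A minimal sketch of that write step, reusing the same JSON-P builder API shown above (the file name is illustrative; you would also need the JsonWriter and FileWriter imports plus IOException handling):
try (JsonWriter writer = Json.createWriter(new FileWriter("person.json"))) {
    writer.writeObject(value); // persists the JsonObject built earlier
}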
Note: This is not advisable. Generally, we make use of web applications only when we need to do CRUD operations with data from a DB. | unknown |
d2398 | train | Firstly, it is also worth mentioning the PCRF, the Policy and Charging Rules Function, which is the entity that defines and manages the policies. It will often group sets of rules into profiles.
The TDF, Traffic Detection Function, is 'a functional entity that performs application detection and reporting of detected application and its service data flow description to the PCRF'.
The PCEF 'encompasses service data flow detection, policy enforcement and flow based charging functionalities.'
From the above descriptions, all from the 3GPP spec, the distinction seems quite clear - the PCRF is the brains, the TDF detects application flows and the PCEF enforces policy. However, the TDF definition goes on to say that a TDF:
For solicited application reporting, the PCRF can request the TDF to also perform enforcement actions and usage monitoring.
For those cases where service data flow description is not possible to be provided by the TDF to the PCRF, the TDF performs:
*
*Gating;
*Redirection;
*Bandwidth limitation.
for the detected applications.
and to also note that a PCEF can be extended to include TDF functionality:
NOTE: The PCEF can be enhanced with application detection and control feature as specified in clause 6.2.2.5
So, your question is a good one, there is clearly some potential for overlap, and it is quite common for vendors to actually offer a single combined TDF/PCEF product.
A: TDF was introduced as a functional entity from Release 11 and the related information is present in the specification. Let me touch upon a few key points w.r.t. TDF, PCEF and PCRF to set the context for the differences between them.
TDF :
The TDF is a functional entity that performs application detection and
reporting of detected application and its service data flow
description to the PCRF. The TDF supports solicited application
reporting and/or unsolicited application reporting.
PCEF :
The PCEF encompasses service data flow detection, policy enforcement
and flow based charging functionalities. It also provides user plane traffic
handling, triggering control plane
session management (where the IP-CAN permits), QoS handling, and
service data flow measurement as well as online and offline charging
interactions.
Policy Control is enforced by PCEF as indicated by the PCRF
in two different ways: a.Gate enforcement and b.QoS enforcement.
Charging control is enforced by PCEF in the following way:
- For a service data flow (defined by an active PCC rule) that is subject to charging control, the PCEF shall allow the service data
flow to pass through the PCEF if and only if there is a corresponding
active PCC rule and, for online charging, the OCS has authorized credit for the charging key.
PCRF :
The PCRF that uses usage monitoring for making dynamic policy
decisions shall set and send the applicable thresholds to the PCEF or
TDF for monitoring. The usage monitoring thresholds shall be based on
volume. The PCEF or TDF shall notify the PCRF when a threshold is
reached and report the accumulated usage since the last report for
usage monitoring.
Other points to note :
*
*PCEF interacts with PCRF and OCS. The TDF interacts only with PCRF and not with charging system (Online or Offline CS).
*PCEF resides with-in the PDN GW. The TDF resides as a separate entity outside PGW.
*Interfaces : The Sd reference point enables a PCRF to have dynamic control over the ADC (Application Detection and Control) behaviour at a TDF. The Gx reference point enables a PCRF to have dynamic control over the PCC(Policy Charging and Control)/ADC (Application Detection and Control) behaviour at a PCEF.
ADC - This is present in the TDF or, in some scenarios, in the PCEF, in which case the PCEF is termed 'PCEF enhanced with ADC'.
In Application Detection and Control(ADC), two models may be applied,
depending on operator requirements: solicited and unsolicited
application reporting
Solicited application reporting: The PCRF shall instruct the TDF, or the PCEF enhanced with ADC, on which applications to detect and
whether to report start or stop event to the PCRF by activating the
appropriate ADC rules.
Unsolicited application reporting: The TDF is pre-configured on which applications to detect and report. The enforcement is done in
the PCEF
The report to the PCRF shall include the same information for solicited and unsolicited application reporting, that is, whether the report is for start or stop, the detected Application Identifier and, if deducible, the service data flow descriptions for the application user plane traffic.
The PCRF shall accept input for PCC decision-making from the PCEF, the
TDF if present and other entities.
High level Information obtained from the PCEF via the Gx reference point, e.g. IP-CAN bearer attributes, request type, subscriber related information, IP flow mobility routing rules (if IP flow mobility is supported) and detected application’s traffic information, if the PCEF supports Application Detection and Control feature (Detected Application Identifier, Allocated Application Instance Identifier, Detected service data flow descriptions.)
PCC procedures over Gx reference point
Request for PCC rules
Provisioning of PCC rules
Provisioning of Event Triggers
Provisioning of charging related information for the IP-CAN session
Provisioning and Policy Enforcement of Authorized QoS
Requesting Usage Monitoring Control
Reporting Accumulated Usage
ADC procedures over Gx reference point :
Request for ADC rules
Provisioning of ADC rules
Requesting Usage Monitoring Control for applications
Reporting applications' Accumulated Usage
Application Detection Information
High level Information obtained from the TDF via the Sd reference point, e.g. report on application’s traffic detection start/stop, Detected Application Identifier, Allocated Application Instance Identifier, Detected service data flow descriptions.
ADC procedures over Sd reference point for solicited application reporting :
Provisioning of ADC rules
Request for ADC rules
Provisioning of Event Triggers
Requesting Usage Monitoring Control
Reporting Accumulated Usage
Application Detection Information
ADC procedures over Sd reference point for unsolicited application reporting :
Provisioning of ADC rules
Application Detection Information
TDF session to Gx session linking | unknown | |
d2399 | train | I tested dynamic SOQL in the test method with a limit clause and it worked fine without any issues.
I suggest you put some System.debug statements prior to the assertion to check the size of the accounts list returned.
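For example (a rough Apex sketch; dynamicSoql and expectedCount are placeholders for whatever your test actually uses):
List<Account> accounts = Database.query(dynamicSoql);
System.debug('Accounts returned: ' + accounts.size());
System.assertEquals(expectedCount, accounts.size());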
Hope this way you will come to know what's happening. | unknown |
d2400 | train | You can split the string with "\s*Object . Statement:\s*" (the . matches the single character naming each object)
import re
word="Object A Statement: There was a cat with a bag full of meat. It was a red cat with a blue hat. Object B Statement: There was a dog with a bag full of toys. It was a blue dog with a green hat. Object C Statement: There was a dolphin with a bag full of bubbles. It was a purple dolphin with an orange hat. Object D Statement: There was a zebra with a bag full of grass. It was a white zebra with a blue hat. Object E Statement: There was a bear with a bag full of wood. It was a brown bear with a black hat."
result = re.split(r"\s*Object . Statement:\s*", word)
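# drop the empty string that re.split leaves before the first "Object X Statement:" match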
result = [r for r in result if len(r) > 0]
print("\n".join(result))
I get the following result.
There was a cat with a bag full of meat. It was a red cat with a blue hat.
There was a dog with a bag full of toys. It was a blue dog with a green hat.
There was a dolphin with a bag full of bubbles. It was a purple dolphin with an orange hat.
There was a zebra with a bag full of grass. It was a white zebra with a blue hat.
There was a bear with a bag full of wood. It was a brown bear with a black hat. | unknown |