_id (stringlengths 2-6) | partition (stringclasses 3 values) | text (stringlengths 4-46k) | language (stringclasses 1 value) | title (stringclasses 1 value)
---|---|---|---|---|
d6401 | train | You could check the CurrentTransaction property and do something like this:
var transaction = Database.CurrentTransaction ?? Database.BeginTransaction();
If there is already a transaction use that, otherwise start a new one...
Edit: Removed the using block, see comments. More logic is needed for committing/rolling back the transaction though...
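A rough sketch of that extra commit/rollback logic (just a sketch: the rule it encodes is that only the code that started the transaction should finish it):
var ownsTransaction = Database.CurrentTransaction == null;
var transaction = Database.CurrentTransaction ?? Database.BeginTransaction();
try
{
    // ... do your work and call SaveChanges() here ...
    if (ownsTransaction) transaction.Commit();
}
catch
{
    if (ownsTransaction) transaction.Rollback();
    throw;
}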
A: I am answering the question you asked "How to nest transactions in EF Core 6?"
Please note that this is just a direct answer, not an evaluation of what is or isn't best practice. There has been a lot of discussion around best practices; it is valid to ask what fits your use case best, but that is not an answer to the question (keep in mind that Stack Overflow is a Q&A site where people want direct answers).
Having said that, let's continue with the topic:
Try to use this helper function for creating a new transaction:
public CommittableTransaction CreateTransaction()
=> new System.Transactions.CommittableTransaction(new TransactionOptions()
{
IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted
});
Using the Northwind database as example database, you can use it like:
public async Task<int?> CreateCategoryAsync(Categories category)
{
if (category?.CategoryName == null) return null;
using(var trans = CreateTransaction())
{
await this.Context.Categories.AddAsync(category);
await this.Context.SaveChangesAsync();
trans.Commit();
return category?.CategoryID;
}
}
And then you can call it from another function like:
/// <summary>Create or use existing category with associated products</summary>
/// <returns>Returns null if transaction was rolled back, else CategoryID</returns>
public async Task<int?> CreateProjectWithStepsAsync(Categories category)
{
using var trans = CreateTransaction();
int? catId = GetCategoryId(category.CategoryName)
?? await CreateCategoryAsync(category);
if (!catId.HasValue || string.IsNullOrWhiteSpace(category.CategoryName))
{
trans.Rollback(); return null;
}
var product1 = new Products()
{
ProductName = "Product A1", CategoryID = catId
};
await this.Context.Products.AddAsync(product1);
var product2 = new Products()
{
ProductName = "Product A2", CategoryID = catId
};
await this.Context.Products.AddAsync(product2);
await this.Context.SaveChangesAsync();
trans.Commit();
return catId;
}
To run this with LINQPad you need an entry point (and of course, add the Entity Framework Core 6.x NuGet package via F4, then create an EntityFramework Core connection):
// Main method required for LinqPad
UserQuery Context;
async Task Main()
{
Context = this;
var category = new Categories()
{
CategoryName = "Category A1"
// CategoryName = ""
};
var catId = await CreateProjectWithStepsAsync(category);
Console.WriteLine((catId == null)
? "Transaction was aborted."
: "Transaction successful.");
}
This is just a simple example - it does not check whether any product(s) with the same name already exist; it will just create a new one. You can implement that easily; I have shown it in the function CreateProjectWithStepsAsync for the categories:
int? catId = GetCategoryId(category.CategoryName)
?? await CreateCategoryAsync(category);
First it queries the categories by name (via GetCategoryId(...)), and if the result is null it will create a new category (via CreateCategoryAsync(...)).
Also, you need to consider the isolation level: Check out System.Transactions.IsolationLevel to see if the one used here (ReadCommitted) is the right one for you (it is the default setting).
What it does is create a transaction explicitly, and notice that here we have a transaction within a transaction.
Note: I have used both ways of using - the classic using block and the newer using declaration. Pick the one you like more.
A: Just don't call SaveChanges multiple times.
The problem is caused by calling SaveChanges multiple times to commit changes made to the DbContext instead of calling it just once at the end. It's simply not needed. A DbContext is a multi-entity Unit-of-Work. It doesn't even keep an open connection to the database. This allows 100-1000 times better throughput for the entire application by eliminating cross-connection blocking.
A DbContext tracks all modifications made to the objects it tracks and persists/commits them when SaveChanges is called using an internal transaction. To discard the changes, simply dispose the DbContext. That's why all examples show using a DbContext in a using block - that's actually the scope of the Unit-of-Work "transaction".
There's no need to "save" parent objects first. EF Core will take care of this itself inside SaveChanges.
Using the Blog/Posts example in the EF Core documentation tutorial:
public class BloggingContext : DbContext
{
public DbSet<Blog> Blogs { get; set; }
public DbSet<Post> Posts { get; set; }
public string DbPath { get; }
// The following configures EF to connect to a SQL Server database
// (the original tutorial configured a Sqlite file instead).
protected override void OnConfiguring(DbContextOptionsBuilder options)
=> options.UseSqlServer($"Data Source=.;Initial Catalog=tests;Trusted_Connection=True; Trust Server Certificate=Yes");
}
public class Blog
{
public int BlogId { get; set; }
public string Url { get; set; }
public List<Post> Posts { get; } = new();
}
public class Post
{
public int PostId { get; set; }
public string Title { get; set; }
public string Content { get; set; }
public int BlogId { get; set; }
public Blog Blog { get; set; }
}
The following Program.cs will add a Blog with 5 posts but only call SaveChanges once at the end:
using (var db = new BloggingContext())
{
Blog blog = new Blog { Url = "http://blogs.msdn.com/adonet" };
IEnumerable<Post> posts = Enumerable.Range(0, 5)
.Select(i => new Post {
Title = $"Hello World {i}",
Content = "I wrote an app using EF Core!"
});
blog.Posts.AddRange(posts);
db.Blogs.Add(blog);
await db.SaveChangesAsync();
}
The code never specifies or retrieves the IDs. Add is an in-memory operation so there's no reason to use AddAsync. Add starts tracking both the blog and the related Posts in the Inserted state.
The contents of the tables after this are:
select * from blogs
select * from posts;
-----------------------
BlogId Url
1 http://blogs.msdn.com/adonet
PostId Title Content BlogId
1 Hello World 0 I wrote an app using EF Core! 1
2 Hello World 1 I wrote an app using EF Core! 1
3 Hello World 2 I wrote an app using EF Core! 1
4 Hello World 3 I wrote an app using EF Core! 1
5 Hello World 4 I wrote an app using EF Core! 1
Executing the code twice will add another blog with another 5 posts.
PostId Title Content BlogId
1 Hello World 0 I wrote an app using EF Core! 1
2 Hello World 1 I wrote an app using EF Core! 1
3 Hello World 2 I wrote an app using EF Core! 1
4 Hello World 3 I wrote an app using EF Core! 1
5 Hello World 4 I wrote an app using EF Core! 1
6 Hello World 0 I wrote an app using EF Core! 2
7 Hello World 1 I wrote an app using EF Core! 2
8 Hello World 2 I wrote an app using EF Core! 2
9 Hello World 3 I wrote an app using EF Core! 2
10 Hello World 4 I wrote an app using EF Core! 2
Using SQL Server XEvents Profiler shows that these SQL calls are made:
exec sp_executesql N'SET NOCOUNT ON;
INSERT INTO [Blogs] ([Url])
VALUES (@p0);
SELECT [BlogId]
FROM [Blogs]
WHERE @@ROWCOUNT = 1 AND [BlogId] = scope_identity();
',N'@p0 nvarchar(4000)',@p0=N'http://blogs.msdn.com/adonet'
exec sp_executesql N'SET NOCOUNT ON;
DECLARE @inserted0 TABLE ([PostId] int, [_Position] [int]);
MERGE [Posts] USING (
VALUES (@p1, @p2, @p3, 0),
(@p4, @p5, @p6, 1),
(@p7, @p8, @p9, 2),
(@p10, @p11, @p12, 3),
(@p13, @p14, @p15, 4)) AS i ([BlogId], [Content], [Title], _Position) ON 1=0
WHEN NOT MATCHED THEN
INSERT ([BlogId], [Content], [Title])
VALUES (i.[BlogId], i.[Content], i.[Title])
OUTPUT INSERTED.[PostId], i._Position
INTO @inserted0;
SELECT [i].[PostId] FROM @inserted0 i
ORDER BY [i].[_Position];
',N'@p1 int,@p2 nvarchar(4000),@p3 nvarchar(4000),@p4 int,@p5 nvarchar(4000),@p6 nvarchar(4000),@p7 int,@p8 nvarchar(4000),@p9 nvarchar(4000),@p10 int,@p11 nvarchar(4000),@p12 nvarchar(4000),@p13 int,@p14 nvarchar(4000),@p15 nvarchar(4000)',@p1=3,@p2=N'I wrote an app using EF Core!',@p3=N'Hello World 0',@p4=3,@p5=N'I wrote an app using EF Core!',@p6=N'Hello World 1',@p7=3,@p8=N'I wrote an app using EF Core!',@p9=N'Hello World 2',@p10=3,@p11=N'I wrote an app using EF Core!',@p12=N'Hello World 3',@p13=3,@p14=N'I wrote an app using EF Core!',@p15=N'Hello World 4'
The unusual SELECT and MERGE are used to ensure IDENTITY values are returned in the order the objects were inserted, so EF Core can assign them to the object properties. After calling SaveChanges, all Blog and Post objects will have the correct database-generated IDs. | unknown | |
d6402 | train | This is probably due to the fact that
<div ref="editor" v-html="value"></div> is inside a child component's slot v-tab-item which is conditionally rendered.
That means that the v-tab-item is mounted AFTER the parent's mounted() executes, so the content (including your refs) is not available yet.
If you can defer the initialization until the child has mounted then you can access the ref, but getting that to work is a complex endeavor.
Instead, I would opt to define a component that handles quill initialization and that can be nested in the tab.
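A minimal sketch of such a component (assuming Vue 2 and the quill npm package; all names are illustrative):
<!-- MyQuillComponent.vue -->
<template>
  <div ref="editor"></div>
</template>
<script>
import Quill from "quill";
export default {
  props: ["value"],
  mounted() {
    // Safe to touch $refs here: this hook runs after this component's
    // own DOM has been mounted.
    this.quill = new Quill(this.$refs.editor, { theme: "snow" });
    this.quill.root.innerHTML = this.value || "";
    this.quill.on("text-change", () => {
      this.$emit("input", this.quill.root.innerHTML); // enables v-model
    });
  }
};
</script>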
ie:
<v-tab-item key="1" value="tab1">
<MyQuillComponent v-model="value"/>
</v-tab-item> | unknown | |
d6403 | train | Try with:
string cmdstring = "UPDATE table SET date='" + DateTime.Parse(datetxt.Text).ToString("dd/MM/yyyy") + "' WHERE id=" + id;
A: Apparently it is a problem with the date format. The solution indicated by Beldi Anouar should work.
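A safer variant is to let ADO.NET handle the date formatting (and protect against SQL injection) by using parameters; a sketch, assuming an open SqlConnection named conn:
using (var cmd = new SqlCommand("UPDATE table SET date = @date WHERE id = @id", conn))
{
    cmd.Parameters.AddWithValue("@date", DateTime.Parse(datetxt.Text));
    cmd.Parameters.AddWithValue("@id", id);
    cmd.ExecuteNonQuery();
}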
Good luck | unknown | |
d6404 | train | We can use .SD to select the columns based on a logical vector
library(data.table)
a[, .SD[, colSums(.SD)>500, with = FALSE],.SDcols=setdiff(names(a),c("vs","am"))]
If we wanted to do rowSums, just use that as index
d <- a[, .SD[rowSums(.SD)>300],.SDcols=-c(8,9)]
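For a quick reproducible check (assuming, hypothetically, that a is mtcars converted to a data.table, where columns 8 and 9 are vs and am):
library(data.table)
a <- as.data.table(mtcars)
# keep only rows whose sum across the remaining columns exceeds 300
d <- a[, .SD[rowSums(.SD) > 300], .SDcols = -c(8, 9)]
head(d)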
Or with Reduce
a[, .SD[Reduce(`+`, .SD) > 300], .SDcols = -c(8, 9)]
If we need to get all the columns, use .I instead of .SD
a[a[, .I[Reduce(`+`, .SD) > 300], .SDcols = -c(8, 9)]] | unknown | |
d6405 | train | I just changed this password=user_register.cleaned_data.get('password') to this password=request.POST.get('password') and it worked. | unknown | |
d6406 | train | For lists, you use [[ not [ to assign/get a single element ([ returns a sublist).
similitud <- list()  # initialize the result list before assigning with [[
for (i in seq_along(x)) {
  similitud[[i]] <- agrep(x[i], x[-i], max = 3, value = TRUE)
}
Just change your similitud[i] to a similitud[[i]]. | unknown | |
d6407 | train | It turned out it was stuck due to a pre-push commit hook which was placed there (at <repository-root>/.git/hooks/pre-push) by a third-party tool.
To debug, I ran the command with GIT_TRACE on:
$ GIT_TRACE=1 git push -v origin xyz
11:47:11.950226 git.c:340 trace: built-in: git 'push' '-v' 'origin' 'xyz'
Pushing to [email protected]:repo.git
11:47:11.951795 run-command.c:626 trace: run_command: 'ssh' '[email protected]' 'git-receive-pack '\''repo.git'\'''
11:47:13.100323 run-command.c:626 trace: run_command: '.git/hooks/pre-push' 'origin' '[email protected]'
Deleting the pre-push file solved the problem. | unknown | |
d6408 | train | Using .$Country instead of Country should fix it
library(dplyr)
library(purrr)
library(ggplot2)

data = data.frame(Country = c('USA','USA','UK','UK'),
                  Year = c(1995,2000,1995,2000),
                  Incidence = c(20000,23000,16000,22000))

list_plot <- data %>%
  group_split(Country) %>%
  map(~ggplot(., aes(x = Year, y = Incidence)) +
        geom_line() + geom_point() + labs(title = .$Country)) | unknown | |
d6409 | train | The using namespace X will simply tell the compiler "when looking to find a name, look in X as well as the current namespace". It does not "import" anything. There are a lot of different ways you could actually implement this in a compiler, but the effect is "all the symbols in X appear as if they are available in the current namespace".
Or put another way, it would appear as if the compiler adds X:: in front of symbols when searching for symbols (as well as searching for the name itself without namespace).
[It gets rather complicated, and I generally avoid it, if you have a symbol X::a and local value a, or you use using namespace Y as well, and there is a further symbol Y::a. I'm sure the C++ standard DOES say which is used, but it's VERY easy to confuse yourself and others by using such constructs.]
In general, I use explicit namespace qualifiers on "everything", so I rarely use using namespace ... at all in my own code.
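For example, a minimal sketch of the ambiguity described above (all names hypothetical):
#include <iostream>
namespace X { int a = 1; }
namespace Y { int a = 2; }
using namespace X;
using namespace Y;
int main() {
    // std::cout << a;   // error: reference to 'a' is ambiguous
    std::cout << X::a;   // OK: explicit qualification resolves it
}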
A: No, it does not. It means that you can, from this line on, use classes and functions from std namespace without std:: prefix. It's not an alternative to #include. Sadly, #include is still here in C++.
Example:
#include <iostream>
int main() {
std::cout << "Hello "; // No `std::` would give compile error!
using namespace std;
cout << "world!\n"; // Now it's okay to use just `cout`.
return 0;
}
A: Nothing is "imported" into the file by a using directive. All it does is provide shorter ways to write symbols that already exist in a namespace. For example, the following will generally not compile if it is the first two lines of a file:
#include <string>
static const string s("123");
The <string> header defines std::string, but string is not the same thing. You haven't defined string as a type, so this is an error.
The next code snippet (at the top of a different file) will compile, because when you write using namespace std, you are telling the compiler that string is an acceptable way to write std::string:
#include <string>
using namespace std;
static const string s("123");
But the following will not generally compile when it appears at the top of a file:
using namespace std;
static const string s("123");
and neither will this:
using namespace std;
static const std::string s("123");
That's because using namespace doesn't actually define any new symbols; it required some other code (such as the code found in the <string> header) to define those symbols.
By the way, many people will wisely tell you not to write using namespace std in any code. You can program very well in C++ without ever writing using namespace for any namespace. But that is the topic of another question that is answered at Why is "using namespace std" considered bad practice?
A: No, #include still works exactly the same in C++.
To understand using, you first need to understand namespaces. These are a way of avoiding the symbol conflicts which happen in large C projects, where it becomes hard to guarantee, for example, that two third-party libraries don't define functions with the same name. In principle everyone can choose a unique prefix, but I've encountered genuine problems with non-static C linker symbols in real projects (I'm looking at you, Oracle).
So, namespace allows you to group things, including whole libraries, including the standard library. It both avoids linker conflicts, and avoids ambiguity about which version of a function you're getting.
For example, let's create a geometry library:
// geo.hpp
struct vector;
struct matrix;
int transform(matrix const &m, vector &v); // v -> m . v
and use some STL headers too:
// vector
template <typename T, typename Alloc = std::allocator<T>> class vector;
// algorithm
template <typename Input, typename Output, typename Unary>
void transform(Input, Input, Output, Unary);
But now, if we use all three headers in the same program, we have two types called vector, two functions called transform (ok, one function and a function template), and it's hard to be sure the compiler gets the right one each time. Further, it's hard to tell the compiler which we want if it can't guess.
So, we fix all our headers to put their symbols in namespaces:
// geo.hpp
namespace geo {
struct vector;
struct matrix;
int transform(matrix const &m, vector &v); // v -> m . v
}
and use some STL headers too:
// vector
namespace std {
template <typename T, typename Alloc = std::allocator<T>> class vector;
}
// algorithm
namespace std {
template <typename Input, typename Output, typename Unary>
void transform(Input, Input, Output, Unary);
}
and our program can distinguish them easily:
#include "geo.hpp"
#include <algorithm>
#include <vector>
geo::vector origin = {0,0,0};
typedef std::vector<geo::vector> path;
void transform_path(geo::matrix const &m, path &p) {
  std::transform(p.begin(), p.end(), p.begin(),
    [&m](geo::vector v) { geo::transform(m, v); return v; }
  );
}
Now that you understand namespaces, you can also see that names can get pretty long. So, to save typing out the fully-qualified name everywhere, the using directive allows you to inject individual names, or a whole namespace, into the current scope.
For example, we could replace the lambda expression in transform_path like so:
#include <functional>
void transform_path(geo::matrix const &m, path &p) {
using std::transform; // one function
using namespace std::placeholders; // an entire (nested) namespace
transform(p.begin(), p.end(), p.begin(),
std::bind(geo::transform, m, _1));
// this ^ came from the
// placeholders namespace
// ^ note we don't have to qualify std::transform any more
}
and that only affects those symbols inside the scope of that function. If another function chooses to inject the geo::transform instead, we don't get the conflict back. | unknown | |
d6410 | train | Here is one explicit way:
where (date > xxxx or (date = xxxx and hour >= hhhh)) and
(date < yyyy or (date = yyyy and hour < hhhh))
Another method would use date arithmetic:
where date + hour * interval '1 hour' >= xxxx + hhhh * interval '1 hour' and
date + hour * interval '1 hour' < yyyy + hhhh * interval '1 hour'
A: You can cast your date + hour to a timestamp and let the database compare them
select (requestdate ||' '|| requesthour ||':00:00')::timestamp between ('2016-08-01 04:00:00')::timestamp and ('2016-10-10 05:00:00')::timestamp from bbt | unknown | |
d6411 | train | If you don't like the way prototyping works in JavaScript in order to achieve a simple way of inheritance and OOP, I'd suggest taking a look at this: https://github.com/haroldiedema/joii
It basically allows you to do the following (and more):
// First (bottom level)
var Person = new Class(function() {
this.name = "Unknown Person";
});
// Employee, extend on Person & apply the Role property.
var Employee = new Class({ extends: Person }, function() {
this.name = 'Unknown Employee';
this.role = 'Employee';
this.getValue = function() {
return "Hello World";
}
});
// 3rd level, extend on Employee. Modify existing properties.
var Manager = new Class({ extends: Employee }, function() {
// Overwrite the value of 'role'.
this.role = this.role + ': Manager';
// Class constructor to apply the given 'name' value.
this.__construct = function(name) {
this.name = name;
}
// Parent inheritance & override
this.getValue = function() {
return this.parent.getValue().toUpperCase();
}
});
// And to use the final result:
var myManager = new Manager("John Smith");
console.log( myManager.name ); // John Smith
console.log( myManager.role ); // Employee: Manager
console.log( myManager.getValue() ); // HELLO WORLD | unknown | |
d6412 | train | When you do a redirect, the browser sends an entirely new request, so all of the data from the previous request is inaccessible. You probably don't want to be doing a redirect here; no amount of scope will help you when you're looking at separate runs through your controller.
Think about your design a little bit - what are you trying to do? If the selection is something sticky, maybe it should go in the session. If the change is only in a partial, maybe you should use an Ajax call. Maybe the solution is as simple as rendering the index template instead of redirecting to the index action. | unknown | |
d6413 | train | Doh, blindingly obvious. I wasn't paying attention to the function signature of payment.Create: it returns a new payment. It's that which you need to use, not the payment used to invoke Create, e.g.:
Payment paymentToUse = payment.Create(apiContext);
paymentToUse has all the goods I'm after. | unknown | |
d6414 | train | You can use unix4j
Unix4jCommandBuilder unix4j = Unix4j.builder();
List<String> testClasses = unix4j.find("./src/test/java/", "*.java").toStringList();
for(String path: testClasses){
System.out.println(path);
}
pom.xml dependency:
<dependency>
<groupId>org.unix4j</groupId>
<artifactId>unix4j-command</artifactId>
<version>0.3</version>
</dependency>
Gradle dependency:
compile 'org.unix4j:unix4j-command:0.2'
A: You probably do not have to re-invent the wheel, because a library named Finder already implements the functionality of the Unix find command: https://commons.apache.org/sandbox/commons-finder/
A: Here's a java 8 snippet to get you started if you want to roll your own. You might want to read up on the caveats of Files.list though.
import java.io.IOException;
import java.nio.file.*;
import java.util.function.Predicate;
import java.util.stream.Stream;

public class Find {
public static void main(String[] args) throws IOException {
Path path = Paths.get("/tmp");
Stream<Path> matches = listFiles(path).filter(matchesGlob("**/that"));
matches.forEach(System.out::println);
}
private static Predicate<Path> matchesGlob(String glob) {
FileSystem fileSystem = FileSystems.getDefault();
PathMatcher pathMatcher = fileSystem.getPathMatcher("glob:" + glob);
return pathMatcher::matches;
}
public static Stream<Path> listFiles(Path path){
try {
return Files.isDirectory(path) ? Files.list(path).flatMap(Find::listFiles) : Stream.of(path);
} catch (IOException e) {
throw new RuntimeException(e);
}
}
} | unknown | |
d6415 | train | draw_dealer_card() needs to increase $total_dealer; otherwise the loop will go on forever.
A more elaborate answer
You only calculate the total once and never again in the while loop; that is why the dealer's total will never increase and therefore will never be greater than 17.
Put the code that converts a card to its value in its own function, so you can use it anywhere
<?php
/**
* return the value of the card for the current total
* @param string $card the card to convert to count
* @param int $current_total the current total of the player/dealer
* @return int the value of $card
*/
function get_card_value($card, $current_total) {
switch($card) {
case "King":
case "Queen":
case "Jack":
return 10;
case "Ace":
return ($current_total > 10) ? 1 : 11;
case "10":
case "9":
case "8":
case "7":
case "6":
case "5":
case "4":
case "3":
case "2":
return (int) $card;
}
return 0; // this should not happen probably abort here
}
From here it is easy, edit your while loop like this:
<?php
while ($total_dealer < 17){
list_dealer_hand();
draw_dealer_card();
/* this is bad code using end(),
* which might not always get the last drawn card.
* Also calculation of total is wrong this way:
* What happens if dealer draws Ace, Ace, Ace, King?
* Should be 1+1+1+10 = 13 but will result in 11+1+1+10=23
*/
$total_dealer += get_card_value(end($_SESSION['dealer_hand']), $total_dealer);
}
Correct calculation of total
To make your code more robust, add a function calc_total(array $cards) which calculates the total of an array of cards, and use it in the while loop to recalculate the dealer's total. Such a function could look like this:
<?php
function calc_total(array $cards) {
//this is a little tricky since aces must be counted last
$total = 0;
$aces = array();
foreach($cards as $card) {
if($card === 'Ace') {
$aces[] = $card;
continue; // next $card
}
$total += get_card_value($card, $total);
}
// add aces values
if (($total + 10 + count($aces)) > 21) {
//all aces must count 1 or 21 will be exceeded
return $total + count($aces);
}
foreach($aces as $card) {
$total += get_card_value($card, $total);
}
return $total;
}
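For example, the tricky hand from the comment above now comes out right (assuming both functions above are defined):
<?php
// Ace, Ace, Ace, King must count 1 + 1 + 1 + 10 = 13,
// because counting any ace as 11 would exceed 21.
echo calc_total(['Ace', 'Ace', 'Ace', 'King']); // 13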
Now your while loop could look like this:
<?php
while ($total_dealer < 17){
list_dealer_hand();
draw_dealer_card();
// recalculate the dealers total
$total_dealer = calc_total($_SESSION['dealer_hand']);
}
Setting up the pile
Mixing number and string keys is perfectly valid PHP, but also most of the time misleading. In your pile you only need the cards; the values are not important, since you can get a card's value at any time by calling get_card_value($card, 0). So set up the pile like this:
<?php
if(!isset($_SESSION["dealer_pile"])) $_SESSION["dealer_pile"] = array(
'Jack', 'Queen', 'King', 'Ace', '10', '9', '8', '7', '6', '5', '4', '3', '2'
);
Also change the draw_dealer_card function
<?php
function draw_dealer_card() {
//get a key
$key = array_rand($_SESSION["dealer_pile"]);
// add the card to the hand
$_SESSION["dealer_hand"][] = $_SESSION["dealer_pile"][$key];
/*
* why are you removing it from pile, the pile might
* contain multiple cards of each type
*/
// unset($_SESSION["dealer_pile"][$dealer_card]);
}
Notice how the $_SESSION['dealer_hand'] is no longer associative. Take this into account whenever you are adding cards to it, just use, $_SESSION["dealer_hand"][] = $the_new_card
A: Your current code gets a static value of $total_dealer and checks it in the while loop without incrementing, which results in an infinite loop. So try putting the foreach loop inside the while loop, which will allow $total_dealer to be incremented after each selection.
if(FORM_stand("Stand")){
$total_dealer = 0;
while ($total_dealer < 17){
list_dealer_hand();
draw_dealer_card();
$text_dealer = '';
foreach($_SESSION["dealer_hand"] as $dealer_card=>$dealer_points) {
switch($dealer_card)
{
case "King":
case "Queen":
case "Jack":
case "10":
$total_dealer += 10;
break;
case "Ace":
if($total_dealer >= 11)
{
$total_dealer += 1;
}else{
$total_dealer += 11;
}
break;
case "9":
$total_dealer += 9;
break;
case "8":
$total_dealer += 8;
break;
case "7":
$total_dealer += 7;
break;
case "6":
$total_dealer += 6;
break;
case "5":
$total_dealer += 5;
break;
case "4":
$total_dealer += 4;
break;
case "3":
$total_dealer += 3;
break;
case "2":
$total_dealer += 2;
break;
}
}
}
}
A: try this
$total_dealer=0;
if(FORM_stand("Stand")){
while ($total_dealer < 17){
list_dealer_hand();
draw_dealer_card();
$total_dealer =$total_dealer+1;
}
} | unknown | |
d6416 | train | Have you tried to add this line after the delete one?
_work.SaveChanges();
As far as I know, without it you just delete locally; it is SaveChanges that pushes the edit into the DB. | unknown | |
d6417 | train | My guess is that the problem is that you are not using an alias for the update. How about this version:
update ml
set LowLimit = 1
from namefile as nf join
EntityName as en
on en.EntityName = nf.name join
MeasurementLimit as ml
on en.uid = ml.UID
where en.EntityName = nf.name;
A: The solution was in the bulk insert. The original code (above) would read in a single-column CSV file, but it also added a blank record to the table, which screwed up any query you would run against it. The corrected insert code is:
bulk insert namefile
from 'f:\list.csv'
(
datafiletype = 'char',
fieldterminator = ',', <========= This was wrong
rowterminator = '\n',<====== and this was wrong
errorfile = 'f:\inp_err.log'
); | unknown | |
d6418 | train | You may use CSS media selectors to alter your styling for components when printing:
https://developer.mozilla.org/en-US/docs/Web/CSS/@media
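A minimal sketch of the idea (the selectors below are hypothetical):
/* Applied only when the page is printed */
@media print {
  nav, .sidebar { display: none; } /* hide screen-only components */
  body { margin: 0; font-size: 12pt; }
}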
It is perhaps the cleanest solution, as it was meant for such situations. Of course, it may require addressing the CSS of all the components on the page to make the outcome look as desired. | unknown | |
d6419 | train | The $ identifier usually won't be enabled by default in a WordPress-like CMS;
you have to use the jQuery identifier instead.
In order to make the $ identifier work, you can try this:
(function($){
// inside this scope, $ can be used
$(document).ready(function(){
// other scripts
});
})(jQuery); | unknown | |
d6420 | train | Those are 2 different names for the same thing. It's actually an API call you can perform without the portal as well; the naming is just an inconsistency within the UI.
API call: https://learn.microsoft.com/en-us/rest/api/resources/resourcegroups/exporttemplate | unknown | |
d6421 | train | You can use try/except and catch the error:
from sqlalchemy import exc
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import event
from sqlalchemy.engine import Engine
class MultiTenantSQLAlchemy(SQLAlchemy): # type: ignore
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
@event.listens_for(Engine, 'handle_error')
def receive_handle_error(exception_context):
print("listen for the 'handle_error' event")
print(exception_context)
db = MultiTenantSQLAlchemy(flask_app)
try:
db.session.execute('SELECT * FROM `table_that_does_not_exit`').fetchone()
except exc.SQLAlchemyError:
pass # do something intelligent here | unknown | |
d6422 | train | I don't think to_s should have an argument (because the definition in the parent class (probably Object) doesn't). You can either use to_s as it is (no arguments) or create a new method which takes an argument but isn't called to_s.
In other words, if you want to override a method you have to keep the exact same method signature (that is, its name and the number of arguments it takes).
What if you try:
class String
def to_s_currency(x)
number_to_currency(x)
end
end
A: First, the to_s method has no argument. And it's dangerous to call other methods inside to_s when you don't know whether that method also calls to_s. (It seems that number_to_currency does indeed call the number's to_s.) After several attempts, this trick may work for your float and fixnum numbers:
class Float
include ActionView::Helpers::NumberHelper
alias :old_to_s :to_s
def to_s
return old_to_s if caller[0].match(':number_to_rounded')
number_to_currency(self.old_to_s)
end
end
class Fixnum
include ActionView::Helpers::NumberHelper
alias :old_to_s :to_s
def to_s
return old_to_s if caller[0].match(':number_to_rounded')
number_to_currency(self.old_to_s)
end
end
Note that in this trick, the method uses match(':number_to_rounded') to detect the caller and avoid recursive call. If any of your methods has the name like "number_to_rounded" and calls to_s on your number, it will also get the original number.
A: As you want to print all int and float variables via number_to_currency, you have to override the to_s function in the Fixnum/Integer and Float classes, something like the following:
As pointed out by Stefan, Integer and Float have a common parent class, Numeric, so you can just do:
class Numeric
def to_s(x)
number_to_currency(x)
end
end | unknown | |
d6423 | train | I am having a similar issue with the menu toggle.
I added the code below for my page.
Header html code:
<ion-header>
<ion-navbar text-center color="navBar">
<ion-buttons right>
<button class="menu" ion-button menuToggle="right" icon-only>
<ion-icon name="menu"></ion-icon>
</button>
</ion-buttons>
<ion-title>Password Reset</ion-title>
</ion-navbar>
</ion-header>
Header css code:
.menu {
display: block !important;
} | unknown | |
d6424 | train | The security of a classical Cryptographic Pseudo-Random Number Generator (CPRNG) is always based on some hardness assumption, such as "factoring is hard" or "colliding the SHA-256 function is hard".
Quantum computers make some computational problems easier. That violates some of the old hardness assumptions. But not all of them.
For example, Blum Blum Shub is likely broken by quantum computers, but no one knows how to break lattice-based cryptography with quantum computers. Showing you can break all classical CPRNGs with quantum computers is tantamount to showing that BQP=NP, which is not expected to be the case.
Even if quantum computers did break all classical CPRNGs, they happen to also fill that hole. They enable the creation of "Einstein-certified" random numbers. | unknown | |
d6425 | train | By referring to the "making modals 50% of size" and round-icon CSS techniques, I have built a sample below with your requirements. You can find the working version here.
Hope it helps and let me know if you have any issues.
Modal.html
<ion-content padding class="main-view">
<div class="overlay" (click)="dismiss()"></div>
<div class="modal_content">
<div class="circle"></div>
<div class="modal-content">
<h2>Welcome to Ionic!</h2>
<p>
This starter project comes with simple tabs-based layout for apps
that are going to primarily use a Tabbed UI.
</p>
<p>
Take a look at the <code>pages/</code> directory to add or change tabs,
update any existing page or create new pages.
</p>
</div>
</div>
</ion-content>
Modal.scss
.modal-wrapper {
position: absolute;
width: 100%;
height: 100%;
}
@media not all and (min-height: 600px) and (min-width: 768px) {
ion-modal ion-backdrop {
visibility: hidden;
}
}
@media only screen and (min-height: 0px) and (min-width: 0px) {
.modal-wrapper {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
}
}
.main-view{
background: transparent;
}
.overlay {
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 100%;
z-index: 1;
opacity: .5;
background-color: #333;
}
.modal_content {
display: block;
position: relative;
top: calc(50% - (50%/2));
left: 0;
right: 0;
width: 100%;
height: 50%;
padding: 10px;
z-index: 1;
margin: 0 auto;
padding: 10px;
color: #333;
background: #e8e8e8;
background: -moz-linear-gradient(top, #fff 0%, #e8e8e8 100%);
background: -webkit-linear-gradient(top, #fff 0%, #e8e8e8 100%);
background: linear-gradient(to bottom, #fff 0%, #e8e8e8 100%);
border-radius: 5px;
box-shadow: 0 2px 3px rgba(51, 51, 51, .35);
box-sizing: border-box;
-moz-box-sizing: border-box;
-webkit-box-sizing: border-box;
//overflow: hidden;
}
.circle{
position:absolute;
height:100px;
width:100px;
border-radius:50%;
border:3px solid white;
left:50%;
margin-left:-55px;
top: -40px;
background: #d33;
z-index: 10000;
}
.modal-content{
padding-top: 5rem;
} | unknown | |
d6426 | train | Just pass --timestamping to your wget command.
Alternatively, if you are more familiar with PHP's ways, you can check this question for a usable method.
Use a curl HEAD request to get the file's headers and parse out the Last-Modified: header.
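A minimal PHP sketch of that idea (the URL is a placeholder):
<?php
$ch = curl_init('http://example.com/file.zip');
curl_setopt($ch, CURLOPT_NOBODY, true);        // HEAD request: headers only
curl_setopt($ch, CURLOPT_FILETIME, true);      // let curl parse Last-Modified
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$timestamp = curl_getinfo($ch, CURLINFO_FILETIME); // -1 if not available
curl_close($ch);
if ($timestamp !== -1) {
    echo date('Y-m-d H:i:s', $timestamp), "\n";
}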
To use a php script as a regular command line executable use this as a starting point:
#!/bin/env php
<?php
echo "Hello World\n";
Save the file without the .php and tuck it somewhere that your server won't serve it.
Next, set the executable bit so that you can execute the script like a regular program
(u+x in the following command means grant the [u]ser e[x]ecute privileges for helloworld, and chmod is the command that unix variants use to set file permissions)
Omit the $ in the following sequence, as it represents the command prompt
$ chmod u+x helloworld
now you can execute your commandline script by calling it in the bash prompt:
$ ls
helloworld
$ ./helloworld
Hello World
$
From here you can get the full path of the executable script:
$ readlink -f helloworld
/home/SPI/helloworld
And now you can install the cronjob using the path to your executable script. | unknown | |
d6427 | train | Entities have the option of setters. A setter performs the validation or conversion (in your case) whenever you save an entity. Let's say you want to change the format of deadline; in that case you have to define the setter for your deadline as follows:
public function setDeadline(string $dateString)
{
$this->attributes['deadline'] = $dateString;
return $this;
}
In the following line: $this->attributes['deadline'] = $dateString; you would use a library like Carbon (or CodeIgniter's own Time class) to format the $dateString and then reassign the deadline attribute.
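A sketch of what that could look like with CodeIgniter's Time class (the target format is an assumption; adjust it to your column type):
<?php
use CodeIgniter\I18n\Time;

public function setDeadline(string $dateString)
{
    // Normalize whatever the caller passes into a DATETIME string
    $this->attributes['deadline'] = Time::parse($dateString)->toDateTimeString();
    return $this;
}
Reference link: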
https://codeigniter4.github.io/userguide/models/entities.html | unknown | |
d6428 | train | There isn't enough information here to pinpoint what's going on.
The most common cause of memory leaks in Rails applications (especially in asynchronous background jobs) is a failure to iterate through large database collections incrementally, for example loading all User records with a statement like User.all.
For example, if you have a background job that is going through every User record in the database, you should use User.find_each() or User.find_in_batches() to process these records in chunks (the default is 1000 for ActiveRecord).
This limits the working set of objects loaded into memory while still processing all of the records.
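For instance (User and the method called on it are illustrative):
# Loads every user into memory at once -- can exhaust RAM on large tables:
User.all.each { |user| user.recalculate_stats! }

# Processes users in batches of 1000 (the default), keeping memory bounded:
User.find_each do |user|
  user.recalculate_stats!
end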
You should look for un-bounded database lookups that could be loading huge numbers of objects. | unknown | |
d6429 | train | To do this with just foldl, we need to consider what state we need to keep while traversing the list.
In this case, we need the index of the current item (which starts at 0) and the current sum (which also starts at 0). We can store both of them in a tuple.
On every step, we add the current index multiplied by current value to the sum, and increment the index by 1.
After the foldl is done, we can discard the index and return the sum.
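Written out in a source file with an explicit type signature, it looks like this (the single Num constraint covers both the elements and the running index):
prodSum :: Num a => [a] -> a
prodSum = fst . foldl (\(acc, i) x -> (acc + x * i, i + 1)) (0, 0)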
Prelude> prodSum = fst . foldl (\(sum, i) x -> (sum + x * i, i + 1)) (0, 0)
Prelude> prodSum [1..2]
2
Prelude> prodSum [1..5]
40
Prelude> prodSum [1..8]
168 | unknown | |
d6430 | train | This is expected behavior and it's not a matter of which graph is "more accurate" since you're looking at different segments of users in each case.
In the first case, all users who have made a purchase in 1/2017-10/2017 (10 months) are included in the segment. In the second case, all users who have made a purchase in 1/2016-10/2017 (22 months) are included in the segment.
So if you're looking at the number of DAU on, say, 2/1/2017: in the first case, all the users who used your app that day and made a purchase in a 10-month period are included, whereas in the second case, all the users who used your app that day and made a purchase in a 22-month period are included. This explains why the numbers of users are more than doubled.
If you just want to see the number of unique users who made a purchase each day, you can do so in the Revenue, Dashboards or Events section by selecting the "purchase" event and "unique users" metric. These numbers will be stable regardless of the date range you select, since no event-based segment is applied. | unknown | |
d6431 | train | I think the standard way of doing this is as follows:
import axios from 'axios';
The UMD build (axios.min.js) can be helpful when you need to include axios in a <script> tag:
<script src="https://npmcdn.com/axios/dist/axios.min.js"></script> | unknown | |
d6432 | train | I think your problem is that you need to escape the dot in domain.com, so domain\.com.
# Redirect subdomains to https
RewriteCond %{SERVER_PORT} =80
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteCond %{HTTP_HOST} !^domain\.com [NC]
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
You used a 301 (permanent) and not a 302, so your browser may not even try to send requests to the http domain until you close it. You should use 302 while testing and only put the 301 in when everything is OK.
A: Try this:
RewriteCond %{HTTPS} =off
RewriteCond %{HTTP_HOST} !^(www\.)?domain\.com [NC]
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L] | unknown | |
d6433 | train | Not overly clean, but you can try this:
SELECT * FROM transactions t JOIN
(
SELECT 'payment_provider_a' AS name,* FROM payment_provider_a
UNION
SELECT 'payment_provider_b' AS name,* FROM payment_provider_b
) p ON t.payment_provider = p.name AND t.trans_id=p.trans_id
Note that all payment_provider_x tables must have the same number and types of columns. Otherwise you'll need to select only the fields that are actually common (there are ways around this if needed). | unknown | |
d6434 | train | Bitmap have array of every pixel which is an object that contains information about pixels. You can use it for your advantage.
Pixel have 3 channels which contains informations about intensity of red, green, blue as channels.
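// Assumes the surrounding class exposes Depth (bits per pixel), Width,
// and Pixels (the raw byte array obtained via Bitmap.LockBits).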
public Color GetPixel(int x, int y)
{
Color clr = Color.Empty;
// Get color components count
int cCount = Depth / 8;
// Get start index of the specified pixel
int i = ((y * Width) + x) * cCount;
if (i > Pixels.Length - cCount)
throw new IndexOutOfRangeException();
if (Depth == 32) // For 32 bpp get Red, Green, Blue and Alpha
{
byte b = Pixels[i];
byte g = Pixels[i + 1];
byte r = Pixels[i + 2];
byte a = Pixels[i + 3]; // a
clr = Color.FromArgb(a, r, g, b);
}
if (Depth == 24) // For 24 bpp get Red, Green and Blue
{
byte b = Pixels[i];
byte g = Pixels[i + 1];
byte r = Pixels[i + 2];
clr = Color.FromArgb(r, g, b);
}
if (Depth == 8)
// For 8 bpp get color value (Red, Green and Blue values are the same)
{
byte c = Pixels[i];
clr = Color.FromArgb(c, c, c);
}
return clr;
} | unknown | |
d6435 | train | Register PageIndexChanging event
onpageindexchanging="gvSearch_PageIndexChanging"
Then in the event handler do your font-changing logic, like:
Protected Sub gvSearch_PageIndexChanging(ByVal sender As Object, ByVal e As GridViewPageEventArgs)
For Each row As GridViewRow In gvSearch.Rows
If row.Cells(8).Text.Trim = "Used" Then
row.Cells(8).CssClass = "CautionRow"
End If
Next
End Sub
A: Actually found my own answer thanks to help from my fellow programmer. Here's what really works:
In the style sheet (.css) add this:
.CautionRow {
color: red;
}
... then add this to your code:
Protected Sub gvSearch_RowDataBound(sender As Object, e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles gvSearch.RowDataBound
If e.Row.Cells.Count > 1 Then
If e.Row.Cells(8).Text.ToString.ToLower.Trim = "used" Then
e.Row.Cells(8).CssClass = "CautionRow"
End If
End If
End Sub | unknown | |
d6436 | train | You probably need to call includeKey on your query, which fetches the related objects, e.g.
var query = PFQuery(className:"Client")
// Retrieve the most recent ones
query.orderByDescending("createdAt")
// Include the user
query.includeKey("user")
Source: Parse iOS Guide
A: try this
query.whereKey("user", equalTo: PFUser.currentUser()!)
or
query.whereKey("user", equalTo: PFObject(withoutDataWithClassName: "_User", objectId: "\(PFUser.currentUser()!.objectId!)")) | unknown | |
d6437 | train | Maybe this can help you:
import math
import numpy as np

def mean2(x):
    y = np.sum(x) / np.size(x)
    return y

def corr2(a, b):
    a = a - mean2(a)
    b = b - mean2(b)
    r = (a * b).sum() / math.sqrt((a * a).sum() * (b * b).sum())
    return r
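A quick sanity check (the arrays are illustrative):
a = np.array([[1, 2], [3, 4]])
b = np.array([[1, 2], [3, 5]])
print(corr2(a, b))  # ~0.98 -- the two patches are strongly correlated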
A: import numpy
print(numpy.corrcoef(x, y))
Where x and y can be 1-d or 2-d like arrays.
Take a look at the docs here. | unknown | |
d6438 | train | I believe you need something like this:
for v in data:
plt.violinplot(v)
This plots one violin per element of data.
Since the example dataset has only a few points, you will not see much of a distribution; it looks more like flat dashes/points. But try it with more data points and it will do what's needed.
A: I needed to re-format my data:
import pandas as pd
import matplotlib.pyplot as plt

df_Vi = pd.DataFrame({'Z' : data[0][0],
                      'Y' : data[1][0]}, index=range(len(data[0][0])))
plt.violinplot(df_Vi)
Or, a version that works with more data:
di_DFs = {}
groups = [1,2,0,7]
for grp in groups:
di_DFs[grp] = pd.DataFrame({'A' : [grp-1],
'B' : [grp],
'C' : [grp+1]})
data = []
for k in di_DFs:
data.append(di_DFs[k].iloc[[0]].values)
Indexes = range(len(groups))
df_Vi = pd.DataFrame()
for inD in Indexes:
df_Po = pd.DataFrame({inD : data[inD][0]},
index=range(len(data[0][0])))
df_Vi = pd.concat([df_Vi, df_Po], axis=1)
plt.violinplot(df_Vi) | unknown | |
d6439 | train | You keep recalculating the size of the batch you are creating. So you are recalculating the size of some data items a lot.
It would help if you would calculate the data size of each data item and simply add that to a variable to keep track of the current batch size.
Try something like this:
long batchSizeLimitInBytes = 1048576;
var batches = new List<List<T>>();
var currentBatch = new List<T>();
var currentBatchLength = 0;
for (int i = 0; i < data.Count; i++)
{
var currentData = data[i];
var currentDataLength = GetObjectSizeInBytes(currentData);
if (currentBatchLength + currentDataLength > batchSizeLimitInBytes)
{
batches.Add(currentBatch);
currentBatchLength = 0;
currentBatch = new List<T>();
}
currentBatch.Add(currentData);
currentBatchLength += currentDataLength;
}
As a sidenote, I would probably want to convert the data to byte streams only once, since this is an expensive operation. You currently convert to streams just to check the length; you may want to have this method actually return the streams batched, instead of List<List<T>>.
A: I think that your approach can be enhanced using the following idea: we can calculate an approximate size of the batch as the sum of the sizes of its data objects, and then use this approximate batch size to form an actual batch; the actual batch size is the size of the list of data objects. If we use this idea we can reduce the number of invocations of the GetObjectSizeInBytes method.
Here is the code that implements this idea:
private static List<List<T>> SliceLogsIntoBatches<T>(List<T> data) where T : Log
{
const long batchSizeLimitInBytes = 1048576;
var batches = new List<List<T>>();
var currentBatch = new List<T>();
// At first, we calculate size of each data object.
// We will use them to calculate an approximate size of the batch.
List<long> sizes = data.Select(GetObjectSizeInBytes).ToList();
int index = 0;
// Approximate size of the batch.
long dataSize = 0;
while (index < data.Count)
{
dataSize += sizes[index];
if (dataSize <= batchSizeLimitInBytes)
{
currentBatch.Add(data[index]);
index++;
}
// If approximate size of the current batch is greater
// than max batch size we try to form an actual batch by:
// 1. calculating actual batch size via GetObjectSizeInBytes method;
// and then
// 2. excluding excess data objects if actual batch size is greater
// than max batch size.
if (dataSize > batchSizeLimitInBytes || index >= data.Count)
{
// This loop excludes excess data objects if actual batch size
// is greater than max batch size.
while (GetObjectSizeInBytes(currentBatch) > batchSizeLimitInBytes)
{
index--;
currentBatch.RemoveAt(currentBatch.Count - 1);
}
batches.Add(currentBatch);
currentBatch = new List<T>();
dataSize = 0;
}
}
return batches;
}
Here is a complete sample that demonstrates this approach. | unknown | |
d6440 | train | First of all, could you please verify if the API is working fine? To do so, please run kubectl get --raw /apis/metrics.k8s.io/v1beta1.
If you get an error similar to:
“Error from server (NotFound):”
Please follow these steps:
1.- Remove all the proxy environment variables from the kube-apiserver manifest.
2.- In the kube-controller-manager-amd64, set --horizontal-pod-autoscaler-use-rest-clients=false
3.- The last scenario is that your metrics-server add-on is disabled (it is disabled by default). You can verify it by using:
$ minikube addons list
If it is disabled, you will see something like metrics-server: disabled.
You can enable it by using:
$minikube addons enable metrics-server
When it is done, delete and recreate your HPA.
You can use the following thread as a reference. | unknown | |
d6441 | train | Fred,
The FileImportQueue method being an async void is the source of your problem.
Update it to return a Task:
public class Functions
{
private readonly IMessageProcessor _fileImportQueueProcessor;
public Functions(IMessageProcessor fileImportQueueProcessor)
{
_fileImportQueueProcessor = fileImportQueueProcessor;
}
public async Task FileImportQueue([QueueTrigger("%fileImportQueueKey%")] string item)
{
await _fileImportQueueProcessor.ProcessAsync(item);
}
}
The reason the dequeue count is over 50 is that when _fileImportQueueProcessor.ProcessAsync(item) throws an exception, it crashes the whole process, meaning the WebJobs SDK can't execute the next step that moves the message to the poison queue.
When the message is available again in the queue the SDK will process it again and so on. | unknown | |
d6442 | train | The (currently combined) EF documentation starts with Compare EF Core & EF6.x section which contains the very "useful" topic Which One Is Right for You. Well, looks like EF Core is not for you (yet). The following applies to latest at this time EF Core v1.1.0.
First, GroupBy (even by simple primitive property) is always processed in memory.
Second, there are a lot of internal bugs causing exceptions when processing pretty valid LINQ queries like yours.
Third, after some trial and error, the following equivalent construct works for your case (at least does not generate exceptions):
.GroupBy(p => p.Member.MemberToSection.Select(m => m.Section.Name).FirstOrDefault())
(btw, irrelevant to the issue, the s => s.MemberId == p.MemberId condition inside the p.Member.MemberToSection.Any call is redundant because it is enforced by the relationship, so simple Any() would do the same.)
But now not only is the GroupBy performed in memory, the query also causes N + 1 SQL queries, similar to "EF Core nested Linq select results in N + 1 SQL queries". Congratulations :( | unknown | |
d6443 | train | Ideally you would just have one redirect, though Google will follow more than one; it suggests 2, maximum 3. So you could be okay with your plan.
https://youtu.be/r1lVPrYoBkA | unknown | |
d6444 | train | Oops! I should have remembered how JavaScript does it.
Turns out you use the apply function, as in:
(apply #'format format-args) | unknown | |
d6445 | train | private void btnGenerateStats_Click(object sender, EventArgs e)
{
//...
dgvReadWrites.DataSource = dtJobReadWrite;
// etc...
}
That's a problem, you are updating dtJobReadWrite in the BGW. That causes the bound grid to get updated by the worker thread. Illegal, controls are not thread-safe and may only be updated from the thread that created them. This is normally checked, producing an InvalidOperationException while debugging but this check doesn't work for bound controls.
What goes wrong next is all over the place, you are lucky that you got a highly repeatable deadlock. The more common misbehavior is occasional painting artifacts and a deadlock only when you are not close. Fix:
dgvReadWrites.DataSource = null;
and rebinding the grid in the RunWorkerCompleted event handler, like you already do.
A: Because you unscubscribe from those events
bgw.RunWorkerCompleted -= new RunWorkerCompletedEventHandler(bgw_RunWorkerCompleted);
bgw.DoWork -= new DoWorkEventHandler(bgw_DoWork);
Remove those lines
A: Why are you creating a new BackgroundWorker every time you want to run it? I would like to see what happens with this code if you use one instance of BackgroundWorker (GetReadWriteWorker or something along those lines), subscribe to the events only once, and then run that worker Async on btnGenerateStats_Click. | unknown | |
d6446 | train | You need to apply the CSS to the DOM object containing the text, not to the text itself:
j$('path, tspan').mouseover(function(e) {
j$(this).children().css('font-size', 15);//reset to default size font
j$(e.target).css('font-size', newSize);
});
A: You didn't mentioned whether the dom is an id or a class
var sliceText = j$(this).text();
j$("#"+sliceText).css('font-size', newSize); --- if the dom element is an id
j$("."+sliceText).css('font-size', newSize); --- if the dom element is a class
A:
var newSize = '30px';
var originalSize = '14px';
$("span").hover(function(){
$(this).css('font-size', newSize);
}, function(){
$(this).css('font-size', originalSize);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<span> Jan </span><br/>
<span>Feb</span><br/>
<span>March</span>
A: As I understand it, you want to change the font size of the hovered element,
so try this one:
function funtest()
{
var oldSize = parseFloat(j$('text').css('font-size'));
var newSize = oldSize * 2;
j$('path, tspan').mouseover(function () {
j$(this).css('font-size', newSize);
});
} | unknown | |
d6447 | train | You set a breakpoint in Xcode (probably by mistake when trying to click on a specific line).
Breakpoints are represented by blue tabs and allow you to stop the execution of your code to check some variable states for instance. Just click on it again to deactivate it (it will turn light blue).
A: You've set some sort of user-breakpoint.
All user-set breakpoints show up in the Breakpoint Navigator:
If your Xcode doesn't say "No Breakpoints" here, you have breakpoints. Good news is, you can manage all your breakpoints from this tab. You can delete them, or create new ones.
Click the plus sign at the bottom lets you add different sorts of breakpoints here. | unknown | |
d6448 | train | %s works only with a null-terminated char *
char* playPass(char* s, int n) {
…
for() {
…
}
pass[i] = '\0'; //Null terminate here.
return pass;
}
A: so figured it out.
the end where i assined the new value to the new array
pass[length - i] = a;
made it so that it never wrote a value to the first element, leaving
pass[0]= NULL;
had to change it to
pass[length - (i-1)] = a;
Thanks for the help everyone. I also cleaned the magic numbers out of the code; great tip @phuclv! | unknown | |
d6449 | train | Your UI doesn't change because
StuffCards('https://static.toiimg.com/thumb/60892473.cms?imgsize=159129&width=800&height=800', false),
Will never change. When you call setstate in the StuffCards class, the widget gets a rebuild, but with the same parameters.
So you have two options here
*
*you make a function in the OtherStuffState class that toggles the value, and you pass that function on to the StuffCards class, en you call that function when the ontap event occurs in the InkWell.
*you use provider to store the data and you make a function in the modelclass to toggle the card, so you just have to call in the StuffCards class context.read<ModelClass>().toggleCard(cardNumber) | unknown | |
d6450 | train | In WordPress you need to send your response back using WP_Ajax_Response. Example:
$response = array(
'action'=>'handle_file_upload',
'data'=> array('status' => 'success', 'message' => 'File uploaded successfully', 'attachment_ids' => $attachment_ids)
);
$xmlResponse = new WP_Ajax_Response($response);
$xmlResponse->send(); | unknown | |
d6451 | train | Without seeing some code it's a little tricky to know exactly what you want; however, it sounds like you want some sort of fixture/mock capability added to your tests. If you check out this other answer to a very similar problem, you will see that it tells you to keep the test as a "unit".
Similar post with Answer
What this means is that we're not really concerned with testing the Window object, we'll assume Chrome or Firefox manufacturers will do this just fine for us. In your test you will be able to check and respond to your mock object and investigate that according to your logic.
When running in live code - as shown - the final step of actually handing over the location is dealt with by the browser.
In other words you are just checking your location setting logic and no other functionality. I hope this can work for you. | unknown | |
d6452 | train | Use the xml parameter android:firstDayOfWeek with a value from Calendar; 2 is Monday.
<CalendarView
android:id="@+id/calendarView1"
android:firstDayOfWeek="2"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_alignParentBottom="true"
android:layout_centerHorizontal="true"
android:layout_marginBottom="157dp" />
Or you can specify it from code
CalendarView calendarView = findViewById(R.id.calendarView1);
calendarView.setFirstDayOfWeek(Calendar.MONDAY);
A: String[] days = null;
DateFormatSymbols names = new DateFormatSymbols(); // java.text.DateFormatSymbols
days = names.getWeekdays();
for (int i = 1; i < 8; ++i) {
    System.out.println(days[i]); // index 1 = Sunday ... 7 = Saturday
} | unknown | |
d6453 | train | If you think carefully, what you're asking for doesn't really make sense. What if you did this?
var request = RequestFactory.Create(FormParams.CopyBook);
request.Form = new Book();
If the underlying type of request was Request<CopyBook>, then its Form property would have the type of CopyBook, and trying to set its value to a Book wouldn't make sense.
If you determine that the above use-case should never happen, you can formalize that fact by using an interface that doesn't allow the Form property to be set. Then you can make that interface covariant.
public class Request<T> : IRequest<T>
where T : Form
{
public T Form { get; set; }
}
public interface IRequest<out T> where T : Form
{
T Form { get; }
}
...
public static IRequest<Form> Create(FormParams formParams)
But in that case you may find there's no reason to have IRequest be generic at all.
public class Request<T> : IRequest
where T : Form
{
public T Form { get; set; }
Form IRequest.Form => this.Form;
}
public interface IRequest
{
Form Form { get; }
}
...
public static IRequest Create(FormParams formParams)
A: You need to add an Interface to your hierarchy so you can flag the generic parameter as covariant.
public class Request<T> : IRequest<T> where T : Form
{
public T Form { get; set; }
}
public interface IRequest<out T>
{
public T Form
{
get;
}
}
And then you need to change the return type of your Create method to an IRequest<Form>.
public static class RequestFactory
{
public static IRequest<Form> Create(FormParams formParams)
{
if (formParams == FormParams.Book)
{
return new Request<Book>();
}
if (formParams == FormParams.Copybook)
{
return new Request<Copybook>();
}
return new Request<Notebook>();
}
} | unknown | |
d6454 | train | #include is for other header files, not variables. If you want to conditionally include header files you can use preprocessor directives:
//config.h
#define USE_HEADER_CODE_H
//other.h
#include <config.h>
#if defined(USE_HEADER_CODE_H)
#include <code.h>
#else
#include <other_code.h>
#endif | unknown | |
d6455 | train | You can use the Object.keys(yourJsonResponse) method to get an array of keys,
and then sort it with the Array.prototype.sort() method...
A: Edit :: One-Liner
This should do the job:
function foo(dataString) {
return Object.keys(JSON.parse(dataString)).map(parseFloat).sort(function(a, b) { return a - b; }).map(String); // dataString -> the JSON string you get from the server
}
A: This one line will do the job:
function foo(dataString) {
return Object.keys(JSON.parse(dataString)).sort(); // note: the default sort is lexicographic, not numeric
} | unknown | |
d6456 | train | Add the new columns to your database.
Then, in your ApplicationController.rb, add:
before_action :configure_permitted_parameters, if: :devise_controller?
private
def configure_permitted_parameters
devise_parameter_sanitizer.permit(:sign_up, keys:[:profile_pic,:fname,:mobile,:gender])
end
Under keys, list the fields you added. | unknown | |
d6457 | train | It turns out that Apache was not able to write session files to the directory (in my case, /var/lib/php/session) specified in php.ini.
Granting write permission on this directory to the Apache user solved the problem.
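For example, a minimal sketch of the commands (the Apache user is often "apache" on Red Hat-based systems and "www-data" on Debian/Ubuntu; adjust to your setup):
chown -R apache:apache /var/lib/php/session
chmod -R 770 /var/lib/php/session | unknown | |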
d6458 | train | I just tried the same steps:
*
*Added a new group 'Developers' to Azure AD
*Assigned user A to this group
*Assigned this group to Readers role of a website
*Logged in with user A to portal.azure.com
User A can now see the website.
So it works as expected.
Update: when using a Microsoft Account (Microsoft Live Id), I have the same issue and that user is not able to see the website. Looks like a bug. A workaround would be to use an Azure AD user. | unknown | |
d6459 | train | Try this Transact-SQL query
ALTER TABLE dbo.CustomerTable ADD column_b VARCHAR(20) NULL, column_c INT NULL ;
This query adds two columns to your table: column_b (VARCHAR(20)) and column_c (INT).
To read more,
ALTER TABLE (Transact-SQL)
The query will not remove any existing columns: ALTER TABLE ... ADD only alters the table by adding the columns you specify. Attempting to add a column that already exists simply raises an error, so the table is left unchanged. | unknown | |
d6460 | train | From your following reply,
there really is no relationship between the 3. When I scrape with IMPORTHTML into Google sheets, those are just Tables at the locations 0,1, and 2. I'm basically just trying to have an output of each table on a separate tab
I understood that you wanted to retrieve the values with pd.read_html(requests.get('http://magicseaweed.com' + x).text)[2].iloc[:9, [0, 1, 2, 3, 4, 6, 7, 12, 15]] from id_list, and wanted to put the values into a sheet in Google Spreadsheet.
In this case, how about the following modification?
With append_rows, it seems that JSON data cannot be used directly; a 2-dimensional array is required instead. I'm also worried about the NaN values in the dataframe. With these points reflected in your script, how about the following modification?
Modified script 1:
In this sample, all values are put into a sheet.
gc = gspread.service_account(filename='creds.json')
sh = gc.open_by_key('152qSpr-4nK9V5uHOiYOWTWUx4ojjVNZMdSmFYov-n50')
waveData = sh.get_worksheet(0)
id_list = [
"/Belmar-Surf-Report/3683/",
"/Manasquan-Surf-Report/386/",
"/Ocean-Grove-Surf-Report/7945/",
"/Asbury-Park-Surf-Report/857/",
"/Avon-Surf-Report/4050/",
"/Bay-Head-Surf-Report/4951/",
"/Belmar-Surf-Report/3683/",
"/Boardwalk-Surf-Report/9183/",
]
# I modified the below script.
res = []
for x in id_list:
df = pd.read_html(requests.get("http://magicseaweed.com" + x).text)[2].iloc[:9, [0, 1, 2, 3, 4, 6, 7, 12, 15]].fillna("")
values = [[x], df.columns.values.tolist(), *df.values.tolist()]
res.extend(values)
res.append([])
waveData.append_rows(res, value_input_option="USER_ENTERED")
*
*When this script is run, the retrieved values are put into the 1st sheet as follows. In this sample modification, the path and a blank row are inserted between each data. Please modify this for your actual situation.
Modified script 2:
In this sample, each value is put into each sheet.
gc = gspread.service_account(filename='creds.json')
sh = gc.open_by_key('152qSpr-4nK9V5uHOiYOWTWUx4ojjVNZMdSmFYov-n50')
id_list = [
"/Belmar-Surf-Report/3683/",
"/Manasquan-Surf-Report/386/",
"/Ocean-Grove-Surf-Report/7945/",
"/Asbury-Park-Surf-Report/857/",
"/Avon-Surf-Report/4050/",
"/Bay-Head-Surf-Report/4951/",
"/Belmar-Surf-Report/3683/",
"/Boardwalk-Surf-Report/9183/",
]
obj = {e.title: e for e in sh.worksheets()}
for e in id_list:
if e not in obj:
obj[e] = sh.add_worksheet(title=e, rows="1000", cols="26")
for x in id_list:
df = pd.read_html(requests.get("http://magicseaweed.com" + x).text)[2].iloc[:9, [0, 1, 2, 3, 4, 6, 7, 12, 15]].fillna("")
values = [df.columns.values.tolist(), *df.values.tolist()]
obj[x].append_rows(values, value_input_option="USER_ENTERED")
*
*When this script is run, the sheets are checked and created with the sheet names of the values in id_list, and each value is put to each sheet.
Reference:
*
*append_rows | unknown | |
d6461 | train | The @SuppressWarnings annotation can only be used at the point of a declaration. Even with the Java 8 annotation enhancements that allow annotations to occur in other syntactic locations, the @SuppressWarnings annotation can't be used where you need it in this case, that is, at the point where a deprecated interface occurs in the implements clause.
You're right to want to avoid putting @SuppressWarnings on the class declaration, since that will suppress possibly unrelated warnings throughout the entire class.
One possibility for dealing with this is to create an intermediate interface that extends the deprecated one, and suppress the warnings on it. Then, change the uses of the deprecated interface to the sub-interface:
@SuppressWarnings("deprecation")
interface SubBaz extends DeprecatedBaz { }
public class Foo ... implements SubBaz ...
This works to avoid the warnings because class annotations (in this case, @Deprecated) are not inherited.
A: The purpose of the @Deprecated annotation is to trigger the warning.
If you don't want to trigger the warning, don't use the annotation. | unknown | |
d6462 | train | This works for me:
<template>
<h1>Dynamic Component</h1>
<div v-html="COMMENT"></div>
</template>
<script>
export default {
setup() {
const COLOR = "#FF0000";
const COMMENT = `<span style="background: ${COLOR}">Comment</span>`;
return {
COMMENT,
};
},
};
</script>
A: The v-html directive should be mounted on a real HTML tag, not on the virtual template root:
<div v-html="COMMENT" ></div>
You could see the comment inside the inspected DOM
A: You may also use a dynamic component with <component :is>.
Vue JS Dynamic Components
And put all your comment component code in a different file: comment.vue
<template>
<h1>Dynamic Component</h1>
<component :is='currentComponent'>
</template>
<script>
import CommentComponent from './comment.vue'
export default {
computed: {
currentComponent() {
// do some logic then
return CommentComponent
}
}
};
</script>
A: Create a component Comment.vue that uses a render function to create a comment:
<script>
import { createCommentVNode } from 'vue'
export default {
props: ['content'],
setup(props) {
return () => createCommentVNode(props.content || '')
}
}
</script>
Usage:
<script setup>
import Comment from "@/components/Comment.vue";
</script>
<template>
<Comment content="Hi, I'm just a comment!"/>
</template>
Rendered comment:
<!--Hi, I'm just a comment!-->
The Vue internal API is not designed for these kinds of hacks, so please use it carefully. | unknown | |
d6463 | train | @ian-roberts and @bohuslav-burghardt already answered your question: you have created a structure that doesn't conform to the standard layout a Maven project must have.
To fix it, move all contents of the WebContent directory into src/main/webapp.
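For reference, a sketch of the standard Maven layout for a web application:
src/main/java        (Java sources)
src/main/resources   (classpath resources)
src/main/webapp      (web content, i.e. your former WebContent)
src/test/java        (test sources)
pom.xml | unknown | |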
d6464 | train | The two computers have different regional settings. You are converting the string "12/25/2011" to a DateTime value. If the short date format in Control Panel/Regional Settings is dd/MM/yyyy, then 25 is interpreted as the month number and the string is considered invalid, since there are only twelve months. As for the longitude/latitude values, my guess would be that the decimal separator is set to a comma on your college server. Consider using the overloads of Convert.ToDateTime/ToDouble that take a second IFormatProvider parameter.
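For example, a minimal sketch that parses with an explicit culture, so the result does not depend on the machine's regional settings:
using System;
using System.Globalization;
// Parse using US conventions regardless of the server's locale.
DateTime date = Convert.ToDateTime("12/25/2011", CultureInfo.GetCultureInfo("en-US"));
// Parse with "." as the decimal separator regardless of the server's locale.
double longitude = Convert.ToDouble("12.345", CultureInfo.InvariantCulture); | unknown | |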
d6465 | train | You have a margin-top property set to 200px on the button. That property stays after the button expands. | unknown | |
d6466 | train | I love @stefreak's question and his solution. Bearing in mind @dfri's excellent answer about Swift's runtime introspection, however, we can simplify and generalise @stefreak's "type tagging" approach to some extent:
protocol AnySequenceType {
var anyElements: [Any?] { get }
}
extension AnySequenceType where Self : SequenceType {
var anyElements: [Any?] {
return map{
$0 is NilLiteralConvertible ? Mirror(reflecting: $0).children.first?.value : $0
}
}
}
extension Array : AnySequenceType {}
extension Set : AnySequenceType {}
// ... Dictionary, etc.
Use:
let things: Any = [1, 2]
let maybies: Any = [1, nil] as [Int?]
(things as? AnySequenceType)?.anyElements // [{Some 1}, {Some 2}]
(maybies as? AnySequenceType)?.anyElements // [{Some 1}, nil]
See Swift Evolution mailing list discussion on the possibility of allowing protocol extensions along the lines of:
extension<T> Sequence where Element == T?
In current practice, however, the more common and somewhat anticlimactic solution would be to:
things as? AnyObject as? [AnyObject] // [1, 2]
// ... which at present (Swift 2.2) passes through `NSArray`, i.e. as if we:
import Foundation
things as? NSArray // [1, 2]
// ... which is also why this fails for `mabyies`
maybies as? NSArray // nil
At any rate, what all this drives home for me is that once you lose type information there is no going back. Even if you reflect on the Mirror you still end up with a dynamicType which you must switch through to an expected type so you can cast the value and use it as such... all at runtime, all forever outside the compile-time checks and sanity.
A: As an alternative to @milos and OP:s protocol conformance check, I'll add a method using runtime introspection of something (foo and bar in examples below).
/* returns an array if argument is an array, otherwise, nil */
func getAsCleanArray(something: Any) -> [Any]? {
let mirr = Mirror(reflecting: something)
var somethingAsArray : [Any] = []
guard let disp = mirr.displayStyle where disp == .Collection else {
return nil // not array
}
/* OK, is array: add element into a mutable that
the compiler actually treats as an array */
for (_, val) in Mirror(reflecting: something).children {
somethingAsArray.append(val)
}
return somethingAsArray
}
Example usage:
/* example usage */
let foo: Any = ["one", 2, "three"]
let bar: [Any?] = ["one", 2, "three", nil, "five"]
if let foobar = getAsCleanArray(foo) {
print("Count: \(foobar.count)\n--------")
foobar.forEach { print($0) }
} /* Count: 3
--------
one
2
three */
if let foobar = getAsCleanArray(bar) {
print("Count: \(foobar.count)\n-------------")
foobar.forEach { print($0) }
} /* Count: 5
-------------
Optional("one")
Optional(2)
Optional("three")
nil
Optional("five") */
A: The only solution I came up with is the following, but I don't know if it's the most elegant one :)
protocol AnyOptional {
var anyOptionalValue: Optional<Any> { get }
}
extension Optional: AnyOptional {
var anyOptionalValue: Optional<Any> {
return self
}
}
protocol AnyArray {
var count: Int { get }
var allElementsAsOptional: [Any?] { get }
}
extension Array: AnyArray {
var allElementsAsOptional: [Any?] {
return self.map {
if let optional = $0 as? AnyOptional {
return optional.anyOptionalValue
}
return $0 as Any?
}
}
}
Now you can just say
if let array = something as? AnyArray {
print(array.count)
print(array.allElementsAsOptional)
}
A: This works for me on a playground:
// Generate fake data of random stuff
let array: [Any?] = ["one", "two", "three", nil, 1]
// Cast to Any to simulate unknown object received
let something: Any = array as Any
// Use if let to see if we can cast that object into an array
if let newArray = something as? [Any?] {
// You now know that newArray is your received object cast as an
// array and can get the count or the elements
} else {
// Your object is not an array, handle however you need.
}
A: I found that casting to AnyObject works for an array of objects. Still working on a solution for value types.
let something: Any = ["one", "two", "three"]
if let aThing = something as? [Any] {
print(aThing.dynamicType) // doesn't enter
}
if let aThing = something as? AnyObject {
if let theThing = aThing as? [AnyObject] {
print(theThing.dynamicType) // Array<AnyObject>
}
} | unknown | |
d6467 | train | @Lop Castro, I would suggest you use regex in your case. You can check the documentation of Django how to use regex with ORM.
https://docs.djangoproject.com/en/dev/ref/models/querysets/#regex
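For example, a minimal sketch (the Entry model and the pattern here are hypothetical):
from myapp.models import Entry
# Case-sensitive regex match on the title field
entries = Entry.objects.filter(title__regex=r'^(An?|The) +') | unknown | |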
d6468 | train | You need to obtain the coordinates of the item. For that you first need to obtain its handle. And when you get the rect, you must translate it to the form's coordinates.
Private Declare Function SendMessage Lib "user32.dll" Alias "SendMessageA" (ByVal hwnd As Long, ByVal wMsg As Long, ByVal wParam As Long, ByRef lParam As Any) As Long
Private Declare Function MapWindowPoints Lib "user32.dll" (ByVal hwndFrom As Long, ByVal hwndTo As Long, ByRef lppt As Any, ByVal cPoints As Long) As Long
Private Type RECT
Left As Long
Top As Long
Right As Long
Bottom As Long
End Type
Private Type RECTF
Left As Single
Top As Single
Right As Single
Bottom As Single
End Type
Private Const TV_FIRST As Long = &H1100&
Private Const TVM_GETITEMRECT As Long = (TV_FIRST + 4)
Private Const TVM_GETNEXTITEM As Long = (TV_FIRST + 10)
Private Const TVGN_CARET As Long = &H9&
Private Function GetSelectedItemRect(ByVal tv As TreeView, ByRef outRect As RECTF) As Boolean
Dim hItem As Long
hItem = SendMessage(tv.hwnd, TVM_GETNEXTITEM, TVGN_CARET, ByVal 0&)
If hItem Then
Dim r As RECT
r.Left = hItem
If SendMessage(tv.hwnd, TVM_GETITEMRECT, 1, r) Then
MapWindowPoints tv.hwnd, Me.hwnd, r, 2
outRect.Left = Me.ScaleX(r.Left, vbPixels, Me.ScaleMode)
outRect.Top = Me.ScaleY(r.Top, vbPixels, Me.ScaleMode)
outRect.Right = Me.ScaleX(r.Right, vbPixels, Me.ScaleMode)
outRect.Bottom = Me.ScaleY(r.Bottom, vbPixels, Me.ScaleMode)
GetSelectedItemRect = True
End If
End If
End Function
Usage:
Dim r As RECTF ' must match the RECTF parameter of GetSelectedItemRect
If GetSelectedItemRect(TreeView1, r) Then
PopupMenu whatever, , r.Right, r.Top
End If | unknown | |
d6469 | train | You need to pass the entity into the form type's constructor and then use it to get the parameter.
class YourType extends AbstractType
{
private $account;
public function __construct($account)
{
$this->account = $account;
}
public function buildForm(FormBuilderInterface $builder, array $options)
{
$accountId = $this->account->getAccountId();
$builder->add('addressId',
'entity',
array('class' => 'YourBundle:Address',
'query_builder' => function(EntityRepository $er) use ($accountId) {
return $er->createQueryBuilder('a')
->where('a.accountId = ?1')
->setParameter(1, $accountId);
}));
}
}
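Then, when creating the form in your controller, pass the account into the constructor. A sketch in the Symfony 2 style (the variable names are assumptions):
$form = $this->createForm(new YourType($account), $requestEntity); | unknown | |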
d6470 | train | If you're interacting with something well defined (i.e. the vast majority of APIs out there), then you're much better off creating strongly typed objects instead of using dynamic or a dictionary.
In Visual Studio if you go Edit>Paste Special>Paste JSON as Classes then it will generate all the objects you need.
public class Rootobject
{
public int status { get; set; }
public int offset { get; set; }
public int limit { get; set; }
public int count { get; set; }
public int total { get; set; }
public string url { get; set; }
public Result[] results { get; set; }
}
public class Result
{
public string[] datasets { get; set; }
public string headword { get; set; }
public int homnum { get; set; }
public string id { get; set; }
public string part_of_speech { get; set; }
public Pronunciation[] pronunciations { get; set; }
public Sens[] senses { get; set; }
public string url { get; set; }
}
public class Pronunciation
{
public Audio[] audio { get; set; }
public string ipa { get; set; }
}
public class Audio
{
public string lang { get; set; }
public string type { get; set; }
public string url { get; set; }
}
public class Sens
{
public string[] definition { get; set; }
public Example[] examples { get; set; }
public Gramatical_Examples[] gramatical_examples { get; set; }
public string signpost { get; set; }
}
public class Example
{
public Audio1[] audio { get; set; }
public string text { get; set; }
}
public class Audio1
{
public string type { get; set; }
public string url { get; set; }
}
public class Gramatical_Examples
{
public Example1[] examples { get; set; }
public string pattern { get; set; }
}
public class Example1
{
public Audio2[] audio { get; set; }
public string text { get; set; }
}
public class Audio2
{
public string type { get; set; }
public string url { get; set; }
}
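Once the classes exist, deserializing is a one-liner. A sketch assuming you're using Json.NET (Newtonsoft.Json), where "json" holds the JSON string you received:
using Newtonsoft.Json;
var root = JsonConvert.DeserializeObject<Rootobject>(json);
Console.WriteLine(root.results[0].headword); | unknown | |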
d6471 | train | This worked for me (adapted from thorndeux's answer):
import logging.config
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'prepend_date': {
'format': '{asctime} {levelname}: {message}',
'style': '{',
},
},
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'formatter': 'prepend_date',
},
},
'root': {
'handlers': ['console'],
'level': 'INFO',
},
}
logging.config.dictConfig(LOGGING)
logging.info('foo')
logging.warning('bar')
prints
2021-11-28 16:05:13,469 INFO: foo
2021-11-28 16:05:13,469 WARNING: bar
A: Add a formatter to your logger. For example:
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'prepend_date': {
'format': '{asctime} {message}',
'style': '{',
},
},
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'formatter': 'prepend_date',
},
},
'root': {
'handlers': ['console'],
'level': 'INFO',
},
}
A: The answer to your question can be found in the official documentation, please read the full article.
import logging
FORMAT = '%(asctime)s %(message)s'
logging.basicConfig(format=FORMAT)
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.info("Doing some logging here.")
This would print:
2021-11-29 14:06:59,825 Doing some logging here.
A: What is missing in your example is formatters (add the following to your LOGGING dict):
LOGGING = {
...
"formatters": {"console": {"format": "%(asctime)s %(message)s"}},
# and add it to handlers
"handlers": {"console": {"class": "logging.StreamHandler", "formatter": "console"}},
...
}
Another thing: you may want to change the logging setup in your classes to something like
import logging
logger = logging.getLogger(__name__)
...
class TransactionService:
def my_method(self, arg1, arg2):
...
logger.info("Doing some logging here.")
You can also find a full example in the official Django documentation (the last example in that subsection):
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'verbose': {
'format': '{levelname} {asctime} {module} {process:d} {thread:d} {message}',
'style': '{',
},
'simple': {
'format': '{levelname} {message}',
'style': '{',
},
},
'filters': {
'special': {
'()': 'project.logging.SpecialFilter',
'foo': 'bar',
},
'require_debug_true': {
'()': 'django.utils.log.RequireDebugTrue',
},
},
'handlers': {
'console': {
'level': 'INFO',
'filters': ['require_debug_true'],
'class': 'logging.StreamHandler',
'formatter': 'simple'
},
'mail_admins': {
'level': 'ERROR',
'class': 'django.utils.log.AdminEmailHandler',
'filters': ['special']
}
},
'loggers': {
'django': {
'handlers': ['console'],
'propagate': True,
},
'django.request': {
'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': False,
},
'myproject.custom': {
'handlers': ['console', 'mail_admins'],
'level': 'INFO',
'filters': ['special']
}
}
} | unknown | |
d6472 | train | Something like this would work:
/^(?=.*a)(?=.*p.*p)(?=.*l)(?=.*e)[aple]{5}$/i
*
*/^ - start the regex and use a start string anchor
*(?=.*a) - ensure that an a exists anywhere in the string
*(?=.*p.*p) - ensure that two ps exist anywhere in the string
*(?=.*l) - ensure that an l exists anywhere in the string
*(?=.*e) - ensure that an e exists anywhere in the string
*[aple]{5} - get 5 chars of a, p, l, or e
*$ - end string anchor
*/i - case-insensitive modifier
https://regex101.com/r/DJlWAL/1/
You can ignore the /gm in the demo as they are used just for show with all the words at one time.
A: You can assert a single instance each of a, l, and e by using a character class and optionally matching all allowed chars except the one that you want to assert, preventing unnecessary backtracking.
Then match a p char 2 times, optionally matching the other chars in between (as they are already asserted for).
^(?=[ple]*a[ple]*$)(?=[ape]*l[ape]*$)(?=[apl]*e[apl]*$)[ale]*p[ale]*p[ale]*$
Explanation
*
*^ Start of string
*(?=[ple]*a[ple]*$) Assert an a
*(?=[ape]*l[ape]*$) Assert an l
*(?=[apl]*e[apl]*$) Assert an e
*[ale]*p[ale]*p[ale]* Match 2 times a p
*$ End of string
Regex demo
Or a shorter version using a quantifier, based on the answer of @MonkeyZeus.
^(?=[ple]*a)(?=[ape]*l)(?=[apl]*e)(?=[ale]*p[ale]*p)[aple]{5}$
Regex demo
A: ^(?=^[^a]*a[^a]*$)(?=^[^l]*l[^l]*$)(?=^[^e]*e[^e]*$)[aple]{5}$
This works by restricting the number of a, l, and e characters to exactly 1 of each, and then restricting the overall string to exactly 5 characters to enforce 2 p characters.
You asked about regex, but in case it helps, an algorithmic version would be to sort the characters of APPLE and compare to the upper-cased version of your string sorted as above. | unknown | |
d6473 | train | In C a char is really just a small integer type, so there's little practical difference between a char and an int. Character literals like 'A' actually have type int in C.
But the most important thing is that EOF is an int value. If char is unsigned (whether char is signed or unsigned is implementation-specific), then storing -1 in a char yields 0xff, which when promoted to int becomes 0x000000ff. That is very different from the int value -1, which is 0xffffffff (assuming the usual two's complement systems).
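This is why the result of getchar() must be stored in an int, not a char. A minimal sketch:
#include <stdio.h>
int main(void)
{
    int c;                          /* int, so EOF (-1) survives intact */
    while ((c = getchar()) != EOF)  /* copy stdin to stdout until end of input */
        putchar(c);
    return 0;
} | unknown | |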
d6474 | train | Your browser is not allowing cross-origin (CORS) API access, so you can add this plugin:
CORS Plugin for chrome | unknown | |
d6475 | train | If these dataframes all have the same structure, you will save considerable time by using the 'colClasses' argument to the read.table or read.csv steps. The lapply function can pass this to read.* functions and if you used Dason's guess at what you were really doing, it would be:
x <- do.call(rbind, lapply(file_names, read.csv,
colClasses=c("numeric", "Date", "character")
)) # whatever the ordered sequence of classes might be
The reason that rbind cannot take your character vector is that the names of objects are 'language' objects and a character vector is ... just not a language type. Pushing character vectors through the semi-permeable membrane separating 'language' from 'data' in R requires using assign, or do.call eval(parse()) or environments or Reference Classes or perhaps other methods I have forgotten. | unknown | |
d6476 | train | To check whether the app is launched for the first time, use SharedPreferences; for displaying images you have to use Bitmap (downsampled), because otherwise you will get out-of-memory errors.
Add this code in your activity class (not in the onCreate method):
public static int calculateInSampleSize(
BitmapFactory.Options options, int reqWidth, int reqHeight) {
// Raw height and width of image
final int height = options.outHeight;
final int width = options.outWidth;
int inSampleSize = 1;
if (height > reqHeight || width > reqWidth) {
final int halfHeight = height / 2;
final int halfWidth = width / 2;
// Calculate the largest inSampleSize value that is a power of 2 and keeps both
// height and width larger than the requested height and width.
while ((halfHeight / inSampleSize) > reqHeight
&& (halfWidth / inSampleSize) > reqWidth) {
inSampleSize *= 2;
}
}
return inSampleSize;
}
public static Bitmap decodeSampledBitmapFromResource(Resources res, int resId,
int reqWidth, int reqHeight) {
// First decode with inJustDecodeBounds=true to check dimensions
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeResource(res, resId, options);
// Calculate inSampleSize
options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);
// Decode bitmap with inSampleSize set
options.inJustDecodeBounds = false;
return BitmapFactory.decodeResource(res, resId, options);
}
Then, in your onCreate method, check whether the app is launched for the first time and add the images to the ImageView widgets.
Boolean isFirstRun = getSharedPreferences("PREFERENCE", MODE_PRIVATE)
.getBoolean("isFirstRun", true);
if (isFirstRun) {
ImageView imageView = (ImageView) findViewById(R.id.imageView);
ImageView imageView2 = (ImageView) findViewById(R.id.imageView2);
imageView.setImageBitmap(decodeSampledBitmapFromResource(getResources(),R.drawable.image1,350,350));
imageView2.setImageBitmap(decodeSampledBitmapFromResource(getResources(),R.drawable.image2,350,350));
getSharedPreferences("PREFERENCE", MODE_PRIVATE).edit()
.putBoolean("isFirstRun", false).commit();
}
A: Save a flag in sharedPreferences the first time you access the tutorial, then check it on launch.
If no flag:
LauncherActivity -> TutorialActivity (shows four images) save flag -> MainActivity
If flagged:
Launcher Activity -> MainActivity
Check the Android dev guide for SharedPreferences help.
Also, I'm not sure what you mean about showing during installation. If you mean during loading, then just go ahead and display the images during loading.
A: This is called an AppIntro (intro slider), which runs the first time the app launches:
import android.content.Context;
import android.content.Intent;
import android.content.SharedPreferences;
import android.os.Bundle;
import android.preference.PreferenceManager;
import android.support.v7.app.AppCompatActivity;
public class MainActivity extends AppCompatActivity {
public boolean isFirstStart;
Context mcontext;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Thread t = new Thread(new Runnable() {
@Override
public void run() {
// Intro App Initialize SharedPreferences
SharedPreferences getSharedPreferences = PreferenceManager
.getDefaultSharedPreferences(getBaseContext());
// Create a new boolean and preference and set it to true
isFirstStart = getSharedPreferences.getBoolean("firstStart", true);
// Check either activity or app is open very first time or not and do action
if (isFirstStart) {
// Launch application introduction screen
Intent i = new Intent(MainActivity.this, MyIntro.class);
startActivity(i);
SharedPreferences.Editor e = getSharedPreferences.edit();
e.putBoolean("firstStart", false);
e.apply();
}
}
});
t.start();
}
}
http://www.viralandroid.com/2016/10/android-appintro-slider-example.html
http://www.androidhive.info/2016/05/android-build-intro-slider-app/
https://github.com/apl-devs/AppIntro | unknown | |
d6477 | train | You need to make the following changes:
*
*Make the last argument 1 in the call to read.
read(pA[0], buff, 1);
*Put the above call in a while loop and increment nChar for every successful attempt at read.
while ( read(pA[0], buff, 1) == 1 )
{
++nChars;
}
*Close the file descriptor from the parent process once you are done writing to it.
Here's a working version of main.
int main(int argc, char **argv)
{
// set up pipe
int pA[2];
char buff[50];
pipe(pA);
// call fork()
pid_t childId = fork();
if (childId == 0) {
// -- running in child process --
int nChars = 0;
// close the output side of pipe
close(pA[1]);
// Receive characters from parent process via pipe
// one at a time, and count them.
while ( read(pA[0], buff, 1) == 1 )
{
++nChars;
}
return nChars;
}
else {
// -- running in parent process --
int nChars = 0;
int size = 0;
printf("CS201 - Assignment 3 - Timothy Jensen\n");
// close the input side of the pipe
close(pA[0]);
// Send characters from command line arguments starting with
// argv[1] one at a time through pipe to child process.
for (int i = 1; i < argc; i++)
{
size = strlen(argv[i]);
for (int z = 0; z < size; z++)
{
write(pA[1], &argv[i][z], 1);
}
}
close(pA[1]);
// Wait for child process to return. Reap child process.
// Receive number of characters counted via the value
// returned when the child process is reaped.
wait(&nChars);
printf("child counted %d chars\n", nChars/256);
return 0;
}
}
A: It seems a little silly, but you could change:
nChars = read(pA[0], buff, sizeof(buff));
to:
char ch;
nChars = read(pA[0], &ch, 1);
Of course, you would put the above into a loop to assemble a string 'one character at a time' back into buff. | unknown | |
d6478 | train | As the error message says, AADSTS50012 indicates that an invalid client secret was provided. Check if your current key is expired in the Azure Portal and try generating a new one. | unknown | |
d6479 | train | RTSP, as suggested by its name, should suit better for real-life streaming applications. To reduce the lag you can play with bitrate and dropping frames. There is proprietary pvServer though, which is capable of HTTP-streaming. | unknown | |
d6480 | train | Check eclipse settings there is a "installed JRE" preference:
(linux, juno)
Also, make sure you have a "JRE System Library" in your project classpath: | unknown | |
d6481 | train | If you want to do partial matches you need the LIKE operator:
SELECT * FROM table WHERE type LIKE 'Test%'
A: Use the Like keyword
select * from table where type LIKE 'Test%'
A: The equality (=) operator doesn't accept wild cards. You should use the like operator instead:
SELECT * FROM table WHERE type LIKE 'Test%'
A: Sooo close! Since you want values that start with the string, don't put % at the start, and use LIKE instead of =
select * from table where type LIKE 'Test%' | unknown | |
d6482 | train | I don't think there are any skills that you can learn in C but not C++, but I would definitely suggest learning C first still. Nobody can fit C++ in their head; it may be the most complex non-esoteric language ever created. C, on the other hand, is quite simple. It is relatively easy to fit C in your head. C will definitely help you get used to things like pointers and manual memory management much quicker than C++. C++ will help you understand OO.
Also, when I say learn C, it's okay to use a C++ compiler and possibly use things like iostreams if you want, just try to restrict yourself to mostly C features at first. If you go all out and learn all the weird C++ features like templates, RAII, exceptions, references, etc., you will be thoroughly confused.
A: Compare the following code examples (apologies, my C++ is a little rusty).
C++:
int x;
std::cin >> x;
C:
int x;
scanf("%d", &x);
So, what do you see above? In C++, a value is plugged into a variable. That's cool, but what does it teach us? In C, we notice a few things. The x has a funny & in front of it, which means it's address is being passed. That must mean that the scanf function actually needs to know where in memory x is. Not only that, but it needs a format string, so it must not really hvae any idea of what address you're giving it and what format it's getting information in?
C lets you discover all these wonderful things about how the Computer and the Operating System actually work, if you will put effort towards it. I think that's pretty cool.
A: There is certainly one thing: memory management. Take following code for example:
struct foo {
int bar;
char **box;
};
struct foo * fooTab = (struct foo *) malloc( sizeof(struct foo) * 100 );
for(int i = 0; i<100; i++) {
fooTab[i].bar = i;
fooTab[i].box = (char **) malloc( sizeof(char *) * 10 );
for(int j = 0; j<10; j++) {
fooTab[i].box[j] = (char *) malloc( sizeof(char) * 10 );
memset(fooTab[i].box[j], 'x', 10);
}
}
Using C++ you simply do not deal with memory allocation in such way. As a result, such skills may be left untrained. This will certainly result in e.g. lower debugging skills. Another drawback may be: lower optimizing skills.
C++ code would look like:
#include <vector>
using std::vector;
struct foo {
int bar;
static int idx;
vector< vector<char> > box;
foo() : bar(idx), box(10) {
idx++;
for ( auto it = box.begin(); it != box.end(); it++) {
it->resize(10,'x');
}
}
};
int foo::idx = 0;
foo fooTab[100];
As you can see there is simply no way to learn raw memory management with C++-style code.
EDIT: by "C++-style code" I mean: RAII constructors/destructors, STL containers. Anyway I had probably exaggerated, it would be better to say: It is far more difficult to learn raw memory management with C++-style code.
A: I find it's better to learn memory management in C before attempting to dive into C++.
In C, you just have malloc() and free(). You can typecast things around as pointers. It's pretty straightforward: multiply the sizeof() by the # items needed, typecast appropriately, and use pointer arithmetic to jump all over the place.
In C++, you have your various reinterpret_cast<> (vs. dynamic_cast, etc) kind of things, which are important, but only make sense when you're realizing that the various castings are meant to trap pitfalls in multiple inheritance and other such lovely C++-isms. It's kind of a pain to understand (learn) why you need to put the word "virtual" on a destructor on objects that do memory management, etc.
That reason alone is why you should learn C. I also think it's easier to (at first) understand printf() vs. the potentially overloaded "<<" operator, etc, but to me the main reason is memory management.
A: I wouldn't say you missed anything fundamental. Check out Compatibility of C and C++ for a few non-fundamental examples, though.
Several additions of C99 are not supported in C++ or conflict with C++ features, such as variadic macros, compound literals, variable-length arrays, and native complex-number types. The long long int datatype and restrict qualifier defined in C99 are not included in the current C++ standard, but some compilers such as the GNU Compiler Collection[3] provide them as an extension. The long long datatype along with variadic templates, with which some functionality of variadic macros can be achieved, will be in the next C++ standard, C++0x. On the other hand, C99 has reduced some other incompatibilities by incorporating C++ features such as // comments and mixed declarations and code.
A: The simplicity of C focuses the mind wonderfully.
With fewer things to learn in C, there is a reasonable chance that the student will learn those things better. If one starts with C++ there is the danger of coming out the other end knowing nothing about everything.
A: If all you've ever used is object-oriented programming languages like C++ then it would be worthwhile to practice a little C. I find that many OO programmers tend to use objects like a crutch and don't know what to do once you take them away. C will give you the clarity to understand why OO programming emerged in the first place and help you to understand when its useful versus when its just overkill.
In the same vein, you'll learn what it's like to not rely on libraries to do things for you. By noting what features C++ developers turned into libraries, you get a better sense of how to abstract your own code when the time comes. It would be bad to just feel like you need to abstract everything in every situation.
That said, you don't have to learn C. If you know C++ you can drag yourself through most projects. However, by looking at the lower-level languages, you will improve your programming style, even in the higher-level ones.
A: It really depends on which subset of C++ you learned. For example, you could get by in C++ using only the iostreams libraries; that would leave you without experience in the C stdio library. I'm not sure I would call stdio a fundamental skill, but it's certainly an area of experience one should be familiar with.
Or perhaps manually dealing with C-style null-terminated strings and the str*() set of functions might be considered a fundamental skill. Practice avoiding buffer overflows in the face of dangerous API functions is definitely worth something.
A: This is just a shot in the dark, but perhaps you use function pointers more in C than in C++, and perhaps utilizing function pointers as a skill might be boosted by starting to learn C. Of course all of these goodies are in C++ as well, but once you get your hands on classes it might be easy to just keep focusing on them and miss out on other basic stuff.
You could for example construct your own inheritance and polymorphism "system" in pure C, by using simple structs and function pointers. This requires more inventive thinking and, I think, builds up a greater understanding of what is happening "in the box".
If you start with C++ there is a chance that you miss out on these little details that are there in C++ but you never see them, since the compiler does it for you.
Just a few thoughts.
A: Depends on how you approach C++. I would normally recommend starting at a high level, with STL containers and smart pointers, and that won't make you learn the low-level details. If you want to learn those, you probably should just jump into C.
A: Playing devil's advocate, because I think C is a good starting point...
Personally, I started learning with C, algorithms, data structures, memory allocation, file manipulation, graphics routines... I would call these the elementary particles of programming.
I next learned C++. To over-simplify, C++ adds the layer of object-oriented programming - you have the same objectives as you do regardless of language, but the approach and constructs you build to achieve them are different: classes, overloading, polymorphism, encapsulation, etc...
It wasn't any educated decision on my part, this is simply how my programming course was structured, and it worked out to be a good curriculum.
Another simplification... C is basically a subset of C++. You can "do C" with C++, by avoiding to use the language features of C++. From the perspective of language features. Libraries are different matter. I don't think you will get past more than just programming 101 without beginning to use and build libraries, and there is enough out there to keep you busy for a lifetime.
If your goal is to learn C++ ultimately, then beginning with "C" could be a logical start, the language is "smaller" - but there is so much in "C" that you would probably want to narrow your focus. You are tackling a bigger beast, but if you get guidance, I see no compelling reason not to, despite the path I took.
A: Most C code can be compiled as C++ code with a few minimal changes. For examples of a very few things that can't be changed see Why artificially limit your code to C?. I don't think any of these could be classed as fundamental though.
A: C is essentially a subset of C++. There are differences as highligted by some of the other answers, but essentially most C compiles as C++ with only a little modification.
The benefit of learning C is that it's a much more compact language and learning it is therefore quicker. C also requires you to work on a relatively low abstraction level, which is good for understanding how the computer actually works. I personally grew up with assembly and moved to C from there, which I now see as a definitive advantage in understanding how compilers actually map high-abstraction-level constructs to actual machine code.
In C++ you can find yourself quickly entrenched in higher levels of abstraction. It's good for productivity but not necessarily so for understanding.
So my advice is: study C++ if you have the motivation, but first concentrate on the core low level constructs that are common with C. Only then move to the higher level things such as object orientation, templates, extensive class libraries and so on.
A: optimization theory
primarily because the asm output of a C compiler is generally much more predictable and usually well correlated to the C source. This lets you teach/learn why it's better to structure code a certain way by showing the actual instructions generated.
A: You should learn data structures in C. First, it will force you to really understand pointers, which will help you to deal with leaky abstractions in higher level languages. Second it will make obvious the benefits of OO.
A: Coding Sytle
By coding style I don't mean {} and whitespace, I mean how you organizing data, classes, files, etc.
I started off learning c++, which usually favors creating classes and organizing things a certain way. I could write c code just fine when needed, but it was never like the c style code that others wrote who started off with c.
One of my OS classes required us to write a file system in C. My partner was originally a C guy and I was amazed at the difference in the code we wrote. Mine was obviously trying to use c++ style coding in c.
It's kindof like when you first learn perl or ruby or something and you are just trying to write the same c# code except translated into that new language rather than using that languages features.
While C is a subset of C++, C programs and C++ programs are so different as to not really be comparable.
A: Yes, simple structured programming.
With OOP everywhere, it is easy to forget that you can build sound system just by using modules and functions. C's module system is pretty weak, but still...
This is also one of the advantages of functional programming, in my book. Many functional programming languages have a strong module system. When you need more genericity than simple functions can offer, there are always higher order functions.
A: If you want to learn new skills and you have good fundamentals of structured programming (SP), you should go for C++. Learning to think about a given problem in an object-oriented (OOP) way (defining objects, methods, inheritance, etc.) is sometimes the most difficult part. I was in the same situation as you a few years ago, and I chose C++, because learning a new paradigm, a new way of thinking about and designing software, is sometimes more difficult than the code itself. And if you don't like C++ or OOP you can go back to C and SP, but at least you will have some OOP skills.
A: I would choose C to learn low-level primitives of programming.
For OOP, there are so many other languages to learn before C++ that are much easier to learn. This isn't 1990 when the only OOP languages were Smalltalk and C++, and only C++ was used outside of academia. Almost every new programming language invented since has objects, and almost all are less complex than C++. | unknown | |
d6483 | train | self-signed certificate ... net::ERR_CERT_REVOKED ... MacOS
You probably ran into the new requirements for certificates in macOS 10.15 and iOS 13, which seem to be enforced also for self-signed certificates. While you don't provide any details about your specific certificate, my guess is that it is valid for more than 825 days. It might of course also be any other of the new requirements; see Requirements for trusted certificates in iOS 13 and macOS 10.15 for the details. | unknown | |
d6484 | train | DOM context
CasperJS has a sandboxed DOM context (the page context). It is only there that you can access DOM elements directly. The page context is inside the casper.evaluate() callback. Everything else about a DOM element is only a representation of it, because DOM nodes cannot be passed to the outside context.
Accessing input value
There are a lot of ways to get information out of the DOM, but DOM nodes cannot be printed to the console as-is. Here are two ways to get the input value.
*
*You can use casper.evaluate(). Since you want to use an XPath expression, you can use __utils__.getElementByXPath() helper function that is injected by CasperJS into the page:
var value = casper.evaluate(function(xpathexpr){
return __utils__.getElementByXPath(xpathexpr).value;
}, '//*[@id="ifldf14"]/input');
casper.echo("value: " + value);
*If the input field is inside of a form and if the element has a name attribute, then you can use casper.getFormValues() and get the value using the name of the field.
casper.echo(casper.getFormValues('form').nameHere);
XPath helper utility
var x = require('casper').selectXPath;
This is only a little helper that transforms a string into an object with the properties type and path. It's used internally by CasperJS to represent XPaths. The distinction is necessary because CasperJS supports both CSS selectors and XPath expressions: both start out as simple strings, but they are executed differently.
A: The problem was I was not using jQuery to get my value out. Thanks for everyone's input! | unknown | |
d6485 | train | For node v 4.0.0 and later:
fs.stat("/dir/file.txt", function(err, stats){
var mtime = stats.mtime;
console.log(mtime);
});
or synchronously:
var stats = fs.statSync("/dir/file.txt");
var mtime = stats.mtime;
console.log(mtime);
A: Just adding to what Sandro said: if you want to perform the check as fast as possible, without having to parse a date or anything, use mtimeMs to get the timestamp in milliseconds (a plain number).
Asynchronous example:
require('fs').stat('package.json', (err, stat) => console.log(stat.mtimeMs));
Synchronous:
console.log(require('fs').statSync('package.json').mtimeMs);
A: With Async/Await:
const fs = require('fs').promises;
const lastModifiedDate = (await fs.stat(filePath)).mtime;
A: You should use the stat function :
According to the documentation :
fs.stat(path, [callback])
Asynchronous stat(2). The callback gets two arguments (err, stats) where stats is a fs.Stats object. It looks like this:
{ dev: 2049
, ino: 305352
, mode: 16877
, nlink: 12
, uid: 1000
, gid: 1000
, rdev: 0
, size: 4096
, blksize: 4096
, blocks: 8
, atime: '2009-06-29T11:11:55Z'
, mtime: '2009-06-29T11:11:40Z'
, ctime: '2009-06-29T11:11:40Z'
}
As you can see, the mtime is the last modified time.
A: Here you can get the file's last modified time in seconds.
fs.stat("filename.json", function(err, stats){
let seconds = (new Date().getTime() - stats.mtime) / 1000;
console.log(`File modified ${seconds} ago`);
});
Outputs something like "File modified 300.9 seconds ago" | unknown | |
d6486 | train | Even though you normalise the input, you don't normalise the output. The LSTM by default has a tanh output, which means you will have a limited feature space, i.e. the dense layer won't be able to regress to large numbers.
You have a fixed-length numerical input of shape (50,); pass that directly to Dense layers with relu activation and it will perform better on regression tasks. Something simple like:
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(64, input_dim=50, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(1))
For regression it is also preferable to use l2 regularizers instead of Dropout because you are not really feature extracting for classification etc. | unknown | |
d6487 | train | You can create this one rule in your root .htaccess to block those directory listings:
RewriteEngine On
RewriteRule ^(css|images|includes|js)/?$ - [NC,F] | unknown | |
d6488 | train | Looking at the MSDN pages for the string.Contains and string.IndexOf methods clearly shows that neither of these methods ever throws a FormatException.
I can only conclude that it must be another part of the code (possibly a call to string.Format?) throwing this exception. Perhaps posting the relevant section of code would help? | unknown | |
d6489 | train | You should avoid deleting if you're just inserting, or deleting then inserting. If something goes wrong, you'd be left with no data similar to a truncate. What you really should be doing is inserting or updating.
You can do this manually, selecting, then either inserting or updating, separately. Or you can use firstOrCreate, firstOrNew, updateOrCreate or similar approaches.
Since you're working on multiple records at once, you might try an upsert for updating/inserting multiple records at once:
https://laravel.com/docs/9.x/eloquent#upsert
$toInsertOrUpdate = [];
foreach ($factpelanggan4 as $key => $value) {
// I don't quite follow your looping logic.
// What you wrote with $key++ after each loop before
// will be completely ignored since each loop overwrites it.
// This is a factored version of what you wrote before.
// Please double check it to make sure it does what you want.
$toInsertOrUpdate[] = [
'id_tahun' => $tahun_id[$key],
'id_lokasi' => $lokasi_id[$key],
'jml_pelanggan' => $jml[$key],
];
}
// these columns together identify duplicate records.
$uniqueIdentifyingColumns = ['id_tahun','id_lokasi','jml_pelanggan'];
// when a duplicate record is found, only these columns will be updated
$columnsToUpdateOnDuplicate = ['id_tahun','id_lokasi','jml_pelanggan'];
$tbl = DB::connection('clickhouse')->table('fakta_pelanggan');
$tbl->upsert(
$toInsertOrUpdate,
$uniqueIdentifyingColumns,
$columnsToUpdateOnDuplicate
); | unknown | |
d6490 | train | You're trying to set your AppDelegate as a UNUserNotificationCenterDelegate, but your AppDelegate does not implement that protocol yet. Find out more information about protocols here. The spec for UNUserNotificationCenterDelegate can be found here. Something like this will work:
extension AppDelegate: UNUserNotificationCenterDelegate {
func userNotificationCenter(_ center: UNUserNotificationCenter, // "optional" is only valid inside @objc protocol declarations
willPresent notification: UNNotification,
withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -> Void) {
// TODO: Implement
}
func userNotificationCenter(_ center: UNUserNotificationCenter,
didReceive response: UNNotificationResponse,
withCompletionHandler completionHandler: @escaping () -> Void) {
// TODO: Implement
}
}
The second error means the property does not exist. The documentation is most likely out of date with the framework. | unknown | |
d6491 | train | SELECT * FROM wpps_posts p
INNER JOIN wp_postmeta wp ON wp.post_ID = p.ID
AND wp.meta_key='price'
WHERE p.post_type = 'zoacres-property'
ORDER BY wp.meta_value asc
A: You could do something like this, depending on what other types of meta records you have.
SELECT * FROM wpps_posts
LEFT JOIN wp_postmeta ON wp_postmeta.post_id = wpps_posts.ID AND wp_postmeta.meta_key = 'price'
WHERE wpps_posts.post_type = 'zoacres-property'
ORDER BY wp_postmeta.meta_value | unknown | |
d6492 | train | Yes, you can do that in two ways:
*
*create a function with an ajax call to your servlet.
*point your href to the servlet link (as JB Nizat mentioned)
For the first method, you can follow the approach below (if you use jQuery):
function callServer(){
$.ajax({
url: 'ServletName',
type: 'POST',
data: 'parameter1='+parameter1,
cache: false,
success: function (data) {
//console.log("SERVLET DATA: " + data);
if (typeof (data) !== 'undefined' && data !== '' && data !== null) {
var response = JSON.parse(data);
console.log(response);
}
},error: function(data){
}
});
}
and call this function in your tag like below:
<a href="javascript:callServer();"> </a>
or the much better way like :
<a href="#" onclick="callServer();"> </a>
You can select whichever approach suits you better!
A: Using an anchor tag without Ajax, you can try:
<a href="servletName?paramName1=value1¶mName2=value2">click me to send parameter to servlet</a> | unknown | |
d6493 | train | How about something like this. This command will start it if it isn't already running. No need to check in advance.
Dim shell
Set shell = CreateObject("WScript.Shell")
shell.Run "NET START spooler", 1, false
A: strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
& "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colRunningServices = objWMIService.ExecQuery _
("select State from Win32_Service where Name = 'Spooler'")
For Each objService in colRunningServices
If objService.State <> "Running" Then
errReturn = objService.StartService()
End If
Next
Note that you can also use objService.Started to check whether it's started.
A: Just for the completeless, here's an alternative variant using the Shell.Application object:
Const strServiceName = "Spooler"
Set oShell = CreateObject("Shell.Application")
If Not oShell.IsServiceRunning(strServiceName) Then
oShell.ServiceStart strServiceName, False
End If
Or simply:
Set oShell = CreateObject("Shell.Application")
oShell.ServiceStart "Spooler", False ' This returns False if the service is already running | unknown | |
d6494 | train | Version 1.4+ of the JSON toolkit includes functions that you can call from a Custom operator.
This version must be downloaded from Github, as it is not yet included in the Streams product.
Download the latest version from Github, which is 1.4.4.
Build the toolkit: cd com.ibm.streamsx.json and run ant.
Then you can use the extractFromJSON function:
public T extractFromJSON(rstring jsonString, T value)
Pass the JSON string you want to parse, and a mutable tuple that will contain the parsed result as parameters.
For example:
composite ExtractFromJSON {
type
//the target output type. Nested JSON arrays are not supported.
Nested_t = tuple<rstring name, int32 age, list<rstring> relatives> person, tuple<rstring street, rstring city> address;
graph
() as Test = Custom() {
logic
onProcess : {
rstring jsonString = '{"person" : {"name" : "John", "age" : 42, "relatives" : ["Jane","Mike"]}, "address" : {"street" : "Allen Street", "city" : "New York City"}}';
mutable Nested_t nestedTuple = {};
println( extractFromJSON(jsonString, nestedTuple));
}
}
}
This is based on the ExtractFromJSON sample
that you can find in the repo on Github.
Hope this helps. | unknown | |
d6495 | train | First of all, in your use case, there's no need for the async { } block. Async.AwaitTask returns an Async<'T>, so your async { } block is just unwrapping the Async object that you get and immediately re-wrapping it.
Now that we've gotten rid of the unnecessary async block, let's look at the type you've gotten, and the type you wanted to get. You got an Async<'a>, and you want an object of type 'a. Looking through the available Async functions, the one that has a type signature like Async<'a> -> 'a is Async.RunSynchronously. It takes two optional parameters, an int and a CancellationToken, but if you leave those out, you've got the function signature you're looking for. And sure enough, once you look at the docs it turns out that Async.RunSynchronously is sort of (but not exactly) like C#'s await, which is what you want. C#'s await is a statement you can use inside an async function, whereas F#'s Async.RunSynchronously takes an async object and blocks the current thread until that async object has finished running, which is precisely what you're looking for in this case.
let readEventFromEventStore<'a when 'a : not struct> (eventStore:IEventStoreRepository) (streamName:string) (position:int) =
eventStore.ReadEventAsync(streamName, position)
|> Async.AwaitTask
|> Async.RunSynchronously
That should get you what you're looking for. And note that technique of figuring out the function signature of the function you need, then looking for a function with that signature. It'll help a LOT in the future.
Update: Thank you Tarmil for pointing out my mistake in the comments: Async.RunSynchronously is not equivalent to C#'s await. It's pretty similar, but there are some important subtleties to be aware of since RunSynchronously blocks the current thread. (You don't want to call it in your GUI thread.)
Update 2: When you want to await an async result without blocking the current thread, it's usually part of a pattern that goes like this:
*
*Call some async operation
*Wait for its result
*Do something with that result
The best way to write that pattern is as follows:
let equivalentOfAwait () =
async {
let! result = someAsyncOperation()
doSomethingWith result
}
The above assumes that doSomethingWith returns unit, because you're calling it for its side effects. If instead it returns a value, you'd do:
let equivalentOfAwait () =
async {
let! result = someAsyncOperation()
let value = someCalculationWith result
return value
}
Or, of course:
let equivalentOfAwait () =
async {
let! result = someAsyncOperation()
return (someCalculationWith result)
}
That assumes that someCalculationWith is NOT an async operation. If instead you need to chain together two async operations, where the second one uses the first one's result -- or even three or four async operations in a sequence of some kind -- then it would look like this:
let equivalentOfAwait () =
async {
let! result1 = someAsyncOperation()
let! result2 = nextOperationWith result1
let! result3 = penultimateOperationWith result2
let! finalResult = finalOperationWith result3
return finalResult
}
Except that let! followed by return is exactly equivalent to return!, so that would be better written as:
let equivalentOfAwait () =
async {
let! result1 = someAsyncOperation()
let! result2 = nextOperationWith result1
let! result3 = penultimateOperationWith result2
return! (finalOperationWith result3)
}
All of these functions will produce an Async<'T>, where 'T will be the return type of the final function in the async block. To actually run those async blocks, you'd either do Async.RunSynchronously as already mentioned, or you could use one of the various Async.Start functions (Start, StartImmediate, StartAsTask, StartWithContinuations, and so on). The Async.StartImmediate example talks a little bit about the Async.SwitchToContext function as well, which may be something you'll want to read about. But I'm not familiar enough with SynchronizationContexts to tell you more than that.
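If you want to kick off an async without blocking at all, here's a minimal sketch of Async.StartWithContinuations, reusing the hypothetical someAsyncOperation from above:
// Sketch: starts the async in the background and registers a handler for
// each of the three possible outcomes; the calling thread is not blocked.
let startWithHandlers () =
    Async.StartWithContinuations(
        someAsyncOperation (),
        (fun result -> printfn "Succeeded with %A" result), // success
        (fun ex -> printfn "Failed: %s" ex.Message),        // exception
        (fun _ -> printfn "Cancelled"))                     // cancellation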
A: An alternative to using the async computation expression for this situation (F# calling a C# Task-based XxxAsync method) is to use the task computation expression from:
https://github.com/rspeele/TaskBuilder.fs
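With that library referenced, the read-event function from earlier might look like this (a sketch; the namespace to open differs between TaskBuilder.fs versions):
open FSharp.Control.Tasks.V2 // TaskBuilder.fs; some versions use FSharp.Control.Tasks

// Sketch: same shape as readEventFromEventStore above, but it returns a
// Task directly, so there's no Async round-trip and nothing blocks.
let readEventAsTask (eventStore: IEventStoreRepository) (streamName: string) (position: int) =
    task {
        let! event = eventStore.ReadEventAsync(streamName, position)
        return event
    }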
The Giraffe F# web framework uses task for more or less the same reason:
https://github.com/giraffe-fsharp/Giraffe/blob/develop/DOCUMENTATION.md#tasks | unknown | |
d6496 | train | I'm afraid this is not an approach you can tweak to get the result you want. WM_NCLBUTTONDOWN is essentially how the window manager decides how to handle mouse events on a top-level window. It allows you to make windows that have custom chrome, but it doesn't allow you to do anything about non-top-level windows (such as your panel).
You can get reasonably nice-working dragging by doing the obvious thing: handle MouseDown/MouseMove/MouseUp and move the control yourself. It's not going to be as well behaved as moving the entire window, though.
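A minimal sketch of that pattern (WinForms; panel1 and the event wiring are hypothetical, and the handlers live inside your Form class):
private Point dragOffset;
private bool dragging;

private void panel1_MouseDown(object sender, MouseEventArgs e)
{
    dragging = true;
    dragOffset = e.Location; // cursor position relative to the panel
}

private void panel1_MouseMove(object sender, MouseEventArgs e)
{
    if (!dragging) return;
    // e.Location is panel-relative, so shift the panel by the delta.
    panel1.Location = new Point(panel1.Left + e.X - dragOffset.X,
                                panel1.Top + e.Y - dragOffset.Y);
}

private void panel1_MouseUp(object sender, MouseEventArgs e)
{
    dragging = false;
}
Expect some flicker; enabling double buffering on the parent container can help. | unknown |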
d6497 | train | Can you pass the search term via the URL?
<script>
(function() {
var cx = 'YOURID';
var gcse = document.createElement('script');
gcse.type = 'text/javascript';
gcse.async = true;
gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
'//www.google.com/cse/cse.js?cx=' + cx;
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(gcse, s);
})();
</script>
<gcse:searchbox queryParameterName="term"></gcse:searchbox>
<gcse:searchresults></gcse:searchresults>
If you call your "search" page via yourdomain.com/search?term=searchword, the search results appear immediately.
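On any other page, a plain GET form pointing at that page is enough to feed the parameter (a sketch; adjust action to wherever your search page lives):
<form action="/search" method="get">
  <!-- name="term" must match queryParameterName above -->
  <input type="text" name="term">
  <button type="submit">Search</button>
</form>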
A: <gcse:search gname='abcd'></gcse:search>
And when the page has loaded:
google.search.cse.element.getElement('abcd').execute(query);
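Here query has to come from somewhere; a sketch that reads it from the URL once the CSE script has loaded (URLSearchParams needs a reasonably modern browser):
<script>
  window.__gcse = {
    callback: function () {
      // Read ?term=... from the URL and run it against the gname above.
      var term = new URLSearchParams(window.location.search).get('term');
      if (term) {
        google.search.cse.element.getElement('abcd').execute(term);
      }
    }
  };
</script>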
A: I've got it working with the gcse callback option (I also changed my layout in the CSE Control Panel to prevent the default overlay).
<script>
function gcseCallback() {
if (document.readyState != 'complete')
return google.setOnLoadCallback(gcseCallback, true);
google.search.cse.element.render({gname:'gsearch', div:'results', tag:'searchresults-only', attributes:{linkTarget:''}});
var element = google.search.cse.element.getElement('gsearch');
element.execute('this is my query');
};
window.__gcse = {
parsetags: 'explicit',
callback: gcseCallback
};
(function() {
var cx = 'YOUR_ENGINE_ID';
var gcse = document.createElement('script');
gcse.type = 'text/javascript';
gcse.async = true;
gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
'//www.google.com/cse/cse.js?cx=' + cx;
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(gcse, s);
})();
</script>
<div id="results"></div> | unknown | |
d6498 | train | The problem is that solvePnP requires vector<Point2f> and vector<Point3f> as input, not vector<vector<Point2f> > and vector<vector<Point3f> >. In your code, "point_list" is a vector<vector<Point3f> >, and "corner_list" is a vector<vector<Point2f> >.
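For example, a sketch that solves the pose one view at a time (names taken from your code; camera_matrix and dist_coeffs are assumed to come from your calibration step):
#include <opencv2/calib3d.hpp>
#include <vector>

// Sketch: pass ONE inner vector per call -- solvePnP expects a single
// view's 3D-2D correspondences, not the whole list of views.
void solvePoses(const std::vector<std::vector<cv::Point3f>>& point_list,
                const std::vector<std::vector<cv::Point2f>>& corner_list,
                const cv::Mat& camera_matrix, const cv::Mat& dist_coeffs)
{
    for (std::size_t i = 0; i < point_list.size(); ++i) {
        cv::Mat rvec, tvec;
        cv::solvePnP(point_list[i], corner_list[i],
                     camera_matrix, dist_coeffs, rvec, tvec);
    }
}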
The documentation of solvePnP can be found here: http://docs.opencv.org/3.0-beta/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html | unknown | |
d6499 | train | The point is that the filter method returns a new stream; the old one is terminated once filter has been called on it.
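A minimal sketch of the difference (Java 9+ for List.of):
import java.util.List;
import java.util.stream.Stream;

public class StreamReuseDemo {
    public static void main(String[] args) {
        Stream<String> stream = List.of("a", "bb", "ccc").stream();

        // filter returns a NEW stream; keep chaining off the return value.
        Stream<String> longer = stream.filter(s -> s.length() > 1);
        longer.forEach(System.out::println); // bb, ccc

        // Reusing the ORIGINAL stream here would throw
        // IllegalStateException: stream has already been operated upon.
        // stream.forEach(System.out::println);
    }
}
The Javadoc on Stream.filter spells this out: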
/**
* Returns a stream consisting of the elements of this stream that match
* the given predicate.
*
* <p>This is an <a href="package-summary.html#StreamOps">intermediate
* operation</a>.
*
* @param predicate a <a href="package-summary.html#NonInterference">non-interfering</a>,
* <a href="package-summary.html#Statelessness">stateless</a>
* predicate to apply to each element to determine if it
* should be included
* @return the new stream
*/
Stream<T> filter(Predicate<? super T> predicate); | unknown | |
d6500 | train | Similar to your other question:
First convert to datetimes:
df.loc[:, ["Start", "End"]] = (df.loc[:, ["Start", "End"]]
.transform(pd.to_datetime, format="%m/%d/%Y"))
df
identity Start End week
0 E 2020-06-18 2020-07-02 1
1 E 2020-06-18 2020-07-02 2
2 2D 2020-07-18 2020-08-01 1
3 2D 2020-07-18 2020-08-01 2
4 A1 2020-09-06 2020-09-20 1
5 A1 2020-09-06 2020-09-20 2
Your identity is in groups of two, so I'll use that when selecting dates from the date_range:
from itertools import chain
result = df.drop_duplicates(subset="identity")
date_range = (
pd.date_range(start, end, freq="7D")[:2]
for start, end in zip(result.Start, result.End)
)
date_range = chain.from_iterable(date_range)
End = lambda df: df.Start.add(pd.Timedelta("7 days"))
Create new dataframe:
df.assign(Start=list(date_range), End=End)
identity Start End week
0 E 2020-06-18 2020-06-25 1
1 E 2020-06-25 2020-07-02 2
2 2D 2020-07-18 2020-07-25 1
3 2D 2020-07-25 2020-08-01 2
4 A1 2020-09-06 2020-09-13 1
5 A1 2020-09-13 2020-09-20 2 | unknown |