d17601 | val | TypeScript will warn you if you try to pass unknown props.
ESLint will warn you if you have unused variables, imports, etc. with rules like:
*no-unused-expressions
*no-unused-vars
*react/no-unused-prop-types
*unused-imports | unknown | |
d17602 | val | Changing the table
The approach:
*Add a column with a substitute of the correct type (date is recommended instead of datetime2(7)).
*Update this column with Convert(date, LAST_UPDATE, 101).
*Drop the original column.
*Rename the new column to the name of the original column.
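In T-SQL the four steps might look like this (a sketch; MyTable is a placeholder table name, only the LAST_UPDATE column comes from the question):

```sql
-- Sketch of the column-swap approach
ALTER TABLE MyTable ADD LAST_UPDATE_NEW date NULL;

UPDATE MyTable
SET LAST_UPDATE_NEW = CONVERT(date, LAST_UPDATE, 101);

ALTER TABLE MyTable DROP COLUMN LAST_UPDATE;

EXEC sp_rename 'MyTable.LAST_UPDATE_NEW', 'LAST_UPDATE', 'COLUMN';
```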
Important note: Check all the import scripts to this table to fix the functions used to set LAST_UPDATE.
Alternative
*Add a column named LAST_UPDATE_DATE of type date as a derived column
*Derived column formula: AS Convert(date, LAST_UPDATE, 101) [PERSISTED]
*Keep both values: the one as imported and the one as needed
Important note: If you get any date format other than the US one, this formula breaks, as it explicitly expects the 101 (US) format.
View as crazy alternative
Build a view on top of this table that does the transformation. Note that in SQL Server 2008 there is no TRY_CAST function to fail gracefully.
Use the view for downstream work.
Why date?
Type date costs 3 bytes and is perfect for date-only values.
datetime2(0) costs 6 bytes, the default datetime2(7) costs 8 bytes.
References:
Cast and Convert https://learn.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql?view=sql-server-ver15
Datetime2 https://learn.microsoft.com/en-us/sql/t-sql/data-types/datetime2-transact-sql?view=sql-server-ver15
Try_Cast https://learn.microsoft.com/en-us/sql/t-sql/functions/try-cast-transact-sql?view=sql-server-ver15
A: Since you are using the 2008 version, TRY_CAST will not be helpful.
One safe method is to implement the update with a loop, going through the entire table row by row with a try/catch block inside the loop so that you can handle any failure at casting time. In the catch block, you can update the value to NULL to identify the rows where casting failed. | unknown | |
d17603 | val | Update: To answer your question about type inference:
The initializer list constructor of vector<string> takes an initializer_list<string>. It is not templated, so nothing happens in terms of type inference.
Still, the type conversion and overload resolution rules applied here are of some interest, so I'll let my initial answer stand, since you have accepted it already:
Original answer:
At first, the compiler only sees the initializer list {"one","two","three"}, which is only a list of initializers, not yet an object of the type std::initializer_list.
Then it tries to find an appropriate constructor of vector<string> to match that list.
How it does that is a somewhat complicated process you would do best to look up in the standard itself if you are interested in the exact process.
Therefore, the compiler decides to create an actual object of std::initializer_list<string> from the initializer list, since the implicit conversion from the char*'s to std::strings makes that possible.
Another, maybe more interesting example:
std::vector<long> vl1{3};
std::vector<string> vs1{3};
std::vector<string> vs2{0};
What do these do?
*The first line is relatively easy. The initializer list {3} can be converted into a std::initializer_list<long> analogous to the {"one", "two", "three"} example above, so you get a vector with a single element, which has value 3.
*The second line is different. It constructs a vector of 3 empty strings. Why? Because an initializer list {3} can by no means be converted into an std::initializer_list<string>, so the "normal" constructor std::vector<T>::vector(size_t, T = T()) kicks in and gives three default-constructed strings.
*Well, this one should be roughly the same as the second, right? It should give an empty vector, in other words, with zero default-constructed strings. WRONG! The 0 can be treated as a null pointer constant, which makes the std::initializer_list<string> constructor viable. Only this time the single string in that list gets constructed from a null pointer, which is not allowed, so you get an exception.
A: There is no type inference because vector provides only a fully specialized constructor taking the initializer list. We could add a template indirection to play with type deduction. The example below shows that a std::initializer_list<const char*> is an invalid argument to the vector constructor.
#include <string>
#include <vector>
std::string operator"" _s( const char* s, size_t sz ) { return {s, s+sz}; }
template<typename T>
std::vector<std::string> make_vector( std::initializer_list<T> il ) {
return {il};
}
int main() {
auto compile = make_vector<std::string>( { "uie","uieui","ueueuieuie" } );
auto compile_too = make_vector<std::string>( { "uie"_s, "uieui", "ueueuieuie" } );
//auto do_not_compile = make_vector( { "uie","uieui","ueueuieuie" } );
}
Live demo
A: From http://en.cppreference.com/w/cpp/language/string_literal:
The type of an unprefixed string literal is const char[]
Thus things go this way:
#include <iostream>
#include <initializer_list>
#include <vector>
#include <typeinfo>
#include <type_traits>
using namespace std;
int main() {
std::cout << std::boolalpha;
std::initializer_list<char*> v = {"one","two","three"}; // Takes string literal pointers (char*)
auto var = v.begin();
char *myvar;
cout << (typeid(decltype(*var)) == typeid(decltype(myvar))); // true
std::string ea = "hello";
std::initializer_list<std::string> v2 = {"one","two","three"}; // Constructs 3 std::string objects
auto var2 = v2.begin();
cout << (typeid(decltype(*var2)) == typeid(decltype(ea))); // true
std::vector<std::string> vec(v2);
return 0;
}
http://ideone.com/UJ4a0i | unknown | |
d17604 | val | using $("select#areaCode").find(":selected").val() should work.
EDIT
You should change:
<ul class="dropdown-menu" role="menu">
<li><a href="#" id="1">US: +1</a>
</li>
<li><a href="#" id="44">UK: +44</a>
</li>
</ul>
to:
<select id="areaCodes">
<option value="1">US: +1</option>
<option value="44">UK: +44</option>
</select>
now using:
$("#areaCodes").find(":selected").val();
will return the selected option's value. | unknown | |
d17605 | val | A thread pool is built around the idea that, since creating threads over and over again is time-consuming, we should try to recycle them as much as possible. Thus, a thread pool is a collection of threads that execute jobs, but are not destroyed when they finish a job, but instead "return to the pool" and either take another job or sit idle if there is nothing to do.
Usually the underlying implementation is a thread-safe queue in which the programmer puts jobs and a bunch of threads managed by the implementation keep polling (I'm not implying busy-spinning necessarily) the queue for work.
In Java the thread pool is represented by the ExecutorService interface, commonly obtained through the Executors factory methods, which can create:
*fixed - a thread pool with a fixed number of threads
*cached - a pool that dynamically creates and destroys threads as needed
*single - a pool with a single thread
Note that, since thread pool threads operate in the manner described above (i.e. are recycled), in the case of a fixed thread pool it is not recommended to have jobs that do blocking I/O operations, since the threads taking those jobs will be effectively removed from the pool until they finish the job and thus you may have deadlocks.
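As a sketch of the fixed pool in action (the class and method names below are made up for illustration): the same two threads are created once and recycled across all submitted jobs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class PoolDemo {
    // Submit several CPU-bound jobs to a small fixed pool; the two pool
    // threads are created once and reused for all jobs.
    static List<Integer> squares(List<Integer> inputs) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int n : inputs) {
            final int v = n;
            futures.add(pool.submit(() -> v * v));
        }
        List<Integer> out = new ArrayList<>();
        try {
            for (Future<Integer> f : futures) {
                out.add(f.get()); // blocks until that job is done
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        pool.shutdown();
        return out;
    }
}
```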
As for the array of threads, it's as simple as creating any object array:
Thread[] threads = new Thread[10]; // array of 10 threads | unknown | |
d17606 | val | You could store such a list inside of the /my_favorites/USER_ID document as an array of currently favorited product IDs. You could maintain this list using a Cloud Function as each product is added and removed from the /my_favorites/USER_ID/products collection, but it's arguably simpler to just make use of a batched write along with the array field transforms, arrayUnion() and arrayRemove().
_saveFavorite(Product product) async {
await _loadCurrentUser();
Firestore db = Firestore.instance;
WriteBatch batch = db.batch();
// queue adding the product's ID to the products array
batch.update(
db.collection("my_favorites")
.document(_userID),
{
"products": FieldValue.arrayUnion([product.id])
}
);
// queue uploading a copy of the product's data to this user's favorites
batch.set(
db.collection("my_favorites")
.document(_userID)
.collection("products")
.document(product.id),
product.toMap()
);
return batch.commit();
}
_removeFavorite(String productID) async {
await _loadCurrentUser();
Firestore db = Firestore.instance;
WriteBatch batch = db.batch();
// queue removing product.id from the products array
batch.update(
db.collection("my_favorites")
.document(_userID),
{
"products": FieldValue.arrayRemove([productID])
}
);
// queue deleting the copy of /products/PRODUCT_ID in this user's favorites
batch.delete(
db.collection("my_favorites")
.document(_userID)
.collection("products")
.document(productID)
);
return batch.commit();
}
To get the list of product IDs, you would use something similar to:
_getFavoriteProductIDs() async {
await _loadCurrentUser();
Firestore db = Firestore.instance;
return db.collection("my_favorites")
.document(_userID)
.get()
.then((querySnapshot) {
return querySnapshot.exists ? querySnapshot.get("products") : []
});
}
You could even convert it to work with lists instead:
_saveFavorite(List<Product> products) async {
if (products.length == 0) {
return; // no action needed
}
await _loadCurrentUser();
Firestore db = Firestore.instance;
WriteBatch batch = db.batch();
// queue adding each product's ID to the products array
batch.update(
db.collection("my_favorites")
.document(_userID),
{
"products": FieldValue.arrayUnion(
products.map((product) => product.id).toList()
)
}
);
// queue uploading a copy of each product to this user's favorites
for (var product in products) {
batch.set(
db.collection("my_favorites")
.document(_userID)
.collection("products")
.document(product.id),
product.toMap()
);
}
return batch.commit();
}
_removeFavorite(List<String> productIDs) async {
if (productIDs.length == 0) {
return; // no action needed
}
await _loadCurrentUser();
Firestore db = Firestore.instance;
WriteBatch batch = db.batch();
// queue removing each product ID from the products array
batch.update(
db.collection("my_favorites")
.document(_userID),
{
"products": FieldValue.arrayRemove(productIDs)
}
);
// queue deleting the copy of each product in this user's favorites
for (var productID in productIDs) {
batch.delete(
db.collection("my_favorites")
.document(_userID)
.collection("products")
.document(productID)
);
}
return batch.commit();
}
Additional note: With your current implementation, a favourited product is copied from /products/PRODUCT_ID to /my_favorites/USER_ID/products/PRODUCT_ID. Remember that with this structure, if /products/PRODUCT_ID is ever updated, you will have to update every copy of that product. I suggest renaming products to favorited-products so that you can achieve this using a Cloud Function and a Collection Group Query (see this answer for more info). | unknown | |
d17607 | val | From BigQuery docs which says it seems that no error is returned when table exists:
The CREATE TABLE IF NOT EXISTS DDL statement creates a table with the
specified options only if the table name does not exist in the
dataset. If the table name exists in the dataset, no error is
returned, and no action is taken.
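For instance, such a statement might look like this (a sketch; the dataset and table names below are placeholders):

```sql
CREATE TABLE IF NOT EXISTS mydataset.mytable (
  id INT64,
  name STRING
);
-- Completes without error even if mydataset.mytable already exists
```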
Answering your question: DDL is supported through the API, which is also stated in the docs. To do this:
Call the jobs.query method and supply the DDL statement in the request
body's query property. | unknown | |
d17608 | val | Hit this issue with VS2017, what it happened is we converted a dotnet core project back to use .Net frameworks. The old project.assets.json was left in obj folder. And it caused this error. When the file or the obj folder is removed, it builds fine.
A: I resolved this by not using NuGet for this project anymore.
*Removed all NuGet packages from the project:
 *Right-click the project in Visual Studio
 *Manage NuGet Packages...
 *Uninstall packages one by one until there are none
*Deleted the project.json file in the root directory of the project.
*Restarted Visual Studio. This step might not be necessary, but sometimes when you remove the project.json file you get a NuGet-related error when building the project. If this happens, restart Visual Studio.
d17609 | val | Simplify your code:
*avoid unnecessary globals; pass parameters to the corresponding functions instead
*avoid reimplementing a thread pool (it hurts readability and misses convenience features accumulated over the years)
The simplest way to capture stderr is to use stderr=PIPE and .communicate() (blocking call):
#!/usr/bin/env python3
from configparser import ConfigParser
from datetime import datetime
from multiprocessing.dummy import Pool
from subprocess import Popen, PIPE

def backup_db(item, conf):  # config[item] == conf
    """Run `mysqldump ... | gpg ...` command."""
    genfile = '{conf[DBName]}-{now:%Y%m%d}-{conf[PubKey]}.sql.gpg'.format(
        conf=conf, now=datetime.now())
    # ...
    args = ['mysqldump', '-u', conf['UserNm'], ...]
    with Popen(['gpg', ...], stdin=PIPE) as gpg, \
            Popen(args, stdout=gpg.stdin, stderr=PIPE) as db_dump:
        gpg.communicate()
        error = db_dump.communicate()[1]
    if gpg.returncode or db_dump.returncode:
        raise RuntimeError(error)  # report the failure instead of ignoring it

def main():
    config = ConfigParser()
    with open('backup.cfg') as file:  # raise exception if config is unavailable
        config.read_file(file)
    with Pool(2) as pool:
        pool.starmap(backup_db, config.items())

if __name__ == "__main__":
    main()
NOTE: no need to call db_dump.terminate() if gpg dies prematurely: mysqldump dies when it tries to write something to the closed gpg.stdin.
If there are huge number of items in the config then you could use pool.imap() instead of pool.starmap() (the call should be modified slightly).
For robustness, wrap backup_db() function to catch and log all exceptions. | unknown | |
d17610 | val | If you moved files from the Controllers folder or the VIews folder in the root of the project into Controllers or View folders contained in the {AreaName} folder, then all of those files moved need their namespaces changed from {ProjectName}.{*etCetera} to:
{ProjectName}.Areas.{AreaName}.{*etCetera}
A:
Turns out what was wrong was that the order of routing was incorrect: it was processing {controller}/{action}/{id} first, which was breaking my areas. Moving the
AreaRegistration.RegisterAllAreas();
to the top of my application_start fixed my problem.
http://haacked.com/archive/2011/04/13/routedebugger-2.aspx Link to the tool I used to find the problem, unfortunately the current version of the tool doesn't work on the default 404 pages, so I also had to create a custom 404 page for it to work. | unknown | |
d17611 | val | (Question answered in the comments. See Question with no answers, but issue solved in the comments (or extended in chat) )
@WeloSefer wrote:
maybe this can help you get started ... I have never worked with jsoup nor pdfbox so I am no help but I sure will try pdfbox since I've been testing itextpdf reader for extracting texts.
The OP wrote:
Thanks, that is what I was looking for - it works now :)
this problem is solved - working code is here http://thottingal.in/blog/2009/06/24/pdfbox-extract-text-from-pdf/ | unknown | |
d17612 | val | *
*Is the webpage from another domain?
*Does the webpage of the iframe start with http while the parent page is https? Make sure the protocols are the same. | unknown | |
d17613 | val | Okay.. The code in your question seems right! However, you can still try these configuration directives in your .htaccess file:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$
RewriteRule ^(.*)$ http://theexample.com/$1
But first, make sure that there's an Apache HTTP Server with mod_rewrite in your web host. | unknown | |
d17614 | val | The --format option was only added to docker ps in version 1.8.0 so unless you are able to upgrade then you won't be able to use it.
It would be quite handy if this were made clear in the documentation page you linked to, but I think Docker just expects you to use the latest version - they are not known for providing long-term support for older versions.
d17615 | val | Quick answer
Your Main method and your task run in parallel, and the Main method does not wait for the task to finish.
In release mode, the task is "lucky": it finishes before Main. Not in debug mode. In both cases the execution order is non-deterministic.
The fact that they run in parallel explains why you can't predict the order of the printed lines.
Explanations
A Task runs on a thread that comes from the thread pool, so tasks run on background threads.
The process running your code (which consists of all your threads) does not wait for background threads to finish in order to terminate. The process only waits for foreground threads to finish.
Then you may want to use the Thread class, because threads are foreground by default. But using Task is easier. So @John Wu's comment is totally relevant:
A task is not guaranteed to finish unless you await it or call Wait or
Result or do something else to wait for it
You simply want to add at the end of your code:
task.Wait();
However you'll never be able to predict the order of the printed lines, because the threads run in parallel. | unknown | |
d17616 | val | You bound the input field value to the state property dueDate. Now if you want to modify it, you have to refresh the state property on input field change, therefore:
onChange={event => this.setState({dueDate: event.target.value})}
A: You wrote a controlled component. You set a state value on the input element, so if the state changes, your input value changes. Change your code like this:
// input element
<input value={this.state.dueDate} onChange={this.handleDueDate}/>
// handleDueDate method
handleDueDate(event){
this.setState({
dueDate: event.target.value
})
}
If you change your code like this, it works fine.
d17617 | val | if image uploads to '/uploads' folder then try like
app.use('/uploads', express.static(process.cwd() + '/uploads'))
A: __dirname gives you the directory name of the file it is used in. I don't know where the entry point file is in your application, but that's where you have to start.
By the way, I advise you to use the join function of the path module to concatenate paths so that they work on both Linux and Windows filesystems.
A: I found a solution on Quora here, thanks to everyone who helped :) :
link | unknown | |
d17618 | val | The issue is because the name property of the resource is only one (for storing local binaries), and it does not iterate over the attributes passed as array.
For this foreach loop to work, you need to use the loop variable path in the resource.
Example of using it as "resource name":
[
"#{node.default['user_home']}/.local",
"#{node.default['user_home']}/.local/bin"
].each do |path|
directory path do
owner 'chefuser'
group 'chefuser'
mode '0755'
action :create
end
end | unknown | |
d17619 | val | If you can call MyScript (as opposed to ./MyScript), obviously the current directory (".") is part of your PATH. (Which, by the way, isn't a good idea.)
That means you can call MyScript in your script just like that:
#!/bin/bash
mydir=My/Folder/
cd $mydir
echo $(pwd)
MyScript
As I said, ./MyScript would be better (not as ambiguous). See Michael Wild's comment about directory separators.
Generally speaking, Bash considers everything that does not resolve to a builtin keyword (like if, while, do etc.) as a call to an executable or script (*) located somewhere in your PATH. It will check each directory in the PATH, in turn, for a so-named executable / script, and execute the first one it finds (which might or might not be the MyScript you are intending to run). That's why specifying that you mean the very MyScript in this directory (./) is the better choice.
(*): Unless, of course, there is a function of that name defined.
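A quick way to see the difference between the bare name and the explicit relative path (a sketch; all file and directory names below are made up, run in an empty scratch directory):

```shell
#!/bin/bash
# Create a throwaway script two levels down
mkdir -p My/Folder
printf '#!/bin/bash\necho hello\n' > My/Folder/MyScript
chmod +x My/Folder/MyScript

cd My/Folder
./MyScript                  # explicit relative path: found without consulting PATH
# MyScript                  # bare name: would only be found if . (or this dir) is on PATH
```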
A: #!/bin/bash
mydir=My/Folder/
cd $mydir
echo $(pwd)
MyScript
A: I would rather put the name in quotes. This makes it easier to read and guards against mistakes.
#!/bin/bash
mydir="My Folder"
cd "$mydir"
echo $(pwd)
./MyScript
A: Your nickname says it all ;-)
When a command is entered at the prompt that doesn't contain a /, Bash first checks whether it is a alias or a function. Then it checks whether it is a built-in command, and only then it starts searching on the PATH. This is a shell variable that contains a list of directories to search for commands. It appears that in your case . (i.e. the current directory) is in the PATH, which is generally considered to be a pretty bad idea.
If the command contains a /, no look-up in the PATH is performed. Instead an exact match is required. If starting with a / it is an absolute path, and the file must exist. Otherwise it is a relative path, and the file must exist relative to the current working directory.
So, you have two acceptable options:
*Put your script in some directory that is on your PATH. Alternatively, add the directory containing the script to the PATH variable.
*Use an absolute or relative path to invoke your script. | unknown | |
d17620 | val | Instead of replace, you need to generate a new name with the original extension, I think? If not, please give us more detail.
Dim sName
Dim fso
Dim fol
Dim fil
Dim ext
Set fso = WScript.CreateObject("Scripting.FileSystemObject")
Set fol = fso.GetFolder("F:\Downloads")
For Each fil In fol.Files
'may need to specify a comparison
If InStr(1, fil.Name, "[wizardry] tv show bob - 13", vbTextCompare) <> 0 Then
ext = fso.GetExtensionName(fil)
If Len(ext) > 0 Then ext = "." & ext
sName = "tv show bob S03E13" & ext
fil.Name = sName
End If
Next
WScript.Echo "Completed!" | unknown | |
d17621 | val | Your print_r($dataxml = simplexml_load_file('data.php')); is reading you raw PHP file, not the script execution result!
data.php file have a PHP code that outputs a XML file, not a really XML file.
You should use print_r($dataxml = simplexml_load_file('http://localhost/data.php')); for example. (Assuming that http://localhost/data.php is the url to access your file.)
using only 'data.php' as parameter will get the raw file from server, not processed by PHP. | unknown | |
d17622 | val | Use PDO::fetchAll, for example;
$stmt->execute();
$arrResults = $stmt->fetchAll();
//$arrResults will be multidimensional
//This will echo the first sideimage
echo $arrResults[0]['sideimage'];
If you want to echo all values of sideimage (ie: all rows), you'd have to iterate through the results;
foreach($arrResults as $arrRow) {
echo $arrRow['sideimage'] . PHP_EOL;
}
Links
*PDO::fetchAll
A: You can loop with PDO::fetch():
while($abc = $stmt->fetch())
{
print_r($abc);
}
If you don't want to use a loop, try PDO::fetchAll():
$data = $stm->fetchAll();
A: You could use FETCH_ASSOC
$res = $stmt->fetchAll(PDO::FETCH_ASSOC);
echo $res[0]['sideimage'];
or
foreach($res as $key=>$value) {
$image_val = $value['sideimage'];
} | unknown | |
d17623 | val | def _cleanup():
# clean it up
return
cleanup = _cleanup
try:
# stuff
except:
# handle it
else:
cleanup = lambda: None
cleanup()
A: The clearest way I can think of is to do exactly the opposite of else:
do_cleanup = True
try:
    fn()
except ErrorA as e:
    ...  # do something unique
except ErrorB as e:
    ...  # do something unique
except ErrorC as e:
    ...  # do something unique
else:
    do_cleanup = False
if do_cleanup:
    cleanup()
If the code is inside a function or a loop, you can simplify this further by returning or breaking in the else block.
A: How about catching all the exceptions with one except clause and dividing up the different parts of your handling with if/elif blocks:
try:
    fn()
except (ErrorA, ErrorB, ErrorC) as e:
    if isinstance(e, ErrorA):
        ...  # do something unique
    elif isinstance(e, ErrorB):
        ...  # do something unique
    else:  # isinstance(e, ErrorC)
        ...  # do something unique
    cleanup() | unknown | |
d17624 | val | I think this should be:
SCHEDULER.every '30s' do
  var = File.open("/dashing/abhi/sample.txt", "r")
  var.each_line do |line|
    puts line
    send_event('polarion', { value: line })
  end
end | unknown | |
d17625 | val | The type-checking is a bit weak, the annotations works as long you annotate your code but a more robust way can be achieved by using inspect from the standard library:
it provides full access to frame, ... and everything you may need. In this case with inspect.signature can be used to fetch the signature of the original function to get a the original order of the parameters. Then just regroup the parameters and pass the final group back to the original function. More details in the comments.
from inspect import signature

def wrapper(func):
    def f(*args, **kwargs):
        # signature object
        sign = signature(func)
        # use the order of the function's signature as reference
        order = dict.fromkeys(sign.parameters)
        # update key-values first
        order.update(**kwargs)
        # then fill the remaining slots with positionals
        free_pars = (k for k, v in order.items() if v is None)
        order.update(zip(free_pars, args))
        return func(**order)
    return f

@wrapper
def foo(a, b, c, d):
    print(f"{a} {b} {c} {d}")

foo(10, 12.5, 14, 5.2)
# 10 12.5 14 5.2
foo(10, 12.5, d=5.2, c=14)
# 10 12.5 14 5.2
The code is annotations compatible:
@wrapper
def foo(a: int, b: float, c: int, d: float) -> None:
    print(f"{a} {b} {c} {d}")
The annotation's way, no imports required:
It is a copy-paste of the above code, but using the __annotations__ attribute to get the signature. Remember that annotations may or may not include an annotation for the output.
def wrapper(func):
    def f(*args, **kwargs):
        if not func.__annotations__:
            raise Exception('No clue... inspect or annotate properly')
        params = func.__annotations__
        # set return flag
        return_has_annotation = False
        if 'return' in params:
            return_has_annotation = True
        # remove possible return value
        return_ = params.pop('return', None)
        order = dict.fromkeys(params)
        order.update(**kwargs)
        free_pars = (k for k, v in order.items() if v is None)
        order.update(zip(free_pars, args))
        # restore the return annotation
        if return_has_annotation:
            func.__annotations__ = params | {'return': return_}
        return func(**order)
    return f

@wrapper
def foo(a: int, b: float, c: int, d: float) -> None:
    print(f"{a} {b} {c} {d}")
A: The first thing to be careful of is that keyword arguments are implemented because order does not matter for them; they are intended to map a value to a specific argument by name at call time. So enforcing any specific order on kwargs does not make much sense (or at least would be confusing to anyone trying to use your decorator). So you will probably want to check which kwargs are specified and remove the corresponding argument types.
Next if you want to be able to check the argument types you will need to provide a way to tell your decorator what types you are expected by passing it an argument (you can see more about this here). The only way to do this is to pass a dictionary mapping each argument to the expected type:
@wrapper({'a': int, 'b': int, 'c': float, 'd': int})
def f(a, b, c=6.0, d=5):
    pass

def wrapper(types):
    def inner(func):
        def wrapped_func(*args, **kwargs):
            # be careful here, this only works if kwargs is ordered;
            # for python < 3.6 this portion will not work
            expected_types = [v for k, v in types.items() if k not in kwargs]
            actual_types = [type(arg) for arg in args]
            # substitute these in case you are dead set on checking keyword arguments as well
            # expected_types = list(types.values())
            # actual_types = [type(arg) for arg in args] + [type(v) for v in kwargs.values()]
            if expected_types != actual_types:
                raise TypeError(f"bad argument types:\n\tE: {expected_types}\n\tA: {actual_types}")
            func(*args, **kwargs)
        return wrapped_func
    return inner

@wrapper({'a': int, 'b': float, 'c': int})
def f(a, b, c):
    print('good')

f(10, 2.0, 10)
f(10, 2.0, c=10)
f(10, c=10, b=2.0)
f(10, 2.0, 10.0)  # will raise exception
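If you do want the annotations themselves enforced at runtime rather than a separate type dict, here is a minimal sketch combining inspect.signature with typing.get_type_hints (the decorator name typechecked and the function add are made up for illustration):

```python
import inspect
from typing import get_type_hints

def typechecked(func):
    hints = get_type_hints(func)
    sig = inspect.signature(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)  # raises TypeError on wrong arity
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if expected is not None and not isinstance(value, expected):
                raise TypeError(
                    f"{name}: expected {expected.__name__}, got {type(value).__name__}")
        return func(*args, **kwargs)
    return wrapper

@typechecked
def add(a: int, b: float) -> float:
    return a + b

print(add(1, 2.5))  # 3.5
```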
Now after all of this, I want to point out that this is functionality is probably largely unwanted and unnecessary in python code. Python was designed to be dynamically typed so anything resembling strong types in python is going against the grain and won't be expected by most.
Next, since python 3.5 we have had access to the built-in typing package. This lets you specify the type that you expect to be receiving in a function call:
def f(a: int, b: float, c: int) -> int:
    return a + int(b) + c
Now this won't actually do any type assertions for you, but it will make it plainly obvious what types you are expecting, and most (if not all) IDEs will give you visual warnings that you are passing the wrong type to a function. | unknown | |
d17626 | val | One way to do this is to output a custom object after collecting the properties you want. Example:
Get-WmiObject -Class Win32_Service | ForEach-Object {
    $displayName = $_.DisplayName
    $processID = $_.ProcessID
    $process = Get-Process -Id $processID
    New-Object PSObject -Property @{
        "DisplayName" = $displayName
        "Name" = $process.Name
        "CPU" = $process.CPU
    }
}
A: A couple of other ways to achieve this:
Add a note property to the object returned by Get-Process:
Get-WmiObject -Class Win32_Service |
Select DisplayName,@{Name="PID";Expression={$_.ProcessID}} |
% {
$displayName = $_.DisplayName;
$gp = Get-Process;
$gp | Add-Member -type NoteProperty -name DisplayName -value $displayName;
Write-Output $gp
} |
Select DisplayName, Name,CPU
Set a script scoped variable at one point in the pipeline, and use it at a later point in the pipeline:
Get-WmiObject -Class Win32_Service |
Select @{n='DisplayName';e={($script:displayName = $_.DisplayName)}},
@{Name="PID";Expression={$_.ProcessID}} |
Get-Process |
Select @{n='DisplayName';e={$script:displayName}}, Name,CPU
A: Using a pipelinevariable:
Get-CimInstance -ClassName Win32_Service -PipelineVariable service |
Select @{Name="PID";Expression={$_.ProcessID}} |
Get-Process |
Select Name,CPU,@{Name='DisplayName';Expression={$service.DisplayName}} | unknown | |
d17627 | val | I don't think you can really avoid using a loop here, unless you want to invoke jq via sh. See this answer
Anyway, using your full sample, I managed to parse it into a multi-indexed DataFrame, which I assume is what you want.
import datetime
import re
import json

data = None
with open('datasample.txt', 'r') as f:
    data = f.readlines()
# There's only one line
data = data[0]
# Replace single quotes with double quotes: I did that in the .txt file itself; you could do it using re
# Fix the datetime problem
cleaned_data = re.sub(r'(datetime.datetime\(.*?\))', lambda x: '"' + str(eval(x.group(0)).isoformat()) + '"', data)
Now that the string from the file is valid json, we can load it:
json_data = json.loads(cleaned_data)
And we can process it into a dataframe:
import pandas as pd

# List to store the dfs before concat
all_ = []
for n, night in enumerate(json_data):
    for s, station in enumerate(night):
        events = pd.DataFrame(station)
        # Set index to the event number
        events = events.set_index('###')
        # Prepend night number and station number to index
        events.index = pd.MultiIndex.from_tuples([(n, s, x) for x in events.index])
        all_.append(events)

df_all = pd.concat(all_)
# Rename the index levels
df_all.index.names = ['Night', 'Station', 'Event']
# Convert to datetime
df_all.DateTime = pd.to_datetime(df_all.DateTime)
df_all
(Truncated) Result: | unknown | |
d17628 | val | Here is working, simplified and refactored answer for your issue:
struct ContentView: View {
var body: some View {
SliderOverviewView()
}
}
struct SliderOverviewView: View {
@State private var overview: OverviewModel = OverviewModel(full: false)
var body: some View {
VStack {
Text("[Overview] full: \(overview.full.description)")
.onTapGesture {
overview.full.toggle()
}
SliderDetailView(overview: $overview)
}
}
}
struct SliderDetailView: View {
@Binding var overview: OverviewModel
var body: some View {
VStack {
Text("[Detail] percentFull: \(tellValue(value: overview.full))")
Slider(value: Binding(get: { () -> Double in
return tellValue(value: overview.full)
}, set: { newValue in
if newValue == 1 { overview.full = true }
else if newValue == 0 { overview.full = false }
}))
}
}
func tellValue(value: Bool) -> Double {
if value { return 1 }
else { return 0 }
}
}
struct OverviewModel {
var full: Bool
}
Update:
struct SliderDetailView: View {
@Binding var overview: OverviewModel
@State private var sliderValue: Double = Double()
var body: some View {
VStack {
Text("[Detail] percentFull: \(sliderValue)")
Slider(value: $sliderValue, in: 0.0...1.0)
}
.onAppear(perform: { sliderValue = tellValue(value: overview.full) })
.onChange(of: overview.full, perform: { newValue in
sliderValue = tellValue(value: newValue)
})
.onChange(of: sliderValue, perform: { newValue in
if newValue == 1 { overview.full = true }
else { overview.full = false }
})
}
func tellValue(value: Bool) -> Double {
value ? 1 : 0
}
}
A: I present here a clean alternative using 2 ObservableObjects: a high-level OverviewModel that
only deals with whether the slider went to 0% or 100%, and a DetailModel that deals only with the slider percentage.
Dragging the slider correctly communicates upwards when the slider changes from full to empty, and
tapping the [Overview] full: text communicates downwards that the slider should change to full/empty.
import Foundation
import SwiftUI
@main
struct TestApp: App {
var body: some Scene {
WindowGroup {
ContentView()
}
}
}
struct ContentView: View {
@StateObject var overview = OverviewModel()
var body: some View {
SliderOverviewView().environmentObject(overview)
}
}
// Top level View. It doesn't know anything about specific slider percentages,
// it only cares if the slider got moved to full/empty
struct SliderOverviewView: View {
@EnvironmentObject var overview: OverviewModel
var body: some View {
VStack {
Text("[Overview] full: \(overview.state.rawValue)")
.onTapGesture {
switch overview.state {
case .full, .between: overview.state = .empty
case .empty: overview.state = .full
}
}
SliderDetailView()
}
}
}
// Bottom level View. It knows about specific slider percentages and only
// communicates upwards when percentage goes to 0% or 100%.
struct SliderDetailView: View {
@EnvironmentObject var overview: OverviewModel
@StateObject var details = DetailModel()
var body: some View {
VStack {
Text("[Detail] percentFull: \(details.percentFull)")
Slider(value: $details.percentFull).padding(.horizontal, 48)
.onChange(of: details.percentFull) { newVal in
switch newVal {
case 0: overview.state = .empty
case 1: overview.state = .full
default: break
}
}
}
// listen for the high level OverviewModel changes
.onReceive(overview.$state) { theState in
details.percentFull = theState == .full ? 1.0 : 0.0
}
}
}
enum OverviewState: String {
case empty
case between
case full
}
// Top level model that only knows if slider went to 0% or 100%
class OverviewModel: ObservableObject {
@Published var state: OverviewState = .empty
}
// Lower level model that knows full slider percentage
class DetailModel: ObservableObject {
@Published var percentFull = 0.0
} | unknown | |
d17629 | val | I think you should try setting it back to itself: email = email.Replace(";", ",");
A: String.Replace method returns new string. It doesn't change existing one.
Returns a new string in which all occurrences of a specified Unicode
character or String in the current string are replaced with another
specified Unicode character or String.
As Habib mentioned, using foreach with the current list gets a foreach iteration variable error. It is a read-only iteration. Create a new list and then add replaced values to it instead.
Also you can use for loop for modifying existing list which keyboardP explained on his answer.
List<string> newemailAddresses = new List<string>();
foreach (string email in emailAddresses)
{
newemailAddresses.Add(email.Replace(";", ","));
}
return newemailAddresses;
Be aware that since strings are immutable types, you can't change them. Even if you think you change them, you actually create a new string object.
A: As others have already mentioned, strings are immutable (string.Replace returns a new string, it does not modify the existing one) and you can't modify the list inside a foreach loop. You can either use a for loop to modify an existing list or use LINQ to create a new list and assign it back to the existing one. Like:
emailAddresses = emailAddresses.Select(r => r.Replace(";", ",")).ToList();
Remember to include using System.Linq;
A: Strings are immutable so another string is returned. Try
for(int i = 0; i < emailAddress.Count; i++)
{
emailAddress[i] = emailAddress[i].Replace(";", ",");
}
A foreach loop would not compile here because you're trying to change the iteration variable. You'd run into this issue.
A: You should use something like:
var tmpList = new List<string>();
Add each modified email address to tmpList,
and when you are done, return tmpList.
In .NET strings are immutable; that's why your code doesn't work. | unknown | |
d17630 | val | You can set the tableview's rowHeight equal to UITableViewAutomaticDimension in your viewDidLoad method:
self.yourTableView.rowHeight = UITableViewAutomaticDimension
self.yourTableView.estimatedRowHeight = 42.0
Here you are telling your tableview to calculate the dimension of each row.
Then you are saying that you estimate that the row will have a height of 42, which gives the layout engine a starting estimate before the real heights are calculated.
I think this is a great example using a demo app. | unknown | |
d17631 | val | There are two concepts of multitasking in a single process multiple thread environment.
*
*A single thread executes in the time slice of the process, and that thread takes care of scheduling the other threads.
*The OS takes scheduling decisions for the process threads and might run them in parallel on different cores.
You are talking about approach 1. Yes, it has no advantage of multi-threading; but it lets many threads / programs run one by one and gives you "multitasking" (virtually). | unknown | |
d17632 | val | I'm working on this too, and it's a nightmare.
For Each f As Field In oDoc.Fields 'notice fields not content controls
Console.WriteLine(f.OLEFormat.Object.Name) 'notice properties, not methods...
Next
Here's the MSDN reference | unknown | |
d17633 | val | Modeling one-to-many relationships (e.g. Users to Courses bought) within a single item is a common pattern. However, if the many side of the relationship can grow large, you will likely want a different approach. It sounds like your use case isn't a good fit for this particular pattern.
One way around this limitation is to model the relationship in an item collection. For example, you could model the user and the courses bought within the same partition. Keeping the data together makes it easier to fetch the data in a single query operation.
In this data model, I created a global secondary index named courses using the attribute GSIPK as the primary key for the secondary index. This would let you fetch all courses with a single query of the courses GSI.
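As a rough sketch of what that single query could look like using DynamoDB's low-level Query syntax (the Users table name, the key value format, and the helper function are all hypothetical; only the courses index name and its GSIPK key attribute come from the model described above):

```python
def build_courses_query(user_id):
    # Query parameters for fetching one user's courses from the "courses" GSI,
    # which is keyed on the GSIPK attribute. The table name and key format
    # here are assumptions for illustration only.
    return {
        "TableName": "Users",
        "IndexName": "courses",
        "KeyConditionExpression": "GSIPK = :pk",
        "ExpressionAttributeValues": {":pk": {"S": f"USER#{user_id}"}},
    }

params = build_courses_query("123")
print(params["IndexName"])
```

You would pass this dict to a boto3 client as client.query(**params); one request then returns the whole item collection from the index.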
Keep in mind that this is just one of many approaches you could take to model your data. Check out this talk from AWS Re:Invent about DynamoDB data modeling. It gives a fantastic walkthrough of some of the key concepts that will help you design your data model. | unknown | |
d17634 | val | make a function and then return from that when condition matches:
def loopBreakExample():
    for i in range(5):
        for j in range(3):
            if j == 2:
                return
            print('I, J => ', i, j)
loopBreakExample() | unknown | |
d17635 | val | You may need to adjust the values to higher numbers for larger files.
Try doing the following steps:
Open Cpanel -> File manager
Click search and type 'php.ini' -> Right click this file and choose edit.
change value of following
memory_limit
post_max_size
upload_max_filesize
Adjust the values to higher numbers for larger files. Save and try uploading again.
A: I also had this problem
First you need to install a file manager plugin. Open file manager, search for a file with the name: .htaccess
Edit this file, at the bottom add the following:
php_value upload_max_filesize 1000M
php_value post_max_size 2000M
php_value memory_limit 3000M
php_value max_execution_time 180
php_value max_input_time 180
Save and close
Try uploading again, in my case worked.
I followed the instructions from this video:
https://www.youtube.com/watch?v=TnI_h-QjrWo | unknown | |
d17636 | val | You can get a more accurate count by phrasing the query like this:
SELECT page_id, COUNT(distinct user_id_hash)
from user_likes ul
GROUP BY page_id LIMIT 0,30;
Speeding it up in MySQL is tricky, because of the group by. You might try the following. Create an index on user_likes(page_id, user_id_hash). Then try this:
select p.page_id,
(select count(distinct user_id_hash)
from user_likes ul
where ul.page_id = p.page_id
)
from (select distinct page_id
from user_likes ul
) p
The idea behind this query is to avoid group by -- a poorly implemented operator in MySQL. The inner query should use the index to get the list of unique page_ids. The subquery in the select should use the same index for the count. With the index-based operations, the count should go faster. | unknown | |
d17637 | val | You can always write your own module to do it, but my recommendation is using the Rules module, and using several user roles.
*
*Any new user gets a "trial" role when he registers.
*Create the needed fields in the user profile
*Create a rule which will change the user's role in case the field is filled (the rule triggers whenever the user profile is updated).
*Create a rule with cron that executes once a day, to suspend user account, and probably to send him a notification before doing so. | unknown | |
d17638 | val | For the results you want, I don't see why the cars table is needed. Then, you seem to need an additional key for the join to categories based on which table it is referring to.
So, I suggest:
SELECT tt.*, c.category_name
FROM ((SELECT b.battery_category_id AS category_id,
b.car_id AS car_id, b.value AS value,
'battery' as which
FROM BATTERY b
WHERE b.battery_category_id IN (1)
) UNION ALL
(SELECT td.technical_category_id AS category_id,
td.car_id AS car_id, td.value AS value,
'technical' as which
FROM TECHNICAL_DATA td
WHERE td.technical_category_id IN (3)
)
) tt LEFT JOIN
CATEGORIES c
ON c.id = tt.category_id AND
c.category_type = tt.which;
That said, you seem to have a problem with your data model, if the join to categories requires "hidden" data such as the type. However, that is outside the scope of the question. | unknown | |
d17639 | val | You have this set up as two different classes, each with their own "main" method. Presumably you only want to be running one of them. The thing to do, from what I can see, would be to define "Bars" as an inner class (or at least a separate class that "BarGraph" has a dependency on) and move all of the code you have in its "main" method to a constructor instead (or maybe some sort of "init" method if you prefer.)
Once that's done, you add code in the "main" method of BarGraph, after you're done parsing your file, to actually create one of these "Bars" objects and initialize it. Once it's initialized, you can create a method in "Bars" to add data to the graph from your "hash" data structure and use that method from within BarGraph's main method. | unknown | |
d17640 | val | Instead of making your own datetime format parser, you should use the one already available for you. DateTime.TryParseExact is your tool to convert a string into a date when you know the exact format.
Converting back the date, in the string format that you like, is another task easily solved by the override of ToString() specific for a datetime
string[] values = lines1[i].Split(',');
if (values.Length >= 3)
{
DateTime dt;
if (DateTime.TryParseExact(values[0], "d-MMM-yyyy",
System.Globalization.CultureInfo.CurrentCulture,
System.Globalization.DateTimeStyles.None, out dt))
{
values[0] = dt.ToString("yyyyMMdd");
lines1[i] = String.Join(",", values);
}
}
A: I would parse the string into a date and then write it back using a custom date format. From this link we can write this code:
String pattern = "dd-MMM-yyyy";
DateTime dt;
if (DateTime.TryParseExact(values[0], pattern, CultureInfo.InvariantCulture,
DateTimeStyles.None,
out dt)) {
// dt is the parsed value
String sdt = dt.ToString("yyyyMMdd"); // <<--this is the string you want
} else {
// Invalid string, handle it as you see fit
} | unknown | |
d17641 | val | Try DBMS_METADATA.GET_DDL. | unknown | |
d17642 | val | There are a couple of ways to do it.
The first one would be to store all the arguments in a variable and then destructure it:
function foo(...args){
const [arg1, arg2] = args;
this.arg1 = arg1;
this.arg2 = arg2;
// and so on...
};
Or
function foo({ arg1, arg2 }){ // called with an object: foo({ arg1: ..., arg2: ... })
this.arg1 = arg1;
this.arg2 = arg2;
// and so on...
}; | unknown | |
d17643 | val | I think you're using a different shell (tcsh) rather than sh or bash. Most probably you have to adapt your source code to make it load using tcsh. Under sh/bash it works just fine:
root@pve1:~# echo $0
-bash
A: In bash, your script is syntactically correct. But if you use sh, then there are a few errors. Check the shellcheck output:
$ shellcheck script.sh
In script.sh line 3:
function addvar () {
^-- SC2112: 'function' keyword is non-standard. Delete it.
In script.sh line 4:
local tmp="${!1}" ;
^-- SC2039: In POSIX sh, 'local' is undefined.
^-- SC2039: In POSIX sh, indirect expansion is undefined.
In script.sh line 5:
tmp="${tmp//:${2}:/:}" ; tmp="${tmp/#${2}:/}" ; tmp="${tmp/%:${2}/}" ;
^-- SC2039: In POSIX sh, string replacement is undefined.
^-- SC2039: In POSIX sh, string replacement is undefined.
^-- SC2039: In POSIX sh, string replacement is undefined.
In summary:
*
*function keyword is not needed (or even recommended)
*local isn't supported in POSIX sh
*string replacement ${//} is not supported in sh. | unknown | |
d17644 | val | There are two sets of properties.
The "Frequency Domain" -- the amplitude of each overtone in a specific sample.
The "Time Domain" -- the sequence of amplitude samples through time.
You can, using Fourier Transforms, convert between the two.
The time domain is what sound "is" -- a sequence of amplitudes. The frequency domain is what we "hear" -- a set of overtones and pitches that determine instruments, harmonies, and dissonance.
A mixture of the two -- frequencies varying through time -- is the perception of melody.
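That time-domain to frequency-domain conversion is only a few lines of code. Here is a minimal pure-Python DFT sketch (the 5-cycle sine and the 64-sample window are arbitrary illustrative choices):

```python
import cmath
import math

# Time domain: 64 amplitude samples of a sine with 5 cycles per window
n = 64
signal = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]

# Frequency domain: magnitude of each frequency bin via a direct DFT
def dft_bin(x, k):
    return abs(sum(x[i] * cmath.exp(-2j * math.pi * k * i / len(x))
                   for i in range(len(x))))

amplitudes = [dft_bin(signal, k) for k in range(n // 2)]
peak_bin = max(range(n // 2), key=lambda k: amplitudes[k])
print(peak_bin)  # the energy sits in the 5-cycles-per-window bin
```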
A: The Echo Nest has easy-to-use analysis apis to find out all you might want to know about a piece of music.
You might find the analyze documentation (warning, pdf link) helpful.
A: Any and all properties of sound can be represented / computed - you just need to know how. One of the more interesting is spectral analysis / spectrogramming (see http://en.wikipedia.org/wiki/Spectrogram).
A: Any properties you want can be measured or represented in code. What do you want?
Do you want to test if two samples came from the same instrument? That two samples of different instruments have the same pitch? That two samples have the same amplitude? The same decay? That two sounds have similar spectral centroids? That two samples are identical? That they're identical but maybe one has been reverberated or passed through a filter?
A: Ignore all the arbitrary human-created terms that you may be unfamiliar with, and consider a simpler description of reality.
Sound, like anything else that we perceive is simply a spatial-temporal pattern, in this case "of movement"... of atoms (air particles, piano strings, etc.). Movement of objects leads to movement of air that creates pressure waves in our ear, which we interpret as sound.
Computationally, this is easy to model; however, because this movement can be any pattern at all -- from a violent random shaking to a highly regular oscillation -- there often is no constant identifiable "frequency", because it's often not a perfectly regular oscillation. The shape of the moving object, waves reverberating through it, etc. all cause very complex patterns in the air... like the waves you'd see if you punched a pool of water.
The problem reduces to identifying common patterns and features of movement (at very high speeds). Because patterns are arbitrary, you really need a system that learns and classify common patterns of movement (i.e. movement represented numerically in the computer) into various conceptual buckets of some sort. | unknown | |
d17645 | val | I'd suggest a polymorphic many-to-many approach here so that icons are reusable and don't require a bunch of pivot tables, should you want icons on something other than a page.
Schema::create('icons', function(Blueprint $table) {
$table->increments('id');
$table->string('name');
});
Schema::create('iconables', function(Blueprint $table) {
$table->integer('icon_id');
$table->integer('iconable_id');
$table->string('iconable_type');
});
Now you just need to determine if the pages have an existing Icon. If they do, then hold reference to them so you can insert them:
$pagesWithIcons = Page::whereNotNull('icon')->get();
At this point you need to define the polymorphic relations in your models:
// icon
class Icon extends Model
{
public function pages()
{
return $this->morphedByMany(Page::class, 'iconable');
}
}
// page
class Page extends Model
{
public function pages()
{
return $this->morphToMany(Icon::class, 'iconable');
}
}
Now you just need to create the icons (back in our migration), and then attach them if they exist:
$pagesWithIcons->each(function(Page $page) {
$icon = Icon::firstOrCreate([
'name' => $page->icon
]);
$icon->pages()->attach($page);
});
The above is creating an Icon if it doesn't exist, or querying for it if it does. Then it's attaching the page to that icon. As polymorphic many-to-many relationships just use belongsToMany() methods under the hood, you have all of the available operations at your leisure if this doesn't suite your needs.
Finally, drop your icons column from pages, you don't need it.
Schema::table('pages', function(Blueprint $table) {
$table->dropColumn('icon');
});
And if you need to backfill support for only an individual icon (as the many-to-many will now return an array relationship), you may add the following to your page model:
public function icon()
{
return $this->icons()->first();
}
Apologies if typos, I did this on my phone so there may be some mistakes. | unknown | |
d17646 | val | git branches don't really work like that - the branches all relate to the repository. Separating projects, or parts of projects, into separate branches isn't really the right way to go. Eventually, most branches should be merged into a release branch of some type, or discarded.
I have a core branch, and a project branch that uses that core. Now on the project branch, I do mostly commits related to the project...
I'd really like to keep the history of changes to the core files [separate]
From that, I think it sounds a lot like you have a good use case for two separate repositories, one for core-files and one for project - you want to keep the history of core and project separate. Rather than keeping them in the same repository in different branches, I'd advocate pulling the core-files into a git submodule of projects. That'd give you several advantages right off the bat:
*
*project and core history are separate.
*Using core with another project is very simple; import core as a git submodule of the otherproject.
*The history of both project and core is much, much easier to maintain, because you're not having to cherry-pick between branches anymore. | unknown | |
d17647 | val | I found three problems: 1) the template(tableData) must be set to a DOM element, as in $("#output").html(template(tableData)); and 2) that the variable name inside the template must be data; and 3) the code that loads the template must be executed after the DOM is ready. Here is the complete and corrected code:
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title></title>
<link href="Content/kendo/2012.2.710/kendo.common.min.css" rel="stylesheet" />
<link href="Content/kendo/2012.2.710/kendo.default.min.css" rel="stylesheet" />
<script src="Scripts/jquery-1.8.2.min.js"></script>
<script src="Scripts/kendo/2012.2.710/kendo.web.min.js"></script>
<script type="text/javascript">
$(document).ready(function () {
var template = kendo.template($("#javascriptTemplate").html());
var tableData = ["1", "2"];
$("#output").html(template(tableData));
});
</script>
</head>
<body>
<form id="form1" runat="server">
<div id="output"></div>
<script id="javascriptTemplate" type="text/x-kendo-template">
<table>
# for (var i = 0; i < data.length; i++) { #
<tr><td>#=data[i]#</td</tr>
# } #
</table>
</script>
</form>
</body> | unknown | |
d17648 | val | <table>
<tr>
<td style="width:125px">
hi
</td>
<td>bye</td>
</tr>
<tr>
<td style="width:125px">
line of text that will equal more than the above width
</td>
<td>bye</td>
</tr>
</table> | unknown | |
d17649 | val | Let’s take the first test as an example:
Tests:
7/20/2010 is valid.
So in your driver class/test class construct a FunWithCalendars object denoting July 20 2010. The constructor takes three arguments for this purpose. Next call its isValid method. I believe that the idea was that you shouldn’t need to pass the same arguments again. Your isValid method takes two boolean arguments. Instead I believe that it should take no arguments and itself call the two helper methods passing the values that are already inside the FunWithCalendars object. So before you can get your driver class to work, I believe you have to fix your design on this point.
Once you get the call to isValid() to work, store the return value into a variable. Compare it to the expected value (true in this case). If they are equal, print a statement that the test passed. If they are not equal, print a statement containing both the expected and the observed value.
Do similarly for the other tests. Don’t copy-paste the code, though. Instead wrap it in a method and call the method for each test case, passing as arguments the data needed for that particular test. Remember to include the expected result as an argument so the method can compare.
Edit:
… My confusion is in how to construct an object (in general, and also
specifically FunWithCalendars), how to call the isValid method and
have it not take any arguments, how to have the isValid method call
the two helper methods which pass the values that are in the
FunWIthCalendars object.
It’s basic stuff, and I don’t think Stack Overflow is a good place to teach basic stuff. Let’s give it a try, only please set your expectations low.
How to construct an object: You’re already doing this in your driver class using the new operator:
FunWithCalendars test = new FunWithCalendars () ;
Only you need to pass the correct arguments to the constructor. Your constructor takes three int arguments, so it needs to be something like:
FunWithCalendars test = new FunWithCalendars(7, 20, 2020);
How to call the isValid method and have it take no arguments, after the above line:
boolean calculatedValidity = test.isValid();
This stores the value returned from isValid() (false or true) into a newly created boolean variable that I have named calculatedValidity. From there we may check whether it has the expected value, act depending on it and/or print it. The simplest thing is to print it, for example:
System.out.println("Is 7/20/2020 valid? " + calculatedValidity);
Calling with no arguments requires that the method hasn’t got any parameters:
public boolean isValid ()
{
How to have isValid() call the two helper methods: You may simple write the method calls en lieu of mentioning the parameters that were there before. Again remember to pass the right arguments:
if (isValidMonth(month) && isValidDay(month, day, year, isLeapYear(year)) )
In the method calls here I am using the instance variables (fields) of the FunWithCalendars object as arguments. This causes the method to use the numbers that we entered through the constructor and to use the three helper methods.
I have run your code with the above changes. My print statement printed the expected:
Is 7/20/2020 valid? true
PS I am on purpose not saying anything about possible bugs in your code. It’s a lot better for you to have your tests tell you whether there are any. | unknown | |
d17650 | val | Message.mentions.users is a collection. You need to determine if your ID is in the collection. You are comparing equality, which since user is not a collection will always be false. Replace this with a .has You can then add a react to the message. For that, there is a guide here describing how to get a unicode reaction as shown below.
var user = "123456479879541";
if(message.mentions.users.has(user)) {
message.reply('ok');
message.react('');
}
A: You should use Message#react()
So basically for example you could do:
client.on('message', message => {
var user = "123456479879541";
const reaction = message.guild.emojis.find(emoji => emoji.name === 'EMOJI NAME')
if(message.mentions.users.has(user)) {
message.reply('ok');
message.react(reaction)
}}); | unknown | |
d17651 | val | Your question is very nonspecific, but here is one way to do what you are looking for, assuming I understand what you are asking. Note that this may cause an undesirable offset in position which you will have to deal with in some way. Not knowing what point you want to scale the polygon about, these solutions assume the simplest circumstances.
The reason for the square root in all of these formulas is that area tends to change with the square of linear scaling, just as volume does with the cube of linear scaling.
For general polygon:
A = sqrt(R)
for each point in polygon:
point.x := point.x * A
point.y := point.y * A
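As a quick sanity check of the square-root rule, here is a small Python sketch using the shoelace formula (the triangle coordinates are arbitrary):

```python
import math

def area(poly):
    # shoelace formula for the area of a simple polygon
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1] -
            poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    return abs(s) / 2

R = 3.0                         # target: multiply the area by R
A = math.sqrt(R)                # so scale every coordinate by sqrt(R)
tri = [(0, 0), (4, 0), (0, 3)]
scaled = [(x * A, y * A) for x, y in tri]
print(area(tri), area(scaled))  # scaled area is R times the original
```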
For circle:
A = sqrt(R)
circle.radius := circle.radius * A
For rectangle in terms of width and height:
A = sqrt(R)
rect.w := rect.w * A
rect.h := rect.h * A | unknown | |
d17652 | val | Change it to this:
<div> <a href="#" onclick="this.parentNode.style.display='none'">Close</a>
The reason is that when using href="javascript:..., this doesn't refer to the element that received the event.
You need to be in an event handler like onclick for that. | unknown | |
d17653 | val | *
*An impure way to do it is to add a filter that checks for a variable before you subscribe, and then change the variable when you don't want the subscribed action to occur:
var isOn = true;
periodicEvent.filter(() => isOn).onValue(() => {
doStuff();
});
*A "pure-r" way to do it would be to turn an input into a property of true/false and filter your stream based on the value of that property:
// make an eventstream of a dom element and map the value to true or false
// ("switch" is a reserved word in JS, so name the property something else)
var toggle = $('input')
.asEventStream('change')
.map(function(evt) {
return evt.target.value === 'on';
})
.toProperty(true);
var periodEvent = Bacon.interval(1000, {});
// filter based on the toggle property to stop/execute the subscribed function
periodEvent.filter(toggle).onValue(function(val) {
console.log('running ' + val);
});
Here is a jsbin of the above code
There might be an even better/fancier way of doing it using Bacon.when, but I'm not at that level yet. :) | unknown | |
d17654 | val | So actually now I'm able to open http://localhost/xyz/home directly with URL rewrite, which will point to my index.html. But now the bigger issue is that whenever I try to run my project it says "unable to start debugging", and none of my services gets called; it says 405 (method not allowed). I tried iisreset, but no luck.
So if you have any ideas, please share, and thank you. | unknown | |
d17655 | val | Use str.findall:
>>> df['A'].str.findall(r'User \d+').str[-1]
0 User 397335
1 User 525767
2 NaN
3 NaN
4 NaN
163678 NaN
163679 User 347991
163680 NaN
163681 NaN
163682 User 663455
Name: A, dtype: object | unknown | |
d17656 | val | You don't need a loop. In this case you can use postDelayed to repost a runnable in the UI thread queue:
public class TestActivity extends Activity {
int rand;
int counter;
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
final TextView txtShow = (TextView) findViewById(R.id.txtShow);
final Handler handler = new Handler();
handler.postDelayed(new Runnable() {
@Override
public void run() {
if (counter > 4) {
handler.removeCallbacks(this);
return;
}
++counter;
rand = (int) (Math.random() * 9);
txtShow.setText("" + rand);
handler.postDelayed(this, 3000);
}
}, 3000);
}
}
When counter reaches 5, removeCallbacks cancels all the runnables still present in the handler queue and returns. Otherwise counter is increased and handler.postDelayed adds the runnable back to the handler queue.
A: I think this should work.
private Handler handler;
private TextView txt;
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
handler = new Handler();
txt = (TextView) findViewById(R.id.txt);
new Thread(new Task()).start();
}
class Task implements Runnable {
@Override
public void run() {
for (int i = 0; i <= 5; i++) {
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
e.printStackTrace();
}
handler.post(new Runnable() {
@Override
public void run() {
int rand = (int) (Math.random() * 9);
txt.setText("" + rand);
}
});
}
}
}
A: The problem is that all of the Runnables that are changing the TextView are executing right after one another; the change is so fast that you cannot see the intermediate values and instead only see the final value.
To fix your problem change the delay in postDelayed() from 3000 to 3000 * i. This will ensure that the updates are spaced with 3 seconds between each one instead of just all executing 3 seconds after onCreate() finishes (as it is now).
For things like this you may also want to look into Timer (http://developer.android.com/reference/java/util/Timer.html) | unknown | |
d17657 | val | Bijil,
JRXML is a template that contains the format for the content that is shown on the report.
And from what I understand, the xml contains the input data.
How Jasper Reports works is: you create a JASPER file by compiling the JRXML file (this can be done using iReport or through your Java code). To this JASPER file you attach an object from your Java code that contains the data for filling the JASPER.
Please see this link for details
Edited: iReport is a designer tool for creating Jasper reports. I am not sure if there is any tool that can convert xml to jrxml; jrxml contains syntax specific to Jasper reports.
What we used to do was try to create a report similar (comparing the look and feel) to the one the client had given, using iReport, and get the final jrxml.
Then compile the jrxml in iReport and check the look and feel of the generated sample report against the sample Word doc.
Then use the compiled jasper file in the application directly. The use of jasper has 2 advantages,
*
*you can use unicode characters in your report
*you reduce the overhead of compiling your code every time before generating report.
disadvantage
*
*you need to keep separate track of jrxml, to fix any defect on
previous jasper file.
A: Save your MS Word file (template) in the report directory and, using Apache POI,
make the necessary edits to your template. Then save your file to any place you like.
Apache POI tutorials
Link 01:
To print, save your edited file in a temp directory and call Desktop.getDesktop().print(file) using Desktop.
Link 02:
Wish you good luck. | unknown | |
d17658 | val | plotly seems to limit the axis based on the max and min values present in the corresponding axis. I tried each of the properties and came up with a solution.
Approach: The first one is generating what you need, but I can't seem to get it to start at 12 midnight and end at 12 the next day.
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
from plotly.graph_objs import *
from datetime import datetime, time
init_notebook_mode(connected=True)
# in the example i am using fabricated data. declaring the data:
time_data = [datetime(2017, 7, 28, 21, 37, 19), datetime(2017, 7, 29, 17, 11, 56), datetime(2017,8, 1, 11, 15, 45), datetime(2017, 8, 2, 13, 54, 3)]
x_data = []
y_data = []
# creating the x-row data with dates only, and the y-row data with the time only
for row in time_data:
x_data.append(row.date())
y_data.append(str(datetime.combine(datetime(2017, 1, 1).date(), row.time())))
#declaring the data for the graph
data = [Scatter(x=x_data, y=y_data, name='Times On', mode='markers')]
# creating the hour range
hours = []
for i in range (0, 24):
hours.append(datetime(2017, 1, 1, i, 0, 0))
# declaring the Layout with the 'range' attribute, and Figure
layout = dict(title='Times On', xaxis=dict(type='date'), yaxis={'type': 'date', 'tickformat': '%H:%M',
'nticks': 30, 'tick0': hours[0],
'range': [hours[0], hours[len(hours)-1]],
'autorange': False})
fig = Figure(data=data, layout=layout)
# plotting
iplot(figure_or_data=fig)
The above code gives me the output described above.
d17659 | val | For an external module with no exposed types and any values:
declare module 'Foo' {
var x: any;
export = x;
}
This won't let you write foo.cls, though.
If you're stubbing out individual classes, you can write:
declare module 'Foo' {
// The type side
export type cls = any;
// The value side
export var cls: any;
} | unknown | |
d17660 | val | Take a look at http://code.google.com/p/csipsimple, they have already created Java wrapper with SWIG. | unknown | |
d17661 | val | You have likely lost all the plugins - you want to code your app as a single-page app that never actually leaves "index.html" but loads data and page elements into it with Ajax / local templates etc.
d17662 | val | I would pass back the information as JSON. Have something like:
{updateList : nameOfList, output: $line/$output/$vote }
Then on success you could do something like
$('#'+html.updateList).append(html.output);
You have to make sure to let jQuery know that you are sending and accepting JSON as the data type, though.
A: php
class DataObject
{
public $Type;
public $Text;
}
$json=new DataObject();
if(isset($line)){
$json->Type="line";
$json->Text=$line;
return json_encode($json);
} elseif(isset($comment)){
$json->Type="comment";
$json->Text=$comment;
return json_encode($json);
} elseif (isset($vote)){
$json->Type="vote";
$json->Text=$vote;
return json_encode($json);
} else {
//do nothing;
}
javascript
$.ajax({
    type: "POST",
    url: "ajax.php",
    dataType: 'json', // add the data type
    data: dataString,
    cache: false,
    success: function(data){
        $("ul#" + data.Type).append(data.Text);
        $("ul#" + data.Type + " li:last").fadeIn("slow");
    }
});
A: You need some way to differentiate the values of $line and $comment.
I'd suggest sending back JSON from your PHP script:
if (isset($line)) {
    echo '{"line" : ' . json_encode($line) . '}';
} elseif (isset($comment)) {
    echo '{"comment" : ' . json_encode($comment) . '}';
} elseif (isset($vote)) {
    echo '{"vote" : ' . json_encode($vote) . '}';
} else {
    // do nothing;
}
Note: PHP isn't my strongest language so there might be a better way to generate the JSON response
success: function(data){
    if (data.line) {
        $("ul#line").append(html);
        $("ul#line li:last").fadeIn("slow");
    }
    else if (data.comment) {
        $("ul#comment").append(html);
        $("ul#comment li:last").fadeIn("slow");
    }
}
d17663 | val | Given the following datasets:
val id = Seq((1, 2), (1, 5), (2, 8), (2, 3), (3, 4)).toDF("ID", "BookTime")
scala> id.show
+---+--------+
| ID|BookTime|
+---+--------+
| 1| 2|
| 1| 5|
| 2| 8|
| 2| 3|
| 3| 4|
+---+--------+
val fareRule = Seq((1,3,10), (3,6,20), (6,10,25)).toDF("start", "end", "fare")
scala> fareRule.show
+-----+---+----+
|start|end|fare|
+-----+---+----+
| 1| 3| 10|
| 3| 6| 20|
| 6| 10| 25|
+-----+---+----+
You simply join them together using between expression.
val q = id.join(fareRule).where('BookTime between('start, 'end)).select('id, 'fare)
scala> q.show
+---+----+
| id|fare|
+---+----+
| 1| 10|
| 1| 20|
| 2| 25|
| 2| 10|
| 2| 20|
| 3| 20|
+---+----+
You may want to adjust between so the boundaries are exclusive on one side. between by default uses the lower bound and upper bound, inclusive. | unknown | |
d17664 | val | If your only concern is that you will make typos when entering the literal string then just use NameOf(MyStrings.This_is_a_test_string). | unknown | |
d17665 | val | Your imports look like you are using Jackson 1.9.x which doesn't have a method getFactory() in ObjectMapper. There is a method getJsonFactory(), but you'd probably not need it. Just call mapper.configure( JsonGenerator.Feature.ESCAPE_NON_ASCII, true ); | unknown | |
d17666 | val | For example, when you want to use react-native-snap-carousel, you can follow the instructions in the usage part of that link
https://github.com/archriss/react-native-snap-carousel#usage
Also, if you want a very simple carousel, you can use
<FlatList horizontal={true}/> | unknown | |
d17667 | val | It is not allowed to use Request.BinaryRead after you have used the Request.Form collection.
But your
If Request("action")="1" Then
uses the Request.Form collection, because you are not using Request.QueryString("action").
After that you instantiate the uploader, and in line 56 it uses Request.BinaryRead.
A: As explained in this answer, the default limit for POST request size is 200KB - typical of Microsoft, the error message in case the limit is exceeded is far from helpful.
To fix this error and allow bigger files and/or more files, you need to change the setting in IIS.
For IIS 7.5 (the default for Windows 7), first choose the site, then double-click "ASP" under IIS:
Now enter a number bigger than 200000 as the value of "Maximum Requesting Entity Body Limit" under the "Limit Properties" section: (15728640 is 15 MB, which is a reasonable limit)
Click "Apply" in the right sidebar and you're done. Happy programming! | unknown | |
d17668 | val | First of all, if you have set up Devise to allow users to edit their account without providing a password, then you need to remove the current_password field from the view as well as from the configure_permitted_parameters method.
def configure_permitted_parameters
...
devise_parameter_sanitizer.for(:account_update) do |u| u.permit(
:email,
:password,
:password_confirmation,
## :current_password, ## REMOVE THIS LINE
:name,
profile_attributes: [:birthday, :phone, :address, :about, :restrictions, :avatar]
)
end
end
By specifying current_password you are permitting it for mass-assignment on account_update.
UPDATE
class User < ActiveRecord::Base
has_one :profile, dependent: :destroy, inverse_of: :user
before_create :build_profile # Creates profile at user creation
accepts_nested_attributes_for :profile
...
end
class Profile < ActiveRecord::Base
belongs_to :user, inverse_of: :profile
validates :user_id, presence: true
end | unknown | |
d17669 | val | You need to define a Fragment in XML like this:
<RelativeLayout
android:id="@+id/main_tasklist_layout"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_toRightOf="@+id/main_viewmenu_layout"
android:layout_below="@+id/main_tasklist_outer">
<fragment
class="com.Organisemee.fragment.TaskListFragment"
android:id="@+id/tasklistfrag"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
</RelativeLayout>
To call the Fragment, put this code in your Java file:
Fragment f = listFragment();
FragmentTransaction ListFragft = getFragmentManager().beginTransaction();
ListFragft.replace(R.id.main_tasklist_layout, f);
ListFragft.setTransition(FragmentTransaction.TRANSIT_FRAGMENT_FADE);
ListFragft.addToBackStack(null);
ListFragft.commit();
This works fine when changing orientation both ways, landscape to portrait and vice versa.
d17670 | val | bytes remaining in buffer, encoded? For quite a while now I've been struggling with DMA communication with two STM32 boards in some form or another. My current issue is as follows.
I have a host (a Raspberry Pi) running the following code, waiting for the board to initialise communication:
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(int argc, char *argv[]) {
unsigned int usbdev;
unsigned char c;
system("stty -F /dev/ttyUSB0 921600 icanon raw");
usbdev = open("/dev/ttyUSB0", O_RDWR);
setbuf(stdout, NULL);
fprintf(stderr, "Waiting for signal..\n");
while (!read(usbdev, &c, 1));
unsigned char buf[] = "Defend at noon\r\n";
write(usbdev, buf, 16);
fprintf(stderr, "Written 16 bytes\r\n");
while (1) {
while (!read(usbdev, &c, 1));
printf("%c", c);
}
return 0;
}
Basically it waits for a single byte of data before it'll send "Defend at noon" to the board, after which it prints everything that is sent back.
The boards first send out a single byte, and then wait for all incoming data, replace a few bytes and send it back. See the code at the end of this post. The board can be either an STM32L100C or an STM32F407 (in practice, the discovery boards); I'm experiencing the same behaviour with both at this point.
The output I'm seeing (on a good day - on a bad day it hangs on Written 16 bytes) is the following:
Waiting for signal..
Written 16 bytes
^JDefend adawnon
As you can see, the data is sent and four bytes are replaced as expected, but there's an extra two characters in front (^J, or 0x5E and 0x4A). These turn out to be a direct consequence of the signal_host function. When I replace the character with something arbitrary (e.g. x), that is what's being output at that position. It is interesting to note that \n actually gets converted to its caret notation ^J somewhere along the road. It appears that this occurs in the communication to the board, because when I simply hardcode a string in the buffer and use dma_transmit to send that to an non-interactive host program, it gets printed just fine.
It looks like I've somehow misconfigured DMA in the sense that there's some buffer that is not being cleared properly. Additionally, I do not really trust the way the host-side program is using stty. However, I've actually had communication working flawlessly in the past, using this exact code. I compared it to the code stored in my git history across several months, and I cannot find the difference/flaw.
Note that the code below uses libopencm3 and is based on examples from libopencm3-examples.
STM32L1 code:
#include <libopencm3/stm32/rcc.h>
#include <libopencm3/stm32/gpio.h>
#include <libopencm3/stm32/usart.h>
#include <libopencm3/cm3/nvic.h>
#include <libopencm3/stm32/dma.h>
void clock_setup(void)
{
rcc_clock_setup_pll(&clock_config[CLOCK_VRANGE1_HSI_PLL_32MHZ]);
rcc_periph_clock_enable(RCC_GPIOA);
rcc_periph_clock_enable(RCC_USART2);
rcc_periph_clock_enable(RCC_DMA1);
}
void gpio_setup(void)
{
gpio_mode_setup(GPIOA, GPIO_MODE_AF, GPIO_PUPD_NONE, GPIO2 | GPIO3);
gpio_set_af(GPIOA, GPIO_AF7, GPIO2 | GPIO3);
}
void usart_setup(int baud)
{
usart_set_baudrate(USART2, baud);
usart_set_databits(USART2, 8);
usart_set_stopbits(USART2, USART_STOPBITS_1);
usart_set_mode(USART2, USART_MODE_TX_RX);
usart_set_parity(USART2, USART_PARITY_NONE);
usart_set_flow_control(USART2, USART_FLOWCONTROL_NONE);
usart_enable(USART2);
}
void dma_request_setup(void)
{
dma_channel_reset(DMA1, DMA_CHANNEL6);
nvic_enable_irq(NVIC_DMA1_CHANNEL6_IRQ);
dma_set_peripheral_address(DMA1, DMA_CHANNEL6, (uint32_t) &USART2_DR);
dma_set_read_from_peripheral(DMA1, DMA_CHANNEL6);
dma_set_peripheral_size(DMA1, DMA_CHANNEL6, DMA_CCR_PSIZE_8BIT);
dma_set_memory_size(DMA1, DMA_CHANNEL6, DMA_CCR_MSIZE_8BIT);
dma_set_priority(DMA1, DMA_CHANNEL6, DMA_CCR_PL_VERY_HIGH);
dma_disable_peripheral_increment_mode(DMA1, DMA_CHANNEL6);
dma_enable_memory_increment_mode(DMA1, DMA_CHANNEL6);
dma_disable_transfer_error_interrupt(DMA1, DMA_CHANNEL6);
dma_disable_half_transfer_interrupt(DMA1, DMA_CHANNEL6);
dma_enable_transfer_complete_interrupt(DMA1, DMA_CHANNEL6);
}
void dma_transmit_setup(void)
{
dma_channel_reset(DMA1, DMA_CHANNEL7);
nvic_enable_irq(NVIC_DMA1_CHANNEL7_IRQ);
dma_set_peripheral_address(DMA1, DMA_CHANNEL7, (uint32_t) &USART2_DR);
dma_set_read_from_memory(DMA1, DMA_CHANNEL7);
dma_set_peripheral_size(DMA1, DMA_CHANNEL7, DMA_CCR_PSIZE_8BIT);
dma_set_memory_size(DMA1, DMA_CHANNEL7, DMA_CCR_MSIZE_8BIT);
dma_set_priority(DMA1, DMA_CHANNEL7, DMA_CCR_PL_VERY_HIGH);
dma_disable_peripheral_increment_mode(DMA1, DMA_CHANNEL7);
dma_enable_memory_increment_mode(DMA1, DMA_CHANNEL7);
dma_disable_transfer_error_interrupt(DMA1, DMA_CHANNEL7);
dma_disable_half_transfer_interrupt(DMA1, DMA_CHANNEL7);
dma_enable_transfer_complete_interrupt(DMA1, DMA_CHANNEL7);
}
void dma_request(void* buffer, const int datasize)
{
dma_set_memory_address(DMA1, DMA_CHANNEL6, (uint32_t) buffer);
dma_set_number_of_data(DMA1, DMA_CHANNEL6, datasize);
dma_enable_channel(DMA1, DMA_CHANNEL6);
signal_host();
usart_enable_rx_dma(USART2);
}
void dma_transmit(const void* buffer, const int datasize)
{
dma_set_memory_address(DMA1, DMA_CHANNEL7, (uint32_t) buffer);
dma_set_number_of_data(DMA1, DMA_CHANNEL7, datasize);
dma_enable_channel(DMA1, DMA_CHANNEL7);
usart_enable_tx_dma(USART2);
}
int dma_done(void)
{
return !((DMA1_CCR6 | DMA1_CCR7) & 1);
}
void dma1_channel6_isr(void) {
usart_disable_rx_dma(USART2);
dma_clear_interrupt_flags(DMA1, DMA_CHANNEL6, DMA_TCIF);
dma_disable_channel(DMA1, DMA_CHANNEL6);
}
void dma1_channel7_isr(void) {
usart_disable_tx_dma(USART2);
dma_clear_interrupt_flags(DMA1, DMA_CHANNEL7, DMA_TCIF);
dma_disable_channel(DMA1, DMA_CHANNEL7);
}
void signal_host(void) {
usart_send_blocking(USART2, '\n');
}
int main(void)
{
clock_setup();
gpio_setup();
usart_setup(921600);
dma_transmit_setup();
dma_request_setup();
unsigned char buf[16];
dma_request(buf, 16); while (!dma_done());
buf[10] = 'd';
buf[11] = 'a';
buf[12] = 'w';
buf[13] = 'n';
dma_transmit(buf, 16); while (!dma_done());
while(1);
return 0;
}
STM32F4 code:
#include <libopencm3/stm32/rcc.h>
#include <libopencm3/stm32/gpio.h>
#include <libopencm3/stm32/usart.h>
#include <libopencm3/cm3/nvic.h>
#include <libopencm3/stm32/dma.h>
void clock_setup(void)
{
rcc_clock_setup_hse_3v3(&hse_8mhz_3v3[CLOCK_3V3_168MHZ]);
rcc_periph_clock_enable(RCC_GPIOA);
rcc_periph_clock_enable(RCC_USART2);
rcc_periph_clock_enable(RCC_DMA1);
}
void gpio_setup(void)
{
gpio_mode_setup(GPIOA, GPIO_MODE_AF, GPIO_PUPD_NONE, GPIO2 | GPIO3);
gpio_set_af(GPIOA, GPIO_AF7, GPIO2 | GPIO3);
}
void usart_setup(int baud)
{
usart_set_baudrate(USART2, baud);
usart_set_databits(USART2, 8);
usart_set_stopbits(USART2, USART_STOPBITS_1);
usart_set_mode(USART2, USART_MODE_TX_RX);
usart_set_parity(USART2, USART_PARITY_NONE);
usart_set_flow_control(USART2, USART_FLOWCONTROL_NONE);
usart_enable(USART2);
}
void dma_request_setup(void)
{
dma_stream_reset(DMA1, DMA_STREAM5);
nvic_enable_irq(NVIC_DMA1_STREAM5_IRQ);
dma_set_peripheral_address(DMA1, DMA_STREAM5, (uint32_t) &USART2_DR);
dma_set_transfer_mode(DMA1, DMA_STREAM5, DMA_SxCR_DIR_PERIPHERAL_TO_MEM);
dma_set_peripheral_size(DMA1, DMA_STREAM5, DMA_SxCR_PSIZE_8BIT);
dma_set_memory_size(DMA1, DMA_STREAM5, DMA_SxCR_MSIZE_8BIT);
dma_set_priority(DMA1, DMA_STREAM5, DMA_SxCR_PL_VERY_HIGH);
dma_disable_peripheral_increment_mode(DMA1, DMA_SxCR_CHSEL_4);
dma_enable_memory_increment_mode(DMA1, DMA_STREAM5);
dma_disable_transfer_error_interrupt(DMA1, DMA_STREAM5);
dma_disable_half_transfer_interrupt(DMA1, DMA_STREAM5);
dma_disable_direct_mode_error_interrupt(DMA1, DMA_STREAM5);
dma_disable_fifo_error_interrupt(DMA1, DMA_STREAM5);
dma_enable_transfer_complete_interrupt(DMA1, DMA_STREAM5);
}
void dma_transmit_setup(void)
{
dma_stream_reset(DMA1, DMA_STREAM6);
nvic_enable_irq(NVIC_DMA1_STREAM6_IRQ);
dma_set_peripheral_address(DMA1, DMA_STREAM6, (uint32_t) &USART2_DR);
dma_set_transfer_mode(DMA1, DMA_STREAM6, DMA_SxCR_DIR_MEM_TO_PERIPHERAL);
dma_set_peripheral_size(DMA1, DMA_STREAM6, DMA_SxCR_PSIZE_8BIT);
dma_set_memory_size(DMA1, DMA_STREAM6, DMA_SxCR_MSIZE_8BIT);
dma_set_priority(DMA1, DMA_STREAM6, DMA_SxCR_PL_VERY_HIGH);
dma_disable_peripheral_increment_mode(DMA1, DMA_SxCR_CHSEL_4);
dma_enable_memory_increment_mode(DMA1, DMA_STREAM6);
dma_disable_transfer_error_interrupt(DMA1, DMA_STREAM6);
dma_disable_half_transfer_interrupt(DMA1, DMA_STREAM6);
dma_disable_direct_mode_error_interrupt(DMA1, DMA_STREAM6);
dma_disable_fifo_error_interrupt(DMA1, DMA_STREAM6);
dma_enable_transfer_complete_interrupt(DMA1, DMA_STREAM6);
}
void dma_request(void* buffer, const int datasize)
{
dma_set_memory_address(DMA1, DMA_STREAM5, (uint32_t) buffer);
dma_set_number_of_data(DMA1, DMA_STREAM5, datasize);
dma_channel_select(DMA1, DMA_STREAM5, DMA_SxCR_CHSEL_4);
dma_enable_stream(DMA1, DMA_STREAM5);
signal_host();
usart_enable_rx_dma(USART2);
}
void dma_transmit(const void* buffer, const int datasize)
{
dma_set_memory_address(DMA1, DMA_STREAM6, (uint32_t) buffer);
dma_set_number_of_data(DMA1, DMA_STREAM6, datasize);
dma_channel_select(DMA1, DMA_STREAM6, DMA_SxCR_CHSEL_4);
dma_enable_stream(DMA1, DMA_STREAM6);
usart_enable_tx_dma(USART2);
}
int dma_done(void)
{
return !((DMA1_S5CR | DMA1_S6CR) & 1);
}
void dma1_stream5_isr(void) {
usart_disable_rx_dma(USART2);
dma_clear_interrupt_flags(DMA1, DMA_STREAM5, DMA_TCIF);
dma_disable_stream(DMA1, DMA_STREAM5);
}
void dma1_stream6_isr(void) {
usart_disable_tx_dma(USART2);
dma_clear_interrupt_flags(DMA1, DMA_STREAM6, DMA_TCIF);
dma_disable_stream(DMA1, DMA_STREAM6);
}
void signal_host(void) {
usart_send_blocking(USART2, '\n');
}
int main(void)
{
clock_setup();
gpio_setup();
usart_setup(921600);
dma_transmit_setup();
dma_request_setup();
unsigned char buf[16];
dma_request(buf, 16); while (!dma_done());
buf[10] = 'd';
buf[11] = 'a';
buf[12] = 'w';
buf[13] = 'n';
dma_transmit(buf, 16); while (!dma_done());
while(1);
return 0;
}
A: Well, I can be brief about this one.
I recommend against using stty for this sort of thing. I realise I have probably not configured stty properly, and with some option-tweaking it is probably possible to get it right, but it's completely unclear. I ended up throwing it out the window and using pyserial instead. I should've done that weeks ago. The above STM32 code works fine and the required Python code is completely trivial.
#!/usr/bin/env python3
import serial
dev = serial.Serial("/dev/ttyUSB0", 921600)
dev.read(1) # wait for the signal
dev.write("Defend at noon\r\n".encode('utf-8'))
while True:
x = dev.read()
print(x.decode('utf-8'), end='', flush=True) | unknown | |
d17671 | val | Here is the simple solution with toggleClass:
$('.ShowHideClicker').on('click', function(){
$(this).next().toggleClass('hidden');
});
http://jsfiddle.net/LcYLY/
A: You should be using the .toggle() this way: JSFIDDLE and make sure you have included jQuery and jQuery UI in your header ("drop" is a jQuery UI feature)
jQuery:
$(document).ready(function(){
$('.ShowHideClicker').click(function(){
$('.ShowHideList').toggle('drop', 1000);
});
});
CSS:
.ShowHideList { display: none; } | unknown | |
d17672 | val | When you are hitting the end to load more, your load code is just re-loading the same 5 entries. You need to check what you have already loaded and validate if it is the end or not to stop adding entries.
A: Try this one (in SQLite's two-argument form the offset comes first: LIMIT offset, count):
query = "SELECT * FROM " + tabelaCLIENTES + " WHERE credencial_id = " + mSessao.getString("id_credencial") + " LIMIT " + offset + ", " + limit;
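To see why the argument order matters, here is a quick stand-alone sketch with Python's built-in sqlite3 module (the table and column names are made up; the SQL semantics are the same as Android's SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO clients (name) VALUES (?)",
                 [("client%d" % i,) for i in range(12)])

def page(offset, limit):
    # Two-argument form: LIMIT <offset>, <count> -- the offset comes FIRST.
    # Passing them the other way round re-reads the same rows on every call.
    rows = conn.execute("SELECT name FROM clients ORDER BY id LIMIT ?, ?",
                        (offset, limit)).fetchall()
    return [name for (name,) in rows]

print(page(0, 5))   # first page:  client0 .. client4
print(page(5, 5))   # second page: client5 .. client9
```

Each call with a growing offset returns the next slice, which is what the endless list needs.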
d17673 | val | Add the target attribute to open in new window:
$("SeriesId").attr("target", "_blank"); | unknown | |
d17674 | val | I figured it out! For anyone curious, the loop is:
for %a in (0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39) do DebayerGPU.exe -demosaic DFPD_R -CPU -pattern GRBG -i single%a..pgm -o single%a.ppm
A: for /l %a in (0,1,39) do DebayerGPU.exe -demosaic DFPD_R -CPU -pattern GRBG -i single%a..pgm -o single%a.ppm
is less prone to typos. This command runs happily directly from the prompt, but if it's a line within a batch file, you'd need to double the % for each instance of the metavariable %a (i.e. %a becomes %%a within a batch file.
A: Here is a good short explanation of for loops, with examples.
http://www.robvanderwoude.com/for.php
Take a note of the 2 important key points:
*
*The code you write in command prompt will not work when you put it in batch files. The syntax of loops is different in those 2 cases due to access to variables as %%A instead of %A
*Specifically for your problem it is easier to do dir *.pgm and then run a for loop over all the files. Thus your program will work for any number of files and not just a hard-coded 40. This can be done as explained here: Iterate all files in a directory using a 'for' loop
d17675 | val | You need to use event delegation for attaching events to dynamically added elements:
$('body').on('click','.contactlist',function(e) {
e.stopPropagation();
var sub = $('> ul', this);
if(sub.length) {
if(sub.is(':visible')) {
sub.hide();
sub.removeClass('open');
} else {
$('.contactlist .open').hide().removeClass('open');
sub.show();
sub.parents('ul:not(.contactlist)').addClass('open').show();
sub.addClass('open');
}
}
}); | unknown | |
d17676 | val | There are a few possibilities.
*
*Make sure that the UITableView protocols are implemented in the header file.
Eg @interface TestingViewController : UIViewController <UITableViewDelegate, UITableViewDataSource>
*Check your connections in Interface Builder and make sure they are linked properly
d17677 | val | You need to set the width of the calendar. It is showing the fullCalendar completely, but the overflow is hidden behind the smaller div you placed the calendar in. Make the available space smaller and that should fix the problem...
As for the promotion... If my answer is accepted, use Stackoverflow as the promotion as I do not run any kind of professional establishment. I just do my own projects.
I am not in a position where I can further inspect the code and give you a proper answer, but you can use firebug to find the class/id of the div you need to change. Hope it helps. | unknown | |
d17678 | val | Yes, by changing your output layer (the last layer) from Dense(1) to Dense(6). Of course you also have to change your y_train and y_test to have shape (1,6) instead of (1,1).
Best of luck. | unknown | |
d17679 | val | The errors warning: Exception condition detected on fd 536 and Remote communication error. Target disconnected.: No such file or directory. almost always mean that the remote target has died unexpectedly.
You didn't mention if you are using standard gdbserver, or some other remote target, but if you start your remote target in a separate terminal, and then connect GDB and step through, you should notice that your remote target exits (crashes maybe?), and this will correspond to the time when GDB throws the above error.
You should file a bug against whoever provides your remote target. If this is stock gdbserver, then that would be the GDB project itself, but if it is some other remote target, then you should approach them. | unknown | |
d17680 | val | You're creating a reference, instead of a copy. In order to make a complete copy and leave the original untouched, you need copy.deepcopy(). So:
from copy import deepcopy
dictionary_new = deepcopy(dictionary_old)
Just using a = dict(b) or a = b.copy() will make a shallow copy and leave any lists in your dictionary as references to each other (so that although editing other items won't cause problems, editing the list in one dictionary will cause changes in the other dictionary, too).
A: You are just making newdictionary point to the same dictionary that olddictionary points to.
See this page (it's about lists, but it is also applicable to dicts).
Use .copy() instead (note: this creates a shallow copy):
newdictionary = olddictionary.copy()
To create a deep copy, you can use .deepcopy() from the copy module
newdictionary = copy.deepcopy(olddictionary)
Wikipedia:
Shallow vs Deep Copy
A: Assignment like that in Python just makes the newdictionary name refer to the same thing as olddictionary, as you've noticed. You can create a new dictionary with the dict() constructor:
newdictionary = dict(olddictionary)
Note that this makes a shallow copy. For deep copies, see the copy standard library module.
A: newdictionary = dict(olddictionary.items())
This creates a new copy (more specifically, it feeds the contents of olddict as (key,value) pairs to dict, which constructs a new dictionary from (key,value) pairs).
Edit: Oh yeah, copy - totally forgot it, that's the right way to do it.
a = b
just copies a reference, but not the object.
A: You are merely creating another reference to the same dictionary.
You need to make a copy: use one of the following (after checking in the docs what each does):
new = dict(old)
new = old.copy()
import copy
new = copy.copy(old)
import copy
new = copy.deepcopy(old)
A: I think you need a deep copy for what you are asking. See here.
It looks like dict.copy() does a shallow copy, which is what Rick does not want.
from copy import deepcopy
d = {}
d['names'] = ['Alfred', 'Bertrand']
c = d.copy()
dc = deepcopy(d) | unknown | |
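To make the shallow/deep distinction concrete, here is a small self-contained demonstration (standard library only):

```python
import copy

old = {"names": ["Alfred", "Bertrand"], "count": 2}

shallow = old.copy()         # same as dict(old) or dict(old.items())
deep = copy.deepcopy(old)

shallow["count"] = 3               # rebinding a top-level key never touches the original
shallow["names"].append("Cecil")   # ...but a shallow copy shares nested objects
deep["names"].append("Dmitri")     # a deep copy duplicated them, so this stays private

print(old["count"])    # still 2
print(old["names"])    # ['Alfred', 'Bertrand', 'Cecil'] -- mutated through the shallow copy
print(deep["names"])   # ['Alfred', 'Bertrand', 'Dmitri']
```

So if your dictionary only holds immutable values, a shallow copy is enough; as soon as it holds lists or nested dicts you want deepcopy.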
d17681 | val | Ok, there's a much better way to do this, but since I'm on a phone that's dying and you have been waiting a year...
var info = $("#div0").html();
// if the JS is in a PHP file you can do: var info = <?php echo $logtext ?>; to bring it into JS
$.get("phpfilehere.php", {info:info}, function(data){
alert(data);
});
The mouseover function...
$("#div0").on("mouseover", function(){
// my JS code above goes here
});
PHP file:
if (isset($_GET['info'])) {
    $log = $_GET['info'];
    // Put ur stuff here, make sure u only echo when u want ur php script to stop and be sent back to Ajax function as data var.
    // insert $log
    echo "test";
} else {
    echo "no get info supplied";
}
And here is a tool I made to teach people how to write prepared statements for SQL queries :) if you need it...
http://wbr.bz/QueryPro/index.php?query_type=prepared_insert | unknown | |
d17682 | val | Servers usually have limits on file sizes that can be uploaded. It sounds like you're running into the servers limit. If you own the server, you can raise the cap, otherwise you could try asking the server's admin. | unknown | |
d17683 | val | var (...) (and const (...)) are just shorthand that lets you avoid repeating the var keyword.
It doesn't have anything to do with exporting. Variables declared in this way are exported (or not) based on the capitalization of their name, just like variables declared without the parentheses.
A: This code
// What's this syntax ? Is it exported ?
var (
rootDir = path.Join(home(), ".coolconfig")
)
is just a longer way of writing
var rootDir = path.Join(home(), ".coolconfig")
However it is useful when declaring lots of vars at once. Instead of
var one string
var two string
var three string
You can write
var (
one string
two string
three string
)
The same trick works with const and type too. | unknown | |
d17684 | val | You could categorize each metric (CPU load, available memory, swap memory, network IO) with the day and time as bad or good for each metric.
Come up with a set of data for a given time frame with metric values and whether they are good or bad. Train a model using 70% of the data with the good and bad answers in the data.
Then test the trained model using the other 30% of data without the answers to see if you get the predicted results (good,bad) from the model. You could use a classification algorithm. | unknown | |
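As a rough illustration of that train/test workflow, here is a stdlib-only sketch; the load values, the good/bad labels, and the naive threshold "model" are all invented stand-ins for a real classifier and real monitoring data:

```python
import random

random.seed(0)

# Hypothetical labelled history: CPU load samples tagged "good" or "bad".
# The ranges are made up just to keep the two classes separable.
samples = [(random.uniform(0.00, 0.30), "good") for _ in range(70)]
samples += [(random.uniform(0.85, 1.00), "bad") for _ in range(30)]
random.shuffle(samples)

# Train on 70% of the data, hold out the remaining 30% for testing.
cut = int(len(samples) * 0.7)
train, test = samples[:cut], samples[cut:]

# A deliberately trivial "model": a threshold halfway between the class means.
good_vals = [v for v, label in train if label == "good"]
bad_vals = [v for v, label in train if label == "bad"]
threshold = (sum(good_vals) / len(good_vals) + sum(bad_vals) / len(bad_vals)) / 2

def predict(value):
    return "bad" if value > threshold else "good"

accuracy = sum(predict(v) == label for v, label in test) / len(test)
print(accuracy)
```

With a real system you would feed in all four metrics plus day/time and swap the threshold for a proper classification algorithm, but the split-train-evaluate loop stays the same.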
d17685 | val | Wrap your element(s) in a temp <div> and then get its .innerHTML.
var select = document.createElement("select"),
textDiv = document.createElement("div"),
tempDiv = document.createElement("div");
tempDiv.appendChild(select);
textDiv.innerHTML = data[i].text.replace(pattern, tempDiv.innerHTML);
A: By using innerHTML, you're using markup. So one option is to just use markup (but more options follow):
var textDiv = document.createElement("div");
textDiv.innerHTML = data[i].text.replace(pattern, "<select></select>");
Live example:
var data = {
0: {
text: "This is a <strong>test</strong> {0} Testing <em>1 2 3</em>"
}
};
var i = 0;
var pattern = /\{0\}/i;
var textDiv = document.createElement("div");
textDiv.innerHTML = data[i].text.replace(pattern, "<select></select>");
document.body.appendChild(textDiv);
If you don't want to use markup, you can append the part of the string before the {0}, then the element, then the part of the string after the {0}:
var select = document.createElement("select"),
textDiv = document.createElement("div"),
text = data[i].text,
index = text.indexOf("{0}"); // No need for case-insensitivity
if (index === -1) {
index = text.length;
}
textDiv.innerHTML = text.substring(0, index);
textDiv.appendChild(select);
if (index < text.length) {
textDiv.insertAdjacentHTML("beforeend", text.substring(index + 3));
}
var data = {
0: {
text: "This is a <strong>test</strong> {0} Testing <em>1 2 3</em>"
}
};
var i = 0;
var select = document.createElement("select"),
textDiv = document.createElement("div"),
text = data[i].text,
index = text.indexOf("{0}"); // No need for case-insensitivity
if (index === -1) {
index = text.length;
}
textDiv.innerHTML = text.substring(0, index);
textDiv.appendChild(select);
if (index < text.length) {
textDiv.insertAdjacentHTML("beforeend", text.substring(index + 3));
}
document.body.appendChild(textDiv);
Or if the pattern has to be a regex:
var select = document.createElement("select"),
textDiv = document.createElement("div"),
text = data[i].text,
match = pattern.exec(text),
index = match ? match.index : text.length;
textDiv.innerHTML = text.substring(0, index);
textDiv.appendChild(select);
if (match) {
textDiv.insertAdjacentHTML("beforeend", text.substring(index + match[0].length));
}
var data = {
0: {
text: "This is a <strong>test</strong> {0} Testing <em>1 2 3</em>"
}
};
var i = 0;
var pattern = /\{0\}/i;
var select = document.createElement("select"),
textDiv = document.createElement("div"),
text = data[i].text,
match = pattern.exec(text),
index = match ? match.index : text.length;
textDiv.innerHTML = text.substring(0, index);
textDiv.appendChild(select);
if (match) {
textDiv.insertAdjacentHTML("beforeend", text.substring(index + match[0].length));
}
document.body.appendChild(textDiv);
A: Here's the simplest solution. Use select.outerHTML
var select = document.createElement("select"),
textDiv = document.createElement("div");
textDiv.innerHTML = data[i].text.replace(pattern, select.outerHTML);
It is widely supported, too.
Live example:
var text = "This is a <strong>test</strong> {0} Testing <em>1 2 3</em>"
var pattern = /\{0\}/i;
var select = document.createElement("select"),
textDiv = document.createElement("div");
textDiv.innerHTML = text
document.body.appendChild(textDiv);
setTimeout(function(){
textDiv.innerHTML = text.replace(pattern, select.outerHTML);
}, 1000) | unknown | |
d17686 | val | You can do it by:
for (char alphabet = 'A'; alphabet <= 'Z'; alphabet++) {
System.out.println(alphabet);
} | unknown | |
d17687 | val | Your underscore.js version is too old. Try to use the new version (1.7):
<script src="http://underscorejs.org/underscore.js"></script> | unknown | |
d17688 | val | A reference to the created copy as the return value (of the method) would be useful, but as Worksheet.Copy is a method of a single worksheet (as opposed to Worksheets.Add, which is a method of the worksheets collection), they didn't create one. But since you know where you created the copy (before or after the worksheet you specified in the arguments, if you did), you can get its reference from that position (before or after).
In a function returning the reference:
Public Enum WorksheetInsertPosition
    InsertAfter
    InsertBefore
End Enum
Public Function CopyAndRenameWorksheet(ByRef sourceWs As Worksheet, ByRef targetPosWs As Worksheet, ByVal insertPos As WorksheetInsertPosition, ByVal NewName As String) As Worksheet
'If isWsNameInUse(NewName) then 'Function isWsNameInUse needs to be created to check name!
'Debug.Print NewName & " alredy in use"
'Exit Function
'End If
With sourceWs
Dim n As Long
Select Case insertPos
Case InsertAfter
.Copy After:=targetPosWs
n = 1
Case InsertBefore
.Copy Before:=targetPosWs
n = -1
Case Else
'should not happen unless enum is extended
End Select
End With
Dim NewWorksheet As Worksheet
Set NewWorksheet = targetPosWs.Parent.Worksheets(targetPosWs.Index + n) 'Worksheet.Parent returns the Workbook containing targetPosWs
NewWorksheet.Name = NewName ' if name already in use an error occurs, should be tested before
Set CopyAndRenameWorksheet = NewWorksheet
End Function
usage (insert after):
Private Sub testCopyWorkSheet()
Debug.Print CopyAndRenameWorksheet(ActiveWorkbook.Sheets("Template"), ActiveWorkbook.Sheets("Student info"), InsertAfter, Student_name).Name
End Sub
To insert the copy before the target worksheet, change the third argument to InsertBefore (an option of the enumeration).
The new worksheet's name needs to be unique or you'll get an error (as long as you have not implemented the isWsNameInUse function to check that).
Also note that there is a difference between .Sheets and .Worksheets
You can get links to the documentation by placing the cursor (with a mouse left-click) in the code on the object/method you want more info on and then pressing F1
d17689 | val | If the data must last as long as the application session lasts, then caching them as JSON objects would be suitable. You could use GSON to quickly convert them to your JAVA model, but the objects also sound simple enough to parse using Android's out of the box JSONObject class. If the data must persist beyond the application's session, you can still store them in the SharedPreferences as JSON objects. I wouldn't use SQLite, because the data doesn't sound heavy & complex. The data sounds small & light enough for caching or SharedPreferences based on the data's persistence. | unknown | |
d17690 | val | Use a JDialog, problem solved!
See this java tutorial for more help : How to Make Dialogs
A: I'm not sure why no one has suggested CardLayout yet, but this is likely your best solution. The Swing tutorials have a good section on this: How to use CardLayout
A: In a nutshell (a simple solution), you register a listener with the JButton and then have the listener perform the tasks you want it to perform:
setVisible(true) for one frame.
setVisible(false) for the other one.
Regards!
A: One way to approach this would be to create another JFrame and then add an ActionListener to your button, like so:
jFrameNew.setVisible(true);
This way you have a whole new frame to work with. If you just want a pop-up message, you can also try using JDialog.
Depending on which IDE you are using... for example, NetBeans has a GUI builder that makes designing interfaces slightly easier, so you can test out the different frames.
d17691 | val | You can create your own category method. Something like
@interface NSString (Utilities)
+ (NSString *)stringWithFloat:(CGFloat)value;
@end
@implementation NSString (Utilities)
+ (NSString *)stringWithFloat:(CGFloat)value
{
    NSString *string = [NSString stringWithFormat:@"%f", value];
    return string;
}
@end
Edit
Changed this to a class method and also changed the type from float to CGFloat.
You can use it as:
NSString *myFloat = [NSString stringWithFloat:2.1f];
A: You could use NSNumber
NSString *myString = [[NSNumber numberWithFloat:myFloat] stringValue];
But there's no problem doing it the way you are; in fact, the way you have it in your question is better.
A: float someVal = 22.3422f;
NSNumber* value = [NSNumber numberWithFloat:someVal];
NSLog(@"%@",[value stringValue]);
A: in a simple way
NSString *floatString = @(myFloat).stringValue; | unknown | |
d17692 | val | One of your errors is caused by using an HttpResponse in your view instead of a JsonResponse. Here's how to fix that issue:
from django.http import JsonResponse
def getEvents(request):
    eventList = Events.objects.all()
    events = []
    for event in eventList:
        events.append({"name": event.name, "start": event.start, "end": event.end})
    return JsonResponse(events, safe=False)
From the docs, the JsonResponse is
An HttpResponse subclass that helps to create a JSON-encoded response.
The reason your regular HttpResponse didn't work is that you have to manually serialize the data to JSON when using an HttpResponse, e.g. something like:
import json
response_data = json.dumps(events)
return HttpResponse(response_data, content_type="application/json")
Otherwise, I think what will happen is that you will get a call to __repr__ on the events list, which produces Python-literal output rather than JSON serialized data.
A: First of all, there's a typo in your sucess function; it should be success.
Secondly, a JSON response should be a dict object rather than a list. If you really want a JSON array response anyway, you have to pass safe=False when serializing the data, i.e. JsonResponse(events, safe=False); otherwise you'll get a TypeError like TypeError: In order to allow non-dict objects to be serialized set the safe parameter to False.
So the code sample should be:
def getEvents(request):
    eventList = list(Events.objects.all().values("name", "start", "end"))
    return JsonResponse({"events": eventList})
And for frontend:
$.ajax({
url: 'getEvents/',
datatype: 'json',
type: 'GET',
success: function(data) {
$.each(data.events, function(index, element) {
$('body').append($('<div>', {
text: element.name
}));
});
}
}); | unknown | |
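Outside of Django, the difference between the two response shapes can be sketched with the stdlib json module (the event data below is made up for illustration):

```python
import json

# Hypothetical events, mirroring the fields used in the answers above.
events = [
    {"name": "Kickoff", "start": "2015-06-01", "end": "2015-06-02"},
    {"name": "Review", "start": "2015-06-10", "end": "2015-06-11"},
]

# Shape sent by JsonResponse({"events": eventList}): a JSON object.
wrapped = json.dumps({"events": events})

# Shape sent by JsonResponse(events, safe=False): a bare JSON array.
bare = json.dumps(events)

print(wrapped)
print(bare)
```

The wrapped form is what the jQuery callback above iterates with data.events; the bare form would be iterated directly.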
d17693 | val | Here's an adaptation of yuk's answer using find:
[ib, ia] = find(true(size(b, 1), size(a, 1)));
needed = [a(ia(:), :), b(ib(:), :)];
This should be much faster than using kron and repmat.
Benchmark
a = [1 2 3; 4 5 6];
b = [7 8; 9 10];
tic
for k = 1:1e3
[ib, ia] = find(true(size(b, 1), size(a, 1)));
needed = [a(ia(:), :), b(ib(:), :)];
end
toc
tic
for k = 1:1e3
needed = [kron(a, ones(size(b,1),1)), repmat(b, [size(a, 1), 1])];
end
toc
The results:
Elapsed time is 0.030021 seconds.
Elapsed time is 0.17028 seconds.
A: Use a Kronecker product for a and repmat for b:
[kron(a, ones(size(b,1),1)), repmat(b, [size(a, 1), 1])]
ans =
1 2 3 7 8
1 2 3 9 10
4 5 6 7 8
4 5 6 9 10
A: It gives the desired result, but you might need something other than array_merge if you have duplicated items.
$a = array(array(1, 2, 3), array(4, 5, 6));
$b = array(array(7, 8), array(9, 10));
$acc = array_reduce($a, function ($acc, $r) use ($b) {
foreach ($b as $br) {
$acc []= array_merge($r, $br);
}
return $acc;
}, array());
var_dump($acc);
Edit: Sorry, I've just noticed the "without loops" requirement. You can change the foreach to array_reduce too.
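For comparison, the same row pairing can be sketched in Python with itertools.product, reproducing the MATLAB result shown above:

```python
from itertools import product

a = [[1, 2, 3], [4, 5, 6]]
b = [[7, 8], [9, 10]]

# Pair every row of a with every row of b; rows of a vary slowest,
# matching the kron/repmat ordering above.
needed = [ra + rb for ra, rb in product(a, b)]
print(needed)
# [[1, 2, 3, 7, 8], [1, 2, 3, 9, 10], [4, 5, 6, 7, 8], [4, 5, 6, 9, 10]]
```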
d17694 | val | You could specify constructor parameters:
kernel
.Bind<IDbAccessLayer>()
.To<DAL>()
.WithConstructorArgument("connectionString", "YOUR CONNECTION STRING HERE");
And instead of hardcoding the connection string in your Global.asax you could read it from your web.config using:
ConfigurationManager.ConnectionStrings["CNName"].ConnectionString
and now your DAL class could take the connection string as parameter:
public class DAL: IDbAccessLayer
{
private readonly string _connectionString;
public DAL(string connectionString)
{
_connectionString = connectionString;
}
... implementation of the IDbAccessLayer methods
}
A: Create a parameter-less constructor that calls the one-parameter constructor with a default connection string.
public DAL() : this("default connection string") {
}
public DAL(string connectionString) {
// do something with connection string
}
A: I've not worked with ninject, just a bit with Unity. But all the IOC containers seem to gravitate towards you making your own factory class that takes your stateful parameters (your connection string), which returns your real object. For example, if you had a Person class which requires a "name" and "age" for the constructor, then you must make a factory which would interact with Unity rather like this:
IPerson foo = container.Resolve<IPersonFactory>().Create("George", 25);
This is one of the things I don't like about IOC containers, but it's generally where it goes...
A: Just a stupid idea, having no knowledge of Ninject:
kernel.Bind<IMyConnectionString>().To<MyConnectionString>();
And to your DAL constructor accepting IMyConnectionString | unknown | |
d17695 | val | Here's a PoC that will rethrow any "possibly unhandled rejections". These will subsequently trigger Restify's uncaughtException event:
var Promise = require('bluebird');
var restify = require('restify');
var server = restify.createServer();
server.listen(3000);
Promise.onPossiblyUnhandledRejection(function(err) {
throw err;
});
server.get('/', function (req, res, next) {
Promise.reject(new Error('xxx')); // no `.catch()` needed
});
server.on('uncaughtException', function (req, res, route, err) {
console.log('uncaught error', err);
return res.send(500, 'foo');
}); | unknown | |
d17696 | val | You are missing a curly brace after the method createControlPanel.
private JPanel createControlPanel() {
...
parseButton.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
...
tree.addTreeSelectionListener(new MyTreeSelectionListener());
}
});
} // missing this one. | unknown | |
d17697 | val | No. You can create your own column with sequential values using an identity column. This is usually a primary key.
Alternatively, when you query the table, you can assign a sequential number (with no gaps) using row_number(). In general, you want a column that specifies the ordering:
select t.*, row_number() over (order by <ordering column>) as my_sequential_column
from t; | unknown | |
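Both ideas can be sketched with Python's sqlite3 module (table and column names are made up): an identity column keeps gaps after deletes, while a number assigned at query time, which is what row_number() does in SQL, stays sequential. Here enumerate stands in for the window function:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.executemany("INSERT INTO t (name) VALUES (?)", [("a",), ("b",), ("c",)])
conn.execute("DELETE FROM t WHERE name = 'b'")  # leaves a gap in the identity column

rows = conn.execute("SELECT id, name FROM t ORDER BY id").fetchall()
ids = [r[0] for r in rows]
print(ids)  # [1, 3] -- gap where 'b' was deleted

# Gap-free sequential numbers assigned at query time:
numbered = [(n, name) for n, (_, name) in enumerate(rows, start=1)]
print(numbered)  # [(1, 'a'), (2, 'c')]
```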
d17698 | val | This will be rejected. See guideline 17.2 here:
https://developer.apple.com/app-store/review/guidelines/
A: Simply create a session that lasts 30 days, and expire that session after 30 days...
Apple has no issue with expiring sessions; plenty of my apps are live with this approach...
Just show a message, such as "you need to log in to access the application's features", when the user gets logged out due to session expiration.
Kudos
A: This can be done if the client creates an enterprise app. With the enterprise app the app will have to be downloaded from the client's account and is not subject to Apple's restrictions above. | unknown | |
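Server-side, the 30-day expiry suggested above is simple to implement; a minimal sketch (the lifetime constant and function name are illustrative, not any Apple API):

```python
from datetime import datetime, timedelta

SESSION_LIFETIME = timedelta(days=30)  # assumed policy from the answer above

def is_session_valid(created_at, now):
    """A session stays valid while it is younger than SESSION_LIFETIME."""
    return now - created_at < SESSION_LIFETIME

created = datetime(2015, 1, 1)
print(is_session_valid(created, datetime(2015, 1, 15)))  # True: 14 days old
print(is_session_valid(created, datetime(2015, 2, 5)))   # False: 35 days old
```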
d17699 | val | You must disable preflight in your TailwindCSS configuration to prevent defaults overriding MUI styling:
// tailwind.config.js
module.exports = {
corePlugins: {
preflight: false,
}
}
If you have not done so already, you should also follow MUI instructions for changing the CSS injection order if you are going to use external style sheets in order to reliably style MUI components using TailwindCSS. | unknown | |
d17700 | val | You probably haven't added this new rule to the profile that you are using for your project.
The fact that you provided a "pmd-extensions.xml" file just means that you added this rule to the rule repository. But if you do not activate this rule in at least one profile, it will remain inactive and will never get executed.