_id | partition | text | language | title
---|---|---|---|---
d11601 | train | ModelSim produces a similar error/warning, so it may be a VHDL standard issue.
A workaround is to declare ArrayofElementType as part of the package, like:
package SortListGenericPkg is
generic (
type ElementType -- e.g. integer
);
type ArrayofElementType is array (integer range <>) of ElementType;
function inside(constant E : ElementType; constant A : in ArrayofElementType) return boolean;
end package;
and then convert the argument when inside is called, like:
... inside(int, ArrayofElementType(int_vec));
or simply use ArrayofElementType as the type when declaring the argument, if possible/feasible. | unknown | |
d11602 | train | One way of doing this, if you can alter the table structure, is to add a persisted computed column for the year part, and then add a primary key for (id, computed_col), like this:
CREATE TABLE myTable (
id INT NOT NULL,
d DATE NOT NULL,
y AS DATEPART(YEAR,d) PERSISTED NOT NULL,
PRIMARY KEY(id,y)
)
I'm not saying this is a good solution in any way, but it should work. Using a trigger on insert or a check constraint might be better.
Using your test data this will allow the first insert statement, but disallow the second as it violates the primary key constraint. | unknown | |
d11603 | train | Normalize the case with str.lower():
for item in mylist2:
print item.lower() in mylist1
The in containment operator already returns True or False, so it's easiest just to print that:
>>> mylist1 = ['fbh_q1ba8', 'fhh_q1ba9', 'fbh_q1ba10','hoot']
>>> mylist2 = ['FBH_q1ba8', 'trick','FBH_q1ba9', 'FBH_q1ba10','maj','joe','civic']
>>> for item in mylist2:
... print item.lower() in mylist1
...
True
False
False
True
False
False
False
If mylist1 contains mixed case values, you'll need to make the loop explicit; use a generator expression to produce lowercased values; testing against this ensures only as many elements are lowercased as needed to find a match:
for item in mylist2:
print item.lower() in (element.lower() for element in mylist1)
Demo
>>> mylist1 = ['fbH_q1ba8', 'fHh_Q1ba9', 'fbh_q1bA10','hoot']
>>> for item in mylist2:
... print item.lower() in (element.lower() for element in mylist1)
...
True
False
False
True
False
False
False
Another approach is to use any():
for item in mylist2:
print any(item.lower() == element.lower() for element in mylist1)
any() also short-circuits; as soon as a True value has been found (a matching element is found), the generator expression iteration is stopped early. This does have to lowercase item each iteration, so is slightly less efficient.
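If that overhead matters, one possible tweak (not from the original answers) is to lowercase item once, outside the generator expression:
for item in mylist2:
    item_lower = item.lower()
    print any(item_lower == element.lower() for element in mylist1)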
Another demo:
>>> for item in mylist2:
... print any(item.lower() == element.lower() for element in mylist1)
...
True
False
False
True
False
False
False
A: Why not just do:
for item in mylist2:
if item.lower() in [j.lower() for j in mylist1]:
print "true"
else:
print "false"
This uses .lower() to make the comparison which gives the desired result.
A: The other answers are correct. But they don't account for mixed cases in both lists. Just in case you need that:
mylist1 = ['fbh_q1ba8', 'fbh_q1ba9', 'fbh_q1ba10','hoot']
mylist2 = ['FBH_q1ba8', 'trick','FBH_q1ba9', 'FBH_q1ba10','maj','joe','civic']
for item in mylist2:
found = "false"
for item2 in mylist1:
if item.lower() == item2.lower():
found = "true"
print found | unknown | |
d11604 | train | Hi you can use setter method:
@Stateless
@Remote(MyEBean.class)
public class MyEBean extends FormEBean implements MyEBeanRemote {
final Logger logger = LoggerFactory.getLogger(MyEBean.class);
@PersistenceContext(unitName = "siat-ejbPU")
@Override
public void setEmCrud(EntityManager em) {
super.setEmCrud(em);
}
}
Worked for me.
A: You need to make your CRUD methods generic (create/edit/remove).
The class named FormEBean should NOT be generic.
If you make the methods generic in stead of the class, you can implement them once and use them with any entity class. Generic crud methods might look something like this:
public <T> T create(T someEntity) {
em.persist(someEntity);
return someEntity;
}
public <T> void create(Collection<T> entities) {
for (T entity : entities) {
em.persist(entity);
}
}
public <T> void edit(T entity) {
em.merge(entity);
}
public <T> void edit(Collection<T> entities) {
for (T currentEntity : entities) {
em.merge(currentEntity);
}
}
Put those in your session bean and use them anywhere to operate on any entity.
/**
* Example managed bean that uses our
* stateless session bean's generic CRUD
* methods.
*
*/
class ExampleManagedBean {
@EJB
MyCrudBeanLocal crudBean;
public void createStuff() {
// create two test objects
Customer cust = createRandomCustomer();
FunkyItem item = createRandomItem();
// use generic method to persist them
crudBean.create(cust);
crudBean.create(item);
}
}
This answer does exactly what I describe and provides example code:
* EJB 3 Session Bean Design for Simple CRUD
Another example:
* EJB with Generic Methods
A: I found a solution that solves the problem.
It's based on jahroy's answer, but uses inheritance to deal with multiple persistence units.
The common code is a base class (not generic but with generic methods):
public class FormEBean {
final Logger logger = LoggerFactory.getLogger(FormEBean.class);
protected EntityManager emCrud;
public EntityManager getEmCrud() {
return emCrud;
}
public void setEmCrud(EntityManager em) {
emCrud = em;
}
public <T> String create(T entity) {
String exception = null;
try {
emCrud.persist(entity);
emCrud.flush();
} catch (Exception ex) {
//ex.printStackTrace();
exception = ex.getLocalizedMessage();
}
return exception;
}
public <T> void create(List<T> entityList) {
for (T entity : entityList) {
emCrud.persist(entity);
}
}
public <T> void edit(T entity) {
emCrud.merge(entity);
}
public <T> void edit(Set<T> entitySet) {
Iterator<T> it = entitySet.iterator();
while (it.hasNext()) {
T entity = it.next();
emCrud.merge(entity);
emCrud.flush();
}
}
public <T> void remove(T entity) {
emCrud.remove(emCrud.merge(entity));
}
public <T> void remove(T[] listaDaRimuovere) {
for (T entity : listaDaRimuovere) {
emCrud.remove(emCrud.merge(entity));
}
}
public <T> void remove(List<T> listaDaRimuovere) {
for (T entity : listaDaRimuovere) {
emCrud.remove(emCrud.merge(entity));
}
}
}
... and this is the interface:
public interface FormEBeanRemote {
public void setEmCrud(EntityManager em);
public <T> String create(T entity);
public <T> void create(List<T> entityList);
public <T> void edit(T entity);
public <T> void edit(Set<T> entitySet);
public <T> void remove(T entity);
public <T> void remove(T[] listaDaRimuovere);
public <T> void remove(List<T> listaDaRimuovere);
}
The EJB (stateless session bean) looks like this:
@Stateless
@Remote(MyEBean.class)
public class MyEBean extends FormEBean implements MyEBeanRemote {
final Logger logger = LoggerFactory.getLogger(MyEBean.class);
@PersistenceContext(unitName = "siat-ejbPU")
private EntityManager em;
public EntityManager getEm() {
return em;
}
public void setEm(EntityManager em) {
this.em = em;
}
@PostConstruct
public void postConstruct() {
this.setEmCrud(em);
}
...where
@Remote
public interface MyEBeanRemote extends FormEBeanRemote {
......
}
Note that the EJB uses the postConstruct method to set the entityManager, which is delegated to perform CRUD operations on a specific persistence unit.
It's working like a charm so far.
If someone finds any pitfalls please let me know. | unknown | |
d11605 | train | You have to compare the position of each message with the scrolled value.
So you need to loop through them.
Here is something working:
var messages=$(".msg");
$(window).scroll(function(){
var counter=0;
for(i=0;i<messages.length;i++){
if( messages.eq(i).offset().top < $(window).scrollTop() ){
counter++;
}
}
// Display result.
$("#messages").html(counter);
});
Updated Fiddle | unknown | |
d11606 | train | Sure, there is a way to do it with a single iteration.
You could do it using reduce-kv function:
(reduce-kv #(assoc %1 %3 (get m %2)) {} new-names)
or just a for loop:
(into {} (for [[k v] new-names] [v (get m k)]))
If you want a really simple piece of code, you could use the fmap function from the algo.generic library:
(fmap m (map-invert new-names)) | unknown | |
d11607 | train | If you are doing this from a script, you can run this after you have established a connection (and hence the database is already selected):
SELECT character_maximum_length
FROM information_schema.columns
WHERE table_name = ? AND column_name = ?
Replace the ?'s with the name of your table and the name of your column, respectively.
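For instance, here is a minimal Python sketch (Python and the mysql-connector-python driver are assumptions on my part, and the connection details are placeholders; note that this driver uses %s placeholders rather than ?):
import mysql.connector

conn = mysql.connector.connect(user="user", password="secret", database="mydb")
cur = conn.cursor()
cur.execute(
    "SELECT character_maximum_length "
    "FROM information_schema.columns "
    "WHERE table_name = %s AND column_name = %s",
    ("turno", "nombreTurno"),  # example table and column names
)
print(cur.fetchone()[0])  # None handling omitted for brevity
conn.close()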
A: If you want to find out the size of column=COLUMN_NAME from table=TABLE_NAME, you can always run a query like this:
SELECT sum(char_length(COLUMN_NAME))
FROM TABLE_NAME;
Size returned is in bytes. If you want it in kb, you could just divide it by 1024, like so:
SELECT sum(char_length(COLUMN_NAME))/1024
FROM TABLE_NAME;
A: SELECT column_name,
character_maximum_length
FROM information_schema.columns
WHERE table_schema = Database() -- name of your database
AND table_name = 'turno' -- name of your table
AND column_name = 'nombreTurno' -- name of the column
* SQLFiddle Demo
INFORMATION_SCHEMA.COLUMNS
If you want the whole table size, then use this:
SELECT table_name AS "Tables",
Round(( ( data_length + index_length ) / 1024 / 1024 ), 2) "Size in MB"
FROM information_schema.tables
WHERE table_schema = "$db_name"
ORDER BY ( data_length + index_length ) DESC;
Edit
SELECT column_name,
character_maximum_length
FROM information_schema.columns
WHERE table_schema = 'websi_db1'
AND table_name = 'thread'
AND column_name = 'title'
Source | unknown | |
d11608 | train | You can call it as follows:
myCallback?.invoke()
The () syntax on variables of function types is simply syntax sugar for the invoke() operator, which can be called using the regular safe call syntax if you expand it. | unknown | |
d11609 | train | library(dplyr) # needed for left_join
lookup <- data.frame(
am = c(1, 0, 2), a = c('a1', 'a2', 'a3'), b = 'a2', c = c(0, 1, 2)
)
left_join(dt1, lookup, 'am') | unknown | |
d11610 | train | OK. What I did was make a copy of the file and load the data into the new file. Every time the SSIS package runs it overwrites the file, so I always have new data. Thanks!! | unknown | |
d11611 | train | If you're only interested in the enum's value, and not its type, you should be able to use a constexpr function to convert the value to an integer, avoiding repeating the type name.
enum class Animal { Cat, Dog, Horse };
template <typename T> constexpr int val(T t)
{
return static_cast<int>(t);
}
template <int Val, typename T> bool magic(T &t)
{
return magical_traits<Val>::invoke(t);
}
magic<val(Animal::Cat)>(t);
However, as pointed out already by others, if you want to make this depend on the type as well, it will not work.
A: You can do it like this, if you can use C++17
#include <type_traits>
enum class Animal { Cat, Dog, Horse };
template <typename EnumClass, EnumClass EnumVal>
void magic_impl()
{
static_assert(std::is_same_v<EnumClass, Animal>);
static_assert(EnumVal == Animal::Cat);
}
template <auto EnumVal>
void magic()
{
magic_impl<decltype(EnumVal), EnumVal>();
}
int main()
{
magic<Animal::Cat>();
}
demo:
http://coliru.stacked-crooked.com/a/9ac5095e8434c9da
A: This question has an accepted answer (upvoted).
While refactoring my own code, I figured out a more complete solution:
Step 1: using code I was writing:
template<typename V, typename EnumClass, EnumClass Discriminator>
class strong_type final // type-safe wrapper for input parameters
{
V value;
public:
constexpr explicit strong_type(V x): value{x} {}
constexpr auto get() const { return value; }
};
Step 2: client code:
enum class color { red, green, blue, alpha };
// the part OP was asking about:
template<color C>
using color_channel = strong_type<std::uint8_t, color, C>;
using red = color_channel<color::red>; // single argument here
using green = color_channel<color::green>;
using blue = color_channel<color::blue>;
using alpha = color_channel<color::alpha>;
A: I'm sorry, I have to tell you that
It is not possible
Take the macro, put it into a scarily named header and protect it from your colleague's cleanup script. Hope for the best. | unknown | |
d11612 | train | You should check the logs/catalina.<date>.log and logs/localhost.<date>.log files.
If you are on Unix, execute:
grep SEVERE logs/*
to get the errors.
The real error associated with
Context [/my-service] startup failed due to previous errors
is earlier in the logs. | unknown | |
d11613 | train | I would tend to start with an enumeration ProjectSize {Small, Medium, Large} and a simple function to return the appropriate enum given a numberOfManuals. From there, I would write different ServiceHourCalculators, the WritingServiceHourCalculator and the AnalysisServiceHourCalculator (because their logic is sufficiently different). Each would take a numberOfManuals, a ProjectSize, and return the number of hours. I'd probably create a map from string to ServiceHourCalculator, so I could say:
ProjectSize projectSize = getProjectSize(_numberOfManuals);
int hours = serviceMap.get(_serviceType).getHours(projectSize, _numberOfManuals);
This way, when I added a new project size, the compiler would balk at some unhandled cases for each service. It's not all handled in one place, but it is all handled before it will compile again, and that's all I need.
Update
I know Java, not C# (very well), so this may not be 100% right, but creating the map would be something like this:
Map<String, ServiceHourCalculator> serviceMap = new HashMap<String, ServiceHourCalculator>();
serviceMap.put("writing", new WritingServiceHourCalculator());
serviceMap.put("analysis", new AnalysisServiceHourCalculator());
A: A good start would be to extract the conditional statement into a method (although only a small method) and give it a really explicit name. Then extract the logic within the if statements into their own methods - again with really explicit names. (Don't worry if the method names are long - as long as they do what they're called.)
I would write this out in code but it would be better for you to pick names.
I would then move onto more complicated refactoring methods and patterns. Its only when your looking at a series of method calls will it seem appropriate to start applying patterns etc..
Make your first goal to write clean, easy to read and comprehend code. It is easy to get excited about patterns (speaking from experience) but they are very hard to apply if you can't describe your existing code in abstractions.
EDIT:
So to clarify - you should aim to get your if statement looking like this
if( isBox() )
{
doBoxAction();
}
else if( isSquirrel() )
{
doSquirrelAction();
}
Once you do this, in my opinion, it is easier to apply some of the patterns mentioned here. But while you still have calculations etc. in your if statement, it is harder to see the wood for the trees, as you are at too low a level of abstraction.
A: You don't need the Factory if your subclasses filter themselves on what they want to charge for. That requires a Project class to hold the data, if nothing else:
class Project {
TaskType Type { get; set; }
int? NumberOfHours { get; set; }
}
Since you want to add new calculations easily, you need an interface:
IProjectHours {
public void SetHours(IEnumerable<Project> projects);
}
And, some classes to implement the interface:
class AnalysisProjectHours : IProjectHours {
public void SetHours(IEnumerable<Project> projects) {
projects.Where(p => p.Type == TaskType.Analysis)
.Each(p => p.NumberOfHours += 30);
}
}
// Non-LINQ equivalent
class AnalysisProjectHours : IProjectHours {
public void SetHours(IEnumerable<Project> projects) {
foreach (Project p in projects) {
if (p.Type == TaskType.Analysis) {
p.NumberOfHours += 30;
}
}
}
}
class WritingProjectHours : IProjectHours {
public void SetHours(IEnumerable<Project> projects) {
projects.Where(p => p.Type == TaskType.Writing)
.Skip(0).Take(2).Each(p => p.NumberOfHours += 30);
projects.Where(p => p.Type == TaskType.Writing)
.Skip(2).Take(6).Each(p => p.NumberOfHours += 20);
projects.Where(p => p.Type == TaskType.Writing)
.Skip(8).Each(p => p.NumberOfHours += 10);
}
}
// Non-LINQ equivalent
class WritingProjectHours : IProjectHours {
public void SetHours(IEnumerable<Project> projects) {
int writingProjectsCount = 0;
foreach (Project p in projects) {
if (p.Type != TaskType.Writing) {
continue;
}
writingProjectsCount++;
switch (writingProjectsCount) {
case 1: case 2:
p.NumberOfHours += 30;
break;
case 3: case 4: case 5: case 6: case 7: case 8:
p.NumberOfHours += 20;
break;
default:
p.NumberOfHours += 10;
break;
}
}
}
}
class NewProjectHours : IProjectHours {
public void SetHours(IEnumerable<Project> projects) {
projects.Where(p => p.Id == null).Each(p => p.NumberOfHours += 5);
}
}
// Non-LINQ equivalent
class NewProjectHours : IProjectHours {
public void SetHours(IEnumerable<Project> projects) {
foreach (Project p in projects) {
if (p.Id == null) {
// Add 5 additional hours to each new project
p.NumberOfHours += 5;
}
}
}
}
The calling code can either dynamically load IProjectHours implementors (or static them) and then just walk the list of Projects through them:
foreach (var h in AssemblyHelper.GetImplementors<IProjectHours>()) {
h.SetHours(projects);
}
Console.WriteLine(projects.Sum(p => p.NumberOfHours));
// Non-LINQ equivalent
int totalNumberHours = 0;
foreach (Project p in projects) {
totalNumberOfHours += p.NumberOfHours;
}
Console.WriteLine(totalNumberOfHours);
A: This is a common problem; there are a few options that I can think of. Two design patterns come to mind: firstly the Strategy pattern and secondly the Factory pattern. With the Strategy pattern it is possible to encapsulate the calculation in an object; for example, you could encapsulate your GetHours method in individual classes, each one representing a calculation based on size. Once we have defined the different calculation strategies, we wrap them in a factory. The factory would be responsible for selecting the strategy to perform the calculation, just like your if statement in the GetHours method. Anyway, have a look at the code below and see what you think
At any point you could create a new strategy to perform a different calculation. The strategy can be shared between different objects allowing the same calculation to be used in multiple places. Also the factory could dynamically work out which strategy to use based on configuration, for example
class Program
{
static void Main(string[] args)
{
var factory = new HourCalculationStrategyFactory();
var strategy = factory.CreateStrategy(1, "writing");
Console.WriteLine(strategy.Calculate());
}
}
public class HourCalculationStrategy
{
public const int Small = 2;
public const int Medium = 8;
private readonly string _serviceType;
private readonly int _numberOfManuals;
public HourCalculationStrategy(int numberOfManuals, string serviceType)
{
_serviceType = serviceType;
_numberOfManuals = numberOfManuals;
}
public int Calculate()
{
return this.CalculateImplementation(_numberOfManuals, _serviceType);
}
protected virtual int CalculateImplementation(int numberOfManuals, string serviceType)
{
if (serviceType == "writing")
return (Small * 30) + (20 * (Medium - Small)) + (10 * (numberOfManuals - Medium));
if (serviceType == "analysis")
return 30;
return 0;
}
}
public class SmallHourCalculationStrategy : HourCalculationStrategy
{
public SmallHourCalculationStrategy(int numberOfManuals, string serviceType) : base(numberOfManuals, serviceType)
{
}
protected override int CalculateImplementation(int numberOfManuals, string serviceType)
{
if (serviceType == "writing")
return 30 * numberOfManuals;
if (serviceType == "analysis")
return 10;
return 0;
}
}
public class MediumHourCalculationStrategy : HourCalculationStrategy
{
public MediumHourCalculationStrategy(int numberOfManuals, string serviceType) : base(numberOfManuals, serviceType)
{
}
protected override int CalculateImplementation(int numberOfManuals, string serviceType)
{
if (serviceType == "writing")
return (Small * 30) + (20 * (numberOfManuals - Small));
if (serviceType == "analysis")
return 20;
return 0;
}
}
public class HourCalculationStrategyFactory
{
public HourCalculationStrategy CreateStrategy(int numberOfManuals, string serviceType)
{
if (numberOfManuals <= HourCalculationStrategy.Small)
{
return new SmallHourCalculationStrategy(numberOfManuals, serviceType);
}
if (numberOfManuals <= HourCalculationStrategy.Medium)
{
return new MediumHourCalculationStrategy(numberOfManuals, serviceType);
}
return new HourCalculationStrategy(numberOfManuals, serviceType);
}
}
A: I would go with a strategy pattern derivative. This adds additional classes, but is more maintainable over the long haul. Also, keep in mind that there are still opportunities for refactoring here:
public class Conditional
{
private int _numberOfManuals;
private string _serviceType;
public const int SMALL = 2;
public const int MEDIUM = 8;
public int NumberOfManuals { get { return _numberOfManuals; } }
public string ServiceType { get { return _serviceType; } }
private Dictionary<int, IResult> resultStrategy;
public Conditional(int numberOfManuals, string serviceType)
{
_numberOfManuals = numberOfManuals;
_serviceType = serviceType;
resultStrategy = new Dictionary<int, IResult>
{
{ SMALL, new SmallResult() },
{ MEDIUM, new MediumResult() },
{ MEDIUM + 1, new LargeResult() }
};
}
public int GetHours()
{
return resultStrategy.Where(k => _numberOfManuals <= k.Key).First().Value.GetResult(this);
}
}
public interface IResult
{
int GetResult(Conditional conditional);
}
public class SmallResult : IResult
{
public int GetResult(Conditional conditional)
{
return conditional.ServiceType.IsWriting() ? WritingResult(conditional) : AnalysisResult(conditional);
}
private int WritingResult(Conditional conditional)
{
return 30 * conditional.NumberOfManuals;
}
private int AnalysisResult(Conditional conditional)
{
return 10;
}
}
public class MediumResult : IResult
{
public int GetResult(Conditional conditional)
{
return conditional.ServiceType.IsWriting() ? WritingResult(conditional) : AnalysisResult(conditional);
}
private int WritingResult(Conditional conditional)
{
return (Conditional.SMALL * 30) + (20 * (conditional.NumberOfManuals - Conditional.SMALL));
}
private int AnalysisResult(Conditional conditional)
{
return 20;
}
}
public class LargeResult : IResult
{
public int GetResult(Conditional conditional)
{
return conditional.ServiceType.IsWriting() ? WritingResult(conditional) : AnalysisResult(conditional);
}
private int WritingResult(Conditional conditional)
{
return (Conditional.SMALL * 30) + (20 * (Conditional.MEDIUM - Conditional.SMALL)) + (10 * (conditional.NumberOfManuals - Conditional.MEDIUM));
}
private int AnalysisResult(Conditional conditional)
{
return 30;
}
}
public static class ExtensionMethods
{
public static bool IsWriting(this string value)
{
return value == "writing";
}
} | unknown | |
d11614 | train | I have the same issue. It appears the cause is related to the way tables (PyTables) checks every single node value to create a list of keys. I've raised this with the pandas devs.
If you want to check whether a key is in the store then
store.__contains__(key)
will do the job and is much faster.
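For example, a minimal sketch (the file and key names here are made up):
import pandas as pd

store = pd.HDFStore("data.h5")
store.put("df", pd.DataFrame({"a": [1, 2]}))
print(store.__contains__("df"))  # True
print("missing" in store)        # False; the `in` operator calls __contains__, so both spellings work
store.close()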
https://github.com/pandas-dev/pandas/issues/17593 | unknown | |
d11615 | train | The MSDN docs do a nice job of displaying the distinction:
The Popup Class:
Represents a pop-up window that has content.
The ContextMenu Class:
Represents a pop-up menu that enables a control to expose functionality that is specific to the context of the control.
So the ContextMenu is a more-specific version of a Popup - it's meant to be bound to a specific control, providing ways to interact with that control. Read further on the MSDN page: the ContextMenu has built-in facilities for displaying itself when you right-click on the associated control, and it is automatically displayed within a Popup.
The Popup class is much more general: it simply defines a barebones window (no default borders or decoration) that can display any arbitrary UIElement on top of other controls (notice that the Popup class is part of the Primitives namespace, meaning it's meant to be part of the composition of other controls, such as the ContextMenu). | unknown | |
d11616 | train | You may not want to signal to the user there is a problem, but rather just do it in the background. If a user has 64 notifications for one app and hasn't opened the app, then they probably aren't using the app. Once a notification has fired it isn't in the array anymore. So you will have room every time a notification is fired off. They do however remain in notification centre, which you have to clear out yourself.
It's usually better not to present possible problems to the user, but rather handle them in a way that makes sense internally if that is an option. Look up the delegate methods for the appDelegate and you will most likely find ways to handle what you are trying to do.
Thought I would make a post in case you wanted to accept the answer.
Best of luck. | unknown | |
d11617 | train | I wrote an answer to your question, which works, maybe not completely as you expect, but it should give you enough to work with
note the following:
* after you write to the console/file it is very difficult to go back and change printed values
* you must define your desired output matrix and prepare the entire output before you print anything
the logic behind my example is this:
* create a matrix (80*22)
* fill the matrix with spaces
* fill the matrix by columns
* print the entire matrix by char
#include <stdio.h>
#include <math.h>
#include <string.h>
int main()
{
// declarations
int col, row;
char str[] = "This is a test string";
/* define the screen for print (80 width * 22 length)*/
char printout[80][22];
// setting entire matrix to spaces
for (row=0; row < 22 ; row++)
{
for(col = 0; col < 80; col++)
printout[col][row] = ' ';
}
/* fill in the columns modulo the string to allow continuous output */
for(col = 0; col < 80 ; col++)
{
printout[col][10 + (int) (10 * sin(M_PI * (float) col /10))] = str[( col % strlen(str) )];
}
/* printout the entire matrix formatted */
for (row = 0 ; row < 22 ; row++) {
for (col = 0 ; col < 80 ; col++) {
printf("%c", printout[col][row]);
}
printf("\n");
}
// exit
return 0;
}
there are many things to correct in this code - the rows should consider the size of the string, you should parse it as a string not char, etc.
but again it does give you what you want and it might help you to continue...
s t s
t t s s e t
t r s t e s t
s i e r t t s
e n t i r a t
T t g n a i
h T a g n s
i a h T s g i
s i s h i T
s s i i h s
i s i | unknown | |
d11618 | train | Here's a base R solution with rle and cumsum:
result <- rep(0,length(trig))
result[head(cumsum(rle(trig)$lengths)+c(1,0),-1)] <- 1
all.equal(result,trig_result)
#[1] TRUE
Note that this solution assumes the data begins and ends with 0.
A: Here is another base R solution, using logical vectors.
borders <- function(x, b = 1){
n <- length(x)
d1 <- c(x[1] == b, diff(x) != 0 & x[-1] == b)
d2 <- c(rev(diff(rev(x)) != 0 & rev(x[-n]) == b), x[n] == b)
d1 + d2
}
trig <- c(rep(0,20),rep(c(rep(1,10), rep(0,10)),4))
tr <- borders(trig)
The result is not identical() to the expected output because its class is different but the values are all.equal().
trig_result <- c(rep(0,20), rep(c(1, rep(0,8),1,rep(0,10)),4))
identical(trig_result, tr) # FALSE
all.equal(trig_result, tr) # TRUE
class(trig_result)
#[1] "numeric"
class(tr)
#[1] "integer"
A: One option is to create a grouping index with rle or rleid (from data.table)
library(data.table)
out <- ave(trig, rleid(trig), FUN = function(x)
x == 1 & (!duplicated(x) | !duplicated(x, fromLast = TRUE)))
identical(trig_result, out)
#[1] TRUE
A: You'd like to find the starts and ends of runs of 1s, and remove all 1s that aren't the start or end of a run.
The start of a run of ones is where the value of the current row is a 1, and the value of the previous row is a 0. You can access the value of previous row using the lag function.
The end of a run of 1s is where the current row is a 1, and the next row is a zero. You can access the value of the next row using the lead function.
library(tidyverse)
result = tibble(Trig = trig) %>%
mutate(StartOfRun = Trig == 1 & lag(Trig == 0),
EndOfRun = Trig == 1 & lead(Trig == 0),
Result = ifelse(StartOfRun | EndOfRun, 1, 0)) %>%
pull(Result) | unknown | |
d11619 | train | So part of the problem was I needed to run:
npm install -D @types/requirejs
npm install -D @types/redux
and then in my tsconfig.json, add:
"types": [
"node",
"lodash",
"react",
"react-dom",
"redux",
"react-redux",
"async",
"requirejs"
],
"typeRoots": [
"node_modules/@types"
],
but also, to address the problem of TypeScript not understanding where <script> tag dependencies come from in the front-end, it looks like we can do something like this:
https://weblog.west-wind.com/posts/2016/Sep/12/External-JavaScript-dependencies-in-Typescript-and-Angular-2
De-referencing Globals
In order to keep the Typescript compiler happy
and not end up with compilation errors, or have a boat load of type
imports you may only use once or twice, it's sometimes easier to
simply manage the external libraries yourself. Import it using a
regular script tag, or packaged as part of a separate vendor bundle
and then simply referenced in the main page.
So rather than using import to pull in the library, we can just import it using a <script> tag as in the past:
Then in any Typescript class/component where you want to use these
libraries explicitly dereference each of the library globals by
explicitly using declare and casting them to any:
declare var redux:any;
declare var socketio: any; | unknown | |
d11620 | train | I don't believe it is a bug; rather, TF gives us freedom in choosing each method. While we can mix and match the layer subclass with the Keras functional API, I guess we can't make the model subclass work with the Model API of Keras. This is where, in my opinion, the distinction between eager execution and Keras graph mode comes into conflict, giving rise to this "SymbolicException".
Making TF aware beforehand which mode it should execute solves it.
A: OK, finally got it working. The first thing I did was upgrade:
Keras: 2.2.4
TF: 1.15.0
TF Hub: 0.12.0
Next, I changed my code to use the right version of the ELMo model:
import tensorflow_hub as hub
import tensorflow as tf
elmo = hub.Module("https://tfhub.dev/google/elmo/3", trainable=False)
from tensorflow.keras.layers import Input, Lambda, Bidirectional, Dense, Dropout, Flatten, LSTM
from tensorflow.keras.models import Model
def ELMoEmbedding(input_text):
return elmo(tf.reshape(tf.cast(input_text, tf.string), [-1]), signature="default", as_dict=True)["elmo"]
def build_model():
input_layer = Input(shape=(1,), dtype="string", name="Input_layer")
embedding_layer = Lambda(ELMoEmbedding, output_shape=(1024, ), name="Elmo_Embedding")(input_layer)
BiLSTM = Bidirectional(LSTM(128, return_sequences= False, recurrent_dropout=0.2, dropout=0.2), name="BiLSTM")(embedding_layer)
Dense_layer_1 = Dense(64, activation='relu')(BiLSTM)
Dropout_layer_1 = Dropout(0.5)(Dense_layer_1)
Dense_layer_2 = Dense(32, activation='relu')(Dropout_layer_1)
Dropout_layer_2 = Dropout(0.5)(Dense_layer_2)
output_layer = Dense(3, activation='sigmoid')(Dropout_layer_2)
model = Model(inputs=[input_layer], outputs=output_layer, name="BiLSTM with ELMo Embeddings")
model.summary()
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
return model
elmo_BiDirectional_model = build_model()
import numpy as np
import io
import re
from tensorflow import keras
i = 0
max_cells = 300
x_data = np.zeros((max_cells, 1), dtype='object')
y_data = np.zeros((max_cells, 3), dtype='float32')
with tf.Session() as session:
session.run(tf.global_variables_initializer())
session.run(tf.tables_initializer())
model_elmo = elmo_BiDirectional_model.fit(x_data, y_data, epochs=100, batch_size=5)
train_prediction = elmo_BiDirectional_model.predict(x_data) | unknown | |
d11621 | train | Use DataFrame.xs to select the 'all' rows, then divide with DataFrame.div:
sl = df.groupby(['site_id', 'device']).sum()
a = sl.div(sl.xs('all', level=1))
print (a)
nb_uniq_visitors
site_id device
74.0 Camera 0.000000
Car browser 0.000000
Console 0.000053
Desktop 0.561604
Feature phone 0.000000
Phablet 0.009490
Portable media player 0.000213
Smart display 0.000000
Smartphone 0.370795
Tablet 0.054486
Tv 0.000053
Unknown 0.003305
all 1.000000
96.0 Camera 0.000000
Car browser 0.000011
Console 0.000032
Desktop 0.637412
Feature phone 0.000000
Phablet 0.005724
Portable media player 0.000394
Smart display 0.000000
Smartphone 0.287490
Tablet 0.059664
Tv 0.000011
Unknown 0.009263
all 1.000000
Detail:
print (sl.xs('all', level=1))
nb_uniq_visitors
site_id
74.0 18757.0
96.0 185370.0 | unknown | |
d11622 | train | I don't think you're passing a dict to json.dumps() at all. qr.data is clearly a string, as you .decode() it. Presumably it's a json string, so you want to do something like this:
formatted_data = json.dumps(json.loads(qr.data.decode()), indent=2)
print(formatted_data) | unknown | |
d11623 | train | You create one BehaviorSubject for all your tests, where you subscribe to it and never unsubscribe so it stays alive while all your tests are being executed.
Angular runs TestBed.resetTestingModule() on each beforeEach which basically destroys your Angular application and causes AppComponent view to be destroyed. But your subscriptions are still there.
beforeEach(() => {
behaviorSubject.next(false); // (3) will run all subscriptions from previous tests
...
});
...
// FAILING TEST!
it('should render jumbotron if the user is not logged in', () => {
appServiceStub.shouldHide().subscribe((li) => { // (1)
// will be executed
1) once you've subscribed since it's BehaviorSubject
2) when you call behaviorSubject.next in the current test
3) when you call behaviorSubject.next in beforeEach block
which causes the error since AppComponent has been already destoryed
fixture.detectChanges();
....
});
behaviorSubject.next(false); // (2)
});
To solve that problem you have to either unsubscribe in each of the tests or not use the same subject for all your tests:
let behaviorSubject;
...
beforeEach(async(() => {
behaviorSubject = new BehaviorSubject<boolean>(false)
TestBed.configureTestingModule({
...
}).compileComponents();
})); | unknown | |
d11624 | train | You are using the Html.BeginForm helper method incorrectly! You mixed route values and HTML attributes into a single object!
Your current call matches the below overload
public static MvcForm BeginForm(
this HtmlHelper htmlHelper,
string actionName,
string controllerName,
FormMethod method,
IDictionary<string, object> htmlAttributes
)
The last parameter is the htmlAttributes. So with your code, it will generate the form tag markup like this
<form action="/Expenses/Edit" actionname="someActionName" enctype="multipart/form-data"
id="22" method="post">
</form>
You can see that Id and action became two attributes of the form!
Try this overload where you can specify both route values and htmlAttributes
@using (Html.BeginForm("AddToCart", "Home", new { actionName = "Edit", id = 34 },
FormMethod.Post,new { enctype = "multipart/form-data", }))
{
<input type="submit"/>
}
which will generate the correct form action attribute value using the route values you provided. | unknown | |
d11625 | train | You are getting that error message because (apparently) the string sometimes doesn't have the word "Notifications", so theScanner2 sets its scan location to the end of the string. Then, when you try to set the scan location 13 characters ahead, it's past the end of the string, and you get an out of range error.
A: Setting scanLocation on an NSScanner to an out-of-bounds value means pointing it at or beyond the end of the string, which does not exist. | unknown | |
d11626 | train | Try casting the strings to integers in the code below
salaries = [int(salary) for emp_no, name, age, pos, salary, yrs_emp in emp_data_list]
Also, welcome to Stack Overflow! Mark this as the answer if it works for you :)
A: The apostrophes which you see in your print output show up there to indicate that the values are strings. I guess you need to convert your variables to integers before plotting:
salaries = list(map(int, salaries)) | unknown | |
d11627 | train | This got resolved by adding overflow: hidden; - thanks to Akshay!
A: Check it out here
Calculate the height of .progressbar by using the CSS calc() function, more information about this here.
height: calc(56px / 3); /* Height of wrapper divided by number of divs */ | unknown | |
d11628 | train | I wouldn't use the SpecialCells property at all. Just iterate through every row in the UsedRange and check the Hidden property as you go.
Not sure what language you are using but here's an example in VBA:
Dim rowIndex As Range
With Worksheets("Sheet1")
For Each rowIndex In .UsedRange.Rows
If (rowIndex.Hidden) Then
' do nothing - row is filtered out
Else
' do something
End If
Next rowIndex
End With
Each row (or rather each Range object referenced by rowIndex) will contain all of the columns including the hidden ones. If you need to determine whether or not a column is hidden then just check the Hidden property but remember that this only applies to entire columns or rows:
Dim rowIndex As Range
Dim colNumber As Integer
With Worksheets("Sheet1")
For Each rowIndex In .UsedRange.Rows
If (rowIndex.Hidden) Then
' do nothing - row is filtered out
Else
For colNumber = 1 To .UsedRange.Columns.Count
' Need to check the Columns property of the Worksheet
' to ensure we get the entire column
If (.Columns(colNumber).Hidden) Then
' do something with rowIndex.Cells(1, colNumber)
Else
' do something else with rowIndex.Cells(1, colNumber)
End If
Next colNumber
End If
Next rowIndex
End With
A: Thread Necro time. Here's what I do.
'Count the total number of used rows in the worksheet (Using Column A to count on)
numFilteredCells = Application.WorksheetFunction.Subtotal(3, Range("A1:A" & Cells.Find("*", SearchOrder:=xlByRows, SearchDirection:=xlPrevious).Row))
'Find Last filtered row with content
j = Range("A1").Cells(Rows.Count, 1).End(xlUp).Offset(0, 0).Row
'Subtract the total number of filtered rows with content, + 1
jTargetDataRow = j - numFilteredCells + 1
jTargetDataRow now contains the first filtered row with content, j contains the last, and numFilteredCells contains the total number of filtered rows that have content. | unknown | |
d11629 | train | string[] oldNameDistinct = oldname.Where(s => !newname.Contains(s)).ToArray();
string[] newNameDistinct = newname.Where(s => !oldname.Contains(s)).ToArray();
A: Suppose the two arrays were defined like the following:
string[] oldname = new[] { "arun", "jack", "tom" };
string[] newname = new string[] { "jack", "hardy", "arun" };
Then you can use the Extension method .Except to achieve the result that you are looking for. Consider the following code and the working example
var distinctInOld = oldname.Except(newname);
var distinctInNew = newname.Except(oldname);
A: Try this :
string[] oldname = new string[] { "arun", "jack", "tom" };
string[] newname = new string[] { "jack", "hardy", "arun" };
List<string> distinctoldname = new List<string>();
List<string> distinctnewname = new List<string>();
foreach (string txt in oldname)
{
if (Array.IndexOf(newname, txt) == -1)
distinctoldname.Add(txt);
}
foreach (string txt in newname)
{
if (Array.IndexOf(oldname, txt) == -1)
distinctnewname.Add(txt);
}
//here you can get both the arrays separately
Hope this helps :)
A: string[] oldname = new []{"arun","jack","tom"};
string[] newname = new []{"jack","hardy","arun"};
// use linq to loop through through each list and return values not included in the other list.
var distinctOldName = oldname.Where(o => newname.All(n => n != o));
var distinctNewName = newname.Where(n => oldname.All(o => o != n));
distinctOldName.Dump(); // result is tom
distinctNewName.Dump(); // result is hardy | unknown | |
d11630 | train | Absolutely. If you use a tool like kimonolabs.com this can be relatively easy. You click the data that you want on the page, so instead of getting all images including advertisements, Kimono uses the CSS selectors of the data you clicked to know which data to scrape.
You can use Kimono to scrape data within links as well. It's actually a very common use. Here's a break-down of that strategy: https://help.kimonolabs.com/hc/en-us/articles/203438300-Source-URLs-to-crawl-from-another-kimono-API
This might be a helpful solution for you, especially if you're not a programmer because it doesn't require coding experience. It's a pretty powerful tool.
A: I think if you are OK with PHP programming, then take a look at the PHP Simple HTML DOM Parser. I have used it a lot and have scraped a number of websites. | unknown | |
d11631 | train | Well, you need to create ACL Functionality.
In it, you need to create a pivot table.
id (int) 11
user_id (int) 11
controller (text)
action (text)
Database Records can be :
| 1 | 3 | users | dashboard |
| 1 | 3 | users | profile |
| 1 | 3 | users | password |
and you can make an interface to update users with their allowed controllers/actions
In this way, you can allow users to access their respective actions.
To get the permissions, you need to run the query with current user id and get controller/action from URL or getRoutes function. | unknown | |
d11632 | train | The current epoch time (AKA Unix timestamp), 1554637856, is the number of seconds since 01-01-1970, not milliseconds.
Date.now() returns the epoch time in milliseconds, so you'd want seconds:
if (endTime <= now / 1000) {
...
A: As of this writing the time in seconds since the UNIX epoch is about 1 554 637 931. So, the time in milliseconds—the JavaScript time—is about 1 554 637 931 654.
It’s been about 1.55 gigaseconds since the epoch. Your JavaScript timestamps are, in fact, milliseconds. | unknown | |
d11633 | train | Your best bet would be to do some off to the side calculations. For example, with columns t, X, and Y:
* Your first point (x, y) will be any point on the circle with radius r.
* Your next point will use the math here https://www.mathopenref.com/coordparamcircle.html based on the first point and whatever t you wish (smaller increments will produce a more accurate circle)
* Keep using that math until you hit 360 degrees (a quick sketch of this loop follows below).
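Here is a rough Python sketch of those off-to-the-side calculations (Python is my assumption; the same parametric arithmetic works in a spreadsheet), using the example values from the line below:
import math

r, h, k = 5.0, 5.0, 10.0       # radius and center (h, k)
t = 0.0
while t <= 2 * math.pi:        # 360 degrees, in radians
    x = h + r * math.cos(t)    # x = h + r*cos(t)
    y = k + r * math.sin(t)    # y = k + r*sin(t)
    print("%.2f\t%.3f\t%.3f" % (t, x, y))
    t += 0.25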
Assuming r = 5, centered at (5, 10), and using t increments of 0.25, this produces the t, X, Y columns described above. | unknown | |
d11634 | train | For a huge website like yours I would not use a free analytics product. I would use something like WebTrends or some other paid analytics. We cannot blame GA for this; after all it's a free service ;-)
GA has page view limits too. (5 Million page views)
Just curious. How long did you take to add the analytics code to your pages? ;-)
A: In Advanced Web Metrics with Google Analytics, Brian Clifton writes that above a certain number of page views, Google Analytics is no longer able to list all the separate page views and starts aggregating the low-volume ones under the "(other)" entry.
By default, Google Analytics collects pageview data for every visitor. For very high traffic sites, the amount of data can be overwhelming, leading to large parts of the "long tail" of information to be missing from your reports, simply because they are too far down in the report tables. You can diminish this issue by creating separate profiles of visitor segments, for example, /blog, /forum, /support, etc. However, another option is to sample your visitors.
A: I get about 3.5 million hits a month on one of my sites using GA. I don't see (other) listed anywhere. Specifically what report are you viewing? Is (other) the title or URL of the page?
A: You can get a loooonnnngggg way on Google Analytics. I had a site doing about 25mm uniques/mo. and it was working for us just fine. The "other" bucket fills up when you hit a certain limit of pageviews/etc. The way around this is to create different filters on the data.
A: For a huge website (millions of page views per day), you should try out SnowPlow:
https://github.com/snowplow/snowplow
This will give you granular data down to the individual page URLs (unlike Google Analytics at that volume) and, because it is based on Hadoop/Hive/Infobright, it will happily scale up to billions of page views.
A: It's more to do with a daily limit of unique values for a metric they will report on. If your site uses querystring parameters, all those unique values and parameter variations are seen as separate pages and cause the report to go over the limit of 50,000 unique values in a day for a metric. To eliminate this, you should add all the big-culprit querystring names to be ignored, making sure however not to add any search querystring names if search is on.
On the Profile Settings, add them to the Exclude URL Query Parameters textbox field, delimited by commas. Once I did this, the (other) went away from the reports. It takes effect at the point they are added; previous days will still have (other) displaying. | unknown | |
d11635 | train | Your code downloads 5000 images 50 times. Try the following:
import concurrent.futures
import urllib.request
catname = 'amateur'
def getimg(count):
localpath = '{0}/images/{0}{1}.jpg'.format(catname, count)
urllib.request.urlretrieve(URLS[count], localpath)
URLS[count] = localpath
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as e:
for i in range(5000):
e.submit(getimg, i) | unknown | |
d11636 | train | But is it OK to use the stack this way? Or is there a better way to do this?
Absolutely; BASIC does it all the time, as do many routines in the kernal.
But, there is no right answer to this, it comes down to at least speed, portability, and style.
*
* If you use the stack a lot, there are some speed considerations. Your typical pha txa pha tya pha at the start, and then the reverse (pla tay pla tax pla), eats up 3 bytes of your stack, and adds in some cycle time due to the 2 x 5 operations
* You could use zero page, but that takes away some portability between different machines; VIC-20, C64, C128, the free zero page addresses may not be the same across platforms. And your routine can't be called "more than once" without exiting first (e.g. no recursion) because if it is called while it is active, it will overwrite zero page with new values. But, you don't need to use zero page...
* ...because you can just create your own memory locations as part of your code:
; do some stuff..
rts
mymem =*
.byt 0, 0, 0
* the downside to this is that your routine can only be called "once", otherwise subsequent calls will overwrite your storage areas (e.g. no recursion allowed!!, same problem as before!)
* You could write your own mini-stack:
put_registers =*
sei ; turn off interrupts so we make this atomic
sty temp
ldy index
sta a_reg,y
stx x_reg,y
lda temp
sta y_reg,y
inc index
cli
rts
get_registers =*
sei ; turn off interrupts so we make this atomic
dec index
ldy index
lda y_reg,y
sta temp
lda a_reg,y
ldx x_reg,y
ldy temp
cli
rts
a_reg .buf 256
x_reg .buf 256
y_reg .buf 256
index .byt 0
temp .byt 0
* This has the added benefit that you now have 3 virtual stacks (one for each of .A, .X, .Y), but at a cost (not exactly a quick routine). And because we are using SEI and CLI, you may need to re-think this if doing it from an interrupt handler. But this also keeps the "true" stack clean and more than triples your available space. | unknown | |
d11637 | train | Couple of things going on here. First, the file not found is happening because it is looking for a file called "submit" since you have:
<form action=submit method="post">. You don't need this attribute, nor do you need the method="post", because you're not sending your form data anywhere.
The second thing happening is that you're not passing your Hey() function an int. Since you're not using post, the form data isn't going anywhere. When you call Hey() you need to pass in document.getElementById('Guess').value since that's where you have it OR do something like this:
// no args
function Hey() {
let Guess = document.getElementById('Guess').value;
// ... rest of the function unchanged
}
Hope that helps! | unknown | |
d11638 | train | try adding the file name in the remotepath parameter. From the API docs for put:
"remotepath (str) – the destination path on the SFTP server. Note that the filename should be included. Only specifying a directory may result in an error."
http://docs.paramiko.org/en/2.4/api/sftp.html#paramiko.sftp_client.SFTPClient
import os
import paramiko
server = "sample_server.net"
ssh = paramiko.SSHClient()
ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts")))
ssh.connect(server, username="cb", password="pass")
sftp = ssh.open_sftp()
sftp.put("test_upload.xml", "/home/sample/root/cb/test_upload.xml")
sftp.close()
ssh.close()
Doing this worked for me. | unknown | |
d11639 | train | You don't really need a function when you can use train.speed += amount. You will want to initialize the speed as 0, not an empty tuple, though
Without more clarity, I'm guessing instructions are looking for
def accelerate(self, amount):
self.speed += amount
def decelerate(self, amount):
self.accelerate(-1*amount) | unknown | |
d11640 | train | As @MichaelFehr pointed out, version2 only has the initialization vector and the encrypted bytes concatenated together before converting the bytes back to a string. I have tested that if I concatenate the string the same way as version2 in version1, the resulting strings become the same. | unknown | |
d11641 | train | You will also have to install a release agent on the target server where you will be deploying the database, assign it to a Deployment Group, create your release pipeline template and then run a release. I wrote a blog post about how to deploy a database to an on-prem SQL Server by leveraging Azure DevOps: https://jpvelasco.com/deploying-a-sql-server-database-onto-an-on-prem-server-using-azure-devops/
Hope this helps.
A: If you already created a Deployment Group, within your Pipeline:
* Click to Add a new stage: https://i.stack.imgur.com/vc5TI.png
* On the right side (SELECT A TEMPLATE screen), type SQL in the search box
* Select: IIS website and SQL database deployment; this will add a Stage with two tasks: IIS deployment and SQL DB Deploy.
* Delete the IIS Deployment task
* Configure the SQL DB Deploy task - it does not say it is deprecated. | unknown | |
d11642 | train | I believe you are not getting back the username as a string. Try using PFUser.current()!.username instead. | unknown | |
d11643 | train | The problem is all about *ngIf. The first time, it's not able to make it true; that's why I am setting it to true using setTimeout().
If you still have an issue, do let me know. I will try to help.
Working link
https://stackblitz.com/edit/deferred-expansion-panel-broken-b2vurz?file=app%2Fside-menu%2Fside-menu.component.ts
A: This is definitely a problem with the 5.x version; this should work out of the box.
Nothing worked if I had expansion panel [expanded] property set in html or initialised during constructor or component life cycle events.
Even the working example from StackBlitz does not work locally inside a dynamic component!
https://stackblitz.com/angular/bbjxooxpqmy?file=app%2Fexpansion-overview-example.html
However, if structural directives are used with async data / Observables that emit after the component is initialised and rendered, it will work...
Using intervals or timeouts works, but it's not a good solution because load and render times differ greatly on different devices such as desktop and mobile!
A: Just a note on this, that it might be because you need to wrap the expanded property in square brackets, otherwise you are passing in a string of "false", which will evaluate to true.
It should be:
<mat-expansion-panel [expanded]="false"> | unknown | |
d11644 | train | You'd want to put the optional chain's question mark after the ), just before the . for the syntax to be valid, but you also can't call Object.keys on something that isn't defined. Object.keys will return an array or throw, so the optional chain for the .map isn't needed.
Try something like
{Object.keys(component?.external_links ?? {}).map((item, index) => {
})} | unknown | |
d11645 | train | You don't need JSP or JSF; all you need is a servlet. It's an HTTP listener class. You can do REST with that.
The moment you say that, you have to deploy your servlet in a WAR on a servlet/JSP engine. Tomcat is a good choice.
Google for a servlet tutorial and you'll be on your way.
My First Tomcat Servlet
A: OK, thanks to duffymo's answer and comments I realized I was actually searching with the wrong keywords.
An embedded web server is the thing I was looking for,
like Simple or the built-in HttpServer class in Java. | unknown | |
d11646 | train | No, the Swift standard libraries do not provide a method to reverse the order of bits in an integer; see for example the discussion Bit reversal in the Swift forum.
One can use the C methods from Bit Twiddling Hacks, either by importing C code to Swift, or by translating it to Swift.
As an example, I have taken the loop-based variant
unsigned int s = sizeof(v) * CHAR_BIT; // bit size; must be power of 2
unsigned int mask = ~0;
while ((s >>= 1) > 0)
{
mask ^= (mask << s);
v = ((v >> s) & mask) | ((v << s) & ~mask);
}
because that is not restricted to a certain integer size. It can be translated to Swift as an extension to FixedWidthInteger so that it works with integers of all sizes:
extension FixedWidthInteger {
var bitSwapped: Self {
var v = self
var s = Self(v.bitWidth)
precondition(s.nonzeroBitCount == 1, "Bit width must be a power of two")
var mask = ~Self(0)
repeat {
s = s >> 1
mask ^= mask << s
v = ((v >> s) & mask) | ((v << s) & ~mask)
} while s > 1
return v
}
}
Examples:
print(String(UInt64(1).bitSwapped, radix: 16))
// 8000000000000000
print(String(UInt64(0x8070605004030201).bitSwapped, radix: 16))
// 8040c0200a060e01
print(String(UInt16(0x1234).bitSwapped, radix: 16))
// 2c48
Another option would be to use byteSwapped to reverse the order of bytes first, and then reverse the order of the bits in each byte with a (precomputed) lookup table:
fileprivate let bitReverseTable256: [UInt8] = [
0, 128, 64, 192, 32, 160, 96, 224, 16, 144, 80, 208, 48, 176, 112, 240,
8, 136, 72, 200, 40, 168, 104, 232, 24, 152, 88, 216, 56, 184, 120, 248,
4, 132, 68, 196, 36, 164, 100, 228, 20, 148, 84, 212, 52, 180, 116, 244,
12, 140, 76, 204, 44, 172, 108, 236, 28, 156, 92, 220, 60, 188, 124, 252,
2, 130, 66, 194, 34, 162, 98, 226, 18, 146, 82, 210, 50, 178, 114, 242,
10, 138, 74, 202, 42, 170, 106, 234, 26, 154, 90, 218, 58, 186, 122, 250,
6, 134, 70, 198, 38, 166, 102, 230, 22, 150, 86, 214, 54, 182, 118, 246,
14, 142, 78, 206, 46, 174, 110, 238, 30, 158, 94, 222, 62, 190, 126, 254,
1, 129, 65, 193, 33, 161, 97, 225, 17, 145, 81, 209, 49, 177, 113, 241,
9, 137, 73, 201, 41, 169, 105, 233, 25, 153, 89, 217, 57, 185, 121, 249,
5, 133, 69, 197, 37, 165, 101, 229, 21, 149, 85, 213, 53, 181, 117, 245,
13, 141, 77, 205, 45, 173, 109, 237, 29, 157, 93, 221, 61, 189, 125, 253,
3, 131, 67, 195, 35, 163, 99, 227, 19, 147, 83, 211, 51, 179, 115, 243,
11, 139, 75, 203, 43, 171, 107, 235, 27, 155, 91, 219, 59, 187, 123, 251,
7, 135, 71, 199, 39, 167, 103, 231, 23, 151, 87, 215, 55, 183, 119, 247,
15, 143, 79, 207, 47, 175, 111, 239, 31, 159, 95, 223, 63, 191, 127, 255]
extension FixedWidthInteger {
var bitSwapped: Self {
var value = self.byteSwapped
withUnsafeMutableBytes(of: &value) {
let bytes = $0.bindMemory(to: UInt8.self)
for i in 0..<bytes.count {
bytes[i] = bitReverseTable256[Int(bytes[i])]
}
}
return value
}
} | unknown | |
d11647 | train | Instead of changing the class header, replace R with Bar<R> everywhere in the class where you used it.
So the class header stays the same:
class Foo<T, R extends CustomClass>
But let's say you have a field of type R. That needs to be changed to Bar<R>:
Bar<R> someField;
A: It looks like what you need may be:
class Foo<T, R extends CustomClass, S extends Bar<R>>
A: You would use
class Foo<T extends CustomClass, R extends Bar<T>> {}
If Bar is fixed, i.e., if the user does not need another class extending Bar, then you need to get rid of R and just use Bar<T> inside Foo | unknown | |
d11648 | train | In spec/spec_helper.rb, try adding
FactoryGirl.find_definitions
under
require 'factory_girl_rails'
or make sure you follow factory_bot's Getting Started guide.
A: This must be your answer.
The required addition should be made in spec/support/factory_girl.rb
https://stackoverflow.com/a/25649064/1503970
A: I'm just putting this here for anyone clumsy like me. Everything was fine, but I was calling FactoryGirl in one place, when I only have FactoryBot installed. Newbie error. Hope it might help. | unknown | |
d11649 | train | Use ADO service hooks: https://learn.microsoft.com/en-us/azure/devops/extend/develop/add-service-hook?view=azure-devops
Search through the available list and you will see possible actions to react to. API hooks allow you to receive data based on Boards changes (e.g. task status changes, etc.). | unknown | |
d11650 | train | Have a look at custom scalars: https://www.apollographql.com/docs/graphql-tools/scalars.html
create a new scalar in your schema:
scalar Date
type MyType {
created: Date
}
and create a new resolver:
import { GraphQLScalarType } from 'graphql';
import { Kind } from 'graphql/language';
const resolverMap = {
Date: new GraphQLScalarType({
name: 'Date',
description: 'Date custom scalar type',
parseValue(value) {
return new Date(value); // value from the client
},
serialize(value) {
return value.getTime(); // value sent to the client
},
parseLiteral(ast) {
if (ast.kind === Kind.INT) {
return parseInt(ast.value, 10); // ast value is always in string format
}
return null;
},
})
};
A: Primitive scalar types in GraphQL are Int, Float, String, Boolean and ID. For JSON and Date you need to define your own custom scalar types, the documentation is pretty clear on how to do this.
In your schema you have to add:
scalar Date
type MyType {
created: Date
}
Then, in your code you have to add the type implementation:
import { GraphQLScalarType } from 'graphql';
const dateScalar = new GraphQLScalarType({
name: 'Date',
parseValue(value) {
return new Date(value);
},
serialize(value) {
return value.toISOString();
},
})
Finally, you have to include this custom scalar type in your resolvers:
const server = new ApolloServer({
typeDefs,
resolvers: {
Date: dateScalar,
// Remaining resolvers..
},
});
This Date implementation will parse any string accepted by the Date constructor, and will return the date as a string in ISO format.
For JSON you might use graphql-type-json and import it as shown here. | unknown | |
d11651 | train | Your id shouldn't have the #, that's for the selector, it should just be id="radio10".
Change that, and this is what you should be after:
$(".class_a :radio").change(function () {
$(".block-cms").toggle($("#radio10:checked").length > 0);
});
You can test it out here.
A: First of all the id on the element should be radio10 and not #radio10.
Then use this code
$("input[name='color']").change(function () {
if ($('input#radio10').is(':checked') ) {
$('.block-cms').show()
}
else {
$('.block-cms').hide();
}
});
A: Here's another solution (IMO having an id on an <input type="radio"> seems a bit wrong to me):
$("input[name='color']").change(function () {
if ($(this).val() == 1) {
$('.block-cms').show()
}
else {
$('.block-cms').hide();
}
}); | unknown | |
d11652 | train | Your Trigger does not work because the default Template of the button has its own trigger that changes the background brush of the root border when the IsMouseOver Property is set. This means: As long as the mouse is on top of the button, the Background-property of the button control will be ignored by its template.
The easiest way to check out the default style of your button is to: right-click at the button in the visual studio wpf designer. Click 'Edit template' -> 'Edit a copy' and select a location.
A copy of the default style is created at the specified location. You will see a trigger like this:
<Trigger Property="IsMouseOver" Value="true">
<Setter Property="Background" TargetName="border" Value="{StaticResource Button.MouseOver.Background}"/>
<Setter Property="BorderBrush" TargetName="border" Value="{StaticResource Button.MouseOver.Border}"/>
</Trigger>
On top of designer created style definition it also created the brushes that are used within the style:
<SolidColorBrush x:Key="Button.MouseOver.Background" Color="#FFBEE6FD"/>
<SolidColorBrush x:Key="Button.MouseOver.Border" Color="#FF3C7FB1"/>
You can either change these brushes or the Value property of the setters above to change the background when the mouse is over.
Or you create your own Template like you did in your old implementation. | unknown | |
d11653 | train | Change the following lines of code
project(Projection.projection("count",
Projection.expression("$size","colors"))
to
Projection.expression("count",new BasicDBObject("$size","$colors")))
A: Did you try
Projection.expression("$size","$colors")));
With a dollar sign before colors? | unknown |
d11654 | train | Override Field.setEditable(boolean editable) to track your own custom editable boolean:
private boolean customEditable = true;
public void setEditable(boolean editable) {
super.setEditable(editable);
customEditable = editable;
// invalidate(); forces paint(Graphics graphics) to be called
}
Override navigationClick(int status, int time) to use that boolean to detect whether to react on click events:
protected boolean navigationClick(int status, int time) {
if (customEditable) fieldChangeNotify(1);
return true;
}
If you need a custom visual appearance for disabled state, then also override paint(Graphics graphics) to use another color. In this case you'll also need to call invalidate() from the setEditable(). | unknown | |
d11655 | train | #include <CoreAudio/CoreAudio.h>
#include <stdbool.h>
#include <stdio.h>
bool usingInternalSpeakers()
{
AudioDeviceID defaultDevice = 0;
UInt32 defaultSize = sizeof(AudioDeviceID);
const AudioObjectPropertyAddress defaultAddr = {
kAudioHardwarePropertyDefaultOutputDevice,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMaster
};
AudioObjectGetPropertyData(kAudioObjectSystemObject, &defaultAddr, 0, NULL, &defaultSize, &defaultDevice);
AudioObjectPropertyAddress property;
property.mSelector = kAudioDevicePropertyDataSource;
property.mScope = kAudioDevicePropertyScopeOutput;
property.mElement = kAudioObjectPropertyElementMaster;
UInt32 data;
UInt32 size = sizeof(UInt32);
AudioObjectGetPropertyData(defaultDevice, &property, 0, NULL, &size, &data);
return data == 'ispk';
}
int main(int argc, const char * argv[])
{
if (usingInternalSpeakers())
printf("I'm using the speakers!");
else
printf("i'm using the headphones!");
return 0;
} | unknown | |
d11656 | train | 0xF7 encodes ÷ in Windows-1252. Are you just passing the data directly to the database?
You should use an email library that reads the email headers correctly, which state the character encoding that is being used in the email. The library would then ideally convert from that encoding to UTF-8 before handing it to you.
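If you already know the charset from the message headers, the conversion itself is a one-liner (a sketch; the variable names are hypothetical):
$utf8Body = mb_convert_encoding($rawBody, 'UTF-8', $charsetFromHeader);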
mb_detect_encoding is virtually useless because it just has access to the bytes and doesn't apply any heuristics either. It is especially useless if it gives UTF-8 for a string that has 0xF7, which cannot appear in UTF-8 | unknown | |
d11657 | train | Your problem appears to be occurring before the data gets to Turf. Running the GeoJSON from your GitHub issue through a GeoJSON validator reveals two errors. The first is that you only include a geometry object for each feature, and GeoJSON requires that all features also have a properties object, even if it's empty. Second, and more importantly, a valid GeoJSON polygon must be a closed loop, with identical coordinates for the first and last points. This second problem appears to be what's causing Turf to throw its error. The polygons will successfully merge once the first set of coordinates is copied to the end to close the ring.
After displaying the data on a map, it also becomes clear that your latitude and longitude are reversed. Coordinates are supposed to be lon,lat in GeoJSON, and because yours are in lat,lon, the polygons show up in the middle of the Indian Ocean. Once that is corrected, they show up in the correct place.
Here is a fiddle showing their successful merging:
http://fiddle.jshell.net/nathansnider/p7kfxvk7/ | unknown | |
d11658 | train | SELECT [Room Name], [Animal], COUNT(*) FROM TableName GROUP BY [Room Name], [Animal]
This would return
Room 1 | Cat | 1
Room 1 | Dog | 2
Room 2 | Cat | 2
Room 2 | Dog | 2
A: select room_name, animal, count(*)
from table
group by room_name, animal | unknown | |
d11659 | train | In an Azure Repos Git repository, the YAML pr trigger syntax is not supported.
The only way you can trigger the pipeline via PR in DevOps is through branch settings.
1, Go to branch settings.
2, Add a build validation policy for all of the branches.
https://learn.microsoft.com/en-us/azure/devops/pipelines/repos/azure-repos-git?view=azure-devops&tabs=yaml#pr-triggers
A: You can do that with pr triggers. You can choose which branches you want to trigger a build. In the example below, when you create a pull request for the current branch the pipeline will run. You can also use wildcard syntax like feature/*
trigger:
pr:
  branches:
    include:
    - current
    - xyz
    exclude:
    - uat
https://blog.geralexgr.com/devops/build-triggers-on-azure-devops-pipelines | unknown | |
d11660 | train | Its bcoz of the StatusBar. It reserve 20px of screen . You can remove this space by do change in Status bar is initially hidden in plist and set status bar to NONE in IB.
A: The iPhone 5 has a taller screen. The most flexible way to lay out your xib is via AutoLayout. Here is a tutorial to get you started:
http://www.raywenderlich.com/20881/beginning-auto-layout-part-1-of-2
http://www.raywenderlich.com/20897/beginning-auto-layout-part-2-of-2
Basically, you want the ad banner to be contrained to the bottom of the view. | unknown | |
d11661 | train | Make always uses /bin/sh as the shell it invokes, both for recipes and for $(shell ...) functions. /bin/sh is a POSIX-conforming shell. The syntax you're using is not POSIX shell syntax: it's special enhanced syntax that is only available in the bash shell.
You can either rewrite your scripting to work in POSIX shells (probably by using the expr program to do the math), or add this to your makefile to tell it you want to use bash instead of /bin/sh:
SHELL := /bin/bash
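If you go the POSIX route instead, the arithmetic can usually be moved to expr; a minimal sketch (the values and variable are hypothetical):
COUNT := $(shell expr 2 + 3)
or, inside a recipe, i=`expr $$i + 1`.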
Note, of course, that with SHELL := /bin/bash your makefile will no longer work on any system that doesn't have a /bin/bash shell. | unknown |
d11662 | train | Does your fragment have setRetainInstance(true)? If so, that may be causing you an issue here, especially if you are using the fragment as part of a FragmentStatePagerAdapter.
A: This can happen with a combination of dismissAllowingStateLoss after onSaveInstanceState and retainInstanceState.
See this helpful example with steps to reproduce (that site does not allow commenting, but it helped me diagnose the issue)
Steps to reproduce:
*
*Open page and show dialog fragment with retainInstance = true
*Background app, onSaveInstanceState is called
*dismiss dialog in an async task via dismissAllowingStateLoss
*perform configuration change, for example by changing language or orientation
*open app
*crash "Unable to start activity... java.lang.IllegalStateException: Could not find active fragment with index -1"
Under the scenes what's going on is that FragmentManagerImpl.restoreAllState now has an active fragment with an index of -1 because dismissAllowingStateLoss removes the fragment from the backstack, BUT, it is still part of nonConfigFragments because the commit part of dismissAllowingStateLoss was ignored as it was called after onSaveInstanceState.
To fix this will require one of:
*
*not using retainInstanceState on Dialogs that can be dismissed via dismissAllowStateLoss, or
*not calling dismiss after state loss
and implementing the desired behavior in a different way. | unknown | |
d11663 | train | It is only possible to edit the codegen to change this.
But you can just use the body of the return value
<restMethod>(<paramters>).then(respose: <request.Response>) {
let responseObject: Array<ListMovies> = response.body as Array<ListMovies>;
...
}
If you want to adapt the codegen, pull it from git and change the following files:
lib/codegen.js:
var getViewForSwagger2 = function(opts, type){
var swagger = opts.swagger;
var methods = [];
var authorizedMethods = ['GET', 'POST', 'PUT', 'DELETE', 'PATCH', 'COPY', 'HEAD', 'OPTIONS', 'LINK', 'UNLIK', 'PURGE', 'LOCK', 'UNLOCK', 'PROPFIND'];
var data = {
isNode: type === 'node' || type === 'react',
isES6: opts.isES6 || type === 'react',
description: swagger.info.description,
isSecure: swagger.securityDefinitions !== undefined,
moduleName: opts.moduleName,
className: opts.className,
imports: opts.imports,
domain: (swagger.schemes && swagger.schemes.length > 0 && swagger.host && swagger.basePath) ? swagger.schemes[0] + '://' + swagger.host + swagger.basePath.replace(/\/+$/g,'') : '',
methods: [],
definitions: []
};
_.forEach(swagger.definitions, function(definition, name){
data.definitions.push({
name: name,
description: definition.description,
tsType: ts.convertType(definition, swagger)
});
});
the last _.forEach is moved from the bottom of the method to here.
var method = {
path: path,
className: opts.className,
methodName: methodName,
method: M,
isGET: M === 'GET',
isPOST: M === 'POST',
summary: op.description || op.summary,
externalDocs: op.externalDocs,
isSecure: swagger.security !== undefined || op.security !== undefined,
isSecureToken: secureTypes.indexOf('oauth2') !== -1,
isSecureApiKey: secureTypes.indexOf('apiKey') !== -1,
isSecureBasic: secureTypes.indexOf('basic') !== -1,
parameters: [],
responseSchema: {},
headers: []
};
if (op.responses && op.responses["200"]) {
method.responseSchema = ts.convertType(op.responses["200"], swagger);
} else {
method.responseSchema = {
"tsType": "any"
}
}
This starts at line 102: add responseSchema to the method object and append the if/else.
templates/typescript-class.mustache (line 68)
resolve(response.body);
templates/typescript-method.mustache (line 69)
}): Promise<{{#responseSchema.isRef}}{{responseSchema.target}}{{/responseSchema.isRef}}{{^responseSchema.isRef}}{{responseSchema.tsType}}{{/responseSchema.isRef}}{{#responseSchema.isArray}}<{{responseSchema.elementType.target}}>{{/responseSchema.isArray}}> {
That only works for simple types, Object types and arrays of simple / object types. I did not implement the enum types.
A: That seems like the expected behaviour.
The TypeScript Mustache template outputs a fixed value:
...
}): Promise<request.Response> {
... | unknown | |
d11664 | train | Why java ThreadPoolExecutor kill thread when RuntimeException occurs?
I can only guess that the reason why ThreadPoolExecutor.execute(...) has the thread call runnable.run() directly and not wrap it in a FutureTask is so you would not incur the overhead of the FutureTask if you didn't care about the result.
If your thread throws a RuntimeException, which is hopefully a rare thing, and there is no mechanism to return the exception to the caller then why pay for the wrapping class? So worst case, the thread is killed and will be reaped and restarted by the thread-pool.
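A minimal sketch of that difference (the task is hypothetical, not from the original question; assumes the usual java.util.concurrent imports):
ExecutorService pool = Executors.newFixedThreadPool(1);
pool.execute(() -> { throw new RuntimeException("boom"); }); // no wrapper: the worker thread dies, the exception goes to the uncaught-exception handler
Future<?> f = pool.submit((Runnable) () -> { throw new RuntimeException("boom"); }); // wrapped in a FutureTask
try {
    f.get(); // the exception resurfaces here as an ExecutionException
} catch (InterruptedException | ExecutionException e) {
    System.out.println("caught: " + e.getCause());
}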
A: There is no way to handle exception properly. Exception can't be propagated to caller thread and can't be simply swallowed.
Unhandled exception is thrown in thread is delegated to ThreadGroup.uncaughtException method, which prints output to System.err.print, until desired behavior is overridden for ThreadGroup.
So this is expected behavior, it can be compared with throwing unhanded exception in main method. In this case, JVM terminates execution and prints exception to the output.
But I'm not sure, why ThreadPoolExecutor does not handle it itself, ThreadPoolExecutor can log it itself. Creating new Thread is not so cheap.
Maybe there is an assumption, that some resources (native, threadLocal, threadStack, etc) associated with Thread should be released. | unknown | |
d11665 | train | In the method hello(View view) you don't need this line:
TextView textView = (TextView)findViewById(R.id.tx_id);
because the view argument of hello(View view) is our TextView. Just cast it to TextView and get the text from it:
String id = ((TextView)view).getText().toString();
Another and most universal approach: to change
TextView textView = (TextView)findViewById(R.id.tx_id);
to
TextView textView = (TextView)((ViewGroup)(view.getParent())).findViewById(R.id.tx_id);
In this way you can use any R.id for current list item. | unknown | |
d11666 | train | Creating the desired result with merging dataframes can be a complicated process.
The above merging logic will not be able to satisfy all types of graphs. Have a look at the method below.
# Create graph
graph = {}
for pair in pairs:
    if pair['source'] in graph.keys():
        graph[pair['source']].append(pair['target'])
    else:
        graph[pair['source']] = [pair['target']]
# Graph
print(graph)
{
'A1': ['B1', 'D1'],
'B1': ['C1'],
'C2': ['A2'],
'A2': ['B2']
}
# Generating list of nodes
start = 'A1' # Starting node parameter
result = [start]
for each in result:
    if each in graph.keys():
        result.extend(graph[each])
result = list(set(result))
# Output
print(result)
['A1', 'B1', 'C1', 'D1'] | unknown | |
d11667 | train | Theres a good Q&A about this on the MSDN forums. Most interesting bit:
InsertAllOnSubmit() simply loops over
all the elements in the IEnumerable
collection and calls InsertOnSubmit()
for each element.
A: InsertOnSubmit adds a single record. InsertAllOnSubmit does the same, but for a set (IEnumerable<T>) of records. That's about it.
A: I found this example of InsertAllOnSubmit() at the very bottom of this page. Just remember to add a using statement for System.Collections.Generic
// Create list with new employees
List<Employee> employeesToAdd = new List<Employee>();
employeesToAdd.Add(new Employee() { EmployeeID = 1000, FirstName = "Jan", LastName = "Jansen", Country = "BE" });
employeesToAdd.Add(new Employee() { EmployeeID = 1001, FirstName = "Piet", LastName = "Pieters", Country = "BE" });
employeesToAdd.Add(new Employee() { EmployeeID = 1002, FirstName = "John", LastName = "Johnson", Country = "BE" });
// Add all employees to the Employees entityset
dc.Employees.InsertAllOnSubmit(employeesToAdd);
// Apply changes to database
dc.SubmitChanges(); | unknown | |
d11668 | train | You can set dgrid3d to fill in missing values:
set dgrid3d
splot 'input.txt' with pm3d | unknown | |
d11669 | train | Ok found the pages. Master pages for both the site pages and application pages are listed on the _catalogs/masterpage/Forms/AllItems.aspx page on the site.
One master page can be found in the Sharepoint designer (after connecting to the site) and the other one is located in the C:\Program Files\Common Files\microsoft shared\Web Server Extensions\12\TEMPLATE\LAYOUTS\ folder. | unknown | |
d11670 | train | pthread_create is not a template, and it does not understand C++ types. It takes a void*, which is what C libraries do in order to fake templates (kind of).
You can pass a casted pointer instead of a C++ reference wrapper object:
int rc = pthread_create(&threads, NULL, myfunction, static_cast<void*>(&myMap));
// ...
void* myfunction(void* arg)
{
using T = std::map<std::pair<std::string, std::string>, std::vector<std::string>>;
T& myMap = *static_cast<T*>(arg);
…or, better yet, use boost::thread (C++98) or std::thread (C++11 and later) to get type safety and a longer lifespan. You're not writing a C program. | unknown | |
d11671 | train | Check out the distribution section of the Expo documentation: https://docs.expo.io/distribution/introduction/ | unknown | |
d11672 | train | You can simply add the class open on hover and remove it on mouse leave.
See the example below:
$(document).ready(function() {
$('.navbar .dropdown').hover(function() {
$(this).addClass('open');
},
function() {
$(this).removeClass('open');
});
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://getbootstrap.com/dist/js/bootstrap.min.js"></script>
<link href="https://getbootstrap.com/dist/css/bootstrap.min.css" rel="stylesheet"/>
<nav class="navbar navbar-default">
<div class="container-fluid">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="#">Brand</a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav">
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Dropdown <span class="caret"></span></a>
<ul class="dropdown-menu">
<li><a href="#">Action</a></li>
<li><a href="#">Another action</a></li>
<li><a href="#">Something else here</a></li>
<li role="separator" class="divider"></li>
<li><a href="#">Separated link</a></li>
<li role="separator" class="divider"></li>
<li><a href="#">One more separated link</a></li>
</ul>
</li>
</ul>
</div>
<!-- /.navbar-collapse -->
</div>
<!-- /.container-fluid -->
</nav> | unknown | |
d11673 | train | If you are doing
MongoClient client = new MongoClient(
"mongodb://localhost:27017/databaseName?maxPoolSize=200");
then don't do that; instead do the following:
MongoClient client = new MongoClient(
new MongoClientURI(
"mongodb://localhost:27017/databaseName?maxPoolSize=200"));
because you need to tell mongo that you are passing some options along the connection string.
If you think I misunderstood your question, please post the piece of code where you are trying to get a connection.
A: You can try something like this.
MongoClientURI uri = new MongoClientURI("mongodb://localhost:27017/databaseName?maxPoolSize=200");
MongoClient mongoClient = new MongoClient(uri);
Morphia morphia = new Morphia();
Datastore datastore = morphia.createDatastore(mongoClient, "dbname");
Alternatively
MongoClientOptions.Builder options = new MongoClientOptions.Builder();
//set your connection option here.
options.connectionsPerHost(200); //max pool size
MongoClient mongoClient = new MongoClient(new ServerAddress("localhost", 27017), options.build());
Morphia morphia = new Morphia();
Datastore datastore = morphia.createDatastore(mongoClient, "dbname"); | unknown | |
d11674 | train | Though I must admit that it seems strange to me to create tables with data-dependent names, technically the solution is to put the table name in square brackets. This is the way to escape special characters like '@'.
Something like this:
Sql = "CREATE TABLE IF NOT EXISTS [℅s] (℅s text, ℅s text)" ℅ (Username, "first", " Second ") | unknown | |
d11675 | train | As seen here, your error should be like:
(missing) code.tar.gz (f2b4bf22bcb011fef16f80532247665d15edbb9051***)
Uploading LFS objects: 0% (0/1), 0 B | 0 B/s, done.
hint: Your push was rejected due to missing or corrupt local objects.
hint: You can disable this check with: 'git config lfs.allowincompletepush true'
error: failed to push some refs to '[email protected]:group/project.git'
Following this issue, start with
git lfs fetch --all
See if the error persists then.
If it does, try it from a fresh cloned repo (git lfs clone my-repo)
From the discussion:
*
*git lfs fetch --all could not work, since what is missing was never pushed in the first place
*a fresh git lfs clone, followed by re-adding the local work from the old repository, got everything added, committed and pushed successfully. | unknown |
d11676 | train | Your question is: "Can I use the Spotify iOS SDK in India using a US based premium account without any proxy network?". Based on the fact that you're trying to create a streaming app, I'd think that the question you intended to ask is: "Is it possible to enable users to stream Spotify music in India?"
My answer:
I've just read (parts) of the Developer terms of the Spotify SDK. One thing I noticed was:
Streaming via the Spotify Platform shall only be made available to
subscribers to the Premium Spotify Service.
So if I'm right, it is only possible to stream Spotify's music if your users have a Premium Spotify account, which currently isn't available in India, all by there self. It is however possible to use the iOS SDK in India, but this will exclude the possibility to stream (preview-)audio for non-premium users.
A: As far as I know, it's the account's country and not the location of the user what is used to determine if a user can access Spotify. In other words, with a US Premium account you can use Spotify wherever you are in the world as long as you keep paying for the subscription. Read more on this thread.
You should be able to use your account while developing your app. Of course, a user trying to create a Spotify account from India will see a "Spotify is not available in your country" message when trying to sign up, but users accessing from countries where Spotify is available won't have any problem in signing up and logging in to your app.
A: Three simple steps:
*
*Download any VPN app from App Store i.e., VPN 24.
*Go to App store account, change your region and country to USA by filling any sample address.
*Now search Spotify Music on App Store and Download it!
You'll have to keep VPN turned on while using Spotify otherwise it won't work. | unknown | |
d11677 | train | I will suggest using your second solution but passing the value of i and creating another function which encloses that variable. By this I mean the following:
(function(){
var index = i;
http.get(process.argv[i], function (response) {
response.setEncoding('utf8');
response.on('data', handleGetFrom(index));
response.on('error', handleError);
response.on('end', handleEnd);
})
}());
and the handleGetFrom:
var handleGetFrom = function(i) {
return function(data) {
results[i-2] += data;
}
}
Edited my original answer.
A: You can use the 'this' object:
var http = require('http');
http.get('http://www.google.com', function (response) {
response.setEncoding('utf8');
response.on('data', function(chunk) {
console.log(this); // Holds the respons object
console.log(this.headers.location); // Holds the request url
});
}); | unknown | |
d11678 | train | I finally figured it out. Hours of trial and error. Here is the code that did it:
private void startConversionPDF(File file) throws IOException {
if (args == null) {
throw new IllegalStateException("No conversion arguments set.");
}
PDFConvert data = new PDFConvert();
data.setInput("upload");
data.setOutputformat("pdf");
ConverterOptions converteroptions = new ConverterOptions();
converteroptions.setMargin_top(60);
converteroptions.setMargin_bottom(60);
converteroptions.setMargin_left(30);
converteroptions.setMargin_right(30);
data.setConverteroptions(converteroptions);
MultiPart multipart = new FormDataMultiPart()
.bodyPart(new FormDataBodyPart("json", data, MediaType.APPLICATION_JSON_TYPE))
.bodyPart(new FileDataBodyPart("file", file));
root.request(MediaType.APPLICATION_JSON).post(Entity.entity(multipart, multipart.getMediaType()));
} | unknown | |
d11679 | train | did you try this:
.success(function(data, status, headers, config) {
if(status === 200) {
var return_data = data;
if(return_data==="1"){
location.href = "home.html?username=" + fn;
}
else{
ons.notification.alert({message: 'Login Failed!'});
}
}
});
A: $scope.ajaxLogin = function(){
var fn = document.getElementById("username").value;
var pw = document.getElementById("password").value;
$http({
url: "myurl",
method: "POST",
headers: {'Content-Type': 'application/x-www-form-urlencoded'},
data: { username: fn, password: pw }
}).success(function(data, status, headers, config) {
//Your server output is received here.
if(data){console.log('username and password matches');}
else {console.log('username and password does not match');}
$scope.showAlertError();
}).
error(function(data, status, headers, config) {
// called asynchronously if an error occurs
// or server returns response with an error status.
$scope.showAlertError();
});
}; | unknown | |
d11680 | train | Before changing activated_at (datetime) in the AddActivationToUsers migration file, you must roll back AddActivationToUsers in the db.
*
*rails db:rollback STEP=n (where n is the number of recent migrations you want to roll back)
*Change activated_at :datetime and save
*rails db:migrate | unknown |
d11681 | train | There's a good deal of customization available to control what app names traffic is reported to and whether particular transactions are reported to New Relic. But if you're currently seeing three different app names appearing under the 'Applications' menu, the easiest thing to do is just click the gear icon and select 'hide app'
Beyond that, an API call can cause any selected transactions to be ignored by the agent. https://newrelic.com/docs/java/java-agent-api
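For example, a minimal sketch using that agent API (see the linked docs for the exact setup):
import com.newrelic.api.agent.NewRelic;
// inside code running in the transaction you don't want reported:
NewRelic.ignoreTransaction();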
And if that doesn't have it resolved, you'll need to open a ticket at support.newrelic.com so we can take a look at your dashboard/setup. | unknown | |
d11682 | train | As far as I can tell, you cannot do this in .NET 4.0. The only way to create a method body without using ILGenerator is by using MethodBuilder.CreateMethodBody, but that does not allow you to set exception handling info. And ILGenerator forces the leave instruction you're asking about.
However, if .NET 4.5 is an option for you (it seems to be), take a look at MethodBuilder.SetMethodBody. This allows you to create the IL yourself, but still pass through exception handling information. You can wrap this in a custom ILGenerator-like class of your own, with Emit methods taking an OpCode argument, and reading OpCode.Size and OpCode.Value to get the corresponding bytes.
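A hedged sketch of what that call looks like (methodBuilder and the handler collection are hypothetical; the IL bytes encode ldc.i4.s 42; ret):
byte[] il = { 0x1F, 0x2A, 0x2A };
// arguments: il, maxStack, localSignature, exceptionHandlers, tokenFixups
methodBuilder.SetMethodBody(il, 8, null, exceptionHandlers, null);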
And of course there's always Mono.Cecil, but that probably requires more extensive changes to code you've already written.
Edit: you appear to have already figured this out yourself, but you left this question open. You can post answers to your own questions and accept them, if you've figured it out on your own. This would have let me know I shouldn't have wasted time searching, and which would have let other people with the same question know what to do. | unknown | |
d11683 | train | The best way would be to open the "template dashboard", add the new data source, then go to menu DATA => Replace data source and change the old data source to the new data source.
At this point close the old data source.
The fastest way, but this might not work, is to open the .twb file of the dashboard with Notepad and replace the old database name with the new one...of course something could be broke.
If you have a .twbx file you can open it with WinZip, WinRar, ... and find the .twb file inside.
A: Possibly you could try 2 Options:
*
*Take a copy of your template. In the Data Source tab, go to Connections -> Edit Connection to point it to the new data source.
*In the template, create a new data source. Replace the Old data source with the new one by Data -> Replace Data Source.
For the formulas and filters to reflect on the new data source, make sure the field names and data types are the same as the old data source.
Kindly note: Editing the XML is unsupported by Tableau! Create a copy of your original workbook before attempting any XML editing, as this could potentially corrupt your workbook. | unknown | |
d11684 | train | no need of events you can simply call the function from other function
var sampleView = Backbone.View.extend({
initialize: function () {
this.ResetQuestions();
},
Show: function () {
alert('i am at show');
},
ResetQuestions: function () {
// Execute all your code; at the end, call the Show function as below.
this.Show();
}
});
var view = new sampleView();
A: var sampleView = Backbone.View.extend({
initialize: function(){
    var self = this; // `this` inside the callback below is not the view
    this.ResetQuestions().promise().done(function() { // assumes ResetQuestions returns a jQuery Deferred
        self.Show();
    });
},
Show: function(){
},
ResetQuestions: function(){
// Execute all your code here and return a jQuery Deferred.
}
});
Then initiate your view,
var view = new sampleView();
Hope this works!!
A: Perhaps you just got confused about what runs what by naming the event and the method the same, Show. I have created a jsfiddle with your code - http://jsfiddle.net/yuraji/aqymbeyy/ - you call the ResetQuestions method, it triggers the Show event, and the Show event runs the Show method.
EDIT: I have updated the fiddle to demonstrate that you probably have to bind the methods to the instance, I used _.bindAll for that. If you don't do that you may get event as the context (this).
EDIT: Then, if your ResetQuestions runs asynchronous code, like an ajax request to get new questions, you will have to make sure that your Show event is triggered when the request is completed. | unknown | |
d11685 | train | In Windows 7 .NET framework 3.5 is part of the operating system so all machines should have it.
In Windows 8 or windows 8.1 .NET framework 3.5 is NOT automatically installed (though all machines that are upgraded from win 7 -> win 8 should have it).
To run apps that require the .NET Framework 3.5 on Windows 8 or later, you must enable version 3.5 on your computer. There are two ways you can do this: by installing or running an app that requires the .NET Framework 3.5 (that is, by installing the .NET Framework 3.5 on demand), or by enabling the .NET Framework 3.5 in Control Panel. Both options require an Internet connection.
If an app requires the .NET Framework 3.5, but doesn't find that version enabled on your computer, it displays a message box, either during installation, or when you run the app for the first time. In the message box, choose Install this feature to enable the .NET Framework 3.5.
The above require an internet connection. If this is not possible you will have to include the .exe files of .NET 3.5 in your distribution
however as MSDN states:
The .NET Framework 4.5 and its point releases are backward-compatible
with apps that were built with earlier versions of the .NET Framework.
In other words, apps and components built with previous versions will
work without modification on the .NET Framework 4.5. However, by
default, apps run on the version of the common language runtime for
which they were developed, so you may have to provide a configuration
file to enable your app to run on the .NET Framework 4.5
So build your project for 3.5 and just deploy it to Windows 8 machines. It should run, but it's not the "best" environment for the app. The "best" would be to have .NET 3.5 installed. | unknown |
d11686 | train | You should write width: 60 instead of width: '60px'
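That is, in a column definition like this (a sketch; the field name is hypothetical):
$scope.gridOptions = { columnDefs: [ { field: 'name', width: 60 } ] };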
You can check this in the documentation; hope it helps.
https://github.com/angular-ui/ui-grid/wiki/Defining-columns | unknown | |
d11687 | train | The way I fixed this (similar to Artur Kędzior):
Use version 1.14.5 of GStreamer https://gstreamer.freedesktop.org/pkg/windows/1.14.5/gstreamer-1.0-x86_64-1.14.5.msi - complete setup
Use version 1.13 of Microsoft.CognitiveServices.Speech (NuGet package)
Go to environment variables on your PC and append C:\gstreamer\1.0\x86_64\bin to the user variable called Path
Then add a system variable called "GSTREAMER_ROOT_X86_64" (without the quotes) with the value "C:\gstreamer\1.0\x86_64"
You may need to reboot if you are still having issues, but this is now working for me.
A: Got help with it here https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/764
Basically:
*
*Do not use the latest version of Gstreamer
*Use this one https://gstreamer.freedesktop.org/pkg/windows/1.14.5/gstreamer-1.0-x86_64-1.14.5.msi
*Set PATH to bin folder (C:\gstreamer\1.0\x86_64\bin)
*Set GSTREAMER_ROOT_X86_64 variable (C:\gstreamer\1.0\x86_64)
*Reboot the machine
*Set Visual Studio build configuration to x64 | unknown | |
d11688 | train | How about:
from itertools import product
def filler(word, from_char, to_char):
    options = [(c,) if c != from_char else (from_char, to_char) for c in word]
    return (''.join(o) for o in product(*options))
which gives
>>> filler("1xxx1", "x", "5")
<generator object <genexpr> at 0x8fa798c>
>>> list(filler("1xxx1", "x", "5"))
['1xxx1', '1xx51', '1x5x1', '1x551', '15xx1', '15x51', '155x1', '15551']
(Note that you seem to be missing 15x51.)
Basically, first we make a list of every possible target for each letter in the source word:
>>> word = '1xxx1'
>>> from_char = 'x'
>>> to_char = '5'
>>> [(c,) if c != from_char else (from_char, to_char) for c in word]
[('1',), ('x', '5'), ('x', '5'), ('x', '5'), ('1',)]
And then we use itertools.product to get the Cartesian product of these possibilities and join the results together.
For bonus points, modify to accept a dictionary of replacements. :^)
A: Generate the candidate values for each possible position - even if there is only one candidate for most positions - then create a Cartesian product of those values.
In the OP's example, the candidates are ['x', '5'] for any position where an 'x' appears in the input; for each other position, the candidates are a list with a single possibility (the original letter). Thus:
def candidates(letter):
    return ['x', '5'] if letter == 'x' else [letter]
Then we can produce the patterns by producing a list of candidates for positions, using itertools.product, and combining them:
from itertools import product
def combine(candidate_list):
return ''.join(candidate_list)
def patterns(data):
all_candidates = [candidates(element) for element in data]
for result in product(*all_candidates):
yield combine(result)
Let's test it:
>>> list(patterns('1xxx1'))
['1xxx1', '1xx51', '1x5x1', '1x551', '15xx1', '15x51', '155x1', '15551']
Notice that the algorithm in the generator is fully general; all that varies is the detail of how to generate candidates and how to process results. For example, suppose we want to replace "placeholders" within a string - then we need to split the string into placeholders and non-placeholders, and have a candidates function that generates all the possible replacements for placeholders, and the literal string for non-placeholders.
For example, with this setup:
keywords = {'wouldyou': ["can you", "would you", "please"], 'please': ["please", "ASAP"]}
template = '((wouldyou)) give me something ((please))'
First we would split the template, for example with a regular expression:
import re
def tokenize(t):
    return re.split(r'(\(\(.*?\)\))', t)
This tokenizer will give empty strings before and after the placeholders, but this doesn't cause a problem:
>>> tokenize(template)
['', '((wouldyou))', ' give me something ', '((please))', '']
To generate replacements, we can use something like:
def candidates(part):
    if part.startswith('((') and part.endswith('))'):
        return keywords.get(part[2:-2], [part[2:-2]])
    else:
        return [part]
That is: placeholder-parts are identified by the parentheses, stripped of those parentheses, and looked up in the dictionary.
Trying it with the other existing definitions:
>>> list(patterns(tokenize(template)))
['can you give me something please', 'can you give me something ASAP', 'would you give me something please', 'would you give me something ASAP', 'please give me something please', 'please give me something ASAP']
To generalize patterns properly, rather than depending on other global functions combine and candidates, we should use dependency injection - by simply passing those as parameters which are higher-order functions. Thus:
from itertools import product
def patterns(data, candidates, combine):
    all_candidates = [candidates(element) for element in data]
    for result in product(*all_candidates):
        yield combine(result)
Now the same core code solves whatever problem. Examples might look like:
def euler_51(s):
    for pattern in patterns(
        s,
        lambda letter: ['x', '5'] if letter == 'x' else [letter],
        ''.join
    ):
        print(pattern)
euler_51('1xxx1')
or
def replace_in_template(template, replacement_lookup):
    tokens = re.split(r'(\(\(.*?\)\))', template)
    return list(patterns(
        tokens,
        lambda part: (
            replacement_lookup.get(part[2:-2], [part[2:-2]])
            if part.startswith('((') and part.endswith('))')
            else [part]
        ),
        ''.join
    ))
replace_in_template(
'((wouldyou)) give me something ((please))',
{
'wouldyou': ["can you", "would you", "please"],
'please': ["please", "ASAP"]
}
) | unknown | |
d11689 | train | For those having problems with this I have solved it as following -
Compatibility.getCompatibility().setWebSettingsCache(webSettings);
Make sure to implement a Compatibility layer, since following method doesn't work in SDK_INT < 11.
webViewInstance.setLayerType(View.LAYER_TYPE_SOFTWARE, null);
A: I have similar problem with AdMob, which I solved by adding:
android:layer="software"
to the AdView in my xml layout
A: try adding the attribute android:fadingEdge="none" to the ScrollView in your layout.
A: The answer to this was removing millenial ads. It seems that their animation to bring the ad visible was interfering with the webview stuff. | unknown | |
d11690 | train | You may try replace for this. It looks like an IP address, so you may want any 0 right after a '.' to be removed, i.e. '.0' replaced with '.'.
select replace ( @str, '.0','.')
A: MySQL 8+ has regexp_replace() which does exactly what you want:
select regexp_replace('1.2.03.00004', '[.]0+', '.')
EDIT:
If you want to replace only the leading zeros on the third value, then you can use substring_index():
select concat_ws('.',
substring_index(col, '.', 2),
substring_index( substring_index(col, '.', 3), '.', -1 ) + 0, -- convert to number
substring_index(col, '.', -1)
) | unknown | |
d11691 | train | Seems you need to handle the actions asynchronously, so you can use a custom middleware like redux-thunk to do something like this:
actions.js
function refreshTables() {
return {
type: REFRESH_TABLES
}
}
function refreshFooter(tables) {
return {
type: REFRESH_FOOTER,
tables
}
}
export function refresh() {
    return function (dispatch, getState) {
        dispatch(refreshTables()); // dispatching a plain action is synchronous
        dispatch(refreshFooter(getState().tables));
    }
}
component
const refreshButton = React.createClass({
refresh () {
this.props.refresh();
},
// ...
});
A: Although splitting it asynchronous may help, the issue may be in the fact that you are using combineReducers. You should not have to rely on the tables from props, you want to use the source of truth which is state.
You need to look at rewriting the root reducer so you have access to all of state. I have done so by writing it like this.
const rootReducer = (state, action) => ({
tables: tableReducer(state.tables, action, state),
footer: footerReducer(state.footer, action, state)
});
With that you now have access to full state in both reducers so you shouldn't have to pass it around from props.
Your reducer could then looks like this.
const footerReducer = (state, action, { tables }) => {
...
};
That way you are not actually pulling in all parts of state as it starts to grow and only access what you need. | unknown | |
d11692 | train | I recently created a module allows you to simply bind a localStorage key to a $scope variable and also store Objects, Arrays, Booleans and more directly inside the localStorage.
Github localStorage Module
A: There is an angular localStorage module:
https://github.com/grevory/angular-local-storage
var DemoCtrl = function($scope, localStorageService) {
localStorageService.clearAll();
$scope.$watch('localStorageDemo', function(value){
localStorageService.add('localStorageDemo',value);
$scope.localStorageDemoValue = localStorageService.get('localStorageDemo');
});
$scope.storageType = 'Local storage';
if (!localStorageService.isSupported()) {
$scope.storageType = 'Cookie';
}
};
After further thought, you may need to change the module to broadcast on setItem so that you can get notified if the localStorage has been changed. Maybe fork it and, around line 50, change:
localStorage.setItem(prefix+key, value);
$rootScope.$broadcast('LocalStorageModule.notification.setItem',{key: prefix+key, newvalue: value}); // you could broadcast the old value if you want
or in the recent version of the library the casing was changed
$rootScope.$broadcast('LocalStorageModule.notification.setitem',{key: prefix+key, newvalue: value});
Then in your controller you can:
$scope.$on('LocalStorageModule.notification.setItem', function(event, parameters) {
parameters.key; // contains the key that changed
parameters.newvalue; // contains the new value
});
Here is a demo of the 2nd option:
Demo: http://beta.plnkr.co/lpAm6SZdm2oRBm4LoIi1
** Updated **
I forked that project and have included the notifications here in the event you want to use this project: https://github.com/sbosell/angular-local-storage/blob/master/localStorageModule.js
I believe the original library accepted my PR. The reason I like this library is that it has a cookie backup in case the browser doesn't support local storage.
A: Incidentally, I've created yet another localStorage module for AngularJS which is called ngStorage:
https://github.com/gsklee/ngStorage
Usage is ultra simple:
JavaScript
$scope.$storage = $localStorage.$default({
x: 42
});
HTML
<button ng-click="$storage.x = $storage.x + 1">{{$storage.x}}</button>
And every change is automagically sync'd - even changes happening in other browser tabs!
Check out the GitHub project page for more demos and examples ;)
A: $scope.$on("LocalStorageModule.notification.setitem", function (key, newVal, type) {
console.log("LocalStorageModule.notification.setitem", key, newVal, type);
});
$scope.$on("LocalStorageModule.notification.removeitem", function (key, type) {
console.log("LocalStorageModule.notification.removeitem", key, type);
});
$scope.$on("LocalStorageModule.notification.warning", function (warning) {
console.log("LocalStorageModule.notification.warning", warning);
});
$scope.$on("LocalStorageModule.notification.error", function (errorMessage) {
console.log("LocalStorageModule.notification.error", errorMessage);
});
this event calling when using
https://github.com/grevory/angular-local-storage#getstoragetype
in app config
myApp.config(function (localStorageServiceProvider) {
localStorageServiceProvider
.setPrefix('myApp')
.setStorageType('sessionStorage')
.setNotify(true, true)
}); | unknown | |
d11693 | train | That link is for the .NET control -- not the JSP tag library.
Maybe someone changed the Target Language on the Publication Target you are using (or it's published to the wrong target)? Another possibility is that it is hard-coded in the template instead of using TCDL.
A: Thanks to Peter Kjaer pointing me in the direction, this issue is now solved. It is not, however, due to publication targets and language (since they were set correctly - but I wouldn't completely rule out the possibilty of pubtargets conflicting).
The issue turned out to be the deployer_conf xml file on the HTTP upload server role, with its (default) lines within the TCDLEngine node - Properties.
<!--<Property Name="tcdl.dotnet.style" Value="controls"/>-->
<Property Name="tcdl.jsp.style" Value="tags"/>
As you can see, I commented the dotnet line. It will now properly resolve links, as well as componentlinks not including 'runat's anymore. | unknown | |
d11694 | train | This normally means that you took too long to respond to the Interaction. You can add an interaction.deferReply() to defer the reply. | unknown | |
d11695 | train | After some searching and looking at similar problems, I solved this problem like this:
First add a user meta field for user status, so we can check whether a user is active and then disable or enable users.
add_filter( 'authenticate', 'chk_active_user', 100, 2 );
function chk_active_user( $user, $username )
{
    // earlier filters may hand us null or a WP_Error rather than a user object
    if ( ! $user instanceof WP_User ) {
        return $user;
    }
    $user_sts = get_user_meta( $user->ID, 'user_active_status', true );
    if ( $user_sts === 'no' ) {
        return new WP_Error( 'disabled_account', 'this account is disabled' );
    }
    return $user;
}
d11696 | train | If you want to send data from a js script to a C# controller, then you can use a Jquery-ajax call instead of @Url.Action, if I'm not mistaken, you can't even use @Url.Action on a js source code.
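On the server side, the receiving action might look like this (a hedged classic ASP.NET MVC sketch; the action and parameter names must match the ajax call below):
[HttpPost]
public ActionResult MethodName(int myId)
{
    // ... use myId ...
    return Json(new { ok = true });
}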
const sendId = () => {
const controllerName = 'MyController';
const id = 1;
$.ajax({
data: { myId: id }, // sent form-encoded with jQuery's default content type
url: `${controllerName}/methodName`, //template string
type: 'POST',
success: function (data) {
//...
},
error: function () {
//...
}
});
} | unknown | |
d11697 | train | You can change the "From" text via the woocommerce_get_price_html_from_text filter.
You would do so, like this:
add_filter( 'woocommerce_get_price_html_from_text', 'so_43054760_price_html_from_text' );
function so_43054760_price_html_from_text( $text ){
return __( 'whatever', 'your-plugin-textdomain' );
}
Keep in mind this is WooCommerce 3.0-specific code. I'm not sure it is back-compatible with WC 2.6.x. | unknown | |
d11698 | train | Probably you are receiving a syntax error because HQL does not support a SELECT after the FROM clause:
"select * from " +
"(select
Either way, you need to rewrite the SQL as valid HQL rather than embedding the raw subquery. | unknown |
d11699 | train | #import <CoreData/CoreData.h> and don't forget to link it in.
A: Also, beware adding just anything to your .pch file. When you do so, those header files will be included all throughout your project. You should only really put things there that are truly going to be universally required all through your project. | unknown |
d11700 | train | Here is some more information:
This class might be interesting as well:
/**
* Servlet 3.0+ environments allow to replace the web.xml file with a programmatic configuration.
* <p/>
* Created by owahlen on 01.01.14.
*/
public class Deployment extends SpringBootServletInitializer {
@Override
protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
return application.sources(Application.class);
}
/**
* This method is copied from SpringBootServletInitializer.
* Only the registration of the ErrorPageFilter is omitted.
* This was done since errors shall never be sent as redirects but as ErrorDto
* @param servletContext
* @return
*/
protected WebApplicationContext createRootApplicationContext(ServletContext servletContext) {
SpringApplicationBuilder builder = new SpringApplicationBuilder();
ApplicationContext parent = getExistingRootWebApplicationContext(servletContext);
if (parent != null) {
this.logger.info("Root context already created (using as parent).");
servletContext.setAttribute(WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE, null);
builder.initializers(new ParentContextApplicationContextInitializer(parent));
}
builder.initializers(new ServletContextApplicationContextInitializer(servletContext));
builder.contextClass(AnnotationConfigEmbeddedWebApplicationContext.class);
builder = configure(builder);
SpringApplication application = builder.build();
if (application.getSources().isEmpty()
&& AnnotationUtils.findAnnotation(getClass(), Configuration.class) != null) {
application.getSources().add(getClass());
}
Assert.state(application.getSources().size() > 0,
"No SpringApplication sources have been defined. Either override the "
+ "configure method or add an @Configuration annotation");
// Error pages are handled by the ExceptionHandlerController. No ErrorPageFilter is needed.
// application.getSources().add(ErrorPageFilter.class);
return run(application);
}
private ApplicationContext getExistingRootWebApplicationContext(ServletContext servletContext) {
Object context = servletContext.getAttribute(WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE);
if (context instanceof ApplicationContext) {
return (ApplicationContext) context;
}
return null;
}
}
and another configuration - class:
@Configuration
public class H2Console {
protected final Logger logger = LoggerFactory.getLogger(getClass());
/**
* Define the H2 Console Servlet
*
* @return ServletRegistrationBean to be processed by Spring
*/
@Bean(name= "h2servlet")
public ServletRegistrationBean h2servletRegistration() {
ServletRegistrationBean registration = new ServletRegistrationBean(new WebServlet());
registration.addInitParameter("webAllowOthers", "true"); // allow access from URLs other than localhost
registration.addUrlMappings("/console/*");
return registration;
}
}
main build.gradle:
import java.util.concurrent.CountDownLatch
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'org.asciidoctor:asciidoctor-gradle-plugin:1.5.0'
}
}
apply plugin: 'org.asciidoctor.gradle.asciidoctor'
ext {
applicationVersion = 'UNDEFINED'
appBackend = null
}
asciidoctor {
sourceDir = file('asciidoc')
options = [
doctype : 'book',
attributes: [
'source-highlighter': 'coderay',
toc : '',
idprefix : '',
idseparator : '-'
]
]
}
def defaultEnvironment() {
def environment = ["PATH=${System.env.PATH}"]
environment += "HOME=${System.env.HOME}"
return environment
}
def execAsync(command, printStdOutput, dir, expectedOutput) {
println("Starting async command $command")
final CountDownLatch condition = new CountDownLatch(1)
def commandEnvironment = defaultEnvironment()
def proc = command.execute(commandEnvironment, new File(dir as String))
Thread.start {
try {
proc.in.eachLine { line ->
if (printStdOutput) {
println "$line"
}
if (expectedOutput != null && line?.contains(expectedOutput)) {
condition.countDown()
}
}
}
catch (ignored) {
}
}
Thread.start {
try {
proc.err.eachLine { line ->
if (printStdOutput) {
println line
}
}
}
catch (ignored) {
}
}
return [proc, expectedOutput != null ? condition : null]
}
task startServer() {
doLast {
def condBackend
(appBackend, condBackend) = execAsync(["./gradlew", "run"], true, "$projectDir", "Started Application")
condBackend.await()
}
}
task stopProcesses << {
appBackend?.destroy()
}
task e2eReport(dependsOn: [startServer, ':atobcarry-client:clean', ':project-client:e2eTest'])
tasks.getByPath(':project-client:e2eTest').mustRunAfter(startServer)
stopProcesses.mustRunAfter(':project-client:e2eTest')
startServer.finalizedBy(stopProcesses)
e2eReport.finalizedBy(stopProcesses)
tasks.getByPath(':project-client:e2eTest').finalizedBy(stopProcesses)
task validate(dependsOn: [':project-client:grunt_default', ':project-server:cobertura', ':project-server:coberturaCheck'])
the settings.gradle:
// This file includes the gradle subprojects of project project
include 'project-server' // REST webserver backend (WAR)
include 'project-client' // AngularJS frontend (JAR)
include 'project-tracking' // TomTom tracking server (WAR)
include 'project-tracking-commons' // Shared code between tracking and server
querydsl.gradle (from project-server)
configurations {
// configuration to hold the build dependency on the querydsl generator
querydslapt
}
String queryDslVersion = '3.5.1'
dependencies {
querydslapt "com.mysema.querydsl:querydsl-apt:$queryDslVersion"
compile "com.mysema.querydsl:querydsl-jpa:$queryDslVersion"
}
task generateQueryDSL(type: JavaCompile, group: 'build', description: 'Generate the QueryDSL query types.') {
// only process entity classes and enums to avoid compilation errors from code that needs the generated sources
source = fileTree('src/main/java/com/infinit/atobcarry/entity') + fileTree('src/main/java/com/infinit/project/enums')
// include the querydsl generator into the compilation classpath
classpath = configurations.compile + configurations.querydslapt
options.compilerArgs = [
"-proc:only",
"-processor", "com.mysema.query.apt.jpa.JPAAnnotationProcessor"
]
options.warnings = false
// the compiler puts the generated sources into the gensrcDir
destinationDir = gensrcDir
}
gensrc {
// extend the gensrc task to also generate querydsl
dependsOn generateQueryDSL
}
liquibase.gradle (from server)
configurations {
liquibase
}
dependencies {
liquibase 'org.liquibase:liquibase-core:3.3.2'
liquibase 'org.liquibase.ext:liquibase-hibernate4:3.5'
// liquibase 'org.yaml:snakeyaml:1.14'
liquibase 'org.postgresql:postgresql:9.3-1103-jdbc41'
liquibase 'org.springframework:spring-beans'
liquibase 'org.springframework:spring-orm'
liquibase 'org.springframework:spring-context'
liquibase 'org.springframework.boot:spring-boot' // contains the SpringNamingStrategy
liquibase 'org.hibernate.javax.persistence:hibernate-jpa-2.1-api:1.0.0.Final'
}
[
'status' : 'Outputs count (list if --verbose) of unrun change sets.',
'validate' : 'Checks the changelog for errors.',
'changelogSync' : 'Mark all changes as executed in the database.',
'changelogSyncSQL' : 'Writes SQL to mark all changes as executed in the database to STDOUT.',
'listLocks' : 'Lists who currently has locks on the database changelog.',
'releaseLocks' : 'Releases all locks on the database changelog.',
'markNextChangesetRan' : 'Mark the next change set as executed in the database.',
'markNextChangesetRanSQL': 'Writes SQL to mark the next change set as executed in the database to STDOUT.',
'dropAll' : 'Drops all database objects owned by the user. Note that functions, procedures and packages are not dropped (limitation in 1.8.1).',
'clearChecksums' : 'Removes current checksums from database. On next run checksums will be recomputed.',
'generateChangelog' : 'generateChangeLog of the database to standard out. v1.8 requires the dataDir parameter currently.',
'futureRollbackSQL' : 'Writes SQL to roll back the database to the current state after the changes in the changeslog have been applied.',
'update' : 'Updates the database to the current version.',
'updateSQL' : 'Writes SQL to update the database to the current version to STDOUT.',
'updateTestingRollback' : 'Updates the database, then rolls back changes before updating again.',
'diff' : 'Writes description of differences to standard out.',
'diffChangeLog' : 'Writes Change Log XML to update the base database to the target database to standard out.',
'updateCount' : 'Applies the next <liquibaseCommandValue> change sets.',
'updateCountSql' : 'Writes SQL to apply the next <liquibaseCommandValue> change sets to STDOUT.',
'tag' : 'Tags the current database state with <liquibaseCommandValue> for future rollback',
'rollback' : 'Rolls back the database to the state it was in when the <liquibaseCommandValue> tag was applied.',
'rollbackToDate' : 'Rolls back the database to the state it was in at the <liquibaseCommandValue> date/time.',
'rollbackCount' : 'Rolls back the last <liquibaseCommandValue> change sets.',
'rollbackSQL' : 'Writes SQL to roll back the database to the state it was in when the <liquibaseCommandValue> tag was applied to STDOUT.',
'rollbackToDateSQL' : 'Writes SQL to roll back the database to the state it was in at the <liquibaseCommandValue> date/time to STDOUT.',
'rollbackCountSQL' : 'Writes SQL to roll back the last <liquibaseCommandValue> change sets to STDOUT.'
].each { String taskName, String taskDescription ->
String prefixedTaskName = 'dbm' + taskName.capitalize()
task(prefixedTaskName, type: JavaExec) { JavaExec task ->
initLiquibaseTask(task)
args += taskName
String liquibaseCommandValue = project.properties.get("liquibaseCommandValue")
if (liquibaseCommandValue) {
args += liquibaseCommandValue
}
}
}
void initLiquibaseTask(JavaExec task) {
String changeLogFile = 'src/main/resources/db/changelog/db.changelog-master.xml'
task.main = 'liquibase.integration.commandline.Main'
task.classpath = configurations.liquibase + sourceSets.main.runtimeClasspath
task.args = [
// "--logLevel=debug",
"--changeLogFile=${changeLogFile}",
"--url=jdbc:postgresql://localhost:15432/roject",
"--username=project",
"--password=project",
"--referenceUrl=hibernate:spring:com.infinit.atobcarry?dialect=org.hibernate.dialect.PostgreSQL9Dialect&hibernate.ejb.naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy&hibernate.enhanced_id=true",
]
task.group = 'liquibase'
}
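The map/each loop above creates one JavaExec task per liquibase command, named 'dbm' plus the capitalized command name, e.g. dbmUpdate, dbmStatus or dbmDropAll. Commands that need an argument read it from the liquibaseCommandValue project property, so rolling back the last two change sets would look like this (hypothetical invocation):
gradle dbmRollbackCount -PliquibaseCommandValue=2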
jreleaseinfo.gradle
configurations {
// configuration to hold the build dependency on the jreleaseinfo ant task
jreleaseinfo
}
dependencies {
jreleaseinfo 'ch.oscg.jreleaseinfo:jreleaseinfo:1.3.0'
}
task generateJReleaseInfo(group: 'build', description: 'Generate the VersionInfo class.') { Task task ->
Map<String, ?> parameters = [
buildKey: project.hasProperty('buildKey') ? project.buildKey : '',
buildResultKey: project.hasProperty('buildResultKey') ? project.buildResultKey : '',
buildNumber: project.hasProperty('buildNumber') ? project.buildNumber : '',
buildResultsUrl: project.hasProperty('buildResultsUrl') ? project.buildResultsUrl : '',
gitBranch: project.hasProperty('gitBranch') ? project.gitBranch : '',
gitCommit: project.hasProperty('gitCommit') ? project.gitCommit : ''
]
task.inputs.properties(parameters)
task.inputs.property('version', project.version)
task.outputs.file( new File(gensrcDir, 'com/infinit/atobcarry/config/VersionInfo.java') )
task.doLast {
// gradle properties that can be passed to the JReleaseInfoAntTask task
ant.taskdef(name: 'jreleaseinfo', classname: 'ch.oscg.jreleaseinfo.anttask.JReleaseInfoAntTask', classpath: configurations.jreleaseinfo.asPath)
ant.jreleaseinfo(targetDir: gensrcDir, className: 'VersionInfo', packageName: 'com.infinit.atobcarry.config', version: project.version) {
parameters.each { String key, String value ->
parameter(name: key, type: 'String', value: value)
}
}
}
}
gensrc {
// extend the gensrc task to also generate JReleaseInfo
dependsOn generateJReleaseInfo
}
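The build metadata in the parameters map is read from Gradle project properties, which a CI server such as Bamboo passes in; a local build can supply them on the command line, e.g. gradle build -PbuildNumber=42 -PgitBranch=master -PgitCommit=abc1234. Any property that is not set falls back to an empty string.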
gensrc.gradle
// register directory where generated sources are located with the project
ext.gensrcDir = file('src/main/generated')
// create a wrapper task for source generation that the generators can depend upon
task gensrc(group: 'build', description: 'Execute all tasks that generate source code.')
// include the source code generators
apply from: 'querydsl.gradle'
apply from: 'jreleaseinfo.gradle'
// add the gensrcDir to the sourceSets
sourceSets {
generated
}
sourceSets.generated.java.srcDirs = [gensrcDir]
// extend the conventional compileJava task to also compile the generated sources
compileJava {
dependsOn gensrc
source gensrcDir
}
clean {
delete gensrcDir
}
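Not shown above: gensrc.gradle itself is presumably pulled into project-server's build.gradle with a line like apply from: 'gensrc.gradle', mirroring how it applies the two generator scripts. Registering gensrcDir as a source set mainly lets IDEs pick up the generated sources as regular code, while compileJava compiles them via the explicit source gensrcDir line.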
orm.xml
<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm http://java.sun.com/xml/ns/persistence/orm_2_0.xsd"
version="2.0">
<persistence-unit-metadata>
<persistence-unit-defaults>
<entity-listeners>
<entity-listener class="org.springframework.data.jpa.domain.support.AuditingEntityListener"/>
</entity-listeners>
</persistence-unit-defaults>
</persistence-unit-metadata>
</entity-mappings>
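The AuditingEntityListener registered above only populates audit columns if JPA auditing is enabled and the entities declare annotated fields. A minimal sketch, assuming Spring Data JPA — the config class and field names are illustrative, not taken from the project:
import org.springframework.context.annotation.Configuration;
import org.springframework.data.annotation.CreatedDate;
import org.springframework.data.annotation.LastModifiedDate;
import org.springframework.data.jpa.repository.config.EnableJpaAuditing;

@Configuration
@EnableJpaAuditing
class AuditingConfig { }

// inside any audited entity:
@CreatedDate
private java.util.Date createdDate;       // set once when the row is inserted

@LastModifiedDate
private java.util.Date lastModifiedDate;  // refreshed on every update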