Fix some grammar issues in docs

Closes gh-1695
Andy Wilkinson 2014-10-13 13:18:24 +01:00
commit 7211571969
13 changed files with 46 additions and 46 deletions

View File

@ -133,7 +133,7 @@ With the requisite eclipse plugins installed you can select
need to import the root `spring-boot` pom and the `spring-boot-samples` pom separately.
=== Importing into eclipse without m2eclipse
If you prefer not to use m2eclipse you can generate eclipse project meta-data using the
If you prefer not to use m2eclipse you can generate eclipse project metadata using the
following command:
[indent=0]

View File

@ -47,5 +47,5 @@ For Gradle, use the declaration:
authentication events by default. This can be very useful for reporting, and also to
implement a lock-out policy based on authentication failures.
* **Process Monitoring** In Spring Boot Actuator you can find `ApplicationPidListener`
which creates file containing application PID (by default in application directory and
file name is `application.pid`).
which creates a file containing the application PID (by default in the application
directory with a file name of `application.pid`).
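For illustration, the listener can also be registered by hand when a custom PID file location is wanted; a minimal sketch, assuming the `String` file-name constructor and an invented application class:

[source,java,indent=0]
----
import org.springframework.boot.SpringApplication;
import org.springframework.boot.actuate.system.ApplicationPidListener;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableAutoConfiguration
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication application = new SpringApplication(MyApplication.class);
        // Write the PID somewhere other than the default application.pid in the working directory
        application.addListeners(new ApplicationPidListener("/var/run/myapp.pid"));
        application.run(args);
    }

}
----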

View File

@ -295,8 +295,8 @@ content into your application; rather pick only the properties that you need.
spring.hornetq.embedded.serverId= # auto-generated id of the embedded server (integer)
spring.hornetq.embedded.persistent=false # message persistence
spring.hornetq.embedded.data-directory= # location of data content (when persistence is enabled)
spring.hornetq.embedded.queues= # comma separate queues to create on startup
spring.hornetq.embedded.topics= # comma separate topics to create on startup
spring.hornetq.embedded.queues= # comma-separated queues to create on startup
spring.hornetq.embedded.topics= # comma-separated topics to create on startup
spring.hornetq.embedded.cluster-password= # customer password (randomly generated by default)
# JMS ({sc-spring-boot-autoconfigure}/jms/JmsProperties.{sc-ext}[JmsProperties])

View File

@ -15,7 +15,7 @@ curious about the underlying technology, this section provides some background.
=== Nested JARs
Java does not provide any standard way to load nested jar files (i.e. jar files that
are themselves contained within a jar). This can be problematic if you are looking
to distribute a self contained application that you can just run from the command line
to distribute a self-contained application that you can just run from the command line
without unpacking.
To solve this problem, many developers use ``shaded'' jars. A shaded jar simply packages

View File

@ -237,7 +237,7 @@ example:
[[build-tool-plugins-gradle-custom-version-management]]
==== Custom version management
It is possible to customize the versions used by the `ResolutionStrategy` if you need
to deviate from Spring Boot's ``blessed'' dependencies. Alternative version meta-data
to deviate from Spring Boot's ``blessed'' dependencies. Alternative version metadata
is consulted using the `versionManagement` configuration. For example:
[source,groovy,indent=0,subs="verbatim,attributes"]

View File

@ -12,7 +12,7 @@ _cloud's_ notion of a running process.
Two popular cloud providers, Heroku and Cloud Foundry, employ a ``buildpack'' approach.
The buildpack wraps your deployed code in whatever is needed to _start_ your
application: it might be a JDK and a call to `java`, it might be an embedded webserver,
or it might be a full fledged application server. A buildpack is pluggable, but ideally
or it might be a full-fledged application server. A buildpack is pluggable, but ideally
you should be able to get by with as few customizations to it as possible.
This reduces the footprint of functionality that is not under your control. It minimizes
divergence between deployment and production environments.
@ -103,7 +103,7 @@ able to hit the application at the URI given, in this case
[[cloud-deployment-cloud-foundry-services]]
=== Binding to services
By default, meta-data about the running application as well as service connection
By default, metadata about the running application as well as service connection
information is exposed to the application as environment variables (for example:
`$VCAP_SERVICES`). This architecture decision is due to Cloud Foundry's polyglot
(any language and platform can be supported as a buildpack) nature; process-scoped

View File

@ -365,7 +365,7 @@ that and be sure that it has initialized is to add a `@Bean` of type
`ApplicationListener<EmbeddedServletContainerInitializedEvent>` and pull the container
out of the event when it is published.
A really useful thing to do in is to use `@IntegrationTest` to set `server.port=0`
A useful practice for use with `@IntegrationTest`s is to set `server.port=0`
and then inject the actual ('`local`') port as a `@Value`. For example:
[source,java,indent=0,subs="verbatim,quotes,attributes"]
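The body of that example falls outside this hunk; a minimal sketch of such a test, with an invented `SampleApplication` class, might look like:

[source,java,indent=0]
----
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.test.IntegrationTest;
import org.springframework.boot.test.SpringApplicationConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.web.WebAppConfiguration;

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = SampleApplication.class)
@WebAppConfiguration
@IntegrationTest("server.port=0")
public class SampleWebIntegrationTests {

    // The container starts on a random port; Spring Boot publishes it as 'local.server.port'
    @Value("${local.server.port}")
    private int port;

    @Test
    public void startsOnRandomPort() {
        assertTrue(this.port > 0);
    }

}
----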
@ -415,7 +415,7 @@ accessible on the filesystem, i.e. it cannot be read from within a jar file.
Generally you can follow the advice from
'<<howto-discover-build-in-options-for-external-properties>>' about
`@ConfigurationProperties` (`ServerProperties` is the main one here), but also look at
`EmbeddedServletContainerCustomizer` and various Tomcat specific `+*Customizers+` that you
`EmbeddedServletContainerCustomizer` and various Tomcat-specific `+*Customizers+` that you
can add in one of those. The Tomcat APIs are quite rich so once you have access to the
`TomcatEmbeddedServletContainerFactory` you can modify it in a number of ways. Or the
nuclear option is to add your own `TomcatEmbeddedServletContainerFactory`.
@ -423,9 +423,9 @@ nuclear option is to add your own `TomcatEmbeddedServletContainerFactory`.
[[howto-enable-multiple-connectors-in-tomcat]]
=== Enable Multiple Connectors Tomcat
=== Enable Multiple Connectors with Tomcat
Add a `org.apache.catalina.connector.Connector` to the
`TomcatEmbeddedServletContainerFactory` which can allow multiple connectors eg a HTTP and
`TomcatEmbeddedServletContainerFactory` which can allow multiple connectors, e.g. HTTP and
HTTPS connector:
[source,java,indent=0,subs="verbatim,quotes,attributes"]
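The referenced example body is not part of this hunk; a rough sketch of adding an extra plain-HTTP connector (the HTTPS keystore setup is omitted and the port is arbitrary):

[source,java,indent=0]
----
import org.apache.catalina.connector.Connector;
import org.springframework.boot.context.embedded.EmbeddedServletContainerFactory;
import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConnectorConfiguration {

    @Bean
    public EmbeddedServletContainerFactory servletContainer() {
        TomcatEmbeddedServletContainerFactory factory = new TomcatEmbeddedServletContainerFactory();
        // The primary connector is still created from server.port;
        // additional connectors are registered alongside it.
        factory.addAdditionalTomcatConnectors(createHttpConnector());
        return factory;
    }

    private Connector createHttpConnector() {
        Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
        connector.setScheme("http");
        connector.setPort(8081);
        return connector;
    }

}
----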
@ -918,7 +918,7 @@ then you can do that in `application.properties` using the "logging.level" prefi
You can also set the location of a file to log to (in addition to the console) using
"logging.file".
To configure the more fine grained settings of a logging system you need to use the native
To configure the more fine-grained settings of a logging system you need to use the native
configuration format supported by the `LoggingSystem` in question. By default Spring Boot
picks up the native configuration from its default location for the system (e.g.
`classpath:logback.xml` for Logback), but you can set the location of the config file
@ -981,10 +981,10 @@ jiggling with excludes, .e.g. in Maven:
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
<exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
@ -994,11 +994,11 @@ jiggling with excludes, .e.g. in Maven:
----
To use Log4j 2, simply depend on `spring-boot-starter-log4j2` rather than
`spring-boot-starter-log4j`
`spring-boot-starter-log4j`.
NOTE: The use of the one of the Log4j starters gathers together the dependencies for
NOTE: The use of one of the Log4j starters gathers together the dependencies for
common logging requirements (e.g. including having Tomcat use `java.util.logging` but
configure the output using Log4j or Log4j 2). See the Actuator Log4j or Log4j 2
configuring the output using Log4j or Log4j 2). See the Actuator Log4j or Log4j 2
samples for more detail and to see it in action.
@ -1313,7 +1313,7 @@ that can be used to disable the migrations, or switch off the location checking.
By default Flyway will autowire the (`@Primary`) `DataSource` in your context and
use that for migrations. If you would like to use a different `DataSource` you can create
one and mark its `@Bean` as `@FlywayDataSource` - if you do that remember to create
another one and mark it as `@Primary` if you want 2 data sources.
another one and mark it as `@Primary` if you want two data sources.
Or you can use Flyway's native `DataSource` by setting `flyway.[url,user,password]`
in external properties.
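As a rough sketch of the `@FlywayDataSource` approach (the URL and credentials are placeholders, and an H2 driver on the classpath is assumed):

[source,java,indent=0]
----
import javax.sql.DataSource;

import org.springframework.boot.autoconfigure.flyway.FlywayDataSource;
import org.springframework.boot.autoconfigure.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FlywayConfiguration {

    // Flyway runs its migrations against this DataSource rather than the @Primary one
    @Bean
    @FlywayDataSource
    public DataSource flywayDataSource() {
        return DataSourceBuilder.create()
                .url("jdbc:h2:mem:migrations")
                .username("sa")
                .build();
    }

}
----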
@ -1350,7 +1350,7 @@ Spring Batch auto configuration is enabled by adding `@EnableBatchProcessing`
By default it executes *all* `Jobs` in the application context on startup (see
{sc-spring-boot-autoconfigure}/batch/JobLauncherCommandLineRunner.{sc-ext}[JobLauncherCommandLineRunner]
for details). You can narrow down to a specific job or jobs by specifying
`spring.batch.job.names` (comma separated job name patterns).
`spring.batch.job.names` (comma-separated job name patterns).
If the application context includes a `JobRegistry` then the jobs in
`spring.batch.job.names` are looked up in the registry instead of being autowired from the
@ -1593,7 +1593,7 @@ To configure IntelliJ correctly you can use the `idea` Gradle plugin:
----
NOTE: Intellij must be configured to use the same Java version as the command line Gradle
NOTE: IntelliJ must be configured to use the same Java version as the command line Gradle
task and `springloaded` *must* be included as a `buildscript` dependency.
You can also enable '`Make Project Automatically`' inside IntelliJ to
@ -1622,7 +1622,7 @@ you would add the following:
</properties>
----
NOTE: this only works if your Maven project inherits (directly or indirectly) from
NOTE: This only works if your Maven project inherits (directly or indirectly) from
`spring-boot-dependencies`. If you have added `spring-boot-dependencies` in your
own `dependencyManagement` section with `<scope>import</scope>` you have to redefine
the artifact yourself instead of overriding the property.

View File

@ -605,7 +605,7 @@ from your command:
[[production-ready-remote-shell-plugins]]
==== Remote shell plugins
In addition to new commands, it is also possible to extend other CRaSH shell features.
All Spring Beans that extends `org.crsh.plugin.CRaSHPlugin` will be automatically
All Spring Beans that extend `org.crsh.plugin.CRaSHPlugin` will be automatically
registered with the shell.
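As a loose sketch of such a bean (the plugin here is invented and does nothing useful):

[source,java,indent=0]
----
import org.crsh.plugin.CRaSHPlugin;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ShellPluginConfiguration {

    @Bean
    public GreetingPlugin greetingPlugin() {
        return new GreetingPlugin();
    }

    // Any Spring bean extending CRaSHPlugin is picked up and registered with the shell
    public static class GreetingPlugin extends CRaSHPlugin<GreetingPlugin> {

        @Override
        public GreetingPlugin getImplementation() {
            return this;
        }

    }

}
----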
For more information please refer to the http://www.crashub.org/[CRaSH reference
@ -887,7 +887,7 @@ In `META-INF/spring.factories` file you have to activate the listener(s):
=== Programmatically
You can also activate a listener by invoking the `SpringApplication.addListeners(...)`
method and passing the appropriate `Writer` object. This method also allows you to
customize file name and path via the `Writer` constructor.
customize the file name and path via the `Writer` constructor.
@ -899,5 +899,5 @@ might want to read about graphing tools such as http://graphite.wikidot.com/[Gra
Otherwise, you can continue on, to read about <<cloud-deployment.adoc#cloud-deployment,
'`cloud deployment options`'>> or jump ahead
for some in depth information about Spring Boot's
for some in-depth information about Spring Boot's
'<<build-tool-plugins.adoc#build-tool-plugins, build tool plugins>>'.

View File

@ -76,7 +76,7 @@ using.
[[cli-run]]
=== Running applications using the CLI
You can compile and run Groovy source code using the `run` command. The Spring Boot CLI
is completely self contained so you don't need any external Groovy installation.
is completely self-contained so you don't need any external Groovy installation.
Here is an example ``hello world'' web application written in Groovy:
@ -275,7 +275,7 @@ executable jar file. For example:
The resulting jar will contain the classes produced by compiling the application and all
of the application's dependencies so that it can then be run using `java -jar`. The jar
file will also contain entries from the application's classpath. You can add explicit
paths to the jar using `--include` and `--exclude` (both are comma separated, and both
paths to the jar using `--include` and `--exclude` (both are comma-separated, and both
accept prefixes to the values ``+'' and ``-'' to signify that they should be removed from
the defaults). The default includes are

View File

@ -1081,7 +1081,7 @@ There is a {github-code}/spring-boot-samples/spring-boot-sample-jersey[Jersey sa
you can see how to set things up. There is also a {github-code}/spring-boot-samples/spring-boot-sample-jersey1[Jersey 1.x sample].
Note that in the Jersey 1.x sample the spring-boot maven plugin has been configured to
unpack some Jersey jars so they can be scanned by the JAX-RS implementation (the sample
asks for them to be scanned in its `Filter` registration.
asks for them to be scanned in its `Filter` registration).
@ -1100,7 +1100,7 @@ Spring beans. This can be particularly convenient if you want to refer to a valu
your `application.properties` during configuration.
By default, if the context contains only a single Servlet it will be mapped to `/`. In
the case of multiple Servlets beans the bean name will be used as a path prefix. Filters
the case of multiple Servlet beans the bean name will be used as a path prefix. Filters
will map to `+/*+`.
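As a rough sketch of that convention (the bean names, and therefore the resulting path prefixes, are arbitrary):

[source,java,indent=0]
----
import java.io.IOException;

import javax.servlet.Servlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ServletConfiguration {

    // With more than one Servlet bean, each bean name becomes a path prefix,
    // so this servlet should be reachable under /status/
    @Bean
    public Servlet status() {
        return new HttpServlet() {
            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
                response.getWriter().print("OK");
            }
        };
    }

    // Served under /version/
    @Bean
    public Servlet version() {
        return new HttpServlet() {
            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
                response.getWriter().print("1.0");
            }
        };
    }

}
----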
If convention-based mapping is not flexible enough you can use the
@ -1328,7 +1328,7 @@ auto-configured. In this example it's pulled in transitively via
[[boot-features-connect-to-production-database]]
==== Connection to a production database
Production database connections can also be auto-configured using a pooling
`DataSource`. Here's the algorithm for choosing a specific implementation.
`DataSource`. Here's the algorithm for choosing a specific implementation:
* We prefer the Tomcat pooling `DataSource` for its performance and concurrency, so if
that is available we always choose it.
@ -1370,7 +1370,7 @@ loadable.
[[boot-features-connecting-to-a-jndi-datasource]]
==== Connection to a JNDI DataSource
If you are deploying your Spring Boot application to an Application Server you might want
to configure and manage your DataSource using you Application Servers built in features
to configure and manage your DataSource using your Application Servers built-in features
and access it using JNDI.
The `spring.datasource.jndi-name` property can be used as an alternative to the
@ -1532,7 +1532,7 @@ their http://projects.spring.io/spring-data-jpa/[reference documentation].
[[boot-features-creating-and-dropping-jpa-databases]]
==== Creating and dropping JPA databases
By default JPA database will be automatically created *only* if you use an embedded
By default, JPA databases will be automatically created *only* if you use an embedded
database (H2, HSQL or Derby). You can explicitly configure JPA settings using
`+spring.jpa.*+` properties. For example, to create and drop tables you can add the
following to your `application.properties`.
@ -1557,7 +1557,7 @@ passes `hibernate.globally_quoted_identifiers` to the Hibernate entity manager.
By default the DDL execution (or validation) is deferred until
the `ApplicationContext` has started. There is also a `spring.jpa.generate-ddl` flag, but
it is not used if Hibernate autoconfig is active because the `ddl-auto`
settings are more fine grained.
settings are more fine-grained.
@ -1892,7 +1892,7 @@ connect to a broker using the the `netty` transport protocol. When the latter is
configured, Spring Boot configures a `ConnectionFactory` connecting to a broker running
on the local machine with the default settings.
NOTE: if you are using `spring-boot-starter-hornetq` the necessary dependencies to
NOTE: If you are using `spring-boot-starter-hornetq` the necessary dependencies to
connect to an existing HornetQ instance are provided, as well as the Spring infrastructure
to integrate with JMS. Adding `org.hornetq:hornetq-jms-server` to your application allows
you to use the embedded mode.
@ -1909,7 +1909,7 @@ HornetQ configuration is controlled by external configuration properties in
----
When embedding the broker, you can choose whether you want to enable persistence, and the list
of destinations that should be made available. These can be specified as a comma separated
of destinations that should be made available. These can be specified as a comma-separated
list to create them with the default options; or you can define bean(s) of type
`org.hornetq.jms.server.config.JMSQueueConfiguration` or
`org.hornetq.jms.server.config.TopicConfiguration`, for advanced queue and topic
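A loose sketch of the bean-based approach for a single queue, assuming HornetQ's `JMSQueueConfigurationImpl(name, selector, durable, bindings)` constructor (the queue name and binding are invented):

[source,java,indent=0]
----
import org.hornetq.jms.server.config.JMSQueueConfiguration;
import org.hornetq.jms.server.config.impl.JMSQueueConfigurationImpl;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EmbeddedQueueConfiguration {

    // Declares an "orders" queue on the embedded broker with an explicit binding
    @Bean
    public JMSQueueConfiguration ordersQueueConfiguration() {
        return new JMSQueueConfigurationImpl("orders", null, true, "/queue/orders");
    }

}
----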
@ -2167,7 +2167,7 @@ If you use the
the following provided libraries:
* Spring Test -- integration test support for Spring applications.
* Junit -- The de-facto standard for unit testing Java applications.
* JUnit -- The de-facto standard for unit testing Java applications.
* Hamcrest -- A library of matcher objects (also known as constraints or predicates)
allowing `assertThat` style JUnit assertions.
* Mockito -- A Java mocking framework.
@ -2235,7 +2235,7 @@ it with HTTP (e.g. using `RestTemplate`), annotate your test class (or one of it
superclasses) with `@IntegrationTest`. This can be very useful because it means you can
test the full stack of your application, but also inject its components into the test
class and use them to assert the internal state of the application after an HTTP
interaction. For Example:
interaction. For example:
[source,java,indent=0,subs="verbatim,quotes,attributes"]
----
@ -2440,7 +2440,7 @@ You can use the
{sc-spring-boot-autoconfigure}/AutoConfigureAfter.{sc-ext}[`@AutoConfigureAfter`] or
{sc-spring-boot-autoconfigure}/AutoConfigureBefore.{sc-ext}[`@AutoConfigureBefore`]
annotations if your configuration needs to be applied in a specific order. For example,
if you provide web specific configuration, your class may need to be applied after
if you provide web-specific configuration, your class may need to be applied after
`WebMvcAutoConfiguration`.
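A minimal sketch of that ordering (the configuration class name is invented):

[source,java,indent=0]
----
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.web.WebMvcAutoConfiguration;
import org.springframework.context.annotation.Configuration;

@Configuration
@AutoConfigureAfter(WebMvcAutoConfiguration.class)
public class MyWebAutoConfiguration {

    // Beans that build on Spring MVC already being configured would be declared here

}
----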
@ -2461,7 +2461,7 @@ code by annotating `@Configuration` classes or individual `@Bean` methods.
==== Class conditions
The `@ConditionalOnClass` and `@ConditionalOnMissingClass` annotations allow configuration
to be skipped based on the presence or absence of specific classes. Due to the fact that
annotation meta-data is parsed using http://asm.ow2.org/[ASM] you can actually use the
annotation metadata is parsed using http://asm.ow2.org/[ASM] you can actually use the
`value` attribute to refer to the real class, even though that class might not actually
appear on the running application classpath. You can also use the `name` attribute if you
prefer to specify the class name using a `String` value.
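A short sketch using the `name` attribute for a class that may be absent at runtime (the class name is invented):

[source,java,indent=0]
----
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConditionalOnClass(name = "com.example.SomeOptionalLibrary")
public class OptionalLibraryAutoConfiguration {

    // Beans defined here are only registered when com.example.SomeOptionalLibrary
    // can be loaded from the application classpath

}
----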

View File

@ -555,7 +555,7 @@ and build system. Most IDEs can import Maven projects directly, for example Ecli
can select `Import...` -> `Existing Maven Projects` from the `File` menu.
If you can't directly import your project into your IDE, you may be able to generate IDE
meta-data using a build plugin. Maven includes plugins for
metadata using a build plugin. Maven includes plugins for
http://maven.apache.org/plugins/maven-eclipse-plugin/[Eclipse] and
http://maven.apache.org/plugins/maven-idea-plugin/[IDEA]; Gradle offers plugins
for http://www.gradle.org/docs/current/userguide/ide_support.html[various IDEs].
@ -644,7 +644,7 @@ See the <<howto.adoc#howto-hotswapping, Hot swapping ``How-to''>> section for de
[[using-boot-packaging-for-production]]
== Packaging your application for production
Executable jars can be used for production deployment. As they are self contained, they
Executable jars can be used for production deployment. As they are self-contained, they
are also ideally suited for cloud-based deployment.
For additional ``production ready'' features, such as health, auditing and metric REST

View File

@ -86,7 +86,7 @@ public class PropertiesLauncher extends Launcher {
/**
* Properties key for classpath entries (directories possibly containing jars).
* Defaults to "lib/" (relative to {@link #HOME loader home directory}). Multiple
* entries can be specified using a comma separeted list.
* entries can be specified using a comma-separated list.
*/
public static final String PATH = "loader.path";

View File

@ -15,7 +15,7 @@
*/
/**
* System that allows self contained JAR/WAR archives to be launched using
* System that allows self-contained JAR/WAR archives to be launched using
* {@code java -jar}. Archives can include nested packaged dependency JARs (there is
* no need to create shade style jars) and are executed without unpacking. The only
* constraint is that nested JARs must be stored in the archive uncompressed.