NUMTOKEN()

NUMTOKEN()

Retrieves the number of tokens in a string

Syntax

       NUMTOKEN( <cString>, [<cTokenizer>], [<nSkipWidth>] ) -> nTokenCount

Arguments

<cString> Designates the character string that is processed.

<cTokenizer> Designates the list of delimiter characters used to separate the tokens.

<nSkipWidth> Designates after how many successive delimiter characters a token is counted. This is helpful for counting empty tokens. By default, empty tokens are not taken into account.

Returns

The number of tokens contained in <cString> is returned.

Description

Use NUMTOKEN() to determine how many words (or tokens) are contained in the character string. The function uses the following list of delimiters as a standard: CHR(0), CHR(9), CHR(10), CHR(13), CHR(26), CHR(32), CHR(138), CHR(141) and the characters ,.;:!?/\<>()^#&%+-* The list can be replaced by your own list of delimiters, <cTokenizer>. Here are some examples of useful delimiters:

      ---------------------------------------------------------------------
      Description         <cDelimiter>
      ---------------------------------------------------------------------
      Pages               CHR(12) (Form Feed)
      Sentences           ".!?"
      File Names          ":\."
      Numerical strings   ",."
      Date strings        "/."
      Time strings        ":."
      ---------------------------------------------------------------------
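
For instance, the sentence and file name delimiter lists from the table could be used as follows (a minimal sketch; the results assume the default behavior of not counting empty tokens):

      ? NUMTOKEN( "One sentence. Another one! A third?", ".!?" )  // Result: 3
      ? NUMTOKEN( "C:\data\test.txt", ":\." )                     // Result: 4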

The skip width designates the number of successive delimiter characters after which a token is counted again. This also allows empty tokens, like blanks within a string, to be counted.

Examples

       .  A character string is searched using the standard delimiter
          list:
          ? NUMTOKEN("Good Morning!")      // Result: 2
       .  Your own list of delimiters can be specified for particular
          reasons.  Since the delimiter list for the following example only
          contains the characters ".!?", the result is 3.
          ? NUMTOKEN("Yes!  That's it. Maybe not?", ".!?")
       .  This example shows how to count empty tokens.  Parameters
          separated by commas are counted, even though some of them are
          empty.  A token is counted after at least one delimiter (comma):
          String  :=  "one,two,,four"
          ? NUMTOKEN(String, ", ", 1)      // Result: 4

Tests

       numtoken( "Hello, World!" ) ==  2
       numtoken( "This is good. See you! How do you do?", ".!?" ) == 3
       numtoken( "one,,three,four,,six", ",", 1 ) ==  6

Compliance

NUMTOKEN() is compatible with CT3’s NUMTOKEN().

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

TOKEN(), ATTOKEN(), TOKENLOWER(), TOKENUPPER(), TOKENSEP()

TokenSep()

TokenSep()

Retrieves the token separators of the last token() call

Syntax

      TokenSep( [<lMode>] ) -> cSeparator

Arguments

[<lMode>] If set to .T., the token separator BEHIND the token retrieved from the last token() call will be returned. Default: .F., the separator BEFORE the token is returned.

Returns

Depending on the setting of <lMode>, the separating character before or behind the token retrieved from the last token() call will be returned. These separating characters can now also be retrieved with the 5th and 6th parameters of the token() function.

Description

When extracting tokens from a string with the token() function, one might be interested in the separator characters that have been used to extract a specific token. To get this information, you can either use the TokenSep() function after each token() call, or use the new 5th and 6th parameters of the token() function.

Examples

      see TOKEN() function
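
A minimal sketch of the first approach (assuming Token() and TokenSep() behave as described above); the separators before and behind each comma-separated token are queried right after the corresponding Token() call:

      cStr := "Hello, World!"
      FOR n := 1 TO NumToken( cStr, "," )
         cTok := Token( cStr, ",", n )
         // separator before the token, the token itself, separator behind it
         ? TokenSep( .F. ), cTok, TokenSep( .T. )
      NEXT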

Compliance

TokenSep() is compatible with CT3’s TokenSep().

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

TOKEN(), NUMTOKEN(), ATTOKEN(), TOKENLOWER(), TOKENUPPER()

TokenLower()

TokenLower()

Change the first letter of tokens to lower case

Syntax

      TokenLower( <[@]cString>, [<cTokenizer>], [<nTokenCount>],
                  [<nSkipWidth>] ) -> cString

Arguments

<[@]cString> is the processed string

[<cTokenizer>] is a list of characters separating the tokens in <cString>. Default: chr(0) + chr(9) + chr(10) + chr(13) + chr(26) + chr(32) + chr(32) + chr(138) + chr(141) + ", .;:!\?/\\<>()#&%+-*"

[<nTokenCount>] specifies the number of tokens that should be processed. Default: all tokens

[<nSkipWidth>] specifies the maximum number of successive tokenizing characters that are combined as ONE token stop, e.g. specifying 1 can yield empty tokens. Default: 0, any number of successive tokenizing characters are combined as ONE token stop

Returns

<cString> the string with the lowercased tokens

Description

The TokenLower() function changes the first letter of tokens in <cString> to lower case. To do this, it uses the same tokenizing mechanism as the token() function. If TokenLower() extracts a token that starts with a letter, this letter will be changed to lower case.

You can suppress the return value of this function by setting the CSETREF() switch to .T.; you must then pass <cString> by reference to get the result.

Examples

      ? TokenLower( "Hello, World, here I am!" )         
                    // "hello, world, here i am!"
      ? TokenLower( "Hello, World, here I am!",, 3 )     
                    // "hello, world, here I am!"
      ? TokenLower( "Hello, World, here I am!", ",", 3 ) 
                    // "hello, World, here I am!"
      ? TokenLower( "Hello, World, here I am!", " W" )   
                    // "hello, World, here i am!"

Tests

      TokenLower( "Hello, World, here I am!" )         
               == "hello, world, here i am!"
      TokenLower( "Hello, World, here I am!",, 3 )     
               == "hello, world, here I am!"
      TokenLower( "Hello, World, here I am!", ",", 3 ) 
               == "hello, World, here I am!"
      TokenLower( "Hello, World, here I am!", " W" )   
               == "hello, World, here i am!"

Compliance

TokenLower() is compatible with CT3’s TokenLower(), but a new 4th parameter, <nSkipWidth>, has been added for synchronization with the other token functions.

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

TOKEN(), NUMTOKEN(), ATTOKEN(), TOKENUPPER(), TOKENSEP(), CSETREF()

Token()

Token()

Tokens of a string

Syntax

      TOKEN( <cString>, [<cTokenizer>],
             [<nTokenCount>], [<nSkipWidth>],
             [<@cPreTokenSep>], [<@cPostTokenSep>] ) -> cToken

Arguments

<cString> is the processed string

[<cTokenizer>] is a list of characters separating the tokens in <cString>. Default: chr(0) + chr(9) + chr(10) + chr(13) + chr(26) + chr(32) + chr(32) + chr(138) + chr(141) + ", .;:!\?/\\<>()#&%+-*"

[<nTokenCount>] specifies the count of the token that should be extracted. Default: last token

[<nSkipWidth>] specifies the maximum number of successive tokenizing characters that are combined as ONE token stop, e.g. specifying 1 can yield empty tokens. Default: 0, any number of successive tokenizing characters are combined as ONE token stop

[<@cPreTokenSep>] If passed by reference, the token separator before the extracted token will be stored in this parameter

[<@cPostTokenSep>] If passed by reference, the token separator after the extracted token will be stored in this parameter

Returns

<cToken> the token specified by the parameters given above

Description

The TOKEN() function extracts the <nTokenCount>th token from the string <cString>. In the course of this, the tokens in the string are separated by the character(s) specified in <cTokenizer>. The function may also extract empty tokens, if you specify a skip width other than zero.

Be aware of the new 5th and 6th parameters, in which the TOKEN() function stores the tokenizing characters before and after the extracted token. Therefore, additional calls to the TOKENSEP() function are not necessary.

Examples

      ? token( "Hello, World!" )            -->  "World"
      ? token( "Hello, World!",, 2, 1 )     --> ""
      ? token( "Hello, World!", ",", 2, 1 ) --> " World!"
      ? token( "Hello, World!", " ", 2, 1 ) --> "World!"

Tests

      token( "Hello, World!" )            == "World"
      token( "Hello, World!",, 2, 1 )     == ""
      token( "Hello, World!", ",", 2, 1 ) == " World!"
      token( "Hello, World!", " ", 2, 1 ) == "World!"

Compliance

TOKEN() is compatible with CT3’s TOKEN(), but two additional parameters have been added in which the TOKEN() function can store the tokenizers before and after the current token.

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

NUMTOKEN(), ATTOKEN(), TOKENLOWER(), TOKENUPPER(), TOKENSEP()

CSetRef()

CSetRef()

Determine the return value of reference-sensitive CT3 string functions

Syntax

      CSetRef( [<lNewSwitch>] ) -> lOldSwitch

Arguments

[<lNewSwitch>] .T. -> suppress return value; .F. -> do not suppress return value

Returns

<lOldSwitch> the old state of the switch (if <lNewSwitch> is a logical value) or its current state

Description

Within the CT3 functions, the following functions do not change the length of a string passed as parameter while transforming this string:

ADDASCII() BLANK() CHARADD() CHARAND() CHARMIRR() CHARNOT() CHAROR() CHARRELREP() CHARREPL() CHARSORT() CHARSWAP() CHARXOR() CRYPT() JUSTLEFT() JUSTRIGHT() POSCHAR() POSREPL() RANGEREPL() REPLALL() REPLLEFT() REPLRIGHT() TOKENLOWER() TOKENUPPER() WORDREPL() WORDSWAP()

Thus, these functions allow the string to be passed by reference [@] so that it may not be necessary to return the transformed string. By calling CSetRef( .T. ), the above mentioned functions return the value .F. instead of the transformed string if the string is passed by reference to the function. The switch is turned off (.F.) by default.
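
A minimal sketch of the effect of the switch, using TOKENLOWER() from the list above (it assumes TOKENLOWER() behaves as documented in its own entry):

      cStr1 := "Hello, World!"
      cStr2 := "Hello, World!"

      CSetRef( .F. )            // default: the return value is not suppressed
      ? TokenLower( @cStr1 )    // "hello, world!"

      CSetRef( .T. )            // suppress the return value
      ? TokenLower( @cStr2 )    // .F.
      ? cStr2                   // "hello, world!", transformed in place
      CSetRef( .F. )            // restore the default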

Compliance

This function is fully CT3 compatible.

Platforms

All

Files

Source is ctstr.c, library is libct.

Seealso

ADDASCII(), BLANK(), CHARADD(), CHARAND(), CHARMIRR(), CHARNOT(), CHAROR(), CHARRELREP(), CHARREPL(), CHARSORT(), CHARSWAP(), CHARXOR(), CRYPT(), JUSTLEFT(), JUSTRIGHT(), POSCHAR(), POSREPL(), RANGEREPL(), REPLALL(), REPLLEFT(), REPLRIGHT(), TOKENLOWER(), TOKENUPPER(), WORDREPL(), WORDSWAP()

AtToken()

AtToken()

Position of a token in a string

Syntax

      AtToken( <cString>, [<cTokenizer>],
               [<nTokenCount>], [<nSkipWidth>] ) -> nPosition

Arguments

<cString> is the processed string

[<cTokenizer>] is a list of characters separating the tokens in <cString>. Default: chr(0) + chr(9) + chr(10) + chr(13) + chr(26) + chr(32) + chr(32) + chr(138) + chr(141) + ", .;:!\?/\\<>()#&%+-*"

[<nTokenCount>] specifies the count of the token whose position should be calculated. Default: last token

[<nSkipWidth>] specifies the maximum number of successive tokenizing characters that are combined as ONE token stop, e.g. specifying 1 can yield empty tokens. Default: 0, any number of successive tokenizing characters are combined as ONE token stop

Returns

<nPosition> The start position of the specified token or 0 if such a token does not exist in <cString>.

Description

The AtToken() function calculates the start position of the <nTokenCount>th token in <cString>. By setting the new <nSkipWidth> parameter to a value different from 0, you can specify how many tokenizing characters at most are combined into one token stop. Be aware that this can result in empty tokens whose start position is not clearly defined. In that case, AtToken() returns the position where the token WOULD start if its length were larger than 0. To check for an empty token, simply look whether the character at the returned position is within the tokenizer list.

Examples

      AtToken( "Hello, World!" ) // --> 8  // empty strings after tokenizer
                                           // are not a token !
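
The empty-token check described above can be sketched like this (a minimal sketch; cStr and nPos are hypothetical names):

      cStr := "Hello,, World!"
      nPos := AtToken( cStr, ",", 2, 1 )   // position where the empty 2nd token would start
      ? SubStr( cStr, nPos, 1 ) $ ","      // .T. --> the token is empty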

Tests

      AtToken( "Hello, World!" ) == 8
      AtToken( "Hello, World!",, 2 ) == 8
      AtToken( "Hello, World!",, 2, 1 ) == 7
      AtToken( "Hello, World!", " ", 2, 1 ) == 8

Compliance

AtToken() is compatible with CT3’s AtToken(), but has an additional 4th parameter to let you specify a skip width equal to that in the TOKEN() function.

Platforms

All

Files

Source is token1.c, library is libct.

Seealso

Token(), NumToken(), TokenLower(), TokenUpper(), TokenSep()

Harbour All Functions – T

TabExpand
TabPack
Tan
TanH
TBrowseDB
TBrowseNew
TFileRead
THtml
Time
TimeValid
TNortonGuide
Token
TokenAt
TokenEnd
TokenExit
TokenInit
TokenLower
TokenNext
TokenNum
TokenSep
TokenUpper
Tone
TOs2
Transform
Trim
TRtf
TTroff
Type